diff --git a/data_all_eng_slimpj/shuffled/split2/finalzxnj b/data_all_eng_slimpj/shuffled/split2/finalzxnj new file mode 100644 index 0000000000000000000000000000000000000000..c8bd83a33e1f4bf03d08d6458c0dd6d8b9177918 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzxnj @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:Intro}\n\n\n With the increased use of electric vehicles (EVs) over the past decade, a large network of EV charging stations has been installed on the electric grid.\n Data collected from city-wide deployments of EV charging stations can be used for both academic and industrial purposes, e.g., through statistical analysis \\cite{nasrinEV2016} and modeling. Such studies require reliable session data for understanding charging behavior and exploring flexibility. The scarcity of reliable data, and its necessity for further research, has been discussed previously \\cite{EVDSreview}. Even when data exists, it may be kept confidential by its private collectors and thus not be freely available for academic or public use. This lack of widespread availability and accessibility of realistic EV charging session data poses a significant hurdle, impeding further research in the field.\n \n We propose a Synthetic Data Generator (SDG) that can be used to generate samples of EV charging sessions. This entails temporal statistical modeling of arrivals, as well as modeling of departures and the associated electrical load (i.e., required energy). We define a parametric model that can be trained on a real-world dataset. This trained model can subsequently be used to generate realistic samples of EV session data. 
\n In this paper, we contribute with:\n \\begin{itemize}[topsep=0pt]\n \\setlength\\itemsep{0em}\n \\item An approach to model session data for EVs over a group of charging stations, defined as the SDG (\\secref{sec:SDG});\n \\item An outline of the model parameters, and a discussion of the benefits and drawbacks of different models;\n \\item Answers to our main research questions, being:\n \t\\begin{question}\\label{q:SDGdefination}~Which parametric models can be used to describe EV sessions, and what are the input parameters and latent variables for these models? \\end{question}\n \t\\begin{question}\\label{q:generate}~How can we generate synthetic samples of EV session data from these parametric models? \\end{question}\n \\end{itemize}\n\n\\section{Synthetic Data Generator}\n\\label{sec:SDG}\n\nEach EV session can be described using three parameters:\n\\begin{enumerate*}[(i)]\n \\item \\textit{Arrival time},\n \\item \\textit{Departure time}, and \n \\item \\textit{Energy charged} (in kWh).\n\\end{enumerate*}\nWe define three models, one for each of these:\n\\begin{itemize}[topsep=0pt]\n \\item \\textit{Arrivals} = $AM$(Horizon)\n \\item \\textit{Connected times} = $MM_{c}$(Arrivals) \n \\item \\textit{Energy} = $MM_{e}$(Arrivals)\n\\end{itemize}\nHere, $AM$ denotes the arrival model, $MM_{c}$ the mixture model for connection times, and $MM_{e}$ the mixture model for the energy charged. `Horizon' is an input parameter specifying the time horizon for which data needs to be generated. The departure time of an EV charging session can be calculated as the sum of its arrival time and connected time.\n\n\\subsection{Arrival Models}\n\\label{sec:arrivalmodel}\n \n Arrivals of EVs at a group of charging stations can be considered as events on a continuous timescale. 
Supported by our large-scale dataset, we assume that the inter-arrival time of EVs follows an exponential distribution, characterized by a parameter $\\lambda$ representing the arrival rate (EVs per time unit).\n Thus, we can model either the times between successive events or the number of events in a given time interval:\n \\begin{itemize}[topsep=0pt]\n \\item \\textit{Exponential IAT distribution:} we generate the arrival of the next EV using $t_{i} = t_{i-1} + \\Delta t$, where $t_{i}$ is the time of the $i^\\textrm{th}$ arrival and $\\Delta t$ is the time difference between the $i^\\textrm{th}$ and $(i-1)^\\textrm{th}$ arrivals. The probability density of $\\Delta t$ (inter-arrival time, IAT) is exponentially distributed, characterized by $\\lambda = f(\\textrm{month}, \\textrm{daytype}, \\textrm{time-of-day})$, where `daytype' is either weekday or weekend.\n \\item \\textit{Poisson process:} we generate the number of arrivals in a given timeslot (e.g., slots of 60 minutes). The number of arrivals $N_\\textit{arr}$ in a given timeslot can be sampled from a Poisson distribution with mean $\\lambda \\cdot T$ (with $T$ the duration of a timeslot).\n \\end{itemize}\n \n \\noindent In \\figref{fig:expdensity} we plot the probability density of inter-arrival times, where the original data refers to real-world data collected by ELaadNL. 
A fitted exponential distribution supports the assumption that inter-arrival times are exponentially distributed: Kolmogorov--Smirnov test results for all combinations of month, daytype and time-of-day slots have $p$-value $< 0.05$.\n \n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\columnwidth,height=6cm]{Resources\/exponential-dist-excel.png}\n \\caption{Inter-arrival time probability density for real-world data collected by ELaadNL.}\n \\label{fig:expdensity}\n \\end{figure} \n \n\n We can model $\\lambda$ (as a function of month, daytype and time-of-day) either by averaging over each timeslot (discontinuous) or by fitting a continuous function of time (continuous). When fitting continuous curves, capturing the peaks during the day is very important. Moreover, fitted curves can yield negative values of $\\lambda$, so we need to impose lower and upper bounds on it.\n \n Using the IAT approach, the time of the next EV arrival is determined relative to the time of the previous EV. If $\\lambda$ at a given time-of-day is very low, the next EV arrival may be generated very late (with a large $\\Delta t$), thus skipping several consecutive timeslots. This is problematic if those skipped timeslots exhibit a much higher $\\lambda$, and thus a high probability of EV arrivals --- which would not be generated because of the very large $\\Delta t$.\n \n The second modeling approach, as a Poisson process, circumvents this problem: when certain timeslots have a low $\\lambda$, we will likely generate 0 arrivals there, but still proceed to the immediately following timeslot (where a possibly high $\\lambda$ can generate arrivals). A remaining potential drawback is that the variance and mean of the Poisson distribution are equal. 
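The two arrival-generation approaches described above can be sketched as follows. This is a minimal illustration using NumPy; the function names, the per-minute $\lambda$, and the fixed 60-minute slots are our illustrative assumptions, not part of the released SDG code:

```python
# Sketch of the two arrival-generation approaches: exponential
# inter-arrival times vs. Poisson counts per timeslot.
# Assumptions (ours): lambda in arrivals per minute, 60-minute slots.
import numpy as np

rng = np.random.default_rng(42)

def arrivals_via_iat(lam_per_min, horizon_min):
    """Exponential IAT: t_i = t_{i-1} + dt, with dt ~ Exp(lambda)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_per_min)  # mean IAT = 1/lambda
        if t >= horizon_min:
            return times
        times.append(t)

def arrivals_via_poisson(lam_per_min, n_slots, slot_min=60):
    """Poisson counts: N_arr ~ Poisson(lambda * T) per slot of length T."""
    counts = rng.poisson(lam_per_min * slot_min, size=n_slots)
    times = []
    for slot, n in enumerate(counts):
        # place the n arrivals uniformly within the slot
        times.extend(slot * slot_min + rng.uniform(0, slot_min, size=n))
    return sorted(times)
```

In practice $\lambda$ would vary per slot (as a function of month, daytype and time-of-day); the Poisson variant then simply draws each slot's count with its own $\lambda$, which is what makes it robust to low-rate slots followed by high-rate ones.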
If this equidispersion assumption does not hold, we adopt a negative binomial distribution instead.\n \n\\subsection{Mixture Models for Connected Times and Energy} \n\\label{sec:departuremodel}\n\nAside from arrival events, EV departure events, as well as the EV charging load (or total energy charged), need to be modeled.\nSince departures are conditional on arrivals, we model the probability distribution of connected times ($t_\\textit{depart} - t_\\textit{arr}$) using Gaussian Mixture Models (GMMs). We also use GMMs for the EV charging energy.\nAs with the arrival models, we define and fit separate models for each month and time-of-day.\n\nIn summary, answering \\qref{q:SDGdefination} (Which models can be used in the SDG?), we propose\n\\begin{enumerate*}[(i)]\n \\item an exponential IAT distribution or a Poisson distribution for arrivals per timeslot ($\\textit{AM}$),\n \\item GMMs for the duration of EV sessions ($\\textit{MM}_c$), and \n \\item GMMs for their associated energy charged ($\\textit{MM}_e$).\n\\end{enumerate*}\n\n\\section{Generating Samples}\n\\label{sec:training}\n\nAfter fitting the SDG on real-world data, we have $AM$, $MM_{c}$ and $MM_{e}$ as parametric models for arrival times, connection times and required energy. For synthetic generation of arrival times, the fitted $\\lambda$ values can be used to generate $\\Delta t$ values (and hence a series of arrivals). For connection times and required energy, samples drawn from the fitted GMM densities serve as synthetically generated values. This answers \\qref{q:generate}, on how we can generate samples.\n\nOur SDG ($AM$, $MM_{c}$ and $MM_{e}$) can be saved as a separate file. These models will be supplied with the code (which we plan to make publicly available) to generate new synthetic data samples. These models do not include the actual real-world EV session data that the SDG was trained on. 
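Drawing a complete synthetic session from the fitted models can be sketched as follows. The mixture parameters shown are illustrative placeholders, not values fitted on the ELaadNL data, and clipping negative draws to zero is our simplification:

```python
# Sketch: one synthetic session = arrival time (from AM) combined with
# samples from MM_c (connected time) and MM_e (energy charged).
# The GMM parameters below are illustrative placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(weights, means, stds):
    """Draw one value from a one-dimensional Gaussian mixture."""
    k = rng.choice(len(weights), p=weights)  # pick a mixture component
    return rng.normal(means[k], stds[k])     # sample from that component

def generate_session(arrival_min, mm_c, mm_e):
    """Departure = arrival + connected time; energy sampled independently."""
    connected = max(0.0, sample_gmm(*mm_c))  # connected time (minutes)
    energy = max(0.0, sample_gmm(*mm_e))     # energy charged (kWh)
    return {"arrival": arrival_min,
            "departure": arrival_min + connected,
            "energy_kwh": energy}

# illustrative two-component mixtures: (weights, means, stds)
mm_c = ([0.6, 0.4], [120.0, 600.0], [30.0, 120.0])
mm_e = ([0.7, 0.3], [8.0, 25.0], [3.0, 6.0])
session = generate_session(480.0, mm_c, mm_e)
```

Repeating this for every arrival produced by the arrival model yields a full synthetic dataset without ever exposing the training sessions themselves.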
Hence, these models can be shared without violating confidentiality constraints. Generated samples from these models can subsequently be used for flexibility analysis, load balancing and other research purposes. \n\n\\section{Conclusion and Future Work}\n\\label{sec:future work}\n This paper summarized our approach to modeling EV charging sessions. \n We adopted these models to train a synthetic data generator (SDG) on real-world data, which can thus generate synthetic samples of EV session data.\n \n We plan to release the source code, including both the \\emph{training} scripts and the \\emph{generation} code (with settings reflecting the model characteristics trained on a large-scale real-world dataset), to produce synthetic EV session data that reflects real-world behavior. We believe this meets a strong need of researchers in both academic and industrial settings.\n \n As future work, we aim to propose modeling methods for the time-varying arrival distribution model parameters.\n Further, we will tackle the following challenges in depth:\n \\begin{enumerate*}[label={(\\arabic*)}]\n \\item studying the properties of the real-world data with the goal of defining evaluation metrics for comparing real-world data with generated samples, and\n \\item correctly modeling peaks of arrivals during the day, and studying effective methods that avoid negative values in continuous $\\lambda$ curves. \n\\end{enumerate*}\n \n\\begin{acks}\nThis research received funding from the Flemish Government under the ``Onderzoeksprogramma Artifici\u00eble Intelligentie (AI) Vlaanderen'' programme.\n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n
Immediately following this sentence is the point at which\nTable~\\ref{tab:commands} is included in the input file; again, it is\ninstructive to compare the placement of the table here with the table\nin the printed output of this document.\n\n\\begin{table*}\n \\caption{Some Typical Commands}\n \\label{tab:commands}\n \\begin{tabular}{ccl}\n \\toprule\n Command &A Number & Comments\\\\\n \\midrule\n \\texttt{{\\char'134}author} & 100& Author \\\\\n \\texttt{{\\char'134}table}& 300 & For tables\\\\\n \\texttt{{\\char'134}table*}& 400& For wider tables\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\section{Math Equations}\nYou may want to display math equations in three distinct styles:\ninline, numbered or non-numbered display. Each of the three are\ndiscussed in the next sections.\n\n\\subsection{Inline (In-text) Equations}\nA formula that appears in the running text is called an inline or\nin-text formula. It is produced by the \\textbf{math} environment,\nwhich can be invoked with the usual\n\\texttt{{\\char'134}begin\\,\\ldots{\\char'134}end} construction or with\nthe short form \\texttt{\\$\\,\\ldots\\$}. You can use any of the symbols\nand structures, from $\\alpha$ to $\\omega$, available in\n\\LaTeX~\\cite{Lamport:LaTeX}; this section will simply show a few\nexamples of in-text equations in context. Notice how this equation:\n\\begin{math}\n \\lim_{n\\rightarrow \\infty}x=0\n\\end{math},\nset here in in-line math style, looks slightly different when\nset in display style. (See next section).\n\n\\subsection{Display Equations}\nA numbered display equation---one set off by vertical space from the\ntext and centered horizontally---is produced by the \\textbf{equation}\nenvironment. An unnumbered display equation is produced by the\n\\textbf{displaymath} environment.\n\nAgain, in either environment, you can use any of the symbols and\nstructures available in \\LaTeX\\@; this section will just give a couple\nof examples of display equations in context. 
First, consider the\nequation, shown as an inline equation above:\n\\begin{equation}\n \\lim_{n\\rightarrow \\infty}x=0\n\\end{equation}\nNotice how it is formatted somewhat differently in\nthe \\textbf{displaymath}\nenvironment. Now, we'll enter an unnumbered equation:\n\\begin{displaymath}\n \\sum_{i=0}^{\\infty} x + 1\n\\end{displaymath}\nand follow it with another numbered equation:\n\\begin{equation}\n \\sum_{i=0}^{\\infty}x_i=\\int_{0}^{\\pi+2} f\n\\end{equation}\njust to demonstrate \\LaTeX's able handling of numbering.\n\n\\section{Figures}\n\nThe ``\\verb|figure|'' environment should be used for figures. One or\nmore images can be placed within a figure. If your figure contains\nthird-party material, you must clearly identify it as such, as shown\nin the example below.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{sample-franklin}\n \\caption{1907 Franklin Model D roadster. Photograph by Harris \\&\n Ewing, Inc. [Public domain], via Wikimedia\n Commons. (\\url{https:\/\/goo.gl\/VLCRBB}).}\n \\Description{The 1907 Franklin Model D roadster.}\n\\end{figure}\n\nYour figures should contain a caption which describes the figure to\nthe reader. Figure captions go below the figure. Your figures should\n{\\bfseries also} include a description suitable for screen readers, to\nassist the visually-challenged to better understand your work.\n\nFigure captions are placed {\\itshape below} the figure.\n\n\\subsection{The ``Teaser Figure''}\n\nA ``teaser figure'' is an image, or set of images in one figure, that\nare placed after all author and affiliation information, and before\nthe body of the article, spanning the page. 
If you wish to have such a\nfigure in your article, place the command immediately before the\n\\verb|\\maketitle| command:\n\\begin{verbatim}\n \\begin{teaserfigure}\n \\includegraphics[width=\\textwidth]{sampleteaser}\n \\caption{figure caption}\n \\Description{figure description}\n \\end{teaserfigure}\n\\end{verbatim}\n\n\\section{Citations and Bibliographies}\n\nThe use of \\BibTeX\\ for the preparation and formatting of one's\nreferences is strongly recommended. Authors' names should be complete\n--- use full first names (``Donald E. Knuth'') not initials\n(``D. E. Knuth'') --- and the salient identifying features of a\nreference should be included: title, year, volume, number, pages,\narticle DOI, etc.\n\nThe bibliography is included in your source document with these two\ncommands, placed just before the \\verb|\\end{document}| command:\n\\begin{verbatim}\n \\bibliographystyle{ACM-Reference-Format}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{{#1}}}\n\n\n\\newcommand{\\uple}[1]{\\text{\\boldmath${#1}$}}\n\n\\def\\stacksum#1#2{{\\stackrel{{\\scriptstyle #1}}\n{{\\scriptstyle #2}}}}\n\n\\def--{--}\n\n\\def\\map#1#2#3#4{\\begin{matrix}#1&\\mapsto \n\\\\#3 &\\mapsto \n\\end{matrix}}\n\\newcommand{\\uple{\\alpha}}{\\uple{\\alpha}}\n\\newcommand{\\uple{\\beta}}{\\uple{\\beta}}\n\\newcommand{\\uple{b}}{\\uple{b}}\n\\newcommand{\\uple{a}}{\\uple{a}}\n\\newcommand{\\uple{h}}{\\uple{h}}\n\\newcommand{\\uple{l}}{\\uple{l}}\n\\newcommand{\\bft}{\\uple{t}} 
\n\\newcommand{\\uple{x}}{\\uple{x}}\n\\newcommand{\\uple{y}}{\\uple{y}}\n\\newcommand{\\uple{m}}{\\uple{m}}\n\\newcommand{\\uple{n}}{\\uple{n}}\n\\newcommand{C(\\mathcal{F})}{C(\\mathcal{F})}\n\\newcommand{\\uple{I}}{\\uple{I}}\n\\newcommand{\\uple{J}}{\\uple{J}}\n\\newcommand{\\mathrm{e}_q}{\\mathrm{e}_q}\n\\newcommand{\\mathrm{Std}}{\\mathrm{Std}}\n\\newcommand{\\mathrm{Sym}}{\\mathrm{Sym}}\n\\newcommand{\\mathrm{sym}}{\\mathrm{sym}}\n\\newcommand{\\mathrm{arith}}{\\mathrm{arith}}\n\\newcommand{\\mathrm{Irr}}{\\mathrm{Irr}}\n\\newcommand{\\mathrm{geom}}{\\mathrm{geom}}\n\\newcommand{G^{\\mathrm{arith}}}{G^{\\mathrm{arith}}}\n\\newcommand{G_n^{\\mathrm{arith}}}{G_n^{\\mathrm{arith}}}\n\\newcommand{G^{\\mathrm{geom}}}{G^{\\mathrm{geom}}}\n\\newcommand{G_{\\mathcal{F},\\mathrm{arith}}}{G_{\\mathcal{F},\\mathrm{arith}}}\n\\newcommand{G_{\\mathcal{F},\\mathrm{geom}}}{G_{\\mathcal{F},\\mathrm{geom}}}\n\\newcommand{\\Garithd}[1]{G_{{#1},\\mathrm{arith}}}\n\\newcommand{\\Ggeomd}[1]{G_{{#1},\\mathrm{geom}}}\n\\newcommand{K^{\\mathrm{sep}}}{K^{\\mathrm{sep}}}\n\\newcommand{K_x^{\\mathrm{sep}}}{K_x^{\\mathrm{sep}}}\n\\newcommand{\\mathbf{0}}{\\mathbf{0}}\n\\newcommand{\\mathbf{R}}{\\mathbf{R}}\n\\newcommand{\\mathbf{S}}{\\mathbf{S}}\n\\newcommand{\\mathbf{F}}{\\mathbf{F}}\n\\newcommand{\\mathbf{K}}{\\mathbf{K}}\n\\newcommand{\\mathbf{M}}{\\mathbf{M}}\n\\newcommand{\\mathbf{N}}{\\mathbf{N}}\n\\newcommand{\\mathbf{C}}{\\mathbf{C}}\n\\newcommand{\\mathbf{S}}{\\mathbf{S}}\n\\newcommand{\\mathbf{C}^\\times}{\\mathbf{C}^\\times}\n\\newcommand{\\mathbf{N}}{\\mathbf{N}}\n\\newcommand{\\mathbf{A}}{\\mathbf{A}}\n\\newcommand{\\mathbf{G}_{m}}{\\mathbf{G}_{m}}\n\\newcommand{\\mathbf{G}_{m,{\\mathbf{F}_q}}}{\\mathbf{G}_{m,{\\mathbf{F}_q}}}\n\\newcommand{\\mathbf{B}}{\\mathbf{B}}\n\\newcommand{\\mathbf{D}}{\\mathbf{D}}\n\\newcommand{\\mathbf{Z}}{\\mathbf{Z}}\n\\newcommand{\\mathbf{P}}{\\mathbf{P}}\n\\newcommand{\\mathbf{R}}{\\mathbf{R}}\n\\newcommand{\\mathbf{G}}{\\mathbf{G}}\n\\newcomma
nd{q^{1\/2}}{q^{1\/2}}\n\\newcommand{q^{-1\/2}}{q^{-1\/2}}\n\n\\newcommand{\\mathbf{H}}{\\mathbf{H}}\n\\newcommand{\\mathbf{Q}}{\\mathbf{Q}}\n\\newcommand{\\mathbf{Q}_{\\ell}}{\\mathbf{Q}_{\\ell}}\n\\newcommand{\\ov{\\mathbf{Q}_{\\ell}}}{\\ov{\\mathbf{Q}_{\\ell}}}\n\\newcommand{\\ov{{\\mathbf{F}_q}}}{\\ov{{\\mathbf{F}_q}}}\n\\newcommand{{\\mathbf{F}_p}}{{\\mathbf{F}_p}}\n\\newcommand{{\\mathbf{F}^\\times_p}}{{\\mathbf{F}^\\times_p}}\n\\newcommand{{\\mathbf{F}_q}}{{\\mathbf{F}_q}}\n\\newcommand{{\\mathbf{F}_{q^n}}}{{\\mathbf{F}_{q^n}}}\n\\newcommand{{\\mathbf{F}^\\times_{q^n}}}{{\\mathbf{F}^\\times_{q^n}}}\n\\newcommand{{\\mathbf{F}^\\times_q}}{{\\mathbf{F}^\\times_q}}\n\\newcommand{{\\mathbf{F}_{q^d}}}{{\\mathbf{F}_{q^d}}}\n\\newcommand{{\\mathbf{F}^\\times_{q^d}}}{{\\mathbf{F}^\\times_{q^d}}}\n\\newcommand{\\mathbf{F}}{\\mathbf{F}}\n\\newcommand{\\bar{\\Ff}_p}{\\bar{\\mathbf{F}}_p}\n\\newcommand{\\bar{\\Ff}_q}{\\bar{\\mathbf{F}}_q}\n\\newcommand{\\bar{\\Qq}_{\\ell}}{\\bar{\\mathbf{Q}}_{\\ell}}\n\\newcommand{\\mathbf{T}}{\\mathbf{T}}\n\\newcommand{\\mathbf{G}}{\\mathbf{G}}\n\\newcommand{g^\\natural}{g^\\natural}\n\\newcommand{\\boldsymbol{\\mu}}{\\boldsymbol{\\mu}}\n\\newcommand{\\mathcal{O}}{\\mathcal{O}}\n\\newcommand{\\mathcal{V}}{\\mathcal{V}}\n\\newcommand{\\mathcal{O}}{\\mathcal{O}}\n\\newcommand{\\mathcal{N}}{\\mathcal{N}}\n\\newcommand{\\mathcal{H}}{\\mathcal{H}}\n\\newcommand{\\mathcal{K}\\ell}{\\mathcal{K}\\ell}\n\\newcommand{\\mathcal{K}\\ell}{\\mathcal{K}\\ell}\n\n\\newcommand{\\overline{\\mathbf{F}}}{\\overline{\\mathbf{F}}}\n\\newcommand{\\mathcal{E}}{\\mathcal{E}}\n\\newcommand{\\mathcal{H}}{\\mathcal{H}}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathcal{L}}{\\mathcal{L}}\n\\newcommand{\\text{\\boldmath$P$}}{\\mathbf{P}}\n\\newcommand{\\text{\\boldmath$E$}}{\\mathbf{E}}\n\\newcommand{\\mathbf{V}}{\\mathbf{V}}\n\\newcommand{\\mathbf{1}}{\\mathbf{1}}\n\\newcommand{\\mathcal{B}}{\\mathcal{B}}\n\\newcommand{g^{\\sharp}}{g^{\\sharp}}\n\\new
command{y^{\\sharp}}{y^{\\sharp}}\n\\newcommand{\\clconj}[1]{{{#1}}^{\\sharp}}\n\n\\newcommand{\\mods}[1]{\\,(\\mathrm{mod}\\,{#1})}\n\n\\newcommand{\\sli}[1]{\\underline{{#1}}}\n\n\\newcommand{\\ideal}[1]{\\mathfrak{{#1}}}\n\n\\newcommand{\\mathrm{Id}}{\\mathrm{Id}}\n\n\n\n\n\n\\newcommand{\\widehat}{\\widehat}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathbf{G}}{\\mathbf{G}}\n\\newcommand{\\mathbf{B}}{\\mathbf{B}}\n\\newcommand{\\mathbf{D}}{\\mathbf{D}}\n\\newcommand{\\mathbf{G}^{opt}}{\\mathbf{G}^{opt}}\n\\newcommand{\\hautk}[2]{\\mathbf{G}_{{#1},{#2}}}\n\\newcommand{\\hautz}[2]{\\mathbf{G}^{a}_{{#1},{#2}}}\n\\newcommand{\\hauti}[3]{\\mathbf{G}^{{#1}}_{{#2},{#3}}}\n\n\\DeclareMathOperator{\\frob}{Fr}\n\n\\newcommand{\\mathcal{S}}{\\mathcal{S}}\n\\newcommand{\\skl}[1]{\\sheaf{K}^{({#1})}}\n\\newcommand{\\hk}[1]{\\sheaf{K}\\ell_{{#1}}}\n\\newcommand{\\mutw}[3]{\\mu_{{#3},{#2}}}\n\\newcommand{\\frtr}[3]{(\\Tr{{#1}})({#2},{#3})}\n\\DeclareMathOperator{\\hypk}{Kl}\n\n\\newcommand{\\mathcal{M}}{\\mathcal{M}}\n\n\n\n\\newcommand{\\rightarrow}{\\rightarrow}\n\\newcommand{\\longrightarrow}{\\longrightarrow}\n\\newcommand{\\twoheadrightarrow}{\\twoheadrightarrow}\n\\newcommand{\\hookrightarrow}{\\hookrightarrow}\n\\newcommand{\\hookleftarrow}{\\hookleftarrow}\n\\newcommand{\\Longleftrightarrow}{\\Longleftrightarrow}\n\\newcommand{\\fleche}[1]{\\stackrel{#1}{\\longrightarrow}}\n\\newcommand{\\flecheinj}[1]{\\stackrel{#1}{\\hookrightarrow}}\n\\newcommand{\\flechecinj}[1]{\\stackrel{#1}{\\hookleftarrow}}\n\n\\newcommand{\\barre}[1]{\\overline{{#1}}}\n\n\n\\DeclareMathOperator{\\Spec}{Spec}\n\\DeclareMathOperator{\\Vol}{Vol}\n\\DeclareMathOperator{\\proj}{Proj}\n\\DeclareMathOperator{\\Card}{Card}\n\\DeclareMathOperator{\\rank}{rank}\n\\DeclareMathOperator{\\rk}{rk}\n\\DeclareMathOperator{\\res}{Res}\n\\DeclareMathOperator{\\reg}{reg}\n\\DeclareMathOperator{\\ord}{ord}\n\\DeclareMathOperator{\\cl}{Cl}\n\\DeclareMathOperator{\\Div}{Div}\n\\DeclareMathOperator{\\
divg}{divg}\n\\DeclareMathOperator{\\Pic}{Pic}\n\\DeclareMathOperator{\\vol}{Vol}\n\\DeclareMathOperator{\\Imag}{Im}\n\\DeclareMathOperator{\\Reel}{Re}\n\\DeclareMathOperator{\\syms}{Sym^{2}}\n\\DeclareMathOperator{\\symk}{Sym}\n\\DeclareMathOperator{\\li}{li}\n\\DeclareMathOperator{\\Frob}{\\mathrm{Frob}}\n\\DeclareMathOperator{\\Fr}{\\mathrm{Frob}}\n\\DeclareMathOperator{\\Kl}{\\mathrm{Kl}}\n\\DeclareMathOperator{\\shKl}{\\mathrm{Kl}}\n\\DeclareMathOperator{\\ET}{\\mathrm{ET}}\n\\DeclareMathOperator{\\tr}{\\mathrm{tr}}\n\\DeclareMathOperator{\\nr}{\\mathrm{Nr}}\n\\DeclareMathOperator{\\Gal}{Gal}\n\\DeclareMathOperator{\\Ind}{Ind}\n\\DeclareMathOperator{\\Res}{Res}\n\\DeclareMathOperator{\\supp}{supp}\n\\DeclareMathOperator{\\im}{Im}\n\\DeclareMathOperator{\\Tr}{Tr}\n\\DeclareMathOperator{\\Hom}{Hom}\n\\DeclareMathOperator{\\End}{End}\n\\DeclareMathOperator{\\Aut}{Aut}\n\\DeclareMathOperator{\\varia}{Var}\n\\DeclareMathOperator{\\argu}{Arg}\n\\DeclareMathOperator{\\spect}{Spec}\n\\DeclareMathOperator{\\disc}{disc}\n\\DeclareMathOperator{\\swan}{Swan}\n\\DeclareMathOperator{\\Sing}{Sing}\n\\DeclareMathOperator{\\Drop}{drop}\n\\DeclareMathOperator{\\sw}{Swan}\n\\DeclareMathOperator{\\bb}{B}\n\\DeclareMathOperator{\\codim}{codim}\n\\DeclareMathOperator{\\ft}{FT}\n\\DeclareMathOperator{\\cond}{\\mathbf{c}}\n\\DeclareMathOperator{\\Ad}{Ad}\n\\DeclareMathOperator{\\dual}{D}\n\\DeclareMathOperator{\\nearb}{R\\Psi}\n\\DeclareMathOperator{\\van}{R\\Phi}\n\\DeclareMathOperator{\\class}{c\\ell}\n\n\n\\newcommand{\\varepsilon}{\\varepsilon}\n\\renewcommand{\\rho}{\\varrho}\n\n\n\\DeclareMathOperator{\\SL}{SL}\n\n\\DeclareMathOperator{\\GL}{GL}\n\\DeclareMathOperator{\\PGL}{PGL}\n\\DeclareMathOperator{\\PGLd}{PGL_2}\n\\DeclareMathOperator{\\rmT}{T}\n\\DeclareMathOperator{\\rmB}{B}\n\\DeclareMathOperator{\\rmG}{G}\n\\DeclareMathOperator{\\rmN}{N}\n\\DeclareMathOperator{\\rmU}{U}\n\\DeclareMathOperator{\\PSL}{PSL}\n\\DeclareMathOperator{\\Sp}{Sp}\n\\DeclareMathOperator{\\GSp}{GSp
}\n\\DeclareMathOperator{\\SO}{SO}\n\\DeclareMathOperator{\\Ort}{O}\n\\DeclareMathOperator{\\SU}{SU}\n\\DeclareMathOperator{\\Un}{U}\n\\DeclareMathOperator{\\USp}{USp}\n\n\n\\newcommand{{\\textstyle{\\frac{1}{2}}}}{{\\textstyle{\\frac{1}{2}}}}\n\\newcommand{{\\textstyle{\\frac{1}{4}}}}{{\\textstyle{\\frac{1}{4}}}}\n\\newcommand{{\\textstyle{\\frac{3}{2}}}}{{\\textstyle{\\frac{3}{2}}}}\n\n\\newcommand{\\avg}[1]{A[{#1}]}\n\\newcommand{\\underline{O}}{\\underline{O}}\n\\newcommand{O}{O}\n\n\n\\newcommand{\\sheaf}[1]{\\mathcal{{#1}}}\n\\newcommand{M}{M}\n\\newcommand{linearly disjoint}{linearly disjoint}\n\\newcommand{\\sheafm}[1]{\\tilde{\\sheaf{{#1}}}_{\\ell}}\n\n\n\n\\DeclareMathSymbol{\\gena}{\\mathord}{letters}{\"3C}\n\\DeclareMathSymbol{\\genb}{\\mathord}{letters}{\"3E}\n\n\n\\def\\mathop{\\sum \\Bigl.^{\\flat}}\\limits{\\mathop{\\sum \\Bigl.^{\\flat}}\\limits}\n\\def\\mathop{\\sum \\Bigl.^{+}}\\limits{\\mathop{\\sum \\Bigl.^{+}}\\limits}\n\\def\\mathop{\\sum \\sum}\\limits{\\mathop{\\sum \\sum}\\limits}\n\\def\\mathop{\\sum \\sum \\sum \\sum}\\limits{\\mathop{\\sum \\sum \\sum \\sum}\\limits}\n\\def\\mathop{\\sum\\cdots \\sum}\\limits{\\mathop{\\sum\\cdots \\sum}\\limits}\n\\def\\mathop{\\sum\\bigl.^{\\flat}}\\limits{\\mathop{\\sum\\bigl.^{\\flat}}\\limits}\n\\def\\mathop{\\sum \\Bigl.^{*}}\\limits{\\mathop{\\sum \\Bigl.^{*}}\\limits}\n\\def\\mathop{\\sum\\sum \\Bigl.^{*}}\\limits{\\mathop{\\sum\\sum \\Bigl.^{*}}\\limits}\n\\def\\mathop{\\sum\\sum \\Bigl.^{\\sharp}}\\limits{\\mathop{\\sum\\sum \\Bigl.^{**}}\\limits}\n\\def\\mathop{\\sum\\sum \\Bigl.^{\\sharp}}\\limits{\\mathop{\\sum\\sum \\Bigl.^{\\sharp}}\\limits}\n\\def\\mathop{\\prod \\Bigl.^{*}}\\limits{\\mathop{\\prod \\Bigl.^{*}}\\limits}\n\\def\\mathop{\\sum \\Bigl.^{h}}\\limits{\\mathop{\\sum 
\\Bigl.^{h}}\\limits}\n\\def\\frac{1}{2i\\pi}\\mathop{\\int}\\limits{\\frac{1}{2i\\pi}\\mathop{\\int}\\limits}\n\\def\\mathop{\\oplus}\\limits{\\mathop{\\oplus}\\limits}\n\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem*{theorem*}{Theorem}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{exo}[theorem]{Exercise}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{conjecture}[theorem]{Conjecture}\n\\newtheorem{problem}[theorem]{Problem}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{variant}[theorem]{Variant}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{comment}[theorem]{Comment}\n\n\\theoremstyle{remark}\n\\newtheorem*{convention}{Conventions}\n\\newtheorem*{ack}{Acknowledgement}\n\\newtheorem*{warning}{Warning}\n\\newtheorem{rem}[theorem]{Remark}\n\\newtheorem*{property}{Properties}\n\n\\theoremstyle{definition}\n\\newtheorem*{claim}{Claim}\n\n\\newtheorem{assumption}[theorem]{Assumption}\n\\newtheorem*{question}{Question}\n\\newtheorem{example}[theorem]{Example}\n\\newtheorem{remark}[theorem]{Remark}\n\\newtheorem*{application}{Application}\n\\newtheorem{xca}{Exercise}\n\\newcommand{\\indic}[1]{[\\underline{Hint}:\\ {#1}]}\n\\newcommand{\\abs}[1]{\\lvert#1\\rvert}\n\n\\newcommand{\\blankbox}[2]{%\n \\parbox{\\columnwidth}{\\centering\n \\setlength{\\fboxsep}{0pt}%\n \\fbox{\\raisebox{0pt}[#2]{\\hspace{#1}}}%\n 
}%\n}\n\n\\newcommand{w}{w}\n\\newcommand{\\mathfrak{p}}{\\mathfrak{p}}\n\\newcommand{$g$-equivalent}{$g$-equivalent}\n\\newcommand{$g$-equivalence}{$g$-equivalence}\n\\newcommand{G^g}{G^g}\n\\newcommand{\\Psi}{\\Psi}\n\\newcommand{\\Upsilon}{\\Upsilon}\n\\newcommand{(\\sieve,\\siftable)}{(\\Psi,\\Upsilon)}\n\n\\newenvironment{epigraph}\n{\\hfill\\begin{minipage}{0.6\\linewidth}\\raggedleft\\footnotesize}{\\end{minipage}\\bigskip\\bigskip}\n\n\n\\newcommand{\\mathcal{M}}{\\mathcal{M}}\n\\newcommand{\\mathcal{L}}{\\mathcal{L}}\n\\newcommand{\\mathcal{S}}{\\mathcal{S}}\n\\newcommand{\\mathcal{C}}{\\mathcal{C}}\n\\newcommand{\\mathcal{P}}{\\mathcal{P}}\n\\newcommand{\\mathrm{P}}{\\mathrm{P}}\n\\newcommand{\\mathrm{L}}{\\mathrm{L}}\n\\newcommand{\\mathcal{F}}{\\mathcal{F}}\n\\newcommand{\\mathcal{Q}}{\\mathcal{Q}}\n\\newcommand{\\mathcal{K}}{\\mathcal{K}}\n\\newcommand{\\mathcal{R}}{\\mathcal{R}}\n\\newcommand{\\mathcal{J}}{\\mathcal{J}}\n\\newcommand{\\mathcal{I}}{\\mathcal{I}}\n\\newcommand{\\mathcal{G}}{\\mathcal{G}}\n\\newcommand{\\mathcal{B}}{\\mathcal{B}}\n\\newcommand{\\mathcal{E}}{\\mathcal{E}}\n\n\\newcommand{\\mathfrak{a}}{\\mathfrak{a}}\n\\newcommand{\\mathfrak{p}}{\\mathfrak{p}}\n\n\\newcommand{\\lambda_f}{\\lambda_f}\n\\newcommand{\\rho_f}{\\rho_f}\n\\newcommand{\\lambda_g}{\\lambda_g}\n\\newcommand{\\rho_g}{\\rho_g}\n\n\\newcommand{\\varphi}{\\varphi}\n\n\\renewcommand{\\geq}{\\geqslant}\n\\renewcommand{\\leq}{\\leqslant}\n\\renewcommand{\\Re}{\\mathfrak{Re}\\,}\n\\renewcommand{\\Im}{\\mathfrak{Im}\\,}\n\n\\newcommand{\\eqref}{\\eqref}\n\\newcommand{\\backslash}{\\backslash}\n\n\\newcommand{\\ov}[1]{\\overline{#1}}\n\n\\newcommand{\\norm}[1]{\\|{#1}\\|}\n\n\\newcommand{\\peter}[1]{\\langle{#1}\\rangle}\n\\newcommand\\sumsum{\\mathop{\\sum\\sum}\\limits}\n\\newcommand\\sumsumsum{\\mathop{\\sum\\sum\\sum}\\limits}\n\\newcommand\\sumsumnd{\\mathop{{\\sum\\sum}^{nd}}\\limits}\n\\newcommand\\delval{1\/8}\n\\newcommand\\delvaln{1\/16}\n\\newcommand\\finalexponen
t{1\/24}\n\\newcommand\\rpfree{1\/144}\n\n\\begin{document}\n\n\\title{Periodic twists of $\\GL_3$-automorphic forms}\n\n\\author{Emmanuel Kowalski}\n\\address{ETHZ, Switzerland }\n\\email{kowalski@math.ethz.ch}\n\n\\author{Yongxiao Lin}\n\\address{EPFL\/MATH\/TAN, Station 8, CH-1015 Lausanne, Switzerland }\n\\email{yongxiao.lin@epfl.ch}\n\n\\author{Philippe Michel}\n\\address{EPFL\/MATH\/TAN, Station 8, CH-1015 Lausanne, Switzerland }\n\\email{philippe.michel@epfl.ch}\n\n\\author{Will Sawin}\n\\address{Columbia University, USA }\n\\email{sawin@math.columbia.edu}\n\n\\date{\\today,\\ \\thistime} \n\n\\subjclass[2010]{11F55,11M41,11L07, 11T23, 32N10}\n\n\\keywords{Automorphic forms on $\\GL_3$, Fourier coefficients, Hecke\n eigenvalues, discrete Fourier transform, trace functions,\n subconvexity}\n\n\\begin{abstract}\n We prove that sums of length about $q^{3\/2}$ of Hecke eigenvalues of\n automorphic forms on~$\\SL_3(\\mathbf{Z})$ do not correlate with $q$-periodic\n functions with bounded Fourier transform. This generalizes the earlier\n results of Munshi and Holowinsky--Nelson, corresponding to\n multiplicative Dirichlet characters, and applies in particular to\n trace functions of small conductor modulo primes.\n\\end{abstract}\n\n\\thanks{Y. L., Ph.\\ M.\\ and E.\\ K.\\ were partially supported by a\n DFG-SNF lead agency program grant (grant number\n 200020L\\_175755). W. S. was partially supported by the Clay Mathematics\n Institute. 
\\today\\ \\currenttime}\n\n\\maketitle \n\n\n\\section{Introduction}\n\nLet $\\varphi$ be a cusp form for $\\SL_3(\\mathbf{Z})$ which is an eigenfunction\nof all Hecke operators.\nFor any prime number~$q$ and any primitive Dirichlet character~$\\chi$\nmodulo~$q$, we can then define the twisted $L$-function\n$L(\\varphi\\otimes\\chi,s)$, which is an entire function satisfying a\nfunctional equation relating $s$ to $1-s$.\nIn a recent breakthrough, Munshi~\\cite{Munshi,Munshi1} solved the\nsubconvexity problem for these twisted $L$-functions\n$L(\\varphi\\otimes \\chi,s)$ in the conductor aspect:\n\n\\begin{theorem}[Munshi]\\label{th-munshi}\n Let~$s$ be a complex number such that $\\Re s=1\/2$. For any\n prime~$q$, any primitive Dirichlet character~$\\chi$ modulo~$q$, and\n for any~$\\varepsilon>0$, we have\n\\begin{equation}\\label{eq:subconvex}\n L(\\varphi\\otimes \\chi,s)\\ll q^{3\/4-1\/308+\\varepsilon},\n\\end{equation}\nwhere the implied constant depends on $\\varphi$, $s$ and~$\\varepsilon$.\n\\end{theorem}\n\nThis result was recently analyzed in depth by Holowinsky and\nNelson~\\cite{HN}, who discovered a remarkable simplification (and\nstrengthening) of Munshi's ideas. They proved:\n\n\\begin{theorem}[Holowinsky--Nelson]\\label{th-hn}\n With notation and assumptions as in Theorem~\\emph{\\ref{th-munshi}},\n we have\n \\begin{equation}\\label{eq:hn}\n L(\\varphi\\otimes \\chi,s)\\ll q^{3\/4-1\/36+\\varepsilon}\n \\end{equation}\n where the implied constant depends on $\\varphi$, $s$ and~$\\varepsilon$.\n\\end{theorem}\n\n\\begin{remark}\n We mention further variants, simplifications and improvements by\n Aggarwal, Holowinsky, Lin and Sun~\\cite{AHLS}, Holowinsky, Munshi\n and Qi~\\cite{HMQ}, Lin~\\cite{Lin}, Sun and Zhao~\\cite{SZ}.\n\\end{remark}\n\nLet $(\\lambda(m,n))$ denote the Hecke-eigenvalues of~$\\varphi$. 
By\nthe approximate functional equation for the twisted $L$-functions, the\nbound \\eqref{eq:hn} is essentially equivalent to the bound\n\\begin{equation}\\label{eq:sumbound}\n \\sum_{n\\geq 1}\\lambda(1,n)\\chi(n)\n V\\Bigl(\\frac{n}{q^{3\/2}}\\Bigr)\\ll q^{3\/2-\\delta},\n\\end{equation}\nfor~$\\delta<1\/36$, where $V$ is any smooth compactly supported\nfunction and the implied constant depends on~$\\varphi$, $\\delta$\nand~$V$.\n\nFrom the perspective of such sums, motivated by the previous work of\nFouvry, Kowalski and Michel~\\cite{FKM1}, which relates to automorphic\nforms on~$\\GL_2$, it is natural to ask whether this\nbound~\\eqref{eq:sumbound} holds when $\\chi$ is replaced by a more\ngeneral trace function $K:{\\mathbf{F}_q}\\to \\mathbf{C}$. Our main result shows that this\nis the case, and in fact extends the result to a much wider range of\n$q$-periodic functions by obtaining estimates only in terms of the\nsize of the discrete Fourier transform modulo~$q$.\n\nPrecisely, for any function~$V$ with compact support on~$\\mathbf{R}$, we set\n\\begin{equation}\\label{defSKX}\n S_{V}(K,X):=\\sum_{n\\geq 1}\\lambda(1,n)K(n)V\\Bigl(\\frac{n}{X}\\Bigr).\n\\end{equation}\n\nWe will assume that $V:\\mathbf{R}\\to \\mathbf{C}$ satisfies the following conditions\nfor some parameter~$Z\\geq 1$:\n\\begin{equation}\\label{eq:Vprop}\n \\mathrm{supp}(V)\\subset ]1,2[,\\text{ and }V^{(i)}(x)\\ll Z^i\\text{\n for all $i\\geq 0$},\n\\end{equation}\nwhere the implied constant depends only on~$i$.\n\nFor any integer~$q\\geq 1$ and any $q$-periodic function\n$K\\colon \\mathbf{Z}\\to \\mathbf{C}$, we denote by\n\\begin{equation}\\label{eq:fourierK}\n \\widehat K(n)=\\frac{1}{q^{1\/2}}\\sum_{x\\in{\\mathbf{F}_q}}K(x)e\\Bigl(\\frac{nx}{q}\\Bigr),\n\\end{equation}\nfor~$n\\in\\mathbf{Z}$, its (unitarily normalized) discrete Fourier transform\nmodulo~$q$. We write~$\\norm{\\widehat{K}}_{\\infty}$ for the maximum of\n$|\\widehat{K}(n)|$ for~$n\\in\\mathbf{Z}$. 
We then have the discrete Fourier\ninversion formula\n$$\nK(x)=\\frac{1}{q^{1\/2}}\\sum_{n\\in{\\mathbf{F}_q}} \\widehat{K}(n)\ne\\Bigl(-\\frac{nx}{q}\\Bigr)\n$$\nfor~$x\\in\\mathbf{Z}$.\n\nOur main result is a general bound for \\eqref{defSKX} which matches\nprecisely the bound of Holowinsky--Nelson \\cite{HN} in the case of a\nmultiplicative character:\n\n\\begin{theorem}\\label{thm:main}\n Let $\\varphi$ be an $\\SL_3(\\mathbf{Z})$-invariant cuspidal Hecke-eigenform\n with Hecke eigenvalues $(\\lambda(m,n))$. Let~$q$ be a prime number,\n and $K\\colon \\mathbf{Z}\\to\\mathbf{C}$ be a $q$-periodic function.\n \n \n Let $V$ be a smooth, compactly supported function satisfying\n \\eqref{eq:Vprop} for some $Z\\geq 1$. Assume that\n$$\n Z^{2\/3}q^{4\/3}\\leq X \\leq Z^{-2}q^{2}.\n$$\t\n For any~$\\varepsilon>0$, we have\n \\begin{equation}\\label{eq:sumboundK}\n S_V(K,X) \\ll\n \\norm{\\widehat{K}}_{\\infty}Z^{10\/9}q^{2\/9+\\varepsilon}X^{5\/6},\n \\end{equation}\n where the implied constant depends only on~$\\varepsilon$, on~$\\varphi$, and on\n the implicit constants in~\\emph{(\\ref{eq:Vprop})}.\n\\end{theorem}\n\n\\begin{remark}\n (1) Suppose that we vary~$q$ and apply this bound with functions~$K$\n modulo~$q$ that have absolutely bounded Fourier transforms. Take\n $X=q^{3\/2}$. 
We then obtain the bound\n $$\n S_V(K,q^{3\/2}) \\ll Z^{10\/9}q^{3\/2-1\/36+\\varepsilon}\n $$\n for any~$\\varepsilon>0$.\n \\par\n (2) For the bound \\eqref{eq:sumboundK} to be non-trivial (i.e.,\n assuming~$K$ to be absolutely bounded, better than $X$), it is\n enough that\n $$\n X\\geq Z^{20\/3}q^{4\/3+\\delta}\n $$\n for some~$\\delta>0$.\n \\par\n (3) As in the paper~\\cite{short-sums} of Fouvry, Kowalski, Michel,\n Raju, Rivat and Soundararajan, where the main estimate is also\n phrased in Fourier-theoretic terms only,\\footnote{Although the size\n of~$K$ enters in~\\cite{short-sums} as well as that of its Fourier\n transform.} the motivating examples of functions~$K$ satisfying\n uniform bounds on their Fourier transforms are the trace functions\n of suitable $\\ell$-adic sheaves modulo~$q$. The simplest example\n is~$K(n)=\\chi(n)$, which recovers the bound of Munshi (up to the\n value of the exponent) and Holowinsky--Nelson, since the values of\n the Fourier transform are normalized Gauss sums of modulus~$\\leq\n 1$. We recall some other examples below in\n Section~\\ref{sec-examples}.\n\\end{remark}\n\nWe can deduce from Theorem~\\ref{thm:main} a weak but non-trivial\nbound for the first moment of the twisted central $L$-values, with an\nadditional twist by a discrete Mellin transform. We first recall the\ndefinition\n$$\n\\Kl_3(n)=\\frac{1}{q}\\sum_{\\substack{x,y,z\\in{\\mathbf{F}^\\times_q}\\\\xyz=n}}\ne\\Bigl(\\frac{x+y+z}{q}\\Bigr)\n$$\nfor a hyper-Kloosterman sum with two variables modulo a prime~$q$.\n\n\\begin{corollary}\\label{cor-average}\n Let $\\varphi$ be an $\\SL_3(\\mathbf{Z})$-invariant cuspidal Hecke-eigenform\n with Hecke eigenvalues $(\\lambda(m,n))$. 
Let~$q$ be a prime number\n and let~$\\chi\\mapsto M(\\chi)$ be a function of Dirichlet characters\n modulo~$q$.\n \\par\n Let~$K$ and~$L$ be the $q$-periodic functions defined\n by~$K(0)=L(0)=0$ and\n \\begin{align*}\n K(n)&=\\frac{q^{1\/2}}{q-1}\\sum_{\\chi\\mods{q}}\\chi(n)M(\\chi)\\\\\n L(n)&=\\frac{1}{q^{1\/2}} \\sum_{x\\in{\\mathbf{F}_q}}K(x)\\Kl_3(nx)\n \\end{align*}\n for~$n$ coprime to~$q$. We then have\n $$\n \\frac{1}{q-1} \\sum_{\\chi\\mods{q}}M(\\chi)L(\\varphi\\otimes\\chi, 1\/2) \\ll\n \\Bigl(\\norm{\\widehat{K}}_{\\infty}+\\norm{\\widehat{L}}_{\\infty}\\Bigr) q^{2\/9\n + \\varepsilon},\n $$\n for any~$\\varepsilon>0$, where the implied constant depends\n on~$\\varphi$ and~$\\varepsilon$.\n\\end{corollary}\n\n\n\nA further natural application concerns the symmetric square lift, $\\mathrm{sym}_2(\\psi)$, of a\n$\\GL_2$-cusp form of level~$1$. Precisely, let $\\psi$ be a cuspidal\nHecke-eigenform for~$\\SL_2(\\mathbf{Z})$ with Hecke eigenvalues\n$(\\lambda(n))_{n\\geq 1}$. Theorem~\\ref{thm:main} then implies the following:\n\\begin{corollary}\\label{cor-gl2}\n Let $K$ and $V$ be as above and assume that\n $Z^{2\/3}q^{4\/3}\\leq X \\leq Z^{-2}q^{2}$. Then, for any~$\\varepsilon>0$, we\n have\n $$\n \\sum_{n\\geq 1}\\lambda(n^2)K(n)V\\Bigl(\\frac{n}{X}\\Bigr) \\ll\n \\norm{\\widehat{K}}_{\\infty}Z^{10\/9}q^{2\/9+\\varepsilon}X^{5\/6}+\n \\norm{K}_{\\infty}Z^{1\/3}q^{2\/3}X^{1\/2+\\varepsilon},\n $$\n where the implied constant depends only on~$\\varepsilon$, on~$\\psi$, and on\n the implicit constants in~\\emph{(\\ref{eq:Vprop})}.\n\\end{corollary}\n\\begin{remark}\\label{blomerremark} As pointed out to us by V. 
Blomer, when $K=\\chi$ is a Dirichlet character, a stronger bound should be available: for $\\chi$ quadratic, one has (see \\cite{Blomer}) the stronger subconvex bound for the central value\n\\begin{equation}\\label{blomerbound}\n\tL(\\mathrm{sym}_2(\\psi)\\otimes\\chi,s)\\ll_s q^{3\/4-1\/8+o(1)},\\ \\Re s=1\/2.\n\\end{equation}\nThis would amount to a bound of the shape\n$$\n \\sum_{n\\geq 1}\\lambda(n^2)\\chi(n)V\\Bigl(\\frac{n}{q^{3\/2}}\\Bigr) \\ll_Z\n q^{3\/2-1\/8+\\varepsilon}.\n $$\nThe bound \\eqref{blomerbound} actually extends to any character $\\chi\\mods q$ by the same method, using the Petrow--Young variant of the Conrey--Iwaniec method \\cite{CI,PY}. However, since this approach uses positivity of central values, it is not entirely clear yet whether this could be extended to general trace functions.\n\\end{remark}\n\n\nFrom this corollary, one can easily derive an estimate for twists of\nthe arithmetic function~$\\lambda(n)^2=|\\lambda(n)|^2$, which is related\nto~$\\lambda(n^2)$ by the convolution identity\n\\begin{equation}\\label{convol}\n \\lambda(n)^2=\\sum_{ab=n}\\lambda(a^2).\n\\end{equation}\n\nHowever, in terms of $L$-functions, a straightforward estimate\nconcerns sums of length close to~$q^2$, and not~$q^{3\/2}$ anymore (it\namounts, when~$K=\\chi$, to a subconvexity estimate\nfor~$L(f\\otimes f\\otimes \\chi,{\\textstyle{\\frac{1}{2}}})$, which results directly from the\nfactorization of this $L$-function of degree~$4$).\n\nOne can however recover a bound for sums of length about~$q^{3\/2}$\nwith more work, and here we require that $K$ be a trace function (more\nprecisely, a \\emph{non-exceptional} trace function, in the sense\nof~\\cite[p. 1686]{FKM2}).\n\n\n\\begin{corollary}\\label{cor2-gl2}\\label{RScor}\n Let $V$ be as above. Let $K$ be the trace function of an\n $\\ell$-adic sheaf $\\mathcal{F}$ modulo~$q$ which is a geometrically\n irreducible middle-extension sheaf, pure of weight~$0$, on the\n affine line over~$\\mathbf{F}_q$. 
Assume that the sheaf $\mathcal{F}$ is not
 geometrically isomorphic to the tensor product
 $\mathcal{L}_\psi\otimes\mathcal{L}_\chi$ of an Artin-Schreier sheaf and a Kummer
 sheaf, and let~$\gamma>0$ be a parameter.
 \par
 If $Z^{-4\/3}q^{4\/3+8\gamma\/3} \leq X \leq Z^{-2}q^{2}$, then we have
 $$
 \sum_{n\geq 1}\lambda(n)^2K(n)V\Bigl(\frac{n}{X}\Bigr) \ll
 X^{2\/3+\varepsilon}q^{1\/3}+Z^{5\/6}X^{7\/8+\varepsilon}q^{1\/6}+X^{1+\varepsilon}q^{-\gamma}
 $$
 for any~$\varepsilon>0$, where the implied constant depends only
 on~$\psi$, $\varepsilon$ and on the conductor~$\cond(\mathcal{F})$ of~$\mathcal{F}$.
\end{corollary}

\begin{remark}
 (1) Suppose that~$Z$ is fixed. The estimate is then non-trivial as
 long as $X\gg q^{4\/3+\delta}$; for $X=q^{3\/2}$, it saves a factor
 $q^{1\/48}$ over the trivial bound.
 \par
 (2) The assumption that~$\mathcal{F}$ is not exceptional means intuitively
 that $K$ is not proportional to the product of an additive and a
 multiplicative character modulo~$q$. We then have in particular
 $$
 \|K\|_{\infty}+\|\widehat{K}\|_{\infty}\ll 1
 $$
 where the implied constant depends only on the conductor of~$\mathcal{F}$.
\end{remark}

\begin{remark}\label{remcor17}
 (1) The reader may wonder why this paper is much shorter
 than~\cite{FKM1}, and (with the exception of Corollary \ref{RScor})
 requires much less input from algebraic geometry in the case of
 trace functions. One reason is that we are considering (essentially)
 sums of length~$q^{3\/2}$ whereas the coefficient functions~$K$ are
 $q$-periodic.
This means that periodicity properties of the
 summand~$K(n)$ have a non-trivial effect, whereas they do not for
 the sums of length about~$q$ which are considered in~\cite{FKM1} in
 the context of~$\GL_2$.
 \par
 Moreover, observe that an analogue of Theorem~\ref{thm:main}, with
 an estimate that depends (in terms of~$K$) only on the size of the
 Fourier transform~$\widehat{K}$, is \emph{false} in the setting
 of~\cite{FKM1}, i.e., for sums
 $$
 \sum_{n\geq 1}\lambda(n)K(n)V\Bigl(\frac{n}{X}\Bigr)
 $$
 with~$X$ of size about~$q$, where $\lambda(n)$ are the
 Hecke-eigenvalues of a cusp form~$\psi$ for $\SL_2(\mathbf{Z})$ (as in
 Corollary~\ref{cor-gl2}). Indeed, if we take $X=q$ and define~$K$ to
 be the $q$-periodic function that coincides with the (real-valued)
 function $n\mapsto \lambda(n)$ for $1\leq n\leq q$, then~$K$ has
 discrete Fourier transform of size~$\ll \log q$ by the well-known
 Wilton estimate (see, e.g.,~\cite[Th. 5.3]{iwaniec}, when~$\psi$ is
 holomorphic), and yet
 $$
 \sum_{n\leq q}K(n)\lambda(n)=\sum_{n\leq q}|\lambda(n)|^2\asymp q
 $$
 by the Rankin--Selberg method.
 \par
 On the other hand, the same bound of Wilton combined with discrete
 Fourier inversion implies quickly that if~$K$ is any $q$-periodic
 function, then
 $$
 \sum_{n\leq q^{3\/2}}\lambda(n) K(n)\ll
 q^{1+1\/4+\varepsilon}\norm{\widehat{K}}_{\infty}
 $$
 for any~$\varepsilon>0$. However, the natural length for applications
 is~$q$ in the $\GL_2$ case.
 \par
 (2) The most obvious function $K$ for which Theorem~\ref{thm:main}
 gives trivial results is an additive character $K(n)=e(an\/q)$ for
 some integer~$a\in\mathbf{Z}$, since the Fourier transform takes one value
 of size~$q^{1\/2}$.
However, a useful estimate also exists in this
 case: Miller~\cite{Miller} has proved that
 $$
 \sum_{n\geq 1}\lambda(1,n)e(\alpha
 n)V\Bigl(\frac{n}{X}\Bigr)\ll_{\varphi,Z} X^{3\/4+\varepsilon}
 $$
 for~$X\geq 2$, any~$\alpha\in\mathbf{R}$ and any~$\varepsilon>0$, where the
 implied constant is independent of~$\alpha$. This is the
 generalization to~$\GL_3$ of the bound of Wilton mentioned in the
 first remark.
 \par
 (3) Using either the functional equation for the $L$-functions
 $L(\varphi\otimes\chi,s)$, or the Voronoi summation formula, one can
 show that the estimate of Miller implies a bound of the shape
 $$
 S_V(\Kl_2(a\cdot;q),X)\ll_{\varphi,Z} (qX)^{\varepsilon}X^{1\/4}q^{3\/4}
 $$
 for any~$\varepsilon>0$, where
 $$
 \Kl_2(n;q)=\frac{1}{q^{1\/2}}\sum_{x\in{\mathbf{F}^\times_q}}e_q(\ov x+nx)
 $$ 
 is a normalized Kloosterman sum. This bound is non-trivial as long
 as $X\geq q$. Since~$\Kl_2$ is a trace function that is bounded
 by~$2$ and has Fourier transform bounded by~$1$, this gives (in a
 special case) a stronger bound than what follows from
 Theorem~\ref{thm:main}.
 \par
 (4) Remark (2) suggests a direct approach by the discrete Fourier
 inversion formula, which gives
 $$
 \sum_{n\leq X}\lambda(1,n)K(n)=\frac{1}{\sqrt{q}} \sum_{0\leq h<q}
 \widehat{K}(h)\sum_{n\leq X}\lambda(1,n)e_q(hn),
 $$
 and hence, by Miller's estimate applied to each inner sum, a bound
 of the shape $\norm{\widehat{K}}_{\infty}q^{1\/2}X^{3\/4+\varepsilon}$ for
 any~$\varepsilon>0$. Note that, under the generalized
 Ramanujan--Petersson conjecture $\lambda(1,n)\ll n^{\varepsilon}$, we would
 obtain the stronger bound $q^{1\/2+\varepsilon}$ (and knowing the
 approximation~$\lambda(1,n)\ll n^{\theta}$ for some $\theta<1\/3$ would
 be enough to get a non-trivial bound).
We discuss this case in
further detail in Remark \ref{lastremark}, in the context of
Corollary \ref{cor-average}.

\section{Preliminaries}\label{sec-reminders}

\subsection{A Fourier-theoretic estimate}

A key estimate in Section~\ref{sec:O} will arise from the following
general bound (special cases of which have appeared before, e.g. in
the case of multiplicative characters for problems concerning sums
over sumsets).
 
\begin{proposition}\label{willprop}
 Let $A$ be a finite abelian group, with group operation denoted
 additively. Let $\alpha$, $\beta$ and $K$ be functions from~$A$
 to~$\mathbf{C}$. We have
 $$
 \Bigl|\sum_{m,n\in A}\alpha(m)\beta(n)K(m-n)\Bigr|\leq
 |A|^{1\/2}\|\widehat K\|_\infty \|\alpha\|_2 \|\beta \|_2.
 $$
\end{proposition}

\begin{proof}
 Using orthogonality of characters, we write
 $$
 \sum_{m,n\in A}\alpha(m)\beta(n) K(m-n)=
 \sum_{m,n,h\in A}\alpha(m)\beta(n) K(h)\frac{1}{|A|}
 \sum_{\psi\in\widehat{A}}\psi(h-(m-n)).
 $$
 Moving the sum over~$\psi$ to the outside, this is equal to
 $$
 |A|^{1\/2}\sum_{\psi\in\widehat{A}}
 \widehat{\alpha}(\psi^{-1})\widehat{\beta}(\psi)\widehat{K}(\psi),
 $$
 whose absolute value is
 $$
 \leq |A|^{1\/2}\norm{\widehat{K}}_{\infty}
 \sum_{\psi\in\widehat{A}}|\widehat{\alpha}(\psi^{-1})\widehat{\beta}(\psi)|
 \leq |A|^{1\/2}
 \|\widehat K\|_\infty \|\alpha\|_2 \|\beta \|_2,
 $$
 by the Cauchy--Schwarz inequality and the discrete Plancherel
 formula.
\end{proof}

\subsection{Background on $\GL_3$-cusp forms}

We refer to \cite[Chap. 6]{Goldfeld} for notations. Let $\varphi$ be a
cusp form on~$\GL_3$ with level~$1$ and with Langlands parameters
$\mu=(\mu_1,\mu_2,\mu_3)\in\mathbf{C}^3$.
We denote by
$(\lambda(m,n))_{m,n\not=0}$ its Fourier--Whittaker coefficients, and
assume that
$\varphi$ is an eigenform of the Hecke operators $T_n$ and $T_n^*$,
normalized so that $\lambda(1,1)=1$. The eigenvalue of~$T_n$ is
then~$\lambda(1,n)$ for~$n\geq 1$.

Let $\theta_3=5\/14$. The archimedean parameters and the Hecke
eigenvalues are bounded individually by
$$
|\Re(\mu_{i})|\leq \theta_3,\quad\quad |\lambda(1,p)|\leq 3p^{\theta_3}
$$
for any~$i$ and any prime number~$p$ (see~\cite{KimSar}).

Average estimates follow from the Rankin--Selberg method. We have
\begin{equation}
 \label{eq:RS}
 \sum_{1\leq n\leq X}|\lambda(1,n)|^2\ll X^{1+\varepsilon},
\end{equation}
and
\begin{equation}
 \label{eq-RS2}
 \sum_{1\leq m^2n\leq X}m|\lambda(m,n)|^2\ll X^{1+\varepsilon},
\end{equation}
for $X\geq 2$ and any $\varepsilon>0$, where the implied constant depends
only on $\varphi$ and $\varepsilon$. (See~\cite{Molteni} and~\cite[Lemma 2]{Munshi1}.)

The key analytic feature of $\GL_3$-cusp forms that we use (as in
previous works) is the Voronoi summation formula for~$\varphi$
(originally due to Miller--Schmid, and Goldfeld--Li
independently). Since our use of the ``archimedean'' part of the
formula is quite mild, we use the same compact formulation as
in~\cite[\S 2.3]{HN}, where references are given.

Let~$q\geq 1$ be an integer (not necessarily prime).
For $n\\in\\mathbf{Z}$, we\ndenote\n$$\n\\Kl_2(n;q)=\\frac{1}{\\sqrt{q}}\n\\sum_{x\\in (\\mathbf{Z}\/q\\mathbf{Z})^{\\times}}\ne\\Bigl(\\frac{nx+\\bar{x}}{q}\\Bigr)\n$$\nwhere~$\\bar{x}$ is the inverse of~$x$ modulo~$q$.\n\n\n\\begin{lemma}[Voronoi summation formula]\\label{Voronoi}\n For $\\sigma\\in\\{-1,1\\}$, there exist functions $\\mathcal{G}^{\\sigma}$,\n meromorphic on~$\\mathbf{C}$, holomorphic for~$\\Re(s)>\\theta_3$, with\n polynomial growth in vertical strips~$\\Re(s)\\geq \\alpha$ for\n any~$\\alpha>\\theta_3$, such that the following properties hold.\n \\par\n Let $a$ and~$q\\geq 1$ be coprime integers, let $X>0$, and let $V$ be\n a smooth function on~$]0,+\\infty[$ with compact support.\n \n We have\n $$\n \\sum_{n\\geq 1}\\lambda(1,n)e_q(an)V\\Bigl(\\frac{n}{X}\\Bigr) = q^{3\/2}\n \\sum_{\\sigma\\in\\{-1,1\\}} \\sum_{n\\geq 1} \\sum_{m\\mid q}\n \\frac{\\lambda(n,m)}{nm^{3\/2}} \\Kl_2\\Bigl(\\sigma n\\ov\n a;\\frac{q}{m}\\Bigr) \\mathcal{V}_{\\sigma}\\Bigl(\\frac{m^2n}{q^3\/X}\\Bigr),\n $$\n where \n $$\n \\mathcal{V}_{\\sigma}(x)= \\frac{1}{2\\pi i}\\int_{(1)}x^{-s}\\mathcal{G}^{\\sigma} (s+1)\n \\Bigl(\\int_{0}^{+\\infty}V(y)y^{-s}\\frac{dy}{y}\\Bigr)ds.\n $$\n\\end{lemma}\n\nNote that the functions~$\\mathcal{G}^{\\sigma}$ depend (only) on the\narchimedean parameters of~$\\varphi$. We record some properties of the\nfunctions $\\mathcal{V}_{\\sigma}(x)$; for~$Z$ fixed they are already explained\nin~\\cite[\\S 2.3]{HN}.\n\n\\begin{lemma}\\label{bounds-for-V}\n Let $\\sigma\\in\\{-1,1\\}$. 
For any $j\\geq 0$, any $A\\geq 1$ and\n any~$\\varepsilon>0$, we have\n$$\nx^j\\mathcal{V}_{\\sigma}^{(j)}(x)\\ll \\min\\Bigl( Z^{j+1}x^{1-\\theta_3-\\varepsilon},\nZ^{j+5\/2+\\varepsilon}\\Bigl(\\frac{Z^3}{x}\\Bigr)^A\\Bigr)\n$$\nfor~$x>0$, where the implied constant depends on~$(j,A,\\varepsilon)$.\nMoreover, for $x\\geq 1$, we have\n$$\nx^j\\mathcal{V}_{\\sigma}^{(j)}(x)\\ll x^{2\/3}\\min(Z^j, x^{j\/3})\n$$\nwhere the implied constant depends on~$j$.\n\\end{lemma}\n\n\\begin{proof}\n The first inequality in the first bound follows by shifting the\n contour in $\\mathcal{V}_{\\pm}(x)$ to $\\Re s=\\theta_3-1+\\varepsilon$, while the\n second one follows by shifting contour to the far right. The second\n bound follows from \\cite[Lemma 6]{Blomer}.\n\\end{proof}\n\nIn particular, we see from the lemma that the functions\n$\\mathcal{V}_{\\sigma}(x)$ decay very rapidly as soon as\n$x\\geq X^{\\delta}Z^{3}$ for some~$\\delta>0$.\n\n\\begin{remark}\n The bound $x^j\\mathcal{V}_{\\sigma}^{(j)}(x)\\ll Z^{j+1}x^{1-\\theta_3-\\varepsilon}$\n can be replaced by $x^j\\mathcal{V}_{\\sigma}^{(j)}(x)\\ll Z^{j}x^{1-\\varepsilon}$,\n under the Ramanujan-Selberg conjecture, i.e., if $\\Re (\\mu_i)=0$ for\n all~$i$.\n\\end{remark}\n\n\\begin{remark}\n Let~$N\\geq 1$, and define a congruence\n subgroup~$\\Gamma_N\\subset \\SL_3(\\mathbf{Z})$ by\n $$\n \\Gamma_N=\\Bigl\\{\\gamma\\in\\SL_3(\\mathbf{Z})\\,\\mid\\, \\gamma\\equiv\\begin{pmatrix}\n *&*&*\\\\ *&*&*\\\\0&0&*\n \\end{pmatrix}\\mods N\n \\Bigr\\}.\n $$\n Zhou~\\cite{FZ} has established an explicit Voronoi summation\n formula for $\\GL_3$-cuspforms that are invariant under~$\\Gamma_N$,\n \n \n \n \n \n \n \n \n for additive twists by~$e_q(an)$ when either $(q,N)=1$ or $N\\mid\n q$. 
It should then be possible to use this formula to generalise
 Theorem~\ref{thm:main} to such cuspforms by slight adaptations of
 the argument below.
\end{remark}

\section{Amplification of the trace function}

We now begin the proof of Theorem~\ref{thm:main}. Let $q$ be a prime
number and $K$ a $q$-periodic function on $\mathbf{Z}$. Let $\widehat{K}$ be its
discrete Fourier transform~(\ref{eq:fourierK}), which is also a
$q$-periodic function on~$\mathbf{Z}$. 

Let $P,L\geq 1$ be two parameters to be chosen later, with $2P<q$
and $2L<q$. We denote by~$\mathrm{P}$ (resp.~$\mathrm{L}$) the set of primes in
the interval~$]P,2P]$ (resp.~$]L,2L]$), so that the elements
of~$\mathrm{P}$ and~$\mathrm{L}$ are coprime to~$q$. For $n$, $t\in\mathbf{Z}$, we
define
$$
K(n,t)=\frac{1}{q^{1\/2}}\sum_{(u,q)=1}\widehat{K}(u)e_q(-nu+t\ov{u}),
$$
so that $K(n,0)=K(n)-q^{-1\/2}\widehat{K}(0)$ by discrete Fourier
inversion. Let $W$ be a smooth non-negative function on~$\mathbf{R}$,
compactly supported in $[1,2]$, with $\widehat{W}(0)=1$,
where~$\widehat{W}$ denotes its Fourier transform on~$\mathbf{R}$, and let
\begin{equation}\label{eq-H}
H=\frac{q^2L}{XP}.
\end{equation}
Following~\cite{HN} and~\cite{Lin}, we consider the averaged sum
$$
\mathcal{F}= \frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\in\mathbf{Z}}\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr).
$$
We then have the decomposition
\begin{equation}\label{an-identity}
S_V(K,X)=\mathcal{F}-\mathcal{O}
+O\Bigl(\frac{\norm{\widehat{K}}_{\infty}X^{1+\varepsilon}}{q^{1\/2}}\Bigr)
\end{equation}
for any~$\varepsilon>0$, where
\begin{equation}\label{eq-m2}
\mathcal{O}= \frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
\sum_{h\neq 0}\widehat{W}\Bigl(\frac{h}{H}\Bigr)
\sum_{n\geq 1}\lambda(1,n)K(n,hp\ov l)V\Bigl(\frac{n}{X}\Bigr).
\end{equation}
Indeed, the contribution of $h=0$ is
\begin{align*}
 \frac{1}{|\mathrm{P}||\mathrm{L}|}\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
 S_{V}(K(\cdot,0),X)\widehat{W}(0)
 &=S_{V}(K,X)-
 \frac{\widehat K(0)}{|\mathrm{P}||\mathrm{L}|q^{1\/2}}
 \sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}}
 \sum_{n\geq 1}\lambda(1,n)V\Bigl(\frac{n}{X}\Bigr)\\
 &=S_{V}(K,X)+O\Bigl(\frac{\norm{\widehat{K}}_{\infty}X^{1+\varepsilon}}{q^{1\/2}}\Bigr),
\end{align*}
for any $\varepsilon>0$, by \eqref{eq:RS}.

\section{Evaluation of $\mathcal{F}$}\label{sec:Fsum}

The evaluation of~$\mathcal{F}$ is close to the arguments of~\cite{HN}
and~\cite[\S 6]{Lin}. In fact, we could extract the desired bounds
from these sources (especially~\cite{Lin}) in the important special
case when the parameter~$Z$ is fixed as~$q$ varies. The reader who is
familiar with one of these references may therefore wish to skip the
proof of the next proposition in a first reading.

\begin{proposition}\label{proposition-for-F}
Let $\eta>0$.
Assume that 
\begin{equation}
 \label{XZlower}
 X\/Z\geq q^{1+\eta}
\end{equation}
and
\begin{equation}\label{boundsPL}
L\leq P^4.
\end{equation}
Then for any $\varepsilon>0$, we have
$$
\mathcal{F}\ll q^{\varepsilon}\norm{\widehat{K}}_{\infty}
\Bigl(\frac{Z^{2}X^{3\/2}P}{qL^{1\/2}}+Z^{3\/2}X^{3\/4}(qPL)^{1\/4}\Bigr),
$$
where the implied constant depends on $\varphi$, $\varepsilon$ and $\eta$.
\end{proposition}

The remainder of this section is dedicated to the proof of this
proposition. We fix $\eta>0$ such that~(\ref{XZlower}) holds.
 
We apply the Poisson summation formula to the sum over $h$ in
$\mathcal{F}$, for each $(p,l)$. We obtain
$$
\sum_{h\in \mathbf{Z}}K(n,hp\ov
l)\widehat{W}\Bigl(\frac{h}{H}\Bigr)=\frac{H}{q^{1\/2}}\sum_{(r,q)=1}\widehat
K(-p\ov l\ov r)e_q(np\ov l\ov r)W\Bigl(\frac{r}{R}\Bigr),
$$
where
\begin{equation}\label{eq-R}
R=q\/H=\frac{XP}{qL}.
\end{equation}
Hence it follows that
$$
\mathcal{F}= \frac{q^{3\/2}L}{XP|\mathrm{P}||\mathrm{L}|}
\sum_{p\in\mathrm{P}}\sum_{l\in\mathrm{L}} \sum_{(r,q)=1}\widehat K(-p\ov l\ov r)
\sum_{n\geq 1}e_q(n p\ov l\ov r) \lambda(1,n)V\Bigl(\frac{n}{X}\Bigr)
W\Bigl(\frac{r}{R}\Bigr).
$$

Since $l\leq 2L<q$ and $(r,q)=1$, we may apply, for $(p,rl)=1$, the
reciprocity law
$$
e_q(np\ov l\ov r)=e_{rl}(-np\ov q)\,e\Bigl(\frac{np}{qrl}\Bigr),
$$
and we set $V_1(y)=V(y)e(yXp\/(qrl))$; since
$Xp\/(qrl)\asymp XP\/(qRL)=1$, the function~$V_1$
satisfies~(\ref{eq:Vprop}) with~$Z$ replaced by an absolute constant
multiple of~$Z$. We denote by~$\mathcal{F}'$ the contribution to~$\mathcal{F}$ of
the triples~$(p,l,r)$ with $(p,rl)=1$; the contribution of the
remaining triples, for which $p\mid rl$, is
$\ll\norm{\widehat{K}}_{\infty}q^{-1+\varepsilon}$ for
any~$\varepsilon>0$ (see~\cite[\S 6]{Lin} for a similar computation,
where such contribution is denoted $\mathcal{F}_1^{\sharp}$).

Now let $p$ be such that $(p,rl)=1$.
By the Voronoi summation\nformula (Lemma \\ref{Voronoi}), we have\n$$\n\\sum_{n\\geq 1}\\lambda(1,n)e_{rl}(-np\\ov q)\nV_1\\Bigl(\\frac{n}{X}\\Bigr)=(rl)^{3\/2} \\sum_{\\sigma\\in\\{-1,1\\}}\n\\sum_{n\\geq 1} \\sum_{m\\mid rl} \\frac{\\lambda(n,m)}{nm^{3\/2}}\n\\Kl_2(\\sigma\\ov\npqn;rl\/m)\\mathcal{V}_{1,\\sigma}\n\\Bigl(\\frac{m^2n}{r^3l^3\/X}\\Bigr).\n$$\nTherefore $\\mathcal{F}'=\\mathcal{F}'_{1}+\\mathcal{F}'_{-1}$, where\n\\begin{multline*}\n \\mathcal{F}'_{\\sigma}\n =\\frac{q^{3\/2}L}{XP|\\mathrm{P}||\\mathrm{L}|}\\sum_{p\\in\\mathrm{P}}\\sum_{l\\in\\mathrm{L}}\n \\sum_{r\\geq 1}\\widehat K(-p\\ov l\\ov\n r)\n W\\Bigl(\\frac{r}{R}\\Bigr)(rl)^{3\/2}\n \\\\\n \\sum_{n\\geq 1}\\sum_{m\\mid rl}\n \\frac{\\lambda(n,m)}{nm^{3\/2}}\\Kl_2(\\sigma \\ov\n pqn;rl\/m)\\mathcal{V}_{1,\\sigma}\\Bigl(\\frac{m^2n}{r^3l^3\/X}\\Bigr).\n\\end{multline*}\nWe re-arrange the sums to get\n\\begin{multline*}\n \\mathcal{F}'_{\\sigma} =\\frac{(qRL)^{3\/2}L}{XP|\\mathrm{P}||\\mathrm{L}|}\n \\sum_{r\\geq 1}\\Bigl(\\frac{r}{R}\\Bigr)^{3\/2}\n W\\Bigl(\\frac{r}{R}\\Bigr) \\sum_{n,m}\n \\frac{\\lambda(n,m)}{\\sqrt{nm}} \\\\\n \\sum_{p\\in\\mathrm{P}}\\sum_{\\substack{l\\in\\mathrm{L}\\\\m\\mid rl}}\n \\frac{(l\/L)^{3\/2}}{\\sqrt{n}m}\\widehat K(-p\\ov l\\ov r)\\Kl_2(\\sigma \\ov\n pqn;rl\/m)\\mathcal{V}_{1,\\sigma}\\Bigl(\\frac{m^2n}{r^3l^3\/X}\\Bigr).\n\\end{multline*}\n\nLet~$\\delta>0$ be a small parameter. 
For fixed $r$ and $l$, using the
bounds from Lemma \ref{bounds-for-V} with a suitably large value
of~$A$, the contribution to the sum over~$m$ and~$n$ of $(m,n)$
such that
\begin{equation}\label{eq-truncate}
 m^2n\geq q^{\delta}\frac{Z^3(rl)^3}{X}\asymp \frac{q^{\delta}Z^3X^2P^3}{q^3}
\end{equation}
is $\ll \norm{\widehat{K}}_{\infty}q^{-10}$ (say).
\par
To handle the remaining part of the sum, we apply the Cauchy--Schwarz
inequality to the sum over $(m,n)$, and we obtain
\begin{equation}\label{eq-mcfsigma}
 \mathcal{F}'_{\sigma} \ll \frac{(qRL)^{3\/2}L}{XP|\mathrm{P}||\mathrm{L}|}
 \Bigl(\sum_{r\sim R} \sum_{\substack{n,m\geq 1\\m^2n<
 q^{\delta}Z^3X^2P^3\/q^3}}
 \frac{|\lambda(n,m)|^2}{nm}\Bigr)^{1\/2}\mathcal{N}_{\sigma}^{1\/2}
 +\norm{\widehat{K}}_{\infty}q^{-1},
\end{equation}
where
\begin{multline*}
 \mathcal{N}_{\sigma}= \sum_{r,m\geq 1} W\Bigl(\frac{r}{R}\Bigr)
 \frac{1}{m^2}
 \sumsum_{\substack{p_1,p_2,l_1,l_2\\p_i\in\mathrm{P},l_i\in\mathrm{L}\\m\mid
 (rl_1,rl_2)}} \Bigl(\frac{l_1l_2}{L^2}\Bigr)^{3\/2}
 \widehat K(-p_1\ov l_1\ov r)\ov{\widehat K(- p_2\ov l_2\ov r)}\\
 \times\sum_{n\geq 1}\frac{1}{n}\Kl_2(\sigma \ov
 p_1qn;rl_1\/m)\ov{\Kl_2(\sigma \ov p_2qn;rl_2\/m)}
 \mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l_1^3\/X}\Bigr)
 \ov{\mathcal{V}_{1,\sigma}\Bigl(\frac{m^2n}{r^3l_2^3\/X}\Bigr)}.
\end{multline*}
We will prove the bound
\begin{equation}\label{Nsigmabound}
\mathcal{N}_{\sigma}\ll
q^{\varepsilon}\norm{\widehat{K}}_{\infty}^2
\Bigl(
Z^4RPL
+\frac{Z^3R^{3\/2}q^3L^3}{X^2P}\Bigr)
\end{equation}
for any~$\varepsilon>0$.
If we select~$\delta>0$ small enough in terms of~$\varepsilon$, then by 
the Rankin--Selberg bound~(\ref{eq-RS2}), we deduce that
\begin{equation}\label{eq-fpsigma}
\mathcal{F}'_{\sigma} \ll
\\frac{q^{3\/2+\\varepsilon}Z^{\\varepsilon}L^{5\/2}R^2}\n{XP|\\mathrm{P}||\\mathrm{L}|}\\mathcal{N}_{\\sigma}^{1\/2}+\n\\norm{\\widehat{K}}_{\\infty}q^{-1}\n\\end{equation}\nfor any $\\varepsilon>0$.\nWe conclude, using~(\\ref{eq-mcfsigma}) and recalling\nthat~$R=XP\/(qL)$, that\n\\begin{align*}\n \\mathcal{F}'_{\\sigma}\n &\\ll \n \\frac{q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}\n R^2(qL)^{3\/2}}{XP^2}\\bigg(Z^4RPL+\\frac{Z^3R^{3\/2}q^3L^3}{X^2P}\n \\bigg)^{1\/2}\n \\\\\n &\\ll\n q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}\n \\Bigl(\\frac{Z^{2}X^{3\/2}P}{qL^{1\/2}}+Z^{3\/2}X^{3\/4}(qPL)^{1\/4}\\Bigr).\n\\end{align*}\nfor any~$\\varepsilon>0$. Assuming \\eqref{Nsigmabound}, this concludes the proof of\nProposition~\\ref{proposition-for-F}.\n\n\\subsection{Proof of \\eqref{Nsigmabound}} We will now investigate the inner sum over~$n$\nin~$\\mathcal{N}_{\\sigma}$, and then perform the remaining summations\n(over $r$, $m$, $p_i$, $l_i$) essentially trivially. We let\n$$\nU=\\frac{q^{\\delta\/2}Z^{3\/2}XP^{3\/2}}{q^{3\/2}},\n$$\nso that the sum over~$m$ has been truncated to~$m\\leq U$.\n\nLet~$F$ be a smooth non-negative function on $\\mathbf{R}$ which is supported\non $[1\/2,3]$ and equal to $1$ on $[1,2]$. 
Let $Y\\geq 1$ be a parameter\nwith\n\\begin{equation}\\label{eq-boundy}\nY\\leq \\frac{q^{\\delta}Z^3X^2P^3}{m^2q^3},\n\\end{equation}\nand define\n$$\n\\mathcal{W}_{Y}(x)=\\frac{1}{x}\n\\mathcal{V}_{1,\\sigma}\\Bigl(\\frac{m^2xY}{r^3l_1^3\/X}\\Bigr)\n\\ov{\\mathcal{V}_{1,\\sigma}\\Bigl(\\frac{m^2xY}{r^3l_2^3\/X}\\Bigr)} F(x).\n$$\nWe study the sums\n$$\n\\mathcal{P}_Y= \\frac{1}{Y}\\sum_{n\\geq 1}\\Kl_2(\\ov\np_1qn;rl_1\/m)\\ov{\\Kl_2(\\ov p_2qn;rl_2\/m)}\n\\mathcal{W}_{Y}\\Bigl(\\frac{n}{Y}\\Bigr),\n$$\nand their combinations\n\\begin{equation}\\label{eq-py}\n \\mathcal{N}_{Y,\\sigma}= \\sum_{r\\geq 1}\\sum_{1\\leq m\\leq U}\n W\\Bigl(\\frac{r}{R}\\Bigr)\n \\frac{1}{m^2}\n \\sumsum_{\\substack{p_1,p_2,l_1,l_2\\\\p_i\\in\\mathrm{P},l_i\\in\\mathrm{L}\\\\m\\mid\n (rl_1,rl_2)}} \\Bigl(\\frac{l_1l_2}{L^2}\\Bigr)^{3\/2}\n \\widehat K(-p_1\\ov l_1\\ov r)\\ov{\\widehat K(- p_2\\ov l_2\\ov\n r)}\\,\\mathcal{P}_Y.\n\\end{equation}\n\nWe will prove the following bound: for any $\\varepsilon>0$, if $\\delta$ is chosen small enough we have\n\\begin{equation}\\label{NYbound}\n\\mathcal{N}_{Y,\\sigma}\\ll q^{\\varepsilon}Z^4\\norm{\\widehat{K}}_{\\infty}^2RPL+q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}^2\\frac{Z^3R^{3\/2}q^3L^3}{X^2P}.\n\\end{equation}\nPerforming a dyadic partition of unity on the $n$ variable in $\\mathcal{N}_{\\sigma}$ we deduce \\eqref{Nsigmabound}.\n\\subsection{Bounding $\\mathcal{P}_Y$} \nWe apply the Poisson\nsummation formula~(\\ref{eq-poisson}) with modulus $r[l_1,l_2]\/m$, to\nget\n\\begin{equation}\\label{after-poisson}\n \\mathcal{P}_Y\n =\\frac{1}{r[l_1,l_2]\/m}\\sum_{n\\in\\mathbf{Z}}C(n,p_1,p_2, l_1,l_2,r, m)\n \\widehat{\\mathcal{W}}_Y\\Bigl(\\frac{nY}{r[l_1,l_2]\/m}\\Bigr),\n\\end{equation}\nwhere\n\\begin{eqnarray*}\n C(n,p_1,p_2,l_1,l_2, r,m)=\n \\sum_{\\beta \\mods {r[l_1,l_2]\/m}}\n \\Kl_2(\\ov p_1q\\beta;rl_1\/m)\n \\ov{\\Kl_2(\\ov p_2q\\beta;rl_2\/m)} \\, e_{r[l_1,l_2]\/m}(\\beta n),\n\\end{eqnarray*}\nwith~$\\ov{p}_i$ denoting the 
inverse of~$p_i$ modulo~$rl_i\/m$. We write
$$
\mathcal{P}_Y=\mathcal{P}_0+\mathcal{P}_1
$$
where
$$
\mathcal{P}_{0}=\frac{1}{r[l_1,l_2]\/m} C(0,p_1,p_2,l_1,l_2,r,
m)\widehat{\mathcal{W}}_Y(0)
$$
is the contribution of the term~$n=0$ and~$\mathcal{P}_1$ is the
remainder in \eqref{after-poisson}. We show below that for any $\varepsilon>0$, if $\delta$ is chosen small enough we have
\begin{equation}\label{P0bound}
\mathcal{P}_{0}\ll \delta_\stacksum{l_1=l_2}{p_1=p_2}(qr)^{\varepsilon}Z^4+\delta_\stacksum{l_1=l_2}{p_1\not=p_2}
q^{\varepsilon}Z^4\frac{m}{rl}
\Bigl(\frac{rl}{m},p_2-p_1\Bigr)	
\end{equation}
and that
\begin{equation}\label{P1bound}\mathcal{P}_{1}\ll q^{2\varepsilon}Z^3\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1\/2}\frac{m^2q^3}{X^2P^3}.\end{equation}
Using \eqref{P0bound} in the sum~(\ref{eq-py}), we find that the contribution to~$\mathcal{N}_{Y,\sigma}$ of $\mathcal{P}_0$ is bounded by 
\begin{gather}\nonumber
 \sum_{r\geq 1} W\Bigl(\frac{r}{R}\Bigr) \sum_{1\leq m\leq U}
 \frac{1}{m^2}
 \sumsum_{\substack{p_1,p_2,l\\p_i\in\mathrm{P},l\in\mathrm{L}\\m\mid rl}}
 \Bigl(\frac{l}{L}\Bigr)^{3} \widehat K(-p_1\ov l\ov r)\ov{\widehat K(-
 p_2\ov l\ov r)}\,\mathcal{P}_0
 \\ \nonumber
 \ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2 \sum_{r\asymp R}
 \sum_{1\leq m\leq U}\frac{1}{m^2} \sum_{\substack{l\in\mathrm{L}\\m\mid
 rl}} \Bigl(\sum_{p\in\mathrm{P}}1+
 \frac{m}{rl}\sum_{\substack{p_1,p_2\in\mathrm{P}\\p_1\not=p_2}}\Bigl(\frac{rl}{m},p_2-p_1\Bigr)
 \Bigr)\\ \nonumber
 \ll q^{\varepsilon}Z^4\norm{\widehat{K}}_{\infty}^2 \Bigl(RPL+ \sum_{r\asymp
 R} \frac{1}{r}\sum_{1\leq m\leq U}\frac{1}{m}
 \sum_{\substack{l\in\mathrm{L}\\m\mid rl}}\frac{1}{l} \sum_{d\mid rl\/m}\varphi(d)
 \sum_{\substack{p_1,p_2\in\mathrm{P}\\p_1\equiv p_2\mods{d}}}1
 \Bigr)\\
 \ll
q^{\\varepsilon}Z^4\\norm{\\widehat{K}}_{\\infty}^2 (RPL+P^2)\\ll\n q^{\\varepsilon}Z^4\\norm{\\widehat{K}}_{\\infty}^2RPL.\\label{NYP0}\n\\end{gather}\nHere $RPL=XP^2\/q\\gg P^2$, since $X$ satisfies \\eqref{XZlower}.\nUsing \\eqref{P1bound} we find that the contribution of~$\\mathcal{P}_1$\nto~$\\mathcal{N}_{\\sigma,Y}$ is bounded by\n\\begin{multline}\\label{NYP1}\n \\ll q^{\\varepsilon}Z^3\\norm{\\widehat{K}}_{\\infty}^2\\sum_{r\\asymp R}\n \\sum_{1\\leq m\\leq U}\n \\frac{1}{m^2}\n \\sum_{\\substack{p_1,p_2,l_1,l_2\\\\m\\mid (rl_1,rl_2)}}\n \\Bigl(\\frac{r[l_1,l_2]}{m}\\Bigr)^{1\/2}\\frac{m^2q^3}{X^2P^3}\\\\\n \\ll q^{\\varepsilon}Z^3\\norm{\\widehat{K}}_{\\infty}^2\n R\\frac{q^3}{X^2P^3}(P^2L^2)(RL^2)^{1\/2}\n \\ll q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}^2\\frac{Z^3R^{3\/2}q^3L^3}{X^2P}.\n\\end{multline}\n\n\n\nCombining \\eqref{NYP0} and \\eqref{NYP1} we obtain \\eqref{NYbound}. \n\n\\subsection{Proof of \\eqref{P0bound} and \\eqref{P1bound}}\n\nThe next two lemmas evaluate the archimedan and non-archimedan Fourier transforms which occur in \\eqref{after-poisson} :\n\n\\begin{lemma}\\label{lm-trunc}\n With notation as above, in particular~\\emph{(\\ref{eq-boundy})},\n let~$j\\geq 0$ be an integer and let~$\\varepsilon>0$.\n \\par\n \\emph{(1)} We have\n \n \n \n \n \\begin{equation}\\label{eq-wyzero}\n \\widehat{\\mathcal{W}}_Y(0)\\ll q^{\\delta}Z^4.\n \\end{equation}\n \\par \n \\emph{(2)} We have\n \\begin{equation}\\label{derivatives-of-F}\n x^j\\mathcal{W}^{(j)}_{Y}(x)\\ll\n \\begin{cases}\n Z^{2+j}\\Bigl(\\frac{m^2Yq^3}{X^2P^3}\\Bigr)^{2-2\\theta_3-\\varepsilon}\n & \\quad \\text{if } Y<\\frac{X^2P^3}{m^2q^3}\\\\\n \\Bigl(\\frac{m^2Yq^3}{X^2P^3}\\Bigr)^{4\/3+j\/3} & \\quad \\text{if }\n Y\\geq \\frac{X^2P^3}{m^2q^3},\n \n \\end{cases}\t\n \\end{equation}\n where the implied constants depends on~$(\\varphi,j,\\varepsilon)$.\n \\par\n \\emph{(3)} If $1\\leq |n|\\leq q^{\\delta}Z\\frac{r[l_1,l_2]}{mY}$\n \n \n then we have\n $$\n 
\\widehat{\\mathcal{W}}_{Y}\\Bigl(\\frac{nY}{r[l_1,l_2]\/m}\\Bigr)\\ll\n q^{\\delta}Z^{2}\\frac{m^2Yq^3}{X^2P^3}.\n $$\n\\end{lemma}\n\n\\begin{proof}\n Since~$F_Y$ has support in $[1\/2,3]$, part~(1) follows from the\n bound~$\\mathcal{V}_{\\sigma}(x)\\ll x^{2\/3}$ for~$x\\geq 1$ and the\n fact that\n $$\n \\frac{m^2xY}{r^3l_i^3\/X}\\asymp \\frac{m^2Y}{X^2P^3\/q^3}\\ll\n q^{\\delta}Z^3.\n $$\n \\par\n Part~(2) is obtained using the estimates\n \\begin{gather*}\n x^j\\mathcal{V}_{\\pm}^{(j)}(x)\\ll Z^{j+1}x^{1-\\theta_3-\\varepsilon}\\quad \\text{\n if }\n 00$.\n\\end{lemma}\n\n\\begin{proof}\n Part (1) follows by direct computation (the sum vanishes unless\n $[l_1,l_2]=l_1$ and $[l_1,l_2]=l_2$). If $n=0$ and $l_1=l_2$, then\n$$\n|C(0,p_1,p_2,l,l,r,m)|=\\bigg|\\sum_{\\substack{x\\bmod rl\/m\\\\(x,rl\/m)=1}}\ne\\Bigl(\\frac{( p_2-p_1)x}{rl\/m}\\Bigr)\\bigg| = \\bigg| \\sum_{d|(rl\/m, p_2-p_1)} d \\mu\\Bigl(\\frac{rl}{md} \\Bigr) \\bigg| \\leq (rl\/m,p_2-p_1)\n$$\nby a classical bound for Ramanujan's sum, which proves (2). Finally, part (3) is a special case of~\\cite[Lemma\nA.2 (A.3)]{HN} (applied with \n$(\\xi,s_1,s_2)=(n,rl_1\/m,rl_2\/m)$, and\n$(a_1,b_1,a_2,b_2)=(q,p_1,q,p_2)$ in the definition of~$\\Delta$).\nIf $\\Delta=0$, then necessarily $p_1=p_2$ and $l_1=l_2$, and we\nobtain~(4) immediately.\n\\end{proof}\n\n\\subsubsection{Estimation of $\\mathcal{P}_0$}\n\nNote that~$\\mathcal{P}_0=0$ unless $l_1=l_2$. If that is the case, we\ndenote~$l=l_1=l_2$. We then have two bounds for~$\\mathcal{P}_{0}$. 
If
we have also~$p_1=p_2$, then the quantity~$\Delta$ of
Lemma~\ref{lm-delta} (3) is zero.
Since~$\widehat{\mathcal{W}}_Y(0)\ll q^{\varepsilon}Z^4$ for any~$\varepsilon>0$
(provided~$\delta>0$ is chosen small enough) by Lemma~\ref{lm-trunc}
(1), we obtain
$$
\mathcal{P}_0\ll q^{\varepsilon}Z^4\frac{m}{rl}|C(0,p_1,p_1,l,l,r,m)|\ll
(qr)^{\varepsilon}Z^4
$$
by the last part of Lemma~\ref{lm-delta} (4).
\par
On the other hand, if~$p_1\not=p_2$, we have~$\Delta\not=0$ hence
$$
\mathcal{P}_0\ll
q^{\varepsilon}Z^4\frac{m}{rl}
\Bigl(\frac{rl}{m},p_2-p_1\Bigr)
$$
by Lemma~\ref{lm-delta} (1) (which shows that the sum
$C(0,p_1,p_2,l_1,l_2,r,m)$ is zero unless~$l_1=l_2$) and (2).
\par

\subsubsection{Estimation of $\mathcal{P}_1$}
Using Lemma~\ref{lm-trunc} (2) for a
suitable value of~$j$, we obtain first
$$
\mathcal{P}_{1}= \frac{1}{r[l_1,l_2]\/m}\sum_{1\leq |n|\leq
 q^{\delta}Z\frac{r[l_1,l_2]}{mY}} C(n,p_1,p_2,l_1,l_2,r,m)
\widehat{\mathcal{W}}_Y\Bigl(\frac{nY}{r[l_1,l_2]\/m}\Bigr) +O(q^{-1}),
$$
provided~$\delta$ is chosen small enough. Then, by
Lemma~\ref{lm-delta} and Lemma~\ref{lm-trunc} (3), we deduce that
\begin{align*}
 \mathcal{P}_1
 &\ll q^{\varepsilon+2\delta}\frac{1}{(r[l_1,l_2]\/m)^{1\/2}}
 \sum_{1\leq |n|\leq q^{\delta}Z\frac{r[l_1,l_2]}{mY}}
 \frac{(\Delta,n,rl_1\/m,rl_2\/m)}{(n,rl_1\/m,rl_2\/m)^{1\/2}}
 \frac{Z^{2}m^2Yq^3}{X^2P^3}
 \\
 &\ll q^{2\varepsilon}Z^3\Bigl(\frac{r[l_1,l_2]}{m}\Bigr)^{1\/2}\frac{m^2q^3}{X^2P^3}
\end{align*}
if $\delta<\varepsilon\/2$.

\section{Estimate of $\mathcal{O}$}\label{sec:O}

In this section, we bound the sum~$\mathcal{O}$ defined
in~(\ref{eq-m2}). Our goal is:

\begin{proposition}\label{corollary-for-O}
 Let~$\eta>0$ be a parameter such that \eqref{XZlower} holds.
 Let~$\varepsilon>0$.
If~$\\delta$ is a sufficiently small positive real\n number and if $P,L,X$ satisfy\n \\begin{equation}\\label{eqboundsPHLX}\n XP\\leq q^2L,\\quad q^{1+\\delta}L^20$ (by~(\\ref{eq:RS}) and discrete Fourier inversion) to\ntruncate the sum over~$h$ to~$|h|\\leq q^{\\delta}H$, for\nsome~$\\delta>0$ that may be arbitrarily small.\n\n\nLet~$T\\geq 0$ be a smooth function with compact support such that\n$T(x)=\\norm{V}_{\\infty}$ for $x\\in [1\/2,3]$ and such that~$T$\nsatisfies \\eqref{eq:Vprop} with a fixed value of~$Z$. We then\nhave~$|V|\\leq T$.\n\nIn the sum~$\\mathcal{O}_2$, we split the $h$-sum into $O(\\log q)$ dyadic\nsums. We then apply the Cauchy--Schwarz inequality to smooth the\n$n$-variable, and we obtain\n$$\n\\mathcal{O}_2\\ll \\frac{\\log^3 q}{PL} \\Bigl(\\sum_{n\\sim\n X}|\\lambda(1,n)|^2\\Bigr)^{1\/2} \\mathop{\\mathrm{Max}}\\limits_{1\\leq H'\\leq\n q^{\\delta}H}\\mathcal{R}_{H'}^{1\/2}\\ll \\frac{X^{1\/2}\\log^3 q}{PL}\n\\mathop{\\mathrm{Max}}\\limits_{1\\leq H'\\leq q^{\\delta}H}\\mathcal{R}_{H'}^{1\/2},\n$$\nby~(\\ref{eq:RS}) again, where\n$$\n\\mathcal{R}_{H'}= \\sumsum_{p_1,h_1,l_1,p_2,h_2,l_2} \\sum_{n\\geq\n 1}K(n,h_1p_1\\ov l_1)\\ov{K(n,h_2p_2\\ov l_2)}\n\\widehat{W}\\Bigl(\\frac{h_1}{H}\\Bigr)\n\\ov{\\widehat{W}\\Bigl(\\frac{h_2}{H}\\Bigr)}\nT\\Big(\\frac{n}X\\Bigr),\n$$\nwith the variables in the sums constrained by the conditions\n$$\np_i\\in\\mathrm{P},\\quad l_i\\in\\mathrm{L},\\quad\nH'0$.\n\\end{lemma}\n\n\\begin{proof}\nFrom the last condition in~(\\ref{eqboundsPHLX}), we have the\nimplications\n\\begin{equation}\\label{PHLbound}\n h_2p_2\\ov l_2=h_1p_1\\ov l_1\\mods q\\Longleftrightarrow\n l_1h_2p_2\\equiv l_2h_1p_1\\mods q\\Longleftrightarrow l_1h_2p_2=l_2h_1p_1.\n\\end{equation}\nTherefore, if $(p_1,h_1,l_2)$ are given, the number of possibilities\nfor $(p_2,h_2,l_1)$ is $\\ll q^{\\varepsilon}$ for any~$\\varepsilon>0$.\nThe bound\n$$\n\\sum_{x\\in{\\mathbf{F}^\\times_q}}\\nu(x)^2\\ll q^{\\varepsilon}PH'L\\leq 
q^{\\varepsilon+\\delta}PHL\n$$\nfollows immediately.\n\\end{proof}\n\nWe can now combine these two lemmas with Proposition~\\ref{willprop} to\ndeduce that\n\\begin{align*}\n \\frac{X}{\\sqrt{q}}\\Bigl|\\sumsum_{x_1,x_2\\in{\\mathbf{F}_q}}\n \\nu(x_1)\\nu(x_2)\\frac{1}{\\sqrt{q}}\\sum_{u\\in{\\mathbf{F}_q}}\n \\widehat K(u,x_1)\\ov{\\widehat K(u,x_2)}\\widehat{T}(0)\\Bigr|\n &\\leq X\\norm{\\widehat{L}}_{\\infty}\\norm{\\nu}_2^2|\\widehat{T}(0)|\\\\\n &\\ll q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}^2XPHL\n\\end{align*}\nfor any~$\\varepsilon>0$, by taking~$\\delta$ small enough in terms of~$\\varepsilon$.\nHence we obtain\n\\begin{equation}\\label{diagonal1total}\n \\mathcal{O}_2\\ll q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}X\\Bigl(\\frac{H}{PL}\\Bigr)^{1\/2}\\ll\n q^{1+\\varepsilon}\\norm{\\widehat{K}}_{\\infty}\\frac{X^{1\/2}}{P}.\n\\end{equation}\n\n\\subsection{Bounding $\\mathcal{O}_1$ and end of the proof of Proposition~\\ref{corollary-for-O}}\n\nThe treatment of $\\mathcal{O}_1$ is similar to that of~$\\mathcal{O}_2$, but simpler,\nso we will be brief. We have\n$$\n\\mathcal{O}_1=\\frac{1}{|\\mathrm{L}|}\\sum_{l\\in\\mathrm{L}}\\frac{1}{|\\mathrm{P}|}\n\\sum_{p\\in\\mathrm{P}}\\sum_{h\\not=0} \\widehat{W}\\Bigl(\\frac{h}{H\/l}\\Bigr)\n\\sum_{n\\geq 1}\\lambda(1,n)K(n,hp)V\\Bigl(\\frac{n}{X}\\Bigr).\n$$\nWe bound the sum over~$p$ for each individual~$l\\in\\mathrm{L}$, with\n$h\\ll H\/l\\asymp H\/L$, by repeating the arguments of the previous\nsection with $H$ replaced by $H\/l$ and $L$ replaced by $1$. 
We\nobtain\n\\begin{equation}\\label{O1bound}\n \\mathcal{O}_1\\ll \\norm{\\widehat{K}}_{\\infty}\n q^{\\varepsilon}X\\Bigl(\\frac{H}{PL}\\Bigr)^{1\/2}\n \\ll q^{1+\\varepsilon}\\norm{\\widehat{K}}_{\\infty}\\frac{X^{1\/2}}{P}\n\\end{equation}\nfor any~$\\varepsilon>0$, as in the previous case.\n\nFinally, since~$\\mathcal{O}=\\mathcal{O}_1+\\mathcal{O}_2$, this bound combined\nwith~(\\ref{diagonal1total}) implies Proposition~\\ref{corollary-for-O}.\n\n\n\\section{End of the proof}\n\nWe can now finish the proof of our main theorem. We recall that $X,Z$\nare such that\n \\begin{equation}\n \\label{eqcondZXq}\nZ^{2\/3}q^{4\/3}\\leq X \\leq Z^{-2}q^{2}.\n\\end{equation}\nIn particular $Z\\leq q^{1\/4}$ and\n$$X\\geq Z^{2\/3}q^{4\/3}\\geq Zq^{1+1\/4}$$\ntherefore \\eqref{XZlower} holds for~$\\eta=1\/4$.\n\nAssuming that the conditions \\eqref{eqboundsPHLX} hold,\ncombining~\\eqref{an-identity}, Proposition~\\ref{proposition-for-F} and\nProposition~\\ref{corollary-for-O}, we deduce the estimate\n$$\nS_V(K,X) \\ll\nq^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}\n\\Biggl(\\frac{Z^{2}X^{3\/2}P}{qL^{1\/2}}+Z^{3\/2}X^{3\/4}(qPL)^{1\/4}\n+\\frac{qX^{1\/2}}{P}\\biggr)\n$$\nfor any~$\\varepsilon>0$. When $L=Z^{2\/3}XP\/q^{5\/3}$, the first two terms are\nequal to $Z^{5\/3}XP^{1\/2}\/q^{1\/6}$. For $P=q^{7\/9}\/(X^{1\/3}Z^{10\/9})$,\nthey are also equal to the third term which is\n$Z^{10\/9}q^{2\/9}X^{5\/6}$. Moreover, the conditions \\eqref{eqcondZXq}\nand~$Z\\leq q^{1\/4}$ imply then by simple computations that\n$$\n1\\leq P,\\ 1\\leq L,\\ L\\leq P^4,\\ XP\\leq q^2L\n$$\n(for instance, $X^3Z^{10}\\leq Z^{10}(q^2\/Z^2)^3=Z^4q^6\\leq q^7$\ngives~$P\\geq 1$), and then we get\n$$\nq^{1+\\delta}L^2<\\frac{X}{8}\n$$\nfor~$\\delta=1\/18$ provided~$q$ is large enough (since\n$qL^2=q^{-7\/9}Z^{-8\/9}X^{4\/3}\\leq X(X^{1\/3}q^{-7\/9})\\leq Xq^{-1\/9}$\nusing~$X\\leq q^2$). 
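As a purely illustrative sanity check on this optimization of parameters (not part of the argument, and assuming generic admissible values of $q$, $Z$, $X$), one can confirm numerically that the stated choices of $L$ and $P$ do balance the three terms:

```python
# Numerical sanity check (illustrative only): with
#   L = Z^(2/3) X P / q^(5/3)  and  P = q^(7/9) / (X^(1/3) Z^(10/9)),
# the three terms of the bound on S_V(K, X) all equal Z^(10/9) q^(2/9) X^(5/6).
import math

q, Z = 1.0e12, 5.0
X = Z ** (2.0 / 3.0) * q ** (4.0 / 3.0) * 10.0  # some X in the allowed range

P = q ** (7.0 / 9.0) / (X ** (1.0 / 3.0) * Z ** (10.0 / 9.0))
L = Z ** (2.0 / 3.0) * X * P / q ** (5.0 / 3.0)

term1 = Z ** 2 * X ** 1.5 * P / (q * math.sqrt(L))       # Z^2 X^(3/2) P / (q L^(1/2))
term2 = Z ** 1.5 * X ** 0.75 * (q * P * L) ** 0.25       # Z^(3/2) X^(3/4) (qPL)^(1/4)
term3 = q * math.sqrt(X) / P                             # q X^(1/2) / P
common = Z ** (10.0 / 9.0) * q ** (2.0 / 9.0) * X ** (5.0 / 6.0)

for t in (term1, term2, term3):
    assert abs(t / common - 1.0) < 1e-9
```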
By~(\\ref{eq-H}), this also implies\nthat~$q^{\\delta}PHL<\\frac{q}{8}$, so that all the conditions\nof~\\eqref{eqboundsPHLX} are satisfied. With these choices of~$P$\nand~$L$, we conclude that\n$$\nS_V(K,X)\\ll q^{\\varepsilon}\\norm{\\widehat{K}}_{\\infty}Z^{10\/9}q^{2\/9}X^{5\/6}\n$$\nfor any~$\\varepsilon>0$, which concludes the proof of\nTheorem~\\ref{thm:main}.\n\n\n\\section{Applications}\n\nIn this section, we prove\nCorollaries~\\ref{cor-average} and~\\ref{cor-gl2}.\n\n\\subsection{Proof of Corollary~\\ref{cor-average}} Applying the approximate functional\nequation for~$L(\\varphi\\otimes\\chi,s)$ in balanced form, we\nimmediately express the first moment\n$$\n\\frac{1}{q-1} \\sum_{\\chi\\mods{q}}M(\\chi)L(\\varphi\\otimes\\chi, 1\/2)\n$$\nin terms of the sums\n$$\n\\frac{1}{\\sqrt{q}}\n\\sum_{n\\geq 1} \\frac{\\lambda(1,n)}{\\sqrt{n}}\nK(n)V\\Bigl(\\frac{n}{q^{3\/2}}\\Bigr)\n$$\nand\n$$\n\\frac{1}{\\sqrt{q}}\\sum_{n\\geq 1}\n\\frac{\\overline{\\lambda(1,n)}}{\\sqrt{n}}\nL(n)V\\Bigl(\\frac{n}{q^{3\/2}}\\Bigr),\n$$\nfor suitable test functions satisfying~(\\ref{eq:Vprop}) for~$Z=1$,\nwhere\n$$\nK(n)=\\frac{q^{1\/2}}{q-1}\\sum_{\\chi\\mods{q}}M(\\chi){\\chi(n)},\\quad\nL(n)=\\frac{q^{1\/2}}{q-1}\\sum_{\\chi\\mods{q}}\\tau(\\chi)^3M(\\chi)\\ov{\\chi(n)},\n$$\nin terms of the normalized Gauss sum~$\\tau(\\chi)$. An elementary\ncomputation shows that this function~$L$ coincides with the function\nin the statement of Corollary~\\ref{cor-average}. 
Since moreover\nthe~$\\overline{\\lambda(1,n)}$ are the Hecke-eigenvalues of the dual\ncusp form~$\\widetilde{\\varphi}$, the corollary follows from\nTheorem~\\ref{thm:main} applied to~$K$ and~$L$.\n\n\\begin{remark}\\label{lastremark}\n (1) If\n$$\nM(\\chi)=\\frac{1}{\\sqrt{q}}\\sum_{x\\in{\\mathbf{F}^\\times_q}}K(x)\\ov{\\chi(x)}\n$$\nis the discrete Mellin transform of the trace function~$K$ of a\nFourier sheaf~$\\mathcal{F}$ that is a middle-extension sheaf on~$\\mathbf{G}_m$ of\nweight~$0$, and if no sheaf of the\nform~$[x\\mapsto x^{-1}]^*\\dual(\\mathcal{K}\\ell_3)$ is among the geometrically\nirreducible components of~$\\mathcal{F}$, then \nboth~$\\norm{\\widehat{K}}_{\\infty}$ and~$\\norm{\\widehat{L}}_{\\infty}$ are\nbounded in terms of the conductor of~$\\mathcal{F}$ only and we obtain\n$$\n\\frac{1}{q-1} \\sum_{\\chi\\mods{q}}M(\\chi)L(\\varphi\\otimes\\chi,\n1\/2)\\ll q^{2\/9+\\varepsilon}\n$$\nfor any~$\\varepsilon>0$, where the implied constant depends only\non~$\\varepsilon$,~$\\varphi$ and the conductor of~$\\mathcal{F}$.\n\\par\n(2) Applying the approximate functional equation in a balanced form\nmay not always be the best move. For instance, consider the important\nspecial case where $M(\\chi)=1$. We are then evaluating the first\nmoment\n\\begin{equation}\\label{eqfirstmoment}\n\\frac{1}{q-1}\\sum_{\\chi\\mods q}L(\\varphi\\otimes\\chi,1\/2)\n\\end{equation}\nof the central values of the twisted $L$-functions. Then we are\nworking with the functions\n$$\nK(n)=q^{1\/2}\\delta_{n\\equiv 1\\mods q},\\quad L(n)=\\Kl_3( n;q),\n$$\nwhose Fourier transforms are bounded by absolute constants. 
Hence the\nabove leads to\n$$\n\\frac{1}{q-1} \\sum_{\\chi\\mods{q}}L(\\varphi\\otimes\\chi, 1\/2)\n\\ll q^{2\/9 + \\varepsilon}\n$$\nfor any~$\\varepsilon>0$, where the implied constant depends on~$\\varphi$\nand~$\\varepsilon$.\n\\par\nOn the other hand, the approximate functional equation in unbalanced\nform yields sums of the shape\n$$\n\\sum_{n\\equiv 1\\mods q} \\frac{\\lambda(1,n)}{\\sqrt{n}}\nV\\Bigl(\\frac{n}{Yq^{3\/2}}\\Bigr)\n\\quad\\text{ and }\\quad\n\\frac{1}{\\sqrt{q}}\\sum_{n\\geq 1}\n\\frac{\\overline{\\lambda(1,n)}}{\\sqrt{n}}\n\\Kl_3(n;q)V\\Bigl(\\frac{nY}{q^{3\/2}}\\Bigr),\n$$\nfor some parameter $Y>0$ at our disposal. Assuming the\nRamanujan--Petersson conjecture for~$\\varphi$\nand~$\\widetilde{\\varphi}$, and using Deligne's bound\n$|\\Kl_3(n;q)|\\leq 3$ for $(n,q)=1$, we obtain the much stronger bound\n$$\n\\frac{1}{q-1}\\sum_{\\chi\\mods\n q}L(\\varphi\\otimes\\chi,1\/2)=1+(qY)^{\\varepsilon}\n\\bigl({Y^{1\/2}}\/{q^{1\/4}}+q^{1\/4}\/Y^{1\/2}\\bigr)\\ll\nq^{\\varepsilon}\n$$\nfor any~$\\varepsilon>0$, on choosing $Y=q^{1\/2}$.\n\\par\nNote that, again under the Ramanujan--Petersson conjecture\nfor~$\\varphi$ and its dual, we would obtain an \\emph{asymptotic\n formula} for the first moment \\eqref{eqfirstmoment} \\emph{provided}\nwe could obtain an estimate for $S_V(\\Kl_3,X)$ with a power-saving in\nterms of~$q$, when $X$ is a bit smaller than $q$. Results of this type\nare however currently only known if $\\varphi$ is an Eisenstein series\n(starting from the work~\\cite{FI} of Friedlander and Iwaniec for the\nternary divisor function; see also the papers of Fouvry, Kowalski and\nMichel~\\cite{FKMd3}, of Kowalski, Michel and Sawin~\\cite{KMS} and of\nZacharias~\\cite{Zac}).\n\nThis illustrates the importance of the problem of obtaining\nnon-trivial bounds for short sums in Theorem \\ref{thm:main}. 
However,\nwe expect that much more refined properties of trace functions and\ntheir associated sheaves will be necessary for such a purpose (as\nindicated by Remark~\\ref{remcor17}).\n\\end{remark}\n\n\\subsection{Proof of Corollary~\\ref{cor-gl2}}\n\nThe symmetric square~$\\varphi$ of~$\\psi$ has Hecke eigenvalues\n\\begin{equation}\\label{lambda(n2)}\n\\lambda(1,n)=\\sum_{d^2\\mid n}\\lambda\\Bigl(\\frac{n^2}{d^4}\\Bigr),\n\\end{equation}\nand hence, by M\\\"obius inversion, we have\n$$\n\\lambda(n^2)=\\sum_{d^2\\mid n}\\mu(d)\\lambda\\Bigl(1,\\frac{n}{d^2}\\Bigr).\n$$\nWe deduce that\n$$\n\\sum_{n\\geq 1}\\lambda(n^2)K(n)V\\Bigl(\\frac{n}{X}\\Bigr)= \\sum_{d\\geq\n 1}\\mu(d) \\sum_{n\\geq\n 1}K(nd^2)\\lambda(1,n)V\\Bigl(\\frac{nd^2}{X}\\Bigr).\n$$\nFor\n$$\n1\\leq d\\leq \\frac{X^{1\/2}}{Z^{1\/3}q^{2\/3}},\n$$\nwe can apply Theorem~\\ref{thm:main} to the sum over~$n$ and the\n$q$-periodic function $L(n)=K(nd^2)$, with~$X$ replaced\nby~$X\/d^2$. Since~$q\\nmid d$, we have\n$\\widehat{L}(x)=\\widehat{K}(\\bar{d}^2x)$ for any~$x\\in \\mathbf{Z}$, so\nthat~$\\norm{\\widehat{L}}_{\\infty}=\\norm{\\widehat{K}}_{\\infty}$, and we get\n\\begin{align*}\n \\sum_{d\\leq X^{1\/2}\/(Z^{1\/3}q^{2\/3})}\\mu(d) \\sum_{n\\geq\n 1}K(nd^2)\\lambda(1,n)V\\Bigl(\\frac{nd^2}{X}\\Bigr)\n &\\ll\n \\norm{\\widehat{K}}_{\\infty}Z^{10\/9}q^{2\/9+\\varepsilon} \\sum_{d\\geq\n 1}\\frac{X^{5\/6}}{d^{5\/3}}\n \\\\\n &\\ll\n \\norm{\\widehat{K}}_{\\infty}Z^{10\/9}q^{2\/9+\\varepsilon}X^{5\/6}\n\\end{align*}\nfor any~$\\varepsilon>0$.\n\\par\nSince~$V$ has compact support in $[1\/2,3]$, the sum over~$n$ is empty\nif $d\\geq \\sqrt{3X}$. 
Since\n$$\n\\sum_{n\\geq 1}K(nd^2)\\lambda(1,n)V\\Bigl(\\frac{nd^2}{X}\\Bigr) \\ll\n\\norm{K}_{\\infty}\\Bigl(\\frac{X}{d^2}\\Bigr)^{1+\\varepsilon}\n$$\nfor any~$\\varepsilon>0$, by the Rankin--Selberg bound~(\\ref{eq:RS}), we can\nestimate the remaining part of the sum as follows:\n\\begin{align*}\n \\sum_{X^{1\/2}\/(Z^{1\/3}q^{2\/3})<d\\leq \\sqrt{3X}}\\Bigl|\\sum_{n\\geq\n 1}K(nd^2)\\lambda(1,n)V\\Bigl(\\frac{nd^2}{X}\\Bigr)\\Bigr|\n &\\ll\n \\norm{K}_{\\infty}X^{1+\\varepsilon}\\sum_{d>X^{1\/2}\/(Z^{1\/3}q^{2\/3})}\\frac{1}{d^2}\n \\\\\n &\\ll\n \\norm{K}_{\\infty}Z^{1\/3}q^{2\/3}X^{1\/2+\\varepsilon}\n\\end{align*}\nfor any~$\\varepsilon>0$. Together with the previous bound, this proves\nCorollary~\\ref{cor-gl2}.\n\n\n\\begin{remark}\n The additional dependency on~$\\norm{K}_{\\infty}$ seems to be\n unavoidable in Corollary~\\ref{cor-gl2}.\n\\end{remark}\n\n\\section{Proof of Corollary~\\ref{RScor}}\n\nThe proof of Corollary~\\ref{RScor} requires additional ingredients\nbesides Theorem~\\ref{thm:main}. We will be somewhat brief in handling\nthese additional arguments (especially standard analytic arguments),\nsince similar computations have been performed in a number of other\npapers (e.g.~\\cite{FKM2}).\n\nFirst, in terms of the\nHecke-eigenvalues $\\lambda(m,n)$ of the symmetric square~$\\psi$\nof~$\\varphi$, we have the identity\n$$\n\\lambda(n)^2=\\sum_{d^2bc=n}\\mu(d)\\lambda(1,c)\n$$\n(see~(\\ref{lambda(n2)}) and~(\\ref{convol})). Thus we have\n$$\n\\sum_{n\\geq 1}\\lambda(n)^2K(n)V\\Bigl(\\frac{n}{X}\\Bigr)=\n\\sum_{d,m,n\\geq\n 1}\\mu(d)\\lambda(1,n)K(d^2mn)V\\Bigl(\\frac{d^2mn}{X}\\Bigr).\n$$\n\\par\nWe bound the contribution of the integers~$n$ divisible by~$q$ using\nthe Kim--Sarnak bound~\\cite{KimSar} for~$\\lambda(1,n)$, which shows\nthat it is\n$$\n\\ll\n\\|K\\|_\\infty X^{1+\\varepsilon}q^{-1+7\/32},\n$$\nfor any~$\\varepsilon>0$, and hence is negligible. We may therefore restrict\nthe sum on the right-hand side to integers such that $(dmn,q)=1$.\n\nFor a fixed value of $d\\leq D$, coprime to~$q$, we consider the sum\n\\begin{equation}\\label{eq-Td}\n T_{d,X}=\\sum_{m,n\\geq\n 1}\\lambda(1,n)K(d^2mn)V\\Bigl(\\frac{d^2mn}{X}\\Bigr).\n\\end{equation}\nWe need to estimate the sum of~$T_{d,X}$ over~$d\\geq 1$.\n\nLet $D\\geq 1$ be some small parameter to be fixed later. 
The\ncontribution of the integers $d> D$ is bounded trivially and is\n$$\n\\sum_{d>D}T_{d,X}\\ll \\frac{\\|K\\|_\\infty X^{1+\\varepsilon}}{D}\n$$\nfor any~$\\varepsilon>0$.\n\nWe now fix $d\\leq D$, coprime to~$q$. We handle the sum~$T_{d,X}$ by a\nsmooth dyadic partition of unity on the $m$-variable. This reduces the\nproblem to estimates of $O(\\log X)$ sums of the form\n\\begin{equation}\\label{DMsum}\n S_{d,M}= \\sum_{\\substack{m,n\\geq 1\\\\(mn,q)=1}}\n \\lambda(1,n)K(d^2mn)V\\Bigl(\\frac{d^2mn}{X}\\Bigr)\n W\\Bigl(\\frac{m}M\\Bigr)\n\\end{equation}\nwhere $W$ is smooth and compactly supported in $[1\/2,5\/2]$. We\nset\n$$\nN=\\frac{X}{d^2M},\n$$\nso that $n\\sim N$ in the sum.\n\nThe estimate for~(\\ref{DMsum}) splits into three cases, depending on the\nsize of~$M$.\n\n\\subsection{When $M$ is small}\n\nWe assume that $\\frac{X}{d^2m}\\geq \\frac{X}{D^2M}\\geq\nZ^{2\/3}q^{4\/3}$, or in other words, that\n\\begin{equation}\\label{condition-on-M}\nD^2M\\leq \\frac{X}{Z^{2\/3}q^{4\/3}}.\n\\end{equation}\nWe can then apply Theorem~\\ref{thm:main}, and we derive\n\\begin{align}\\nonumber\n S_{d,M}&\\ll \\|K\\|_\\infty q^{7\/32-1}\\frac{X^{1+\\varepsilon}}{d^2}\n +\\|\\widehat K\\|_\\infty Z^{10\/9}q^{2\/9+\\varepsilon}\n \\sum_{m\\sim M}\\Bigl(\\frac{X}{d^2m}\\Bigr)^{5\/6}\\\\\n &\\ll \\|K\\|_\\infty X^{\\varepsilon}q^{7\/32-1}\\frac{X}{d^2}\n +\\|\\widehat K\\|_\\infty X^{\\varepsilon}Z^{10\/9}\n \\Bigl(\\frac{X}{d^2}\\Bigr)^{5\/6}q^{2\/9}{M^{1\/6}}\n\\label{bound1}\n\\end{align}\nfor any~$\\varepsilon>0$ (the first term corresponds to removing the\nconstraint $(n,q)=1$).\n\n\\subsection{When $M$ is in the Fourier range}\n\nIf $M\\geq q^{1\/2}$, then it is\nbeneficial to apply the Poisson summation formula to the\n$m$-variable. As in the previous case, the cost of removing the\ncondition $(m,q)=1$ is $\\ll \\|K\\|_\\infty q^{7\/32-1}X^{1+\\varepsilon}\/d^2$\nfor~$\\varepsilon>0$. 
The Poisson summation formula implies that\n$$\n\\sum_{m\\geq\n 1}K(d^2mn)V\\Bigl(\\frac{d^2mn}{X}\\Bigr)W\\Bigl(\\frac{m}M\\Bigr)\\ll\n\\|\\widehat K\\|_\\infty \\Bigl(\\frac{M}{q^{1\/2}}+q^{1\/2}\\Bigr)\n$$\nand therefore\n\\eqref{DMsum} is bounded by\n\\begin{equation}\\label{bound2}\n S_{d,M}\\ll \\|K\\|_\\infty X^{\\varepsilon}q^{7\/32-1}\\frac{X}{d^2}\n +\\|\\widehat K\\|_\\infty X^{\\varepsilon}\\frac{X}{d^2}\n \\Bigl(\\frac{1}{q^{1\/2}}+\\frac{q^{1\/2}}{M}\\Bigr)\n\\end{equation}\nfor any~$\\varepsilon>0$.\n\n\\subsection{When $M$ is large but not in Fourier range}\n\nIf~$M\\leq q^{1\/2}$, thinking of the prototypical case when\n$X\\sim q^{3\/2}$ and $D$ is close to one, the $n$-sum is of length\nclose to $q$, so the natural move is to smooth the $n$-sum, and then\nuse the Poisson summation formula on the resulting sums.\n\nThus we apply the Cauchy--Schwarz inequality to \\eqref{DMsum}, leaving\nthe $n$ variable outside, namely\n\\begin{equation}\\label{eqCS}\n |S_{d,M}|^2\n \\ll\\sum_{n\\sim X\/d^2M}|\\lambda(1,n)|^2\\times\n \\sumsum_\\stacksum{m_i\\sim\n M}{(m_i,q)=1}\\sum_{n\\geq 1}K(d^2m_1n)\\ov{K(d^2m_2n)}\n V\\Bigl(\\frac{d^2m_1n}X\\Bigr)\\ov{V}\\Bigl(\\frac{d^2m_2n}X\\Bigr).\n\\end{equation}\n\nHere, we have dropped the constraint $(n,q)=1$ on the right-hand side\nby positivity, and replaced the expressions $W\\Bigl(\\frac{m_i}M\\Bigr)$\nby the summation conditions $ m_i\\sim M$.\n\nBy the Poisson summation formula, we have\n\\begin{equation}\\label{applypoisson}\n \\sum_{n\\geq 1}\n K(d^2m_1n)\\ov{K(d^2m_2n)}\n V\\Bigl(\\frac{d^2m_1n}X\\Bigr)\\ov{V}\\Bigl(\\frac{d^2m_2n}X\\Bigr)\n =\\frac{N}{q^{1\/2}}\\sum_{h\\in\\mathbf{Z}}\n \\widehat{K}_{(2)}(h)\\mathcal{W}\\Bigl(\\frac{h}{q\/(X\/d^2M)}\\Bigr),\n\\end{equation}\nwhere $\\mathcal{W}(y)$ is a smooth function depending on $d,m_1,m_2$,\nrapidly decaying as $y\\rightarrow\\infty$, 
and\n$$\n\\widehat{K}_{(2)}(h)=\\frac{1}{\\sqrt{q}}\\sum_{n\\in\\mathbf{F}_q}\nK(d^2m_1n)\\ov{K(d^2m_2n)}e\\Bigl(\\frac{nh}{q}\\Bigr).\n$$\n\\par\nTo go further, we use the assumption of Corollary~\\ref{RScor} that $K$\nis the trace function of a middle-extension $\\ell$-adic sheaf~$\\mathcal{F}$\nthat is not exceptional. Indeed, from \\cite[Theorem 6.3]{FKM2}, we can\ndeduce that there exists a set $B\\subset {\\mathbf{F}^\\times_q}$ such\nthat $|B|$ is bounded in terms of the conductor of~$\\mathcal{F}$ only, and\nsuch that whenever\n\\begin{equation}\\label{eqdiag}\n m_1\/m_2\\mods q\\not\\in B,\n\\end{equation}\nthen we have\n$$\n\\|\\widehat{K}_{(2)}\\|_\\infty\\ll 1\n$$\nwhere the implied constant depends on the conductor of~$\\mathcal{F}$ only.\n\nReturning to \\eqref{eqCS}, we apply the bound \\eqref{applypoisson} to\nthe pairs $(m_1,m_2)$ which satisfy \\eqref{eqdiag}, and apply\nthe trivial bound otherwise.\n\nWe see then that the contribution to the second factor of \\eqref{eqCS}\nof the ``diagonal'' pairs not satisfying \\eqref{eqdiag} is bounded by\n$$\n\\ll X^{\\varepsilon}M\\Bigl(\\frac{M}q+1\\Bigr)\\frac{X\/M}{d^2}\n$$\nfor any~$\\varepsilon>0$, while the contribution of the pairs $(m_1,m_2)$\nsatisfying \\eqref{eqdiag} is bounded by\n$$\n\\ll X^{\\varepsilon}M^2\\Bigl(\\frac{X\/M}{d^2q^{1\/2}}+q^{1\/2}\\Bigr),\n$$\nfor any~$\\varepsilon>0$, where in both cases the implied constant depends\nonly on~$\\varepsilon$ and on the conductor of~$\\mathcal{F}$.\n\n\nCollecting these bounds, we obtain from \\eqref{eqCS} the bound\n\\begin{equation}\\label{bound3}\n S_{d,M} \\ll\n \\frac{X^{1+\\varepsilon}}{d^2}\n \\Bigl(\\frac{1}{M^{1\/2}}+\\frac{1}{q^{1\/4}}+q^{1\/4}M^{1\/2}\n \\frac{d}{X^{1\/2}}\\Bigr),\n\\end{equation}\nfor any~$\\varepsilon>0$, where the implied constant depends only on~$\\varepsilon$\nand on the conductor of~$\\mathcal{F}$.\n\n\n\\subsection{End of the proof}\n\nNow we can combine the previous bounds. 
Let~$\\eta>0$ and~$\\delta$\nwith~$0<\\delta<1\/4$ be parameters to be determined later.\n\n\\subsubsection*{-- If $M\\leq q^{2\\delta}$,} we then apply the bound\n\\eqref{bound1} (and the dyadic decomposition of~$T_{d,X}$ into a\ncombination of sums~$S_{d,M}$) to derive\n\\begin{equation}\\label{bound-c}\n \\sum_{d\\leq D}T_{d,X}\\ll X^{1+\\varepsilon}q^{7\/32-1}\n +Z^{10\/9}X^{5\/6+\\varepsilon}q^{2\/9+\\delta\/3},\n\\end{equation}\nunder the condition that\n\\begin{equation}\\label{condition-X}\nX\\geq Z^{2\/3}D^2q^{4\/3+2\\delta}\n\\end{equation}\n(see \\eqref{condition-on-M}).\n\n\\subsubsection*{-- If $M\\geq q^{1\/2+\\eta}$,} we apply the bound\n\\eqref{bound2} and sum over $d\\leq D$, to find that\n\\begin{equation}\\label{bound-a}\n \\sum_{d\\leq D} T_{d,X}\n \\ll X^{1+\\varepsilon}\\Bigl(\\frac{1}{q^{1\/2}}+\\frac{q^{1\/2}}{M}\\Bigr)\n \\ll X^{1+\\varepsilon}q^{-\\eta}\n\\end{equation}\nin that case.\n\n\\subsubsection*{-- If $q^{2\\delta}\\leq M< q^{1\/2+\\eta}$,} we then\napply the bound \\eqref{bound3} and sum over $d\\leq D$, obtaining\n\\begin{equation}\\label{bound-b}\n \\sum_{d\\leq D}T_{d,X}\\ll\n X^{1+\\varepsilon}\\Bigl(\\frac{1}{q^{\\delta}}\n +\\frac{1}{q^{1\/4}}+\\frac{q^{1\/2+\\eta\/2}}{X^{1\/2}}\\Bigr)\n \\ll X^{1+\\varepsilon}\\Bigl(q^{-\\delta}+\\frac{q^{1\/2+\\eta\/2}}{X^{1\/2}}\\Bigr).\n\\end{equation}\n\nThis covers all of the ranges for $M$. We now choose $\\eta, \\delta>0$\nsuch that the bound in \\eqref{bound-a} is equal to the second term in\n\\eqref{bound-b}, and the first term in \\eqref{bound-b} is consistent\nwith the second term in \\eqref{bound-c}. That is, we choose\n$q^{\\eta}=(X\/q)^{1\/3}$ and\n$q^{\\delta}=\\frac{X^{1\/8}}{Z^{5\/6}q^{1\/6}}$. 
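For the reader's convenience, we record the routine verification that these choices do balance the bounds (this computation is implicit in the text above):

```latex
% With q^{\eta} = (X/q)^{1/3} and q^{\delta} = X^{1/8} Z^{-5/6} q^{-1/6}:
\begin{align*}
  X q^{-\eta} &= X\Bigl(\frac{q}{X}\Bigr)^{1/3} = X^{2/3}q^{1/3}
  = X^{1/2}\cdot q^{1/2}\Bigl(\frac{X}{q}\Bigr)^{1/6}
  = X^{1/2}q^{1/2+\eta/2},\\
  Z^{10/9}X^{5/6}q^{2/9+\delta/3}
  &= Z^{10/9}X^{5/6}q^{2/9}\Bigl(\frac{X^{1/8}}{Z^{5/6}q^{1/6}}\Bigr)^{1/3}
  = Z^{5/6}X^{7/8}q^{1/6} = X\cdot q^{-\delta}.
\end{align*}
```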
Therefore we have in all\ncases the estimate\n$$\n\\sum_{d\\leq D}T_{d,X} \\ll\nX^{2\/3+\\varepsilon}q^{1\/3}+Z^{5\/6}X^{7\/8+\\varepsilon}q^{1\/6}+X^{1+\\varepsilon}q^{7\/32-1},\n$$\nfor any~$\\varepsilon>0$, under the assumption that\n$$\nX\\gg D^{8\/3}q^{4\/3}Z^{-4\/3},\n$$\nand the implied constant depends only on~$\\varepsilon$ and the conductor\nof~$\\mathcal{F}$.\n\nFinally, we combine this with the previously noted estimate\n$$\n\\sum_{d>D}T_{d,X}\\ll\n\\frac{\\|K\\|_\\infty X^{1+\\varepsilon}}{D}\n$$\n(recall that for a non-exceptional trace function, we\nhave~$\\|\\widehat{K}\\|_{\\infty}\\ll 1$ where the implied constant depends\nonly on the conductor of~$\\mathcal{F}$), to conclude that\n$$\n\\sum_{n\\geq 1}\\lambda(n)^2K(n)V\\Bigl(\\frac{n}{X}\\Bigr)\\ll\nX^{2\/3+\\varepsilon}q^{1\/3}+Z^{5\/6}X^{7\/8+\\varepsilon}q^{1\/6}+X^{1+\\varepsilon}\/D.\n$$\nWe take $D=q^{\\gamma}$ for some small~$\\gamma>0$, and then we have\n$$\n\\sum_{n\\geq 1}\\lambda(n)^2K(n)V\\Bigl(\\frac{n}{X}\\Bigr)\\ll\nX^{2\/3+\\varepsilon}q^{1\/3}+Z^{5\/6}X^{7\/8+\\varepsilon}q^{1\/6}+X^{1+\\varepsilon}q^{-\\gamma},\n$$\nprovided that\n$$\nX\\gg q^{4\/3+8\\gamma\/3}\/Z^{4\/3}, \n$$\nwhere the implied constant depends only on~$\\varepsilon$ and the conductor\nof~$\\mathcal{F}$.\n\nThis concludes the proof of Corollary \\ref{RScor}.\n\n\n\n\\begin{bibdiv}\n\n\\begin{biblist}\n\n\\bib{AHLS}{article}{\n author={Aggarwal, K.},\n author={Holowinsky, R.},\n author={Lin, Y.},\n author={Sun, Q.},\n title={The Burgess bound via a trivial delta method},\n note={\\url{arXiv:1803.00542v1}},\n date={2018},\n} \n\n\\bib{Blomer}{article}{\n author={Blomer, V.},\n title={Subconvexity for twisted $L$-functions on ${\\rm GL}(3)$},\n journal={Amer. J. Math.},\n volume={134},\n date={2012},\n number={5},\n pages={1385--1421},\n\n}\n \\bib{CI}{article}{\n author={Conrey, J. B.},\n author={Iwaniec, H.},\n title={The cubic moment of central values of automorphic $L$-functions},\n journal={Ann. of Math. 
(2)},\n volume={151},\n date={2000},\n number={3},\n pages={1175--1216},\n\n doi={10.2307\/121132},\n}\n\n\\bib{FKM1}{article}{\n author={Fouvry, {\\'E}.},\n author={Kowalski, E.},\n author={Michel, Ph.},\n title={Algebraic twists of modular forms and Hecke orbits},\n journal={GAFA},\n volume={25},\n note={\\url{arXiv:1207.0617}},\n date={2015},\n number={2},\n pages={580-657},\n }\n\n \\bib{FKMd3}{article}{\n author={Fouvry, {\\'E}.},\n author={Kowalski, E.},\n author={Michel, Ph.},\n title={On the exponent of distribution of the ternary divisor function},\n journal={Mathematika},\n note={\\url{arXiv:1304.3199}},\n date={2015},\n volume={61},\n number={1},\n pages={121-144},\n }\n\n\\bib{FKM2}{article}{\n author={Fouvry, \\'E.},\n author={Kowalski, E.},\n author={Michel, Ph.},\n title={Algebraic trace functions over the primes},\n note={\\url{arXiv:1211.6043}},\n journal={Duke Math. Journal},\n date={2014},\n volume={163},\n pages={1683-1736},\n number={9},\n }\n\n\n \\bib{pisa}{article}{\n author={Fouvry, {\\'E}.},\n author={Kowalski, E.},\n author={Michel, Ph.},\n title={Trace functions over finite fields and their applications},\n book={\n series={Colloquio de Giorgi},\n publisher={Springer},\n },\n date={2014},\n }\n \n\n \\bib{short-sums}{article}{\n author={Fouvry, {\\'E}.},\n author={Kowalski, E.},\n author={Michel, Ph.},\n author={Raju, C.},\n author={Rivat, J.},\n author={Soundararajan, K.},\n title={On short sums of trace functions},\n journal={Ann. Inst. Fourier},\n date={2017},\n volume={67},\n pages={423--449},\n }\n \n \\bib{FI}{article}{\n author={Friedlander, J.B.},\n author={Iwaniec, H.},\n title={Incomplete Kloosterman sums and a divisor problem},\n note={(with an appendix by\n B. J. Birch and E. Bombieri)},\n journal={Ann. of Math. 
(2)},\n volume={121},\n date={1985},\n number={2},\n pages={319--350},\n}\n\n \n\\bib{Goldfeld}{book}{\n author={Goldfeld, D.},\n title={Automorphic forms and $L$-functions for the group ${\\rm\n GL}(n,\\mathbf{R})$},\n series={Cambridge Studies in Advanced Mathematics},\n volume={99},\n note={With an appendix by Kevin A. Broughan},\n publisher={Cambridge University Press, Cambridge},\n date={2006},\n pages={xiv+493},\n}\n\n\n\\bib{HMQ}{article}{\n author={Holowinsky, R.},\n author={Munshi, R.},\n author={Qi, Z.},\n title={Character sums of composite moduli and hybrid subconvexity},\n conference={\n title={Advances in the theory of automorphic forms and their\n $L$-functions},\n },\n book={\n series={Contemp. Math.},\n volume={664},\n publisher={Amer. Math. Soc., Providence, RI},\n },\n date={2016},\n pages={135--148},\n }\n\n \\bib{HN}{article}{\n author={Holowinsky, R.},\n author={Nelson, P.},\n title={Subconvex bounds on $\\GL(3)$ via degeneration to frequency zero},\n journal={Math. Ann.},\n volume={372},\n date={2018},\n number={1-2},\n pages={299--319},\n} \n\n\\bib{iwaniec}{book}{\n author={Iwaniec, H.},\n title={Topics in classical automorphic forms},\n series={Graduate Studies in Mathematics},\n volume={17},\n publisher={American Mathematical Society, Providence, RI},\n date={1997},\n pages={xii+259},\n}\n\n\\bib{ESDE}{book}{\n author={Katz, N. M.},\n title={Exponential sums and differential equations},\n series={Annals of Mathematics Studies},\n volume={124},\n publisher={Princeton University Press},\n address={Princeton, NJ},\n date={1990},\n}\n\n\n\\bib{KimSar}{article}{\n author={Kim, H. H.},\n author={Sarnak, P.},\n title={Refined estimates towards the Ramanujan and Selberg conjectures},\n note={Appendix to H. Kim, Functoriality for the exterior square of ${\\rm GL}_4$ and the\n symmetric fourth of ${\\rm GL}_2$},\n journal={J. Amer. Math. 
Soc.},\n volume={16},\n date={2003},\n number={1},\n pages={139--183},\n}\n\n\\bib{KMS}{article}{\n author={Kowalski, E.},\n author={Michel, Ph.},\n author={Sawin, W.},\n title={Stratification and averaging for exponential sums: Bilinear forms with generalized Kloosterman sums},\n journal={Annali della Scuola Normale Superiore di Pisa (to appear)},\n note={\\url{arXiv:1802.09849}},\n date={2018},\n}\n\n\n\\bib{Lin}{article}{\n author={Lin, Y.},\n title={Bounds for twists of $\\GL(3)$ $L$-functions},\n note={\\url{arXiv:1802.05111}},\n date={2018},\n} \n\n\\bib{Miller}{article}{\n author={Miller, S. D.},\n title={Cancellation in additively twisted sums on ${\\rm GL}(n)$},\n journal={Amer. J. Math.},\n volume={128},\n date={2006},\n number={3},\n pages={699--729},\n}\n\n\\bib{Molteni}{article}{\n author={Molteni, G.},\n title={Upper and lower bounds at $s=1$ for certain Dirichlet series with\n Euler product},\n journal={Duke Math. J.},\n volume={111},\n date={2002},\n number={1},\n pages={133--158},\n}\n\n\n\n\\bib{Munshi}{article}{\n author={Munshi, R.},\n title={The circle method and bounds for $L$-functions---IV: Subconvexity\n for twists of $\\rm GL(3)$ $L$-functions},\n journal={Ann. of Math. 
(2)},\n volume={182},\n date={2015},\n number={2},\n pages={617--672},\n}\n\n\\bib{Munshi1}{article}{\n author={Munshi, Ritabrata},\n title={Twists of $\\GL(3)$ $L$-functions},\n note={\\url{arXiv:1604.08000}},\n date={2016},\n }\n \n \\bib{PY}{article}{\n author={Petrow, Ian},\n author={Young, Matthew},\n title={The Weyl bound for Dirichlet $L$-functions of cube-free conductor},\n note={\\url{arXiv:1811.02452}},\n date={2018},\n }\n\n \n \\bib{SZ}{article}{\n author={Sun, Qingfeng},\n author={Zhao, Rui},\n title={Bounds for ${\\rm GL}_3$ $L$-functions in depth aspect},\n journal={Forum Math.},\n volume={31},\n date={2019},\n number={2},\n pages={303--318},\n}\n\n\\bib{Zac}{article}{\n author = {Zacharias, Rapha\\\"el},\n title = {Simultaneous non-vanishing for Dirichlet $L$-functions},\n journal = {Annales de l'Institut Fourier},\n publisher = {Association des Annales de l'institut Fourier},\n volume = {69},\n number = {4},\n year = {2019},\n pages = {1459-1524},\n }\n\n\n \\bib{FZ}{article}{\n author={Zhou, Fan},\n title={The Voronoi formula on $GL(3)$ with ramification},\n note={\\url{arXiv:1806.10786}},\n date={2018},\n}\n\n\n\n\\end{biblist}\n\n\\end{bibdiv} \n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Shape Priors for Object SLAM}\n\nIn this section, we explain in more detail the benefit of using learnt shape priors in object SLAM. We evaluate the related object-oriented SLAM methods under three important properties: \\textit{1. Can the system reconstruct previously unseen objects? 2. Is the reconstruction complete? 3. Does the reconstruction preserve detailed shape?} \n\nThe first line of works~\\cite{SLAM++, Tateno2016When2.5D} relies on a pre-scanned CAD model database to perform online detection and registration. The reconstructed object shapes are complete and detailed, but the system cannot handle new objects. 
The second category of works~\\cite{maskfusion, fusion++} leverages 2D segmentation masks, treating each segmented object as an individual entity and performing reconstruction on-the-fly. This reconstruction-by-segmentation strategy can reconstruct arbitrary unseen object shapes in great detail, but the reconstructed shapes are incomplete due to partial observation and missing depth. \nThe last line of research represents objects as simple geometric shapes, such as ellipsoids~\\cite{quadricslam} or cuboids~\\cite{cubeslam}. \nThis kind of geometric representation can be applied to arbitrary unseen shapes and preserves some important properties such as scale and orientation, but the level of detail in the shapes is lost.\n\nWith learnt shape priors, all three properties can be achieved at the same time. Their generative nature means they can generalize to unseen shapes. An object shape decoded from a latent code is guaranteed to be detailed and complete, even from a single partial view. The compact representation also benefits the optimization process. Table \\ref{tab:related_work} provides an overview of the related works with respect to these properties. Only NodeSLAM~\\cite{node-slam} and DSP-SLAM achieve all three important properties. Unlike NodeSLAM, DSP-SLAM can also deal with large-scale environments and monocular RGB inputs.\n\n\\section{Full Derivation of Jacobians}\n\nAs mentioned in the main paper, one of our contributions is the fast Gauss-Newton optimizer, which is crucial for achieving real-time performance. This section provides the full derivation of the Jacobians of each individual residual term.\n\n\\subsection{Jacobian of Surface Term Residual \\label{sec:J_surf}}\n\nAs shown in Eq. 
1 in the main paper, the surface term residual at pixel $\\matr{u} \\in \\mathbf{\\Omega}_s$ is simply the SDF value of the back-projected point under object coordinate:\n\n\\begin{equation}\n e_s(\\matr{u}) = G(\\matr{T}_{oc} \\pi^{-1}(\\matr{u}, {\\mathcal{D}}), \\matr{z}) = G(^o\\matr{x}, \\matr{z})\n\\end{equation}\n\nIts Jacobian with respect to object pose and shape $[\\boldsymbol{\\xi}_{oc};\\matr{z}]$ is defined as:\n\n\\begin{equation} \\label{eq:J_surf}\n \\matr{J}_s = \\frac{\\partial e_s(\\matr{u})}{\\partial [\\boldsymbol{\\xi}_{oc};\\matr{z}]}\n\\end{equation}\n\nwhere $\\boldsymbol{\\xi}_{oc} \\in \\mathbb{R}^7$ is the Cartesian representation (twist coordinate) of the corresponding element in Lie Algebra $\\mathfrak{sim}(3)$. The Jacobian with respect to the shape code $\\frac{\\partial e_s(\\matr{u})}{\\partial \\matr{z}}$ could be obtained directly via back-propagation. To derive the Jacobian with respect to object pose $\\boldsymbol{\\xi}_{oc}$, we first factorize it using chain rule:\n\n\\begin{table}[tbp]\n \\center\n \\setlength\\tabcolsep{4pt}\n \\begin{tabular}{c||l|l|l||l|l}\n Method & \\specialcell[b]{Unseen\\\\Objects} & \\specialcell[b]{Full\\\\Shape} & \\specialcell[b]{Detailed\\\\Shape} & \\specialcell[b]{RGB\\\\Input} & \\specialcell[b]{Large\\\\Scene} \\\\ \\hline \\hline\n \\specialcell[b]{2.5D is not\\\\enough\\cite{Tateno2016When2.5D}} & ~ & \\checkmark & \\checkmark & ~ & ~ \\\\ \\hline\n \\specialcell[b]{SLAM++\\\\\\cite{SLAM++}} & ~ & \\checkmark & \\checkmark & ~ & ~\\\\ \\hline\\hline\n \\specialcell[b]{Mask-\\\\Fusion \\cite{maskfusion}} & \\checkmark & ~ & \\checkmark & ~ & ~ \\\\ \\hline\n \\specialcell[b]{Fusion++\\\\\\cite{fusion++}} & \\checkmark & ~ & \\checkmark & ~ & ~\\\\ \\hline\\hline\n \\specialcell[b]{Quadric-\\\\SLAM \\cite{quadricslam}} & \\checkmark & \\checkmark & ~ & \\checkmark &\\checkmark \\\\ \\hline\n \\specialcell[b]{Cube-\\\\SLAM \\cite{cubeslam}} & \\checkmark & \\checkmark & ~ & \\checkmark & 
\\checkmark\\\\ \\hline\\hline\n \\specialcell[b]{Node-\\\\SLAM \\cite{node-slam}} & \\checkmark & \\checkmark & \\checkmark & ~ & ~ \\\\ \\hline\n \\textbf{\\specialcell[b]{DSP-\\\\SLAM}} & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\n \\end{tabular}\n \\caption{Comparison of the properties of DSP-SLAM with respect\n to other object-oriented SLAM systems. \n \\vspace{-1.0cm}}\n \\label{tab:related_work}\n\\end{table}\n\n\\begin{equation} \\label{eq:J_surf_pose_1}\n \\frac{\\partial e_s(\\matr{u})}{\\partial \\boldsymbol{\\xi}_{oc}} = \\frac{\\partial G(^o\\matr{x}, \\matr{z})}{\\partial ^o\\matr{x}} \\frac{\\partial ^o\\matr{x}}{\\partial \\boldsymbol{\\xi}_{oc}}\n\\end{equation}\n\nThen the first term $\\frac{\\partial G(^o\\matr{x}, \\matr{z})}{\\partial ^o\\matr{x}}$ can also be obtained via back-propagation. The second Jacobian term could be computed by applying a left perturbation to the pose $\\matr{T}_{oc}$:\n\n\\begin{align}\n \\frac{\\partial ^o\\matr{x}}{\\partial \\boldsymbol{\\xi}_{oc}} & = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\exp{\\big(\\delta \\boldsymbol{\\xi}^\\wedge}\\big) \\matr{T}_{oc} {^c\\matr{x}} - \\matr{T}_{oc} {^c\\matr{x}}}{\\delta \\boldsymbol{\\xi}} \\\\\n & = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\big(\\matr{I} + \\delta \\boldsymbol{\\xi}^\\wedge\\big) \\matr{T}_{oc} {^c\\matr{x}} - \\matr{T}_{oc} {^c\\matr{x}}}{\\delta \\boldsymbol{\\xi}} \\\\\n & = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\delta \\boldsymbol{\\xi}^\\wedge \\matr{T}_{oc} {^c\\matr{x}}}{\\delta \\boldsymbol{\\xi}} = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\delta \\boldsymbol{\\xi}^\\wedge {^o\\matr{x}}}{\\delta \\boldsymbol{\\xi}} \\label{eq:J_surf_pose_2}\n\\end{align}\n\nwhere $\\exp(\\cdot)$ is the exponential map from Lie Algebra $\\mathfrak{sim}(3)$ to the corresponding Lie Group $\\mathbf{Sim}(3)$, and $\\cdot ^\\wedge$ is the hat-operator that maps a twist coordinate in 
$\\mathbb{R}^7$ to $\\mathfrak{sim}(3)$:\n\n\\begin{equation} \\label{eq:hat}\n    \\boldsymbol{\\xi}^\\wedge = \\begin{bmatrix} \\boldsymbol{\\nu} \\\\ \\boldsymbol{\\phi} \\\\ s \\end{bmatrix}^\\wedge = \\begin{bmatrix} \\boldsymbol{\\phi}_{\\times} + s\\matr{I} & \\boldsymbol{\\nu} \\\\ \\matr{0} & 0\\end{bmatrix}\n\\end{equation}\n\n$\\boldsymbol{\\nu} \\in \\mathbb{R}^3$, $\\boldsymbol{\\phi} \\in \\mathbb{R}^3$ and $s \\in \\mathbb{R}$ represent the translation, rotation and scale parts of the twist coordinate, respectively. $(\\cdot)_\\times$ maps from $\\mathbb{R}^3$ to $\\mathfrak{so}(3)$ (skew-symmetric matrices). With the equation above, the Jacobian term in Eq.~\\ref{eq:J_surf_pose_2} can be computed as: \\vspace{-0.3cm}\n\n\\begin{align}\n    \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\delta \\boldsymbol{\\xi}^\\wedge {^o\\matr{x}}}{\\delta \\boldsymbol{\\xi}} & = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{\\delta \\boldsymbol{\\phi}_\\times {^o\\matr{x}} + \\delta s{^o\\matr{x}} + \\delta \\boldsymbol{\\nu}}{\\delta \\boldsymbol{\\xi}} \\\\\n    & = \\lim_{\\delta \\boldsymbol{\\xi} = \\matr{0}} \\frac{ \\delta \\boldsymbol{\\nu} - {^o\\matr{x}}_\\times \\delta \\boldsymbol{\\phi} + \\delta s{^o\\matr{x}}}{\\delta \\boldsymbol{\\xi}} \\\\\n    & = \\begin{bmatrix} \\matr{I} & -^o\\matr{x}_{\\times} & ^o\\matr{x} \\end{bmatrix} \\label{eq:J_surf_pose_3}\n\\end{align}\n\nThe full Jacobian of the surface consistency term residual with respect to the object pose can be obtained by combining Eq. \\ref{eq:J_surf_pose_1}, \\ref{eq:J_surf_pose_2} and \\ref{eq:J_surf_pose_3}. Note that we derive the Jacobian with a slight abuse of notation, omitting the homogeneous representation of points. For a more detailed explanation of Lie theory, please refer to~\\cite{barfoot_2017} or \\cite{sola2018_micro_lie}. 
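The closed-form block structure above is straightforward to implement and to verify numerically. Below is a minimal NumPy sketch (the function names are ours, not from the released code) of the hat-operator in Eq.~\\ref{eq:hat} and the $3 \\times 7$ point Jacobian in Eq.~\\ref{eq:J_surf_pose_3}; since the perturbation acts linearly on the point, $\\delta \\boldsymbol{\\xi}^\\wedge \\, {^o\\matr{x}}$ equals the Jacobian applied to $\\delta \\boldsymbol{\\xi}$ exactly, not just to first order, which gives a simple sanity check.

```python
import numpy as np

def skew(v):
    """(.)_x : maps R^3 to so(3), i.e. skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sim3_hat(xi):
    """Hat-operator: twist xi = [nu; phi; s] in R^7 -> 4x4 matrix in sim(3)."""
    nu, phi, s = xi[:3], xi[3:6], xi[6]
    M = np.zeros((4, 4))
    M[:3, :3] = skew(phi) + s * np.eye(3)  # phi_x + s*I
    M[:3, 3] = nu
    return M

def point_jacobian(x):
    """3x7 Jacobian of xi^ applied to point x: the block [I, -x_x, x]."""
    return np.hstack([np.eye(3), -skew(x), x.reshape(3, 1)])

# Sanity check: for any twist, the top 3 rows of xi^ [x; 1] equal J(x) @ xi.
rng = np.random.default_rng(0)
xi, x = rng.normal(size=7), rng.normal(size=3)
assert np.allclose(sim3_hat(xi)[:3] @ np.append(x, 1.0), point_jacobian(x) @ xi)
```

In the optimizer, this pose block is composed with the SDF gradient obtained via back-propagation to assemble the full surface-term Jacobian.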
\n\n\\subsection{Jacobian of Rendering Term Residual}\n\nAs stated in the main paper, the rendering term residual of pixel $\\matr{u}$ is\n\n\\begin{equation}\n    e_r(\\matr{u}) = d_\\matr{u} - \\hat{d}_{\\matr{u}}\n\\end{equation}\n\nTo compute the Jacobian of the rendering terms $\\matr{J}_{r}$, we can expand it using the chain rule:\n\n\\begin{align} \n    \\matr{J}_{r} & = \\frac{\\partial e_{r}}{\\partial \\left[\\boldsymbol{\\xi}_{oc}; \\matr{z} \\right]} \\\\\n    & = \\sum_{k=1}^M \\frac{\\partial e_{r}}{\\partial o_k} \\frac{\\partial o_k}{\\partial s_k} \\frac{\\partial{G({^o\\matr{x}_k}, \\matr{z})}}{\\partial \\left[ \\boldsymbol{\\xi}_{oc}; \\matr{z} \\right]} \\label{eq:J_render}\n\\end{align}\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/rendering_values.png}\n\\caption{Demonstration of the depth rendering process. Only a few ray points contribute to the overall Jacobian term.\n\\vspace{-0.5cm}}\n\\label{fig:render_values}\n\\end{figure}\n\nwhere $\\{^o\\matr{x}_k\\}_{k=1}^M$ is the depth-ordered set of sampled points along the ray back-projected from pixel $\\matr{u}$. The third term $\\frac{\\partial{G( ^o\\matr{x}_k, \\matr{z})}}{\\partial \\left[ \\boldsymbol{\\xi}_{oc}; \\matr{z} \\right]}$ has exactly the same form as the surface term Jacobian in Eq.~\\ref{eq:J_surf}, thus it can be computed following Sec.~\\ref{sec:J_surf}. The second term $\\frac{\\partial o_k}{\\partial s_k}$ is either a constant value or $0$, as the SDF-occupancy conversion is linear inside the cut-off threshold and constant elsewhere. \n\n\\begin{equation} \\label{eq:conversion_grad}\n    \\frac{\\partial o_k}{\\partial s_k} = \n    \\begin{cases}\n    -\\frac{1}{2\\sigma} & |s_k| < \\sigma\\\\\n    0 & \\text{elsewhere}\n    \\end{cases}\n\\end{equation}\n\nTo compute the first term, we expand the residual term using Eq. 3 and Eq. 
4 in the main paper following~\\cite{multi-view_supervision}: \\vspace{-0.3cm}\n\n\\begin{equation*}\n    \\begin{aligned}\n    e_r & = d_\\matr{u} - \\bigg( \\sum_{i=1}^M d_i \\ o_i \\prod_{j=1}^{i-1} (1-o_j) + d_{M+1} \\prod_{j=1}^M (1 - o_j) \\bigg) \\\\\n    & = \\sum_{i=1}^M \\psi(i) \\ o_i \\prod_{j=1}^{i-1} (1-o_j) + \\psi(M+1) \\prod_{j=1}^M (1 - o_j) \\\\\n    \n    & = \\sum_{i=1}^{M+1} \\psi(i) \\prod_{j=1}^{i-1} (1-o_j) - \\sum_{i=1}^M \\psi(i) \\prod_{j=1}^i (1-o_j) \\\\\n    & = \\psi(1) + \\sum_{i=1}^M (\\psi(i+1) - \\psi(i)) \\prod_{j=1}^i (1-o_j)\n    \\end{aligned}\n\\end{equation*}\n\nwhere $\\psi(i) = d_{\\matr{u}} - d_i$ is the depth residual for each of the sampled points along the ray. Differentiating with respect to the occupancy $o_k$ of a sample point, we obtain: \\vspace{-0.2cm}\n\n\\begin{align} \n    \\frac{\\partial e_{r}}{\\partial o_k} \n    & = \\sum_{i=1}^M (\\psi(i+1) - \\psi(i)) \\frac{\\partial}{\\partial o_k}\\prod_{j=1}^i (1-o_j) \\\\\n    & = \\sum_{i=k}^M (\\psi(i) - \\psi(i+1)) \\prod_{j=1, j \\neq k}^i (1-o_j) \\\\\n    & = \\Delta d\\sum_{i=k}^M \\prod_{j=1, j \\neq k}^i (1-o_j) \\label{eq:dr_do}\n\\end{align}\n\nAs we are sampling uniformly along the ray, the coefficient $\\psi(i) - \\psi(i+1) = d_{i+1} - d_i$ reduces to $\\Delta d$. To understand this expression, we can multiply both sides of Eq.~\\ref{eq:dr_do} by $(1-o_k)$, which gives us:\n\n\\begin{equation}\n    \\frac{\\partial e_{r}}{\\partial o_k}(1 - o_k) = \\Delta d\\sum_{i=k}^M t_i\n\\end{equation}\n\nwhere $t_i = \\prod_{j=1}^i (1 - o_j)$ represents the accumulated transmittance at point $^o\\matr{x}_i$ along the ray. Before hitting the surface, $o_k$ is zero, so the term $(1-o_k)$ has no effect, and the Jacobian term becomes the sum of the transmittance of all the points starting from point $k$. After the ray enters the solid body, this Jacobian term becomes zero. 
This means only points before entering the solid body contribute to the rendered depth value.\n\nNow from Eq.~\\ref{eq:conversion_grad} and Eq.~\\ref{eq:dr_do}, we already know that many of the terms that make up the final Jacobian in Eq.~\\ref{eq:J_render} are zero. Therefore, we can further speed up the optimization by not evaluating the third term $\\frac{\\partial{G( ^o\\matr{x}_k, \\matr{z})}}{\\partial \\left[ \\boldsymbol{\\xi}_{oc}; \\matr{z} \\right]}$ at those points. Figure~\\ref{fig:render_values} is a demonstration of the rendering process and the sparse nature of the resulting Jacobians.\n\n\\subsection{Jacobian of Prior Terms}\n\nBased on the shape regularisation energy defined in Eq. 6 in the main paper, the residual of this energy term is simply the shape code vector itself:\n\n\\begin{equation} \\label{eq:E_code}\n    e_c = \\matr{z}\n\\end{equation}\n\nBased on Eq.~\\ref{eq:E_code}, we have\n\n\\begin{equation}\n    \\frac{\\partial e_c}{\\partial \\boldsymbol{\\xi}_{oc}} = \\matr{0}, \\quad \\frac{\\partial e_c}{\\partial \\matr{z}} = \\matr{I}\n\\end{equation}\n\nOptionally, we can also apply structural priors such as constraining the object orientation to be aligned with the ground plane, as in~\\cite{wang2020directshape}. We can define the rotation prior residual as\n\n\\begin{align}\n    e_{rot} & = 1 - \\matr{R}_{co}(0:2,1) \\cdot \\matr{n}_g \\\\\n    & = 1 - \\begin{bmatrix} 0 & 1 & 0 \\end{bmatrix}\\matr{R}_{oc}\\matr{n}_g\n\\end{align}\n\n\\begin{figure}[tpb]\n\\centering\n\\includegraphics[width=0.48\\linewidth]{figures\/redwood\/gt_together.png}\n\\includegraphics[width=0.48\\linewidth]{figures\/redwood\/no_gt_together.png}\n\\caption{Results with GT ($\\leftarrow$) vs. MaskRCNN ($\\rightarrow$) masks \\vspace{-0.6cm}}\n\\label{fig:gt_mask}\n\\end{figure}\n\n\nwhere $\\matr{R} \\in \\mathbf{SO}(3)$ denotes the rotation part of the transformation matrix, and $(0:2, 1)$ represents the operation of getting the second column of the matrix. 
$\\matr{n}_g$ is the normal direction to the ground plane in the camera coordinate frame. The Jacobian with respect to the pose can be easily obtained following Eq.~\\ref{eq:J_surf_pose_3}:\n\n\\begin{align}\n    \\frac{\\partial e_{rot}}{\\partial \\boldsymbol{\\xi}_{oc}} &= - \\begin{bmatrix} 0 & 1 & 0 \\end{bmatrix}\\frac{\\partial \\matr{R}_{oc}\\matr{n}_g}{\\partial \\boldsymbol{\\xi}_{oc}} \\\\\n    & = \\begin{bmatrix} 0 & 1 & 0 \\end{bmatrix} \\begin{bmatrix} \\matr{0} & (\\matr{R}_{oc}\\matr{n}_g)_{\\times} & \\matr{0} \\end{bmatrix} \\\\\n    & = \\begin{bmatrix} 0 & 0 & 0 & (\\matr{R}_{oc}\\matr{n}_g)_{\\times}(1,0:2) & 0 \\end{bmatrix}\n\\end{align}\n\nwhere $(1, 0:2)$ represents the operation of getting the second row of the matrix. As this residual has no effect on object shape, we have:\n\n\\begin{equation}\n    \\frac{\\partial e_{rot}}{\\partial \\matr{z}} = \\matr{0}\n\\end{equation}\n\n\\section{Experiment Details and Run-time Analysis}\n\nWe evaluate the run-time performance of our full SLAM system (stereo+LiDAR) on a Linux system with an Intel Core i9 9900K CPU at 3.60GHz and an NVIDIA GeForce RTX 2080 GPU with 8GB of memory. The 2D detection boxes and masks are obtained using MaskRCNN~\\cite{maskrcnn}\\footnote{\\url{https:\/\/github.com\/facebookresearch\/detectron2}} throughout all the experiments. Initial object poses are inferred using the LiDAR-based 3D bounding-box detector SECOND~\\cite{second}.\\footnote{\\url{https:\/\/github.com\/traveller59\/second.pytorch}} Our object reconstruction pipeline is implemented in Python with PyTorch. The back-end object-SLAM framework is implemented in C++ as an extension of the original ORB-SLAM2 implementation. The Python part is embedded into C++ using pybind11. \n\nTable~\\ref{tab:timing} lists the breakdown of the run-time performance of the major components. Note that all these components run only at key-frames. \nFor KITTI sequences, we adopted two design choices to achieve real-time performance. 
Firstly, we noticed that shape estimation did not improve much over time on KITTI, so we perform full optimization only for new objects and only update poses for existing objects (as stated in the main paper). Secondly, we abort new object optimization whenever necessary to ensure a new key-frame is inserted in time. These choices allow our system to operate at $10$Hz on KITTI. In sequences such as Freiburg and Redwood-OS, it is beneficial to update the shape more frequently, with fewer GN iterations to guarantee real-time performance. Object meshes can optionally be extracted using marching cubes on an extra GPU, for visualization only.\n\n\\begin{table}[tbp]\n\\centering\n\\begin{tabular}{l|c}\n\\textbf{Component} & \\textbf{Time (ms)} \\\\ \\hline\nMask R-CNN Detector & 70 \/ frame \\\\\n3D LiDAR Detector & 60 \/ frame \\\\\nPose-shape Optimization & 20$\\times$10 \/ new object \\\\\nPose-only Optimization & 4$\\times$5 \/ vis. object \\\\ \\hline\n\\end{tabular}\n\\caption{Run-time analysis of system components \\vspace{-0.4cm}}\n\\label{tab:timing}\n\\end{table}\n\n\n\\section{Ablation with GT masks}\nWe have shown promising reconstruction results on cars on both the KITTI and Freiburg datasets. Chairs have thin structures and a complex topology and are more challenging to reconstruct than cars. We noticed that the thin legs of the last chair in Fig.~10 in the main paper were not properly reconstructed. This is because our reconstruction makes use of Mask-RCNN segmentation masks, which are noisy and affected by shadows. This can result in background points being associated with the chair, leading to incorrect results. We conducted an ablation study using ground-truth masks as input. Fig.~\\ref{fig:gt_mask} illustrates the upper-bound quality that could be achieved with clean GT masks.\n\n\\section{More Qualitative Results}\n\nWe show more qualitative reconstruction results in Fig.~\\ref{fig:recon_results}. 
For each scene the RGB image is shown on the top and the reconstruction results are shown underneath. We also show an example of reconstructed map in Fig.~\\ref{fig:map_result}. \n\n\n\\begin{figure*}[tpb]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/00-000117.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/00-001190.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-00-000117.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-00-001190.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/00-001848.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/00-003264.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-00-001848.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-00-003264.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/00-003516.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/07-000195.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-00-003516.png}\n\\includegraphics[width=0.48\\textwidth]{figures\/supp\/recon-07-000195.png}\n\\caption{More qualitative results on object reconstruction from a single view-point \\vspace{-1cm}}\n\\label{fig:recon_results}\n\\end{figure*}\n\n\\begin{figure*}[tpb]\n\\centering\n\\includegraphics[width=0.90\\textwidth]{figures\/supp\/kitti-07-map.png}\n\\caption{Reconstructed map of KITTI-07. Note the dense reconstruction of the objects and the shape variations. \\vspace{-1cm}}\n\\label{fig:map_result}\n\\end{figure*}\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n\\section{Introduction}\nSimultaneous Localization and Mapping (SLAM) is the process of estimating the trajectory of a moving camera while reconstructing its surrounding environment. From a purely geometric perspective, SLAM is often regarded as a well-understood or even solved problem. 
Many state-of-the-art dense SLAM algorithms can achieve accurate trajectory estimation and create high-quality geometric reconstructions that can be used in obstacle avoidance or path planning for mobile robots. However, when it comes to more complex tasks that require scene understanding, geometry-only scene representations fall short of providing key semantic information. \nTaking advantage of recent deep learning breakthroughs in semantic segmentation and object detection algorithms \\cite{maskrcnn, faster-rcnn, yolo}, semantic SLAM systems augment geometric low-level map primitives by fusing semantic labels into the 3D reconstruction~\\cite{semantic-fusion, semantic-stereo-fusion,Chen2019SuMa++}. However, the resulting scene maps merely consist of a set of labelled 3D points. Reasoning about the scene at the level of objects, to infer meaningful information such as the number of objects of each category, their size, shape or relative pose, remains a challenging task. Better use of the semantic information is required, in the form of an object-centric map representation that allows detailed shape estimation and meaningful instantiation of scene objects. \n\n\nOur proposed approach forms part of a more recent family of \\emph{object-aware} SLAM methods that reconstruct object-centric maps, grouping all the low-level geometric primitives (voxels, points, etc.) that make up the same object into a single instance. Front-end camera tracking and back-end optimization are both performed at the level of object instances. 
While the first object-level methods, such as SLAM++~\\cite{SLAM++}, mapped previously known object instances, more recent systems have taken advantage of instance-level semantic segmentation masks \\cite{maskrcnn} to achieve object-level reconstruction for unknown objects \\cite{fusion++}, even in the presence of dynamic objects \\cite{maskfusion, midfusion}.\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=0.235\\textwidth]{figures\/08-2758.png}\n\\includegraphics[width=0.235\\textwidth]{figures\/recon-08-2758.png}\n\\caption{Qualitative shape and pose results on a stereo+LiDAR KITTI sequence. A very sparse set of LiDAR points was used to reconstruct each car. LiDAR points on the road are shown only for illustration.\\vspace{-0.5cm}}\n\\label{fig:single-view}\n\\end{figure}\n\nHowever, these early object-level SLAM systems exhibit major drawbacks: they either require a pre-known database of object instances \\cite{SLAM++}, or they reconstruct objects from scratch without exploiting shape priors \\cite{maskfusion, midfusion, fusion++}, which results in partial or incomplete object reconstructions. We improve on this by exploiting the regularity of shapes within an object category in the form of learned shape priors, defined as a latent code $\\matr{z}$ and a generative model $G(\\matr{z})$ that decodes it into its full geometry. This brings several advantages: object shapes decoded from latent codes are guaranteed to be detailed and complete regardless of partial observations or changes in view-point; they provide a compact representation; and they can be optimized using the gradients obtained through back-propagation. \n\nUsing ORB-SLAM2~\\cite{orbslam2} as a sparse camera tracking and mapping backbone, DSP-SLAM takes the reconstructed 3D point-cloud as input and fits a latent code to each detected object instance, using a combination of 3D surface consistency and rendered depth losses. 
Foreground objects, background features and camera poses are further refined via bundle adjustment using a joint factor graph. We show DSP-SLAM operating in $3$ different modes: monocular, stereo, and stereo+LiDAR. The monocular and stereo systems use the respective ORB-SLAM2 modalities as the SLAM backbone and the reconstructed 3D point-clouds to reconstruct the detected objects. The stereo+LiDAR system uses stereo ORB-SLAM2 as the SLAM backbone but, in addition, incorporates a sparse set of LiDAR measurements (as few as 50 per object) for object reconstruction and pose-only optimization. \n\n\\noindent\\textbf{Contributions:} While DSP-SLAM is not the first approach to leverage shape priors for 3D reconstruction~\\cite{frodo,node-slam} from image sequences, it innovates in various ways. \nFirstly, unlike~\\cite{frodo,node-slam}, our map does not only represent objects, but also reconstructs the background as sparse feature points, optimizing them together in a joint factor graph, marrying the best properties of feature-based~\\cite{orbslam2} (highly accurate camera tracking) and object-aware SLAM (a high-level semantic map). Secondly, although Node-SLAM \\cite{node-slam} also incorporates shape priors within a real-time SLAM system, it uses dense depth images for shape optimization, while DSP-SLAM can operate with RGB-only monocular streams and requires as few as $50$ 3D points per object to obtain accurate shape estimates. Finally, although both FroDO~\\cite{frodo} and DSP-SLAM can operate in a monocular RGB setting, FroDO is a slow batch approach that requires all frames to be acquired in advance and associated with their camera poses, while \nDSP-SLAM is an online, sequential method that can operate at $10$ frames per second. \n\nIn terms of object shape and pose estimation, we improve quantitatively and qualitatively over auto-labelling~\\cite{Zakharov2020Autolabeling3D}, a state-of-the-art prior-based object reconstruction method. 
Experiments on the KITTI odometry \\cite{kitti_odom} dataset show that, with stereo+LiDAR input, our joint bundle adjustment offers improvements in trajectory estimation over the feature-only stereo system ORB-SLAM2~\\cite{orbslam2}, used as our backbone. Moreover, DSP-SLAM offers comparable tracking performance to state-of-the-art stereo~\\cite{Wang2017StereoDSO}, LiDAR-only~\\cite{Chen2019SuMa++} and dynamic~\\cite{Bescos2018DynaSLAM} SLAM systems, while providing rich dense object reconstructions. DSP-SLAM also achieves promising qualitative reconstruction results with monocular input on the Freiburg Cars \\cite{freiburg-cars} and Redwood-OS \\cite{choi2016redwood} datasets.\n\n\n\n\n\n\\section{\\label{system-overview}System Overview}\n\nDSP-SLAM is a sequential localisation and mapping method that reconstructs the complete detailed shape of detected objects while representing the background coarsely as a sparse set of feature points. Each object is represented as a compact and optimizable code vector $\\matr{z}$. We employ DeepSDF \\cite{deepsdf} as the shape embedding, which takes as input a shape code $\\matr{z} \\in \\mathbb{R}^{64}$ and a 3D query location $\\matr{x} \\in \\mathbb{R}^{3}$, and outputs the signed distance function (SDF) value $s = G(\\matr{x}, \\matr{z})$ at the given point. An overview of DSP-SLAM is shown in Fig.~\\ref{fig:system}. DSP-SLAM runs at almost real time ($10$ frames per second) and can operate on different modalities: monocular, stereo, or stereo with LiDAR, depending on the available input data. 
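To make the decoder interface concrete, the toy sketch below mimics the query signature $s = G(\\matr{x}, \\matr{z})$ with an analytic sphere whose radius is modulated by the first code entry. This is only an illustrative stand-in: the real $G$ is the learned DeepSDF network, and the helper name and radius parameterisation are our assumptions.

```python
import numpy as np

# Toy stand-in for the DeepSDF decoder G(x, z). The real decoder is a
# learned MLP; this sphere SDF only mimics the query interface: a 3D
# point x and a 64-D code z in, a signed distance s out (negative
# inside the surface, zero on it, positive outside).
def G(x, z, base_radius=1.0):
    x = np.asarray(x, dtype=float)
    return float(np.linalg.norm(x) - (base_radius + z[0]))

z = np.zeros(64)                 # the zero code plays the role of the mean shape
s_out = G([2.0, 0.0, 0.0], z)    # outside the unit sphere: s > 0
s_in = G([0.0, 0.0, 0.0], z)     # at the centre: s < 0
```

The sign convention (negative inside, positive outside) is the one the occupancy conversion in the reconstruction pipeline relies on.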
\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/000140.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/000168.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/000428.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/000655.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/140r.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/168r.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/428r.png} \\hspace{-0.6em}\n\\includegraphics[width=0.115\\textwidth]{figures\/single_view_shape\/655r.png} \\hspace{-0.6em}\n\\caption{Shape reconstruction: qualitative results. \\vspace{-0.5cm}}\n\\label{fig:single-view-shape}\n\\end{figure}\n\n\\noindent\\textbf{Sparse SLAM backbone:} ORB-SLAM2~\\cite{orbslam2}, a feature-based SLAM framework that can operate on monocular or stereo sequences, is used as the tracking and mapping backbone. While the tracking thread estimates the camera pose at frame rate from correspondences, the mapping thread builds a sparse map by reconstructing 3D landmarks.\n\n\\noindent\\textbf{Detections:} We perform object detection at each key-frame to jointly infer 2D bounding boxes and segmentation masks. In addition, an initial estimate of the object pose is obtained via 3D bounding box detection~\\cite{smoke,second}.\n\n\\noindent\\textbf{Data association:} New detections will either be associated with existing map objects, or instantiated as a \\textit{new} object via object-level data association. \nEach detected object instance $I$ consists of a 2D bounding box ${\\mathcal{B}}$, a 2D mask ${\\mathcal{M}}$, the depth observation of the sparse 3D point cloud ${\\mathcal{D}}$, and the initial object pose $\\matr{T}_{co, 0}$. 
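As a minimal sketch of this association step, the snippet below matches a detection to the nearest map object by 3D centre distance with a gating threshold, and reports a new object otherwise. The function name, the plain centre-distance criterion, and the threshold value are illustrative assumptions on our part; the actual per-modality criteria are described in Sec.~\\ref{data-association}.

```python
import numpy as np

# Hypothetical object-level data association: match a detection to the
# nearest existing map object by 3D centre distance, gated by a maximum
# distance; otherwise report it as a new object (returns None).
def associate(detection_centre, object_centres, max_dist=2.0):
    if len(object_centres) == 0:
        return None                      # empty map: always a new object
    dists = np.linalg.norm(np.asarray(object_centres, dtype=float)
                           - detection_centre, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None
```

A detection inside the gate would trigger pose-only optimization of the matched object, while a detection outside the gate would be instantiated as a new object and fully reconstructed.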
\n\n\\noindent\\textbf{Prior-based object reconstruction:} Newly instantiated objects will be reconstructed following the object reconstruction pipeline described in Sec.~\\ref{object-recon}. DSP-SLAM takes the set of sparse 3D point observations ${\\mathcal{D}}$, which can come from reconstructed SLAM points (in monocular and stereo modes) or LiDAR input (in stereo+LiDAR mode) and optimises the shape code and object pose to minimise surface consistency and depth rendering losses. Objects already present in the map will only have their 6-dof pose updated via pose-only optimization. \n\n\n\\noindent\\textbf{Joint map optimisation:} A joint factor graph of point features (from SLAM), objects and camera poses is optimised via bundle adjustment to maintain a consistent map and incorporate loop closure. New objects are added as nodes to the joint factor graph and their relative pose estimates $\\matr{T}_{co}$ as camera-object edges. Object-level data association and joint bundle adjustment will be discussed in Sec.~\\ref{object-slam}.\n\n\n\n\n\n\n\\section{\\label{object-recon}Object Reconstruction with Shape Priors}\n\nWe aim to estimate the full dense shape $\\matr{z}$ and 7-DoF pose $\\matr{T}_{co}$, represented as a homogeneous transformation matrix $\\matr{T}_{co} = [s\\matr{R}_{co}, \\matr{t}_{co}; \\matr{0}, 1] \\in \\matr{Sim}(3)$, for an object with detections $I = \\{{\\mathcal{B}}, {\\mathcal{M}}, {\\mathcal{D}}, \\matr{T}_{co, 0}\\}$. We formulate this as a joint optimization problem, which iteratively refines the shape code and object pose from an initial estimate. We propose two energy terms $E_{surf}$ and $E_{rend}$ and formulate a Gauss-Newton solver with analytical Jacobians.\n\n\\subsection{Surface Consistency Term}\nThis term measures the alignment between observed 3D points and the reconstructed object surface. 
\n\\begin{equation} \\label{E_surf}\n    E_{surf} = \\frac{1}{\\left| \\mathbf{\\Omega}_s \\right|}\\sum_{\\matr{u} \\in \\mathbf{\\Omega}_s} G^2(\\matr{T}_{oc} \\pi^{-1}(\\matr{u}, {\\mathcal{D}}), \\matr{z})\n\\end{equation}\nwhere $\\mathbf{\\Omega}_s$ denotes the pixel coordinates of the set of sparse 3D points ${\\mathcal{D}}$, which can come from reconstructed SLAM points (in monocular and stereo modes) or LiDAR input (in stereo+LiDAR mode). Ideally, the back-projected point at pixel $\\matr{u}$ should perfectly align with the object surface, resulting in a zero SDF value and thus a zero error residual. In practice, we observed that the surface consistency term alone is not sufficient for correct shape and pose estimation in the case of partial observations. Fig.~\\ref{fig:shape-ablation} illustrates a case where only points on the back and the right side of a car are observed (shown in green). Using the surface consistency term alone leads to an incorrect shape estimate -- much larger than the car's actual size. To address this issue, we propose a rendering loss, which provides point-to-point depth supervision and enforces silhouette consistency to penalize shapes that grow outside of the segmentation mask. \n\n\\begin{figure}[tbp]\n    \\centering\n    \\includegraphics[width=0.48\\textwidth]{figures\/shape_ablation_success.png}\n    \\caption{Illustration of the effectiveness of the rendering term in the presence of partial observations. \\textbf{Left:} Detected object and partial surface point observations (green). \\textbf{Middle:} Optimisation result with $E_{surf}$ only. The loss is minimised but the shape grows larger than its actual size. \\textbf{Right:} Optimisation result with the rendering term. Enforcing the silhouette constraint results in the correct scale. 
\\vspace{-0.6cm}}\n \\label{fig:shape-ablation}\n\\end{figure}\n\n\\subsection{Differentiable SDF Renderer}\n\nFollowing \\cite{multi-view_supervision, node-slam}, we build our SDF renderer via differentiable ray-tracing. For each pixel $\\matr{u}$, we back-project a ray $^c\\matr{x} = \\matr{o} + d \\matr{K}^{-1} \\dot{\\matr{u}}$ parameterized by the depth value $d$ under camera coordinate space, with $\\matr{o}$ being the camera optical centre and $\\matr{K}$ being camera intrinsic matrix. We sample $M$ discrete depth values $\\{d_i\\}$ along each ray within the range $[d_{min}, d_{max}]$, with $d_i = d_{min} + (i-1) \\Delta d$, and $\\Delta d = (d_{max} - d_{min}) \/ (M-1)$. The bounds of the depth range are determined by the current estimation of object translation and scale, and are re-computed at each iteration.\n\n\\noindent\\textbf{Occupancy Probabilities} \nThe SDF value $s_i$ at each sampled point can be obtained by transforming sampled points to the object coordinate frame and passing through the DeepSDF decoder. The SDF value encodes the probability that a given point is occupied by the object or belongs to free space. We apply a piecewise linear function to the predicted SDF values to indicate the occupancy probability $o_i$, defined in Eq.~\\ref{sdf2occ}, where $\\sigma$ represents the cut-off threshold which controls the smoothness of the transition. We fix $\\sigma = 0.01$ throughout our experiments. \\vspace{-0.4cm}\n\n\\begin{equation} \\label{sdf2occ}\n s_i = G(\\matr{T}_{oc} {^c\\matr{x}},\\, \\matr{z}) \\quad \\textrm{and} \\quad \n o_i = \n \\begin{cases}\n 1 & s_i < -\\sigma\\\\\n -\\frac{s_i}{2 \\sigma} & \\left|s_i\\right| \\leq \\sigma \\\\ \n 0 & s_i > \\sigma\n \\end{cases}\n\\end{equation}\n\n\n\\noindent\\textbf{Event Probabilities} When tracing points along the ray, the ray either terminates or escapes without hitting other points. 
These $M+1$ event probabilities can be defined as: \n\\begin{eqnarray} \\label{term_prob}\n    & \\phi_i = o_i \\prod_{j=1}^{i-1} (1- o_j), \\quad i = 1, \\dots, M \\nonumber \\\\\n    & \\phi_{M + 1} = \\prod_{j=1}^{M} (1- o_j)\n\\end{eqnarray}\n\n\\noindent\\textbf{Rendered Depth and Rendering Term} With the probabilities defined above, the rendered depth value at each pixel $\\matr{u}$ can be computed as the expected depth value of the terminating point as in Eq.~\\ref{render_depth}. To make it consistent, we set $d_{M+1}$, the depth value associated with the escape probability, to a constant value $1.1 d_{max}$, as in \\cite{node-slam}.\n\\begin{equation} \\label{render_depth}\n    \\hat{d}_{\\matr{u}} = \\sum_{i=1}^{M+1} \\phi_i d_i\n\\end{equation}\nSince the rendering is fully differentiable, it can be integrated into our optimization. Unlike~\\cite{node-slam, multi-view_supervision}, we perform ray-tracing in continuous space and do not need to discretize the object model. The final rendering term is as follows:\n\\begin{equation}\n    E_{rend} = \\frac{1}{\\left| \\mathbf{\\Omega}_r \\right|} \\sum_{\\matr{u} \\in \\mathbf{\\Omega}_r} (d_\\matr{u} - \\hat{d}_{\\matr{u}})^2\n\\end{equation}\nwhere $\\mathbf{\\Omega}_r = \\mathbf{\\Omega}_s \\cup \\mathbf{\\Omega}_b$ is the union of surface pixels and pixels not on the object surface but inside the 2D bounding box ${\\mathcal{B}}$. Surface pixels $\\mathbf{\\Omega}_s$ are the same set of pixels used in Eq.~\\ref{E_surf}, obtained by projecting the 3D reconstructed SLAM points onto the image masks as discussed in Sec.~\\ref{system-overview}. The pixels in $\\mathbf{\\Omega}_b$ are assigned the same depth value as $d_{M+1} = 1.1 d_{max}$ and provide important silhouette supervision for our optimization since they penalize renderings that lie outside the object boundary, forcing empty space. 
As the pixels in $\\mathbf{\\Omega}_b$ do not require a depth measurement, we perform uniform sampling inside the 2D bounding box and filter out those inside the segmentation mask.\n\n\\subsection{Optimization details}\n\nOur final energy is the weighted sum of the surface and rendering terms and a shape code regularization term:\n\\begin{equation}\n    E = \\lambda_s E_{surf} + \\lambda_r E_{rend} + \\lambda_c \\norm{\\matr{z}}^2\n\\end{equation}\nThe hyperparameter values used for optimization, $\\lambda_s = 100$, $\\lambda_r = 2.5$ and $\\lambda_c = 0.25$, were tuned such that the Hessian matrices of the energy terms are of the same order of magnitude. Since all terms are quadratic, we adopt a Gauss-Newton optimisation approach with analytical Jacobians (please refer to the supplementary material for details), initialized from a zero shape code $\\matr{z} = \\matr{0}$. The initialisation for the object pose \n$\\matr{T}_{co, 0}$ is given by a LiDAR 3D detector~\\cite{second} when LiDAR is available. In the monocular\/stereo case, it is given by an image-based 3D detector~\\cite{smoke} or by performing PCA on the sparse object point cloud. \n\n\n\n\\section{\\label{object-slam}Object SLAM}\n\nAs an object-based SLAM system, DSP-SLAM builds a joint factor graph of camera poses, 3D feature points and object locations and poses. As Fig.~\\ref{fig:system} shows, the factor graph introduces object nodes and camera-object edges. \n\n\n\\subsection{\\label{data-association}Object Data Association}\n\nData association between new detections and reconstructed objects is an important step in object-level SLAM. We aim to associate each detection $I$ to its \\textit{nearest} object $o$ in the map, adopting different strategies depending on the input modality. When LiDAR input is available, we compare the distance between the detected 3D bounding box and the reconstructed object. 
When only stereo or monocular images are used as input, we count the number of matched feature points between the detection and object. If multiple detections are associated with the same object, we keep the \\textit{nearest} one and reject others. Detections not associated with any existing objects are initialised as new objects and their shape and pose optimised following Sec.~\\ref{object-recon}. For stereo and monocular input modes, reconstruction only happens when enough surface points are observed. For detections associated with existing objects, only the pose is optimised by running pose-only optimization and a new camera-object edge added to the joint factor-graph.\n\n\\begin{table*}\n\\renewcommand{\\arraystretch}{1.0}\n\\centering\n\\begin{tabular}{c|c c c c|c c c c}\n\\hline\n\\multirow{2}{*}{Diff.} & \\multicolumn{4}{c|}{Auto-labelling\\cite{Zakharov2020Autolabeling3D}} & \\multicolumn{4}{c}{Ours} \\\\\n\\cline{2-9}\n & BEV@0.5 & 3D@0.5 & NS@0.5 & NS@1.0 & BEV@0.5 & 3D@0.5 & NS@0.5 & NS@1.0 \\\\\n\\hline\nE & 80.70 & \\textbf{63.96} & 86.52 & 94.31 & \\textbf{83.31} & 62.58 & \\textbf{88.01} & \\textbf{96.86} \\\\ \nM & 63.36 & 44.79 & 64.44 & 85.24 & \\textbf{75.28} & \\textbf{47.76} & \\textbf{76.15} & \\textbf{89.97} \\\\\n\\hline\n\\end{tabular}\n\\caption{Quantitative comparison of object cuboid prediction quality with Auto-labelling on KITTI3D on Easy and Moderate samples. Results of Auto-labelling are taken from their paper. Best results are shown as bold numbers.\n\\vspace{-0.1cm}}\n\\label{tab:object_detection}\n\\end{table*}\n\n\n\\begin{figure*}[tbp]\n\\centering\n\\includegraphics[width=1.00\\textwidth]{figures\/labelling_ours\/ours-labelling-cropped.png}\n\\caption{A qualitative comparison of shape reconstruction and pose estimation against Auto-labelling~\\cite{Zakharov2020Autolabeling3D}. \\textbf{Left:} input RGB image. 
\\textbf{Middle:} result with DSP-SLAM. \\textbf{Right:} result with auto-labelling~\\cite{Zakharov2020Autolabeling3D}.\\vspace{-0.5cm}}\n\\label{fig:comparison-auto-labelling}\n\\end{figure*}\n\n\\subsection{\\label{joint_ba}Joint Bundle Adjustment}\n\nOur joint map consists of a set of camera poses $C = \\{\\matr{T}_{wc_i}\\}_{i=1}^{M}$, object poses $O = \\{\\matr{T}_{wo_j}\\}_{j=1}^{N}$ and map points $P = \\{^w\\matr{p}_k\\}_{k=1}^{K}$. Our joint BA can be formulated as a non-linear least squares optimization problem:\n\\begin{eqnarray}\n C^*, O^*, P^* &{} = {}& \\mathop{\\arg\\min}_{\\{C, O, P\\}} \\sum_{i, j} \\norm{\\matr{e}_{co}(\\matr{T}_{wc_i}, \\matr{T}_{wo_j})}_{\\Sigma_{i,j}} \\nonumber \\\\\n && {+}\\:\\sum_{i, k} \\norm{\\matr{e}_{cp}(\\matr{T}_{wc_i}, ^w\\matr{p}_k)}_{\\Sigma_{i,k}}\n\\label{eq:ba}\n\\end{eqnarray}\nwhere $\\matr{e}_{co}$ and $\\matr{e}_{cp}$ represent the residuals for camera-object and camera-point measurements and $\\Sigma$ is the covariance matrix of the measurement residuals. Objects act as additional landmarks, which results in improvements in tracking performance, as shown in our evaluation on KITTI. The optimization is solved with Levenberg-Marquardt in g2o \\cite{g2o}.\n\n\\noindent\\textbf{Camera-Object Measurements:} Object-camera poses $\\matr{T}_{co}$ are evaluated by minimising the surface alignment term in Eq. \\ref{E_surf} while keeping the shape code and scale fixed. New pose observations serve as edges between camera pose $\\matr{T}_{wc}$ and object pose $\\matr{T}_{wo}$, and the residual is defined as\n $ \\matr{e}_{co} = \\log (\\matr{T}^{-1}_{co} \\cdot \\matr{T}^{-1}_{wc} \\cdot \\matr{T}_{wo})$,\nwhere $\\log$ is the logarithm mapping from $\\matr{SE}(3)$ to $\\mathfrak{se}(3)$.
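The camera-object residual above can be reproduced with the standard closed-form $\mathfrak{se}(3)$ logarithm. The sketch below is illustrative only (plain NumPy, not the g2o edge used in the system), and `so3_log` is not numerically robust for rotations very close to $180^{\circ}$.

```python
import numpy as np

def so3_log(R):
    """Axis-angle vector of a rotation matrix (not robust near theta = pi)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-8:
        return np.zeros(3)
    W = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def se3_log(T):
    """6-vector [omega, v] such that exp of it recovers T (standard formula)."""
    R, t = T[:3, :3], T[:3, 3]
    w = so3_log(R)
    theta = np.linalg.norm(w)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    if theta < 1e-8:
        V_inv = np.eye(3) - 0.5 * W
    else:
        coef = 1.0 / theta**2 - (1.0 + np.cos(theta)) / (2.0 * theta * np.sin(theta))
        V_inv = np.eye(3) - 0.5 * W + coef * (W @ W)
    return np.concatenate([w, V_inv @ t])

def camera_object_residual(T_co, T_wc, T_wo):
    """e_co = log(T_co^{-1} * T_wc^{-1} * T_wo); zero for consistent poses."""
    return se3_log(np.linalg.inv(T_co) @ np.linalg.inv(T_wc) @ T_wo)
```

When the map object pose agrees with the measured object-camera pose, i.e. $\matr{T}_{wo} = \matr{T}_{wc}\matr{T}_{co}$, the residual is the logarithm of the identity and vanishes, which is the consistency condition the joint BA drives towards.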
Poses in the factor graph are 6-DoF, as the object scale is only optimized when an object is first detected.\n\n\\noindent\\textbf{Camera-Point Measurements:} We use the standard formulation of the re-projection error used in ORB-SLAM2~\\cite{orbslam2}:\n$ \\matr{e}_{cp} = \\pi(\\matr{T}_{wc}^{-1} {^w\\matr{p}}) - \\matr{\\Tilde{u}}$, \nwhere $\\matr{\\Tilde{u}}$ is the measured pixel coordinate of map point $\\matr{p}$. We follow a strategy similar to ORB-SLAM2 to tune the covariances $\\Sigma_{i,j}$ and $\\Sigma_{i,k}$ such that the two energy terms contribute roughly the same to the overall optimization. \n\\section{Related work} \n\n\\noindent\\textbf{Object-aware SLAM:}\nSLAM++ \\cite{SLAM++} pioneered object-aware RGB-D SLAM, representing the scene at the level of objects using a joint pose-graph for camera and object poses. A database of pre-scanned objects was created in advance, and object instances were detected and mapped using a pre-trained 3D detector, ICP losses and pose-graph optimization. In later work, Tateno \\emph{et al.}~\\cite{Tateno2016When2I} aligned object instances from a pre-trained database to volumetric maps, while Stuckler \\emph{et al.}~\\cite{stuckler2012model} performed online exploration, learning object models on the fly and tracking them in real time. An important drawback of instance-based approaches is their inability to scale to a large number of objects and their need for object models to be known in advance.
More recent object-aware RGB-D SLAM systems have dropped the requirement for known models and instead take advantage of state-of-the-art 2D instance-level semantic segmentation masks~\\cite{maskrcnn} to obtain object-level scene graphs~\\cite{semantic-fusion} and per-object reconstructions via depth fusion, even in the case of dynamic scenes~\\cite{maskfusion,midfusion}.\n\nExtensions of object-aware SLAM to the case of monocular video input deal with the additional challenge of relying on RGB-only information~\\cite{mono-object-slam, mono-object-sparse-slam, quadricslam, cubeslam, category-specific-models}, which results in the use of simplistic shape representations. In QuadricSLAM \\cite{quadricslam} objects are represented as ellipsoids and fit to monocular observations, while in CubeSLAM \\cite{cubeslam} cuboid proposals generated from single-view detections are optimized in a joint bundle adjustment optimization. \n\nWhile the above SLAM systems represent an important step forward towards equipping robots with the capability of building semantically meaningful object-oriented maps, they fall short of exploiting semantic priors for object reconstruction. In this paper, we take this direction, using a category-specific learnt shape prior and embedding it within an object-aware SLAM system. \n\n\\noindent\\textbf{3D Reconstruction with Shape Priors:}\nThe use of learnt compact shape embeddings as priors for 3D reconstruction has a long tradition in computer vision.
Examples range from 3D morphable models for the reconstruction of faces or bodies~\\cite{blanz1999morphable,loper2015smpl} to PCA models representing category-specific object shape priors~\\cite{directshape}.\nOther examples of the use of embedding spaces for single- or multi-view shape reconstruction include GPLVMs \\cite{dame2013dense, prisacariu2011shared, prisacariu2012simultaneous} or neural representations~\\cite{hu2018dense-embedding, zhu2018object} such as a variational autoencoder~\\cite{node-slam}, AtlasNet~\\cite{AtlasNet,2019-lin} or DeepSDF~\\cite{deepsdf,frodo,2019-xu,Zakharov2020Autolabeling3D}. DeepSDF \\cite{deepsdf} provides a powerful implicit learnt shape model that encapsulates the variations in shape across an object category, in the form of an auto-decoder network that regresses the signed distance function (SDF) values of a given object, and has been used as a shape prior for single-view~\\cite{2019-xu} and multi-view~\\cite{frodo} reconstruction. Similarly to~\\cite{Zakharov2020Autolabeling3D}, DSP-SLAM adopts DeepSDF as the shape prior and takes sparse LiDAR and images as input; however, \\cite{Zakharov2020Autolabeling3D} operates on single frames and is not a SLAM method. DOPS~\\cite{Najibi2020DOPSLT} is a single-pass 3D object detection architecture for LiDAR that estimates both 3D bounding boxes and shape. \n\nOur approach is most closely related to those that build consistent multi-object maps over an entire sequence, such as FroDO~\\cite{frodo} and Node-SLAM~\\cite{node-slam}. Unlike FroDO~\\cite{frodo}, ours is a sequential SLAM system and not a batch method. Unlike Node-SLAM \\cite{node-slam}, in our system low-level point features and high-level objects are jointly optimized to bring the best of both worlds: accurate tracking and rich semantic shape information.
DeepSLAM++~\\cite{hu2019deep-slam++} leverages shape priors in a SLAM pipeline by selecting 3D shapes predicted by Pix3D~\\cite{Sun_2018_CVPR}, but forward shape generation is often unstable and leads to poor results on real data. \n\\section{Experimental Results}\n\n\\begin{table*}\n\\centering\n\\setlength{\\arrayrulewidth}{0.5mm}\n\\resizebox{1.00\\textwidth}{!}{\n\\begin{tabular}{c c c c c c c c c c c c c}\n\\hline\n\\multirow{3}{*}{Approach} & \\multicolumn{11}{c}{Sequence} & \\multirow{3}{*}{Average} \\\\\n& 00* & 01 & 02* & 03 & 04 & 05* & 06* & 07* & 08* & 09* & 10 & \\\\\n& rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & rpe\/rre & \\\\\n\\hline\nSuMa++ \\cite{Chen2019SuMa++} & \\textbf{0.64}\/\\textbf{0.22} & 1.60\/0.46 & 1.00\/0.37 & 0.67\/0.46 & \\textbf{0.37}\/0.26 & 0.40\/0.20 & 0.46\/0.21 & \\textbf{0.34}\/\\textbf{0.19} & 1.10\/0.35 & \\textbf{0.47}\/\\textbf{0.23} & \\textbf{0.66}\/0.28 & \\textbf{0.70}\/0.29 \\\\\nOurs St+LiDAR (250pts) & 0.75\/\\textbf{0.22} & \\textbf{1.49}\/0.20 & \\textbf{0.79}\/\\textbf{0.23} & \\textbf{0.60}\/\\textbf{0.18} & 0.47\/0.11 & \\textbf{0.32}\/\\textbf{0.15} & 0.39\/0.21 & 0.52\/0.28 & \\textbf{0.94}\/\\textbf{0.27} & 0.79\/0.28 & 0.69\/\\textbf{0.26} & \\textbf{0.70}\/\\textbf{0.22} \\\\\nOurs St+LiDAR (50pts) & 0.80\/0.24 & 1.50\/\\textbf{0.15} & 0.84\/0.26 & 0.61\/0.18 & 0.44\/\\textbf{0.10} & 0.32\/0.16 & \\textbf{0.35}\/\\textbf{0.15} & 0.57\/0.24 & 1.03\/0.30 & 0.78\/0.27 & 0.67\/0.30 & 0.72\/\\textbf{0.22} \\\\\n\\hline\nORB-SLAM2 \\cite{orbslam2} & 0.70\/0.25 & \\textbf{1.38}\/0.20 & 0.76\/0.23 & 0.71\/0.17 & 0.45\/0.18 & \\textbf{0.40}\/\\textbf{0.16} & 0.51\/0.15 & \\textbf{0.50}\/\\textbf{0.28} & 1.07\/0.31 & \\textbf{0.82}\/0.25 & 0.58\/0.28 & \\textbf{0.72}\/0.22 \\\\\nSt DSO \\cite{Wang2017StereoDSO} & 0.84\/0.26 & 1.43\/\\textbf{0.09} & 0.78\/\\textbf{0.21} & 0.92\/\\textbf{0.16} & 0.65\/0.15 & 0.68\/0.19 & 0.67\/0.20 & 0.83\/0.36 & 
\\textbf{0.98}\/\\textbf{0.25} & 0.98\/\\textbf{0.18} & \\textbf{0.49}\/\\textbf{0.18} & 0.84\/\\textbf{0.20} \\\\\nSt LSD-SLAM \\cite{engel2015stereo-lsd} & \\textbf{0.63}\/0.26 & 2.36\/0.36 & 0.79\/0.23 & 1.01\/0.28 & \\textbf{0.38}\/0.31 & 0.64\/0.18 & 0.71\/0.18 & 0.56\/0.29 & 1.11\/0.31 & 1.14\/0.25 & 0.72\/0.33 & 0.91\/0.27 \\\\\nDynaSLAM \\cite{Bescos2018DynaSLAM} & 0.74\/0.26 & 1.57\/0.22 & 0.80\/0.24 & 0.69\/0.18 & 0.45\/\\textbf{0.09} & \\textbf{0.40}\/\\textbf{0.16} & 0.50\/0.17 & 0.52\/0.29 & 1.05\/0.32 & 0.93\/0.29 & 0.67\/0.32 & 0.76\/0.23 \\\\\nOurs St only & 0.71\/\\textbf{0.24} & 1.45\/0.30 & \\textbf{0.75}\/0.23 & 0.73\/0.19 & 0.47\/0.11 & 0.57\/0.23 & 0.57\/0.22 & 0.51\/0.29 & 1.02\/0.32 & 0.87\/0.26 & 0.65\/0.31 & 0.75\/0.25 \\\\\nOurs St only (5Hz) & 0.71\/0.26 & 1.43\/0.23 & 0.78\/0.24 & \\textbf{0.67}\/0.18 & 0.46\/\\textbf{0.09} & \\textbf{0.40}\/\\textbf{0.16} & \\textbf{0.47}\/\\textbf{0.14} & 0.52\/0.29 & 0.99\/0.31 & 0.90\/0.28 & 0.63\/0.31 & \\textbf{0.72}\/0.22 \\\\\n\\hline\n\\end{tabular}}\n\\caption{Comparison of camera tracking accuracy - average $t_{rel}$ [\\%] and $r_{rel}$ [\\si{\\degree}\/100m] against state-of-the-art stereo and LiDAR SLAM systems. Sequences marked with * contain loops. Note that Stereo-DSO is a purely visual odometry system, so their result is without loop closing. We keep it in the table for completeness. \\vspace{-0.5cm}}\n\\label{tab:traj-lp}\n\\end{table*}\n\nWe perform a quantitative evaluation of our novel prior-based object reconstruction optimisation, using LiDAR input on the KITTI3D Dataset \\cite{kitti3d}, comparing with auto-labelling~\\cite{Zakharov2020Autolabeling3D}, the most related approach. In addition, we evaluate the camera trajectory errors of our full DSP-SLAM system on both stereo+LiDAR and stereo-only input on the KITTI Odometry~\\cite{kitti_odom} benchmark, comparing with state-of-the-art approaches. 
We also provide qualitative results of our full SLAM system on pure monocular input on the Freiburg Cars \\cite{freiburg-cars} and Redwood-OS \\cite{choi2016redwood} Chairs datasets. \n\n\\subsection{\\label{det3d}3D Object Reconstruction}\n\nWe conduct a quantitative comparison of our object pose estimation on the KITTI3D benchmark, against auto-labelling~\\cite{Zakharov2020Autolabeling3D}, a recent approach to prior-based object shape and pose reconstruction based on image and LiDAR inputs, using the same shape prior embedding (DeepSDF~\\cite{deepsdf}) and a similar level of supervision (object masks and sparse depth from the LiDAR measurements).\n\n\n\\noindent\\textbf{Experimental Setting:} For a fair comparison, we evaluate our approach using a single image and LiDAR input and take the 2D segmentation masks and initial pose estimates from the auto-labelling code release~\\cite{Zakharov2020Autolabeling3D} as initialization for our own optimization approach. We evaluate the results of pose estimation on the trainval split of KITTI3D, which consists of 7481 frames, using the same metrics proposed in~\\cite{Zakharov2020Autolabeling3D}: BEV AP @ 0.50, 3D AP @ 0.50, and the distance threshold metric (NS) from the nuScenes dataset~\\cite{caesar2020nuscenes}.\n\n\\noindent\\textbf{Results:} We report quantitative results in Tab.~\\ref{tab:object_detection}. Our method achieves better performance under almost all metrics, especially on harder samples. We also visualize the comparison of reconstructed shapes and poses in Fig.~\\ref{fig:comparison-auto-labelling}. Auto-labelling~\\cite{Zakharov2020Autolabeling3D} does not capture shape accurately for several vehicles: the first two cars on the left side are sedans, but auto-labelling~\\cite{Zakharov2020Autolabeling3D} reconstructs them as ``beetle''-shaped. In addition, some of the cars on the right side are reconstructed with incorrect poses which do not align with the image.
In contrast, DSP-SLAM obtains accurate shape and pose.\n\n\n\n\\noindent\\textbf{Timing Analysis:} To achieve close to real-time performance, we employ a Gauss-Newton solver with faster convergence than first-order methods during our optimization, leading to significant speed-ups.\nTab.~\\ref{tab:timing} shows a run-time comparison between a first-order optimizer and our Gauss-Newton solver with analytical gradients. Our method is approximately one order of magnitude faster to complete a single iteration, and requires fewer iterations to converge.\n\\begin{table}[tbp]\n\\centering\n\\begin{tabular}{c|l|c|c}\nMethod & Energy Terms & msec. \/ iter & \\# of iter \\\\ \\hline\n1st order & $E_{surf} + E_{rend}$ & 183 & 50 \\\\\n1st order & $E_{surf}$ & 88 & 50 \\\\ \\hline\nOurs GN & $E_{surf} + E_{rend}$ & 20 & 10 \\\\\nOurs GN & $E_{surf}$ & 4 & 10 \\\\ \\hline\n\\end{tabular}\n\\caption{Speed comparison between first-order optimization and our Gauss-Newton method with analytical Jacobians\\vspace{-0.5cm}}\n\\label{tab:timing}\n\\end{table}\n\n\\noindent\\textbf{Ablation Study:} We conducted an ablation study for DSP-SLAM with stereo+LiDAR input to analyse the effect of the number of LiDAR points used for shape optimization on the reconstruction error. Fig.~\\ref{fig:recon-num-pts} shows that there is no significant difference when reducing the number of LiDAR points from 250 to 50. The reconstruction quality starts to degrade when the number of points is further reduced to 10. \n\n\\subsection{\\label{full_slam}KITTI Odometry Benchmark}\n\nWe evaluate the camera trajectory error for our full DSP-SLAM system on the KITTI odometry benchmark with both stereo+LiDAR and stereo-only input. We evaluate on the 11 training sequences and compare with state-of-the-art SLAM systems of different input modalities using relative translation error $t_{rel}$ (\\%) and relative rotation error $r_{rel}$ (degree per 100m). 
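The relative trajectory-error metrics just introduced can be sketched as follows. This is a simplified per-pair stand-in: the official KITTI development kit averages $t_{rel}$ and $r_{rel}$ over segments of 100 m to 800 m, which this toy helper does not reproduce.

```python
import numpy as np

def relative_errors(T_gt, T_est, step=1):
    """Mean relative translation error [%] and rotation error [deg] between
    two aligned trajectories given as lists of 4x4 camera-to-world poses.

    Simplified stand-in for the KITTI odometry metrics (the official devkit
    averages over 100 m - 800 m segments instead of consecutive pairs).
    """
    t_errs, r_errs = [], []
    for i in range(len(T_gt) - step):
        d_gt = np.linalg.inv(T_gt[i]) @ T_gt[i + step]     # GT relative motion
        d_est = np.linalg.inv(T_est[i]) @ T_est[i + step]  # estimated motion
        E = np.linalg.inv(d_gt) @ d_est                    # relative-pose error
        dist = max(np.linalg.norm(d_gt[:3, 3]), 1e-9)      # distance travelled
        t_errs.append(100.0 * np.linalg.norm(E[:3, 3]) / dist)
        cos_a = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_errs.append(np.degrees(np.arccos(cos_a)))
    return float(np.mean(t_errs)), float(np.mean(r_errs))
```

An identical pair of trajectories yields zero error under both metrics, which is a convenient sanity check before comparing systems.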
Quantitative results are shown in Table~\\ref{tab:traj-lp}.\n\n\\noindent\\textbf{Stereo+LiDAR input:} \nThe upper part of Tab.~\\ref{tab:traj-lp} shows the trajectory errors of our system with stereo+LiDAR input. Our method achieves results comparable to SuMa++, a state-of-the-art LiDAR-based semantic SLAM system~\\cite{Chen2019SuMa++}. Note, however, that our method uses only very few LiDAR points (several hundred per frame), while SuMa++ uses the full LiDAR point cloud. It is interesting to see the comparison between our stereo+LiDAR system and stereo ORB-SLAM2, which is used as our backbone system. With our LiDAR-enhanced object reconstruction and joint BA, tracking accuracy improves on most sequences, especially 03, 05, 06 and 08, where an adequate number of static objects is observed throughout the sequence.\nHowever, our system performs slightly worse on some sequences which contain only moving objects (01, 04) or long trajectory segments where no static objects are observed (02, 10).\nThe table also shows the effect on the camera trajectory error when using 250 vs 50 points for object reconstruction. The results suggest that the impact of reducing the number of points on camera tracking accuracy is minimal.\n\n\\noindent\\textbf{Stereo-only input:} The lower part of Tab.~\\ref{tab:traj-lp} contains the results of our stereo-only system. It can be seen that our stereo-only system performs slightly worse than stereo ORB-SLAM2, which means that dense shape reconstruction and joint BA do not help improve tracking accuracy with stereo-only input. We argue that the reason is two-fold.
Firstly, 3D measurements based on stereo images are noisier than LiDAR-based measurements, giving rise to lower accuracy in object pose estimates.\nSecondly, in the stereo-only case, the surface points are obtained from the SLAM system, where the same features are measured repeatedly, rather than from independent (LiDAR) measurements.\nWe also noticed that, to meet timing constraints, we were performing bundle adjustment less frequently than ORB-SLAM2. We re-ran DSP-SLAM at a slightly reduced frame-rate (5Hz), performing BA after every key-frame (as in ORB-SLAM2), and the average performance increased, matching ORB-SLAM2 at $0.72\/0.22$. \nA comparison with state-of-the-art stereo SLAM systems is also included in Tab.~\\ref{tab:traj-lp}. \n\n\n\\begin{figure*}[htbp]\n\\centering\n\\includegraphics[width=1.00\\textwidth]{figures\/reconstruction_number_pts_single_row.png}\n\\caption{Object reconstruction results when using different numbers of LiDAR points per object (N=250, 50, 10). There is no noticeable difference when the number of points is reduced from 250 to 50. The reconstruction quality starts to degrade when further reducing to 10.
The degraded parts are marked with a red circle.\n\\vspace{-0.1cm}}\n\\label{fig:recon-num-pts}\n\\end{figure*}\n\n\\subsection{Freiburg Cars \\& Redwood-OS Dataset}\n\n\\begin{figure*}[tbph!]\n\\centering\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/Frame-001.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/Frame-002.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/Frame-003.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/Frame-010.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/render_001.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/render_002.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/render_003.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/freiburg\/render_010.png}\n\\caption{Qualitative results on Freiburg Cars dataset\n\\vspace{-0.1cm}}\n\\label{fig:freiburg}\n\\end{figure*}\n\n\\begin{figure*}[htbp!]\n\\centering\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/Frame-01053.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/Frame-02484.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/Frame-09374.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/Frame-09647.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/01053-front-side.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/02484-new-front-side.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/09374-front-side.png}\n\\includegraphics[width=0.246\\textwidth]{figures\/redwood\/09647-front-side.png}\n\\caption{Qualitative results on Redwood-OS Chairs dataset\n\\vspace{-0.4cm}\n}\n\\label{fig:redwood}\n\\end{figure*}\n\nFinally, we evaluate our SLAM system with monocular input on the Freiburg Cars dataset \\cite{freiburg-cars} and Redwood-OS Chairs dataset. Both datasets consist of object-centric sequences with the camera moving around the object. 
Demonstrations can be seen in Figs.~\\ref{fig:freiburg} and~\\ref{fig:redwood} and in the supplementary video.\n\n\\noindent\\textbf{Experimental Setting:}\n3D bounding boxes are estimated using PCA on the reconstructed surface points. Note that this approach cannot differentiate between the front and back side of the car. To address this issue, we initialize with two flipped hypotheses and keep the orientation that yields a smaller loss.\n\n\\noindent\\textbf{Results:} Fig.~\\ref{fig:freiburg} provides qualitative reconstruction results on 4 Freiburg Cars sequences. Our system is capable of reconstructing dense, accurate and high-quality shapes for cars solely from monocular input at 10-20 fps. Fig.~\\ref{fig:redwood} illustrates results on chairs from the Redwood-OS \\cite{choi2016redwood} dataset. Reconstruction accuracy is slightly worse than on cars, as chairs exhibit more complex shape variations. Results are promising nonetheless -- our method still produces dense meshes that capture the overall object shape from monocular RGB-only sequences, in quasi-real time. \n\n\n\n\n\n\n\\section{Conclusions}\n\nWe have presented DSP-SLAM, a new object-aware real-time SLAM system that exploits deep shape priors for object reconstruction and produces a joint map of sparse point features for the background and dense shapes for detected objects. We show almost real-time performance on challenging real-world datasets such as KITTI (stereo and stereo+LiDAR) and, in a monocular setting, on Freiburg Cars and Redwood-OS. Our quantitative comparisons with competing approaches on camera trajectory estimation and shape\/pose reconstruction show comparable or superior performance to state-of-the-art methods.\n\n\\vspace{-0.1cm}\n\n\\section*{Acknowledgements}\n\n\\vspace{-0.1cm}\n\nResearch presented here has been supported by the UCL Centre for Doctoral Training in Foundational AI under UKRI grant number EP\/S021566\/1.
We thank Wonbong Jang and Adam Sherwood for fruitful discussions.\n\n\\section{Introduction}\n\\label{sec::intro}\n\nWe observe our Universe primarily through thermal and non-thermal radiation as well as by means of atomic line transitions. The thermal component probes gas (including ionized and neutral components) \nin thermal equilibrium, while the non-thermal emission is emitted by particles such as leptons (e.g., electrons and positrons) \nor hadrons (e.g., protons and ions of heavier elements) that are not in thermal equilibrium.\nThanks to multi-wavelength observations of various astrophysical objects such as supernova\nremnants \\citep[SNRs,][]{2012Helder}, active galactic nuclei \\citep{2010Abdo},\ngamma-ray bursts \\citep{2009Abdo}, galaxy clusters \\citep{Brunetti2014,2019vanWeeren}, and galaxies \\citep{Beck2015}, observational insights\ninto the emission processes and the radiating particle\ndistributions in collisionless astrophysical plasmas are now possible.\nObserved non-thermal emission in our Universe spans a vast range of energies, from radio to TeV gamma-ray energies, and occurs in \nvarious astrophysical objects spanning many length scales, from sub-AU to Mpc scales.\nThe observed non-thermal power-law emission indicates that these plasmas are far from thermal equilibrium \nand suggests an efficient particle acceleration process giving rise to a population of cosmic rays\n(CRs) composed of electrons, protons, and heavier ions \\citep[][]{1987Blandford,Marcowith2016}.
In our Galaxy, the acceleration of \nparticles near SNR shocks is thought to be the main mechanism for producing CRs up to particle energies of $10^{16}$~eV \\citep[e.g.,][]{Gaisser2016}.\n\n\n\nThe interaction of plasmas on both sides of the shock wave produces electromagnetic perturbations.\nProtons and heavier ions, with speeds larger than a few times the shock speed, interact resonantly with these electromagnetic perturbations and, as a result, are transported diffusively in the vicinity of shock waves.\nBecause these magnetic perturbations converge at the shock front, protons and ions are accelerated through a process known as diffusive shock acceleration \\citep[DSA,][]{Krymskii1977,Axford1977,Blandford1978,Bell1978a,Bell1978b}. \nThe level of magnetic fluctuations determines the acceleration efficiency of particles and, vice versa, accelerated particles excite plasma instabilities \\citep{Bell2004} which further amplify the magnetic field near shock fronts.\n\nObservational evidence for electron acceleration at shocks is plentiful. Examples include the puzzling giant radio shocks observed at the outskirts of merging galaxy clusters.\nThese diffuse radio-emitting structures are often coincident with weak merger-driven shocks \\citep{2019vanWeeren}, and are interpreted as synchrotron-emitting regions powered by relativistic CR electrons that underwent the DSA process \\citep{Ensslin1998,Pfrommer2008,Kang2012,Pinzke2013}.\nMoreover, X-ray observations show that the non-thermal emission component at the edges of SNRs such as SN 1006 is due to accelerated CR electrons emitting synchrotron radiation \\citep{Willingale+1996}.\n\nThe particular morphology of SN 1006 in radio and X-rays has raised the question of how the orientation of the upstream magnetic field impacts the acceleration mechanism of CR electrons.
Radio polarization observations suggest a large-scale magnetic field that is aligned along the north-east to south-west direction \\citep{Reynoso2013}. Hence, the magnetic field seems to be aligned with the polar caps visible in radio \\citep{Dyer2009}, X-rays \\citep{Cassam-Chenaie2008} and TeV gamma rays \\citep{Acero2010}, suggesting preferentially quasi-parallel acceleration \\citep{Voelk2003}. Azimuthal variations of X-ray cutoff energies \\citep{Rothenflug2004, Katsuda2010} and the radius ratio of the forward shock to the tangential discontinuity \\citep{Cassam-Chenaie2008} reinforce this picture. Three-dimensional magneto-hydrodynamic (MHD) simulations support the interpretation of preferred quasi-parallel shock acceleration of electrons \\citep{Bocchino2011,Schneiter2015,Winner+2020} and protons \\citep{Winner+2020,Pais2020a,Pais2020b}.\n\n\n\n\nHowever, we still lack a detailed understanding of how electrons are accelerated to high energies at these shocks: for electrons and protons traveling with the same non-relativistic speed, the kinetic energy of electrons is much smaller than that of protons owing to their substantially smaller mass.
Hence, unlike protons, these electrons cannot directly scatter off the magnetic perturbations near shock waves, and DSA is not possible for electrons without some process(es) that can provide and sustain pre-acceleration of incoming upstream electrons.\nThis is known as the electron injection problem in shocks \\citep{Amano+2010,Guo+2015}.\nThe importance of understanding the acceleration mechanism of electrons is also reinforced by their much higher radiative yield in comparison to that of protons at the same energy, because the Thomson cross section scales inversely with particle mass squared.\n\nIn general, electron pre-acceleration proceeds in two steps: 1.\\ a heating phase, in which electrons are heated in the downstream region to energies much higher than their initial kinetic energy, and 2.\\ an acceleration phase, in which electrons scatter off electromagnetic waves with wavelengths shorter than the shock width, which further increases the electron energy and enables them to participate in the DSA process.\nThe nature of these waves, which accelerate electrons in step 2, depends on the relative angle, $\\theta_{B}$, between the direction of the large-scale background magnetic field and the direction of propagation of the shock \\citep[see, e.g.,][]{Guo+2015}.\nMoreover, recent global MHD modeling of the multi-wavelength emission from the supernova remnant SN 1006 shows that the efficiency of electron acceleration at quasi-parallel ($0^{\\circ}<\\theta_{B} < 45^{\\circ}$) shocks has to be at least an order of magnitude higher than in quasi-perpendicular ($45^{\\circ}<\\theta_{B} < 90^{\\circ}$) shock configurations \\citep{Winner+2020}.\n\n\nIn quasi-perpendicular shocks, electron pre-acceleration is thought to occur via shock drift acceleration (SDA), where electrons are\nreflected and accelerated via a combination of the cross-shock electric potential and the shock rest-frame motional electric field \\citep{Wu1984,Krauss+Wu1989}\nand\/or shock surfing acceleration (SSA)
where electron acceleration occurs when they interact with electrostatic waves at the shock front \\citep{Shimada+2000,Xu2020,KumarReville2021}.\nTheoretically, the SDA mechanism is shown to be inefficient for accelerating electrons at planar shocks in the so-called scattering-free limit \\citep{Ball+Melrose_2001}. However, in the presence of strong pitch-angle scattering, this mechanism is modified to become stochastic SDA, which could result in efficient acceleration \\citep{Katou+2019}. \nOn the other hand, particle-in-cell (PIC) simulations have shown that the SSA mechanism mediated by waves that are generated due to the Buneman instability is efficient in pre-accelerating electrons \\citep[see, e.g.,][]{Bohdan2019}. While these simulations used unrealistically low ion-to-electron mass ratios and (too) high {\\alf} speeds, it remains to be shown how these results can carry over to more realistic astrophysical conditions.\n\n\nFor quasi-parallel shocks, it is usually assumed that hot electrons generate whistler waves that lead to pitch-angle scattering and acceleration of these electrons. Efficient wave production in this scenario requires high values of {\\alf}ic Mach number, $\\mathcal{M}_{\\rm A}$. However, when tested with PIC simulations, this scenario did not result in any significant electron acceleration \\citep{Riquelme+2011,Niemiec+2012}. \nRecently, \\citet{sharp2} discovered a new instability (called the intermediate-scale instability) that drives comoving ion-cyclotron waves unstable in the vicinity of the shock front. This presents {\\it a new mechanism for generating waves} that can scatter electrons and potentially enables an efficient DSA mechanism for electrons. Unlike the whistler-mediated case, this mechanism requires low values of $\\mathcal{M}_{\\rm A}$ which is a condition for the new instability to operate. 
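The distinction between the whistler-mediated scenario (high $\mathcal{M}_{\rm A}$) and the intermediate-scale instability (low $\mathcal{M}_{\rm A}$) hinges on the Alfvénic Mach number. A minimal helper using the textbook SI definitions follows; the numbers in the example are illustrative only and are not taken from this paper.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi           # vacuum permeability [H/m]
M_P = 1.67262192e-27           # proton mass [kg]

def alfven_speed(B, n_i, m_i=M_P):
    """Ion Alfven speed v_A = B / sqrt(mu0 * n_i * m_i), SI units [m/s]."""
    return B / np.sqrt(MU0 * n_i * m_i)

def alfven_mach(v_sh, B, n_i, m_i=M_P):
    """Alfvenic Mach number M_A = v_sh / v_A."""
    return v_sh / alfven_speed(B, n_i, m_i)

# Illustrative numbers: B = 3 microgauss, n_i = 1 cm^-3, v_sh = 3000 km/s.
M_A = alfven_mach(v_sh=3.0e6, B=3.0e-10, n_i=1.0e6)
```

With these example parameters the Alfvén speed is of order a few km/s and $\mathcal{M}_{\rm A}$ of order a few hundred, so whether a given shock sits in the low- or high-$\mathcal{M}_{\rm A}$ regime depends sensitively on the ambient field strength and density.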
\n\nIn this paper, we test this mechanism using 1D3V PIC simulations (i.e., one spatial and three velocity-space dimensions) of parallel electron-ion shocks and show that it indeed leads to very efficient electron acceleration.\nThe paper is organized as follows.\nIn Section~\\ref{sec::setup}, we present the setup for our simulations and compute the linear growth rates for the expected unstable wavemodes at the shock front region.\nThe growth of these wavemodes is responsible for the formation of the shock and for creating non-thermal populations of electrons and ions.\nIn Section~\\ref{sec:Bamp}, we present the evolution of density and magnetic field amplification in our simulations.\nWe study the impact of the wavemodes driven by the intermediate-scale instability on the acceleration of electrons in the downstream region of the shock in Section~\\ref{sec:nth1}.\nIn Section~\\ref{sec:thermal}, we discuss the heating of the shocked plasmas in the downstream and the shock front region. We also compare this heating to analytical heating models at shocks.\nThe fraction of energy that is channeled into non-thermal electrons and ions is quantified and its evolution is presented in Section~\\ref{sec:nothermal}.
We discuss our findings and their implications in Section~\\ref{sec:dis}, and conclude the paper in Section~\\ref{sec:concl}.\nThroughout this paper, we use the SI system of units.\n\n\n\n\\section{Non-relativistic shock simulations}\n\\label{sec::setup}\n\n\\begin{figure}\n\\includegraphics[width=8.8cm]{Plots\/Fig01.pdf}\n\\caption{The formation of a strong parallel shock in the contact discontinuity rest frame (simulation frame).\nLeft-hand side: the reflection of an incident electron-ion plasma drifting to the left side of the simulation with speed $\\varv_u$ leads to the formation of the shock.\nRight-hand side: after shock formation, the shock speed relative to the upstream plasma is $\\varv_{\\rm sh} \\sim 4 \\varv_u\/3$ (assuming a strong shock and adopting the Rankine-Hugoniot jump conditions), and the density jump at the shock front is on average $\\sim 4 n_0$, where $n_0$ is the number density of the far upstream plasma.\nThat is, at the shock front the {\\alf} speed of ions $\\varv_{\\rm A}$ is half of that in the far upstream, which means that the {\\alf}ic Mach number, $\\mathcal{M}_{\\rm A}$, in this region is twice that in the far upstream.\n\\label{fig:ShockSketch}}\n\\end{figure}\n\nHere we discuss the setup for our shock simulations and compute the scales and the growth rates for the expected unstable linear wavemodes at the shock front regions.\n\n\\subsection{Simulation setup}\n\nWe perform 1D3V particle-in-cell (PIC) simulations using the SHARP code \\citep{sharp,resolution-paper,sharp2}, where the shock is formed by reflecting the upstream electron-ion plasma (initially moving towards the left, i.e., in the negative $\\bs{\\hat{\\vec{x}}}$ direction) at $x=0$.\nThat is, the shock is formed by the interaction of two exact replicas of the same plasma moving in opposite directions, and the simulations are performed in the rest-frame of the contact discontinuity.
A sketch for the initial configuration and the resulting shock formation is shown in Figure~\\ref{fig:ShockSketch}.\n\nIn all simulations, the upstream plasma skin-depth is resolved by 10 computational cells, and the CFL stability condition on the computational time step, $\\Delta t$, is such that $c \\Delta t = 0.45 ~ \\Delta x$, where $c$ is the speed of light in vacuum.\nWe use 200 particles per species in each computational cell.\nThe boundary of the computational domain expands with time on the right-hand side to save computational cost while containing all energetic particles in the precursor of the shock.\nThe upstream plasma is initially moving with velocity $\\varv_u = -0.1 c$, and both electrons (with mass $m_e$) and ions (with mass $m_i$) have the same physical temperature $T_e = T_p = 4 \\times 10^{-8} m_i c^2\/k_{\\rm B}$, where $k_{\\rm B}$ is Boltzmann's constant.\nInitially, both species have the same uniform number density $n_i=n_e=n_0$.\n\n\nFor such shocks, the expected shock speed (in the rest-frame of the upstream plasma) is\n$\\varv_{\\rm sh} = 4 |\\varv_u| \/3 \\sim 0.133 c$.\nTherefore, the sonic Mach number $\\mathcal{M}_s = \\varv_{\\rm sh} \/ \\varv_s \\sim 365$, where the sonic speed is $\\varv_s = \\sqrt{\\Gamma_{\\rm ad} k_{\\rm B} (T_e+T_p)\/m_i}$, and $\\Gamma_{\\rm ad} = 5\/3$ is the adiabatic index.
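As a quick consistency check, the quoted sonic Mach number follows directly from these input parameters:

```latex
% Strong-shock Rankine-Hugoniot jump conditions give the shock speed:
\\varv_{\\rm sh} = \\tfrac{4}{3} |\\varv_u| = \\tfrac{4}{3} \\times 0.1\\, c \\simeq 0.133\\, c ,
% and the initial temperatures give the sonic speed:
\\varv_s = \\sqrt{ \\Gamma_{\\rm ad} k_{\\rm B} (T_e+T_p)\/m_i }
        = \\sqrt{ \\tfrac{5}{3} \\times 8 \\times 10^{-8} }\\, c \\simeq 3.65 \\times 10^{-4}\\, c ,
% hence
\\mathcal{M}_s = \\varv_{\\rm sh}\/\\varv_s \\simeq 365 .
```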
That is, all shock simulations presented here have a high sonic Mach number.\nThe initial (large-scale) background magnetic field $\\vec{B}_0 = B_0 \\bs{\\hat{\\vec{x}}}$ is chosen to set the {\\alf}ic Mach number $\\mathcal{M}_{\\rm A} = \\varv_{\\rm sh}\/\\varv_{\\rm A}$, where $\\varv_{\\rm A} = B_0\/\\sqrt{ \\mu_0 n_i m_i}$ is the {\\alf} speed of ions.\n\n\nIt is important to note here that collisionless shocks found in the intracluster medium have $\\mathcal{M}_s \\sim 1$ to 3 \\citep{Ryu+2003, Pfrommer+2006,Vazza+2009,Schaal+2016}, and for such low values of the sonic Mach number, the intermediate-scale growth rates are much larger in comparison to those at the gyro-scale \\citep{sharp2}. This implies a stronger impact of the intermediate-scale unstable modes on the acceleration of electrons in the intracluster medium.\nHowever, we leave a demonstration of this point with simulations to future work.\n\n\n\\subsection{Unstable linear wavemodes at the shock front}\n\\label{sec::theory}\n\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{Plots\/Fig02.pdf}\n\\caption{Top: growth rates ($\\Gamma$) normalized to the non-relativistic ion gyro-frequency, $\\Omega_i$.\nBottom: the phase velocity ($\\varv_{\\rm ph}$) of the fastest growing parallel (unstable) electromagnetic wavemodes created by the penetration of upstream cold plasma drifting into the denser plasma at the shock front (in the simulation frame).\nThe solutions are shown for $\\mathcal{M}_{\\rm A} = 21.3$, with a realistic ion-to-electron mass ratio, $m_r=m_i\/m_e$ (blue curves).\nIn the case of $\\mathcal{M}_{\\rm A} = 5.3$, solutions are shown for a realistic value of $m_r$ (red curves) and a reduced $m_r$ (black curves).\nOnly the case with $\\mathcal{M}_{\\rm A}=5.3$ and a realistic mass ratio is expected to excite the intermediate-scale instability~\\citep{sharp2}, and thus the small-scale ion-cyclotron wave modes that are comoving with the upstream plasma with velocity $\\varv_u = -0.1 c$ are amplified,
i.e., they come from solutions of $D^{+}=0$ (see Equation~\\ref{eq:disp}).\nSmall-scale unstable wavemodes in the other (blue and black) cases are whistler waves, i.e., they derive from solutions of $D^{-}=0$ (see Equation~\\ref{eq:disp}).\n\\label{fig:GrowthRate}}\n\\end{figure}\n\n\nTo study the impact of the intermediate-scale instability on the heating and the acceleration of electrons at parallel non-relativistic shocks, we present simulations that differ in their ability to excite the intermediate-scale instability, which manifests itself by exciting ion-cyclotron waves that are comoving with the upstream plasma~\\citep{sharp2}.\nThe upstream drifting plasma drives wavemodes unstable at the shock front\\footnote{In the linear dispersion calculation we present here, the background number density is assumed to be uniform.\nNumber density non-uniformity could change the growth rates if the non-uniformity scale is close to the wavelengths of the unstable modes.\nHowever, studying this theoretically, even in the linear regime, is tedious \\citep[see, e.g.,][]{sim_inho_18, th_inho_20}.}.
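Whether a given configuration can excite the intermediate-scale instability is set by the criterion $\\mathcal{M}^f_{\\rm A} < \\sqrt{m_r}\/2$ at the shock front~\\citep{sharp2}. The following is a minimal numerical sketch of this check (the function name is ours; the factor of two relative to the upstream $\\mathcal{M}_{\\rm A}$ follows from the MHD density jump of four), not part of the SHARP code:

```python
from math import sqrt

def intermediate_scale_unstable(M_A, m_r, density_jump=4.0):
    """Check the excitation condition M_A^f < sqrt(m_r)/2 at the shock front.

    The ion Alfven speed at the shock front is reduced by sqrt(density_jump) = 2
    relative to the far upstream (MHD jump of 4), so M_A^f = 2 * M_A.
    """
    M_A_front = sqrt(density_jump) * M_A
    return M_A_front < sqrt(m_r) / 2.0

# Upstream Alfvenic Mach number and ion-to-electron mass ratio of the three runs
sims = {
    "Ma5Mr1836":  (5.3, 1836),
    "Ma5Mr100":   (5.3, 100),
    "Ma21Mr1836": (21.3, 1836),
}
condition = {name: intermediate_scale_unstable(*p) for name, p in sims.items()}
# Only Ma5Mr1836 satisfies the condition.
```

Only the run with $\\mathcal{M}_{\\rm A}=5.3$ and a realistic mass ratio passes the check; this corresponds to the last column of Table~\\ref{table:Sims}.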
\nAssuming gyrotropic momentum distributions for the various species, the dispersion relation, in the simulation frame, for parallel (w.r.t.\\ the background magnetic field $B_0$) propagating electromagnetic wavemodes with wavenumber $k$ and complex frequency $\\omega$ can be written as~\\citep{Schlickeiser+2002}\n\\begin{eqnarray}\nD^{\\pm}\n&=& 1- \\frac{k^2c^2}{\\omega ^2} \n\\nonumber \\\\\n&&+\n\\sum_{s=1}^{4}\n\\frac{ \\omega_s^2 }{ \\gamma_s \\omega ^2 }\n\\left[\n\\frac{ \\omega -k \\varv_{\\parallel,s} }\n{ k \\varv_{\\parallel,s} - \\omega \\pm \\Omega_s}\n-\n\\frac{ \\varv_{\\perp,s}^2 \\left(k^2c^2-\\omega ^2\\right) \/c^2 }\n{2 \\left(k \\varv_{\\parallel,s} -\\omega \\pm \\Omega_s \\right)^2}\n\\right] ~\\hspace{-0.16cm} .\n\\label{eq:disp}\n\\end{eqnarray} \nHere, $s=\\{1,2\\}$ denote the shocked electron and ion plasmas at the shock front, respectively, and $s=\\{3,4\\}$ denote the incoming (upstream) cold electron and ion plasmas, respectively. The drift speeds are $\\varv_{\\parallel,s} = \\{ - \\varv_{u}\/3, - \\varv_{u}\/3, \\varv_{u}, \\varv_{u} \\}$, \nand the relativistic gyro-frequencies are $\\Omega_s = \\{- m_r \\Omega_0, \\Omega_0, -m_r \\Omega_0, \\Omega_0\\}\/\\gamma_s$, where $\\Omega_0 = e B_0\/m_i$ is the non-relativistic ion gyro-frequency, and the Lorentz factor is $\\gamma_s = \\left[1 - \\left(\\varv_{\\parallel,s}^2 + \\varv_{\\perp,s}^2\\right)\/c^2\\right]^{-1\/2}$.\n\nSolutions to $D^{+}=0$ ($D^{-}=0$) are solutions for the left-handed (right-handed) polarized electromagnetic wavemodes.
When solving for the maximum growth rates at some $k$, we solve the dispersion relation with both signs and find the fastest growing mode regardless of its polarization.\nFor the various species $s$, the perpendicular speeds are $\\varv_{\\perp,s} = \\varv_s\/\\sqrt{2} ~ \\forall s $,\nand the plasma frequencies are $\\omega_s = \\{ \\sqrt{3 m_r} ~\\omega_i, \\sqrt{3}~ \\omega_i, \\sqrt{m_r} ~\\omega_i, \\omega_i \\}$, where $\\omega^2_i= e^2 n_i\/(m_i \\epsilon_0)$ is the square of the ion plasma frequency in the far upstream of the shock, $e$ is the elementary charge, and $\\epsilon_0$ is the permittivity of free space. \n\nWe present a simulation that can excite the intermediate-scale instability at the shock front region. It has an upstream {\\alf}ic Mach number $\\mathcal{M}_{\\rm A} = 5.3$ and uses a realistic mass ratio $m_r = m_i\/m_e=1836$.\nThat is, the shock front {\\alf}ic Mach number satisfies\n\\begin{eqnarray}\n\\mathcal{M}^f_{\\rm A} \\sim 10.6 < \\sqrt{m_r}\/2 \\sim 21.4 .\n\\end{eqnarray}\nThis condition represents the necessary criterion for driving such comoving ion-cyclotron wave modes \\citep{sharp2}.\nThe solution of the dispersion relation (Equation~\\ref{eq:disp}) for this case is shown in Figure~\\ref{fig:GrowthRate} (red curves).\nThe lower panel shows that the unstable wave modes (at $kc\/\\omega_i \\sim 5$) are comoving with the upstream drifting plasma ($\\varv_{\\rm ph} \\sim -0.1 c$) for the simulation with $\\mathcal{M}_{\\rm A} = 5.3$ and $m_r=1836$ (red curve).\nTo demonstrate the importance of this instability for electron heating, we present two more simulations in which the condition for this instability is violated.\n\n\\begin{deluxetable}{ cccccc }\n\\tablewidth{8.7cm}\n\\tabletypesize{\\footnotesize}\n\\tablecolumns{6} \n\\tablecaption{\nParameters of our electron-ion parallel shock simulations \\vspace{-0.2cm}\n\\label{table:Sims}\n}\n\\tablehead{ Name & $\\varv_u\/c$\\tablenotemark{a} & $\\mathcal{M}_{\\rm
A}$\\tablenotemark{b}\n& $\\mathcal{M}_s$\\tablenotemark{c}\n& $m_i\/m_e$ \n& Condition\\tablenotemark{d} \n}\n\\startdata\n\\rule{0pt}{12pt} Ma5Mr1836 & -0.1 & 5.3 & 365 & 1836 & \\checkmark \n\\\\\n\\rule{0pt}{12pt} Ma5Mr100 & -0.1 & 5.3 & 365 & 100 & $\\times$ \n\\\\\n\\rule{0pt}{12pt} Ma21Mr1836 & -0.1 & 21.3 & 365 & 1836 & $\\times$\n\\enddata\n\\tablenotetext{a}{Upstream plasma velocity in the contact-discontinuity rest frame.}\n\\tablenotetext{b}{Upstream {\\alf}ic Mach number.}\n\\tablenotetext{c}{Upstream sonic Mach number.}\n\\tablenotetext{d}{This shows whether the condition ($\\varv_{\\rm sh}\/\\varv_{\\rm A} < \\sqrt{m_r}\/2 $) for exciting the intermediate-scale instability at the shock front is satisfied.}\n\\end{deluxetable}\n\nThe first simulation has $\\mathcal{M}_{\\rm A} = 21.3$, with a realistic mass ratio, $m_r=1836$.\nIn this case, $\\mathcal{M}_{\\rm A}^f \\sim 42.6 > \\sqrt{m_r}\/2 \\sim 21.4$. \nIndeed, the solutions of the dispersion relation for this case (blue curves in Figure~\\ref{fig:GrowthRate}) show that the sub-ion skin-depth unstable whistler wavemodes are not comoving with the upstream plasma and thus can be easily quenched.\nFor this simulation, we increase $\\mathcal{M}_{\\rm A}$ by decreasing $B_0$, i.e., by lowering $\\varv_{\\rm A}$. \n\n\n\n\n\nThe second simulation has $\\mathcal{M}_{\\rm A} = 5.3$, but with a reduced mass ratio, $m_r=100$. In this case, $\\mathcal{M}_{\\rm A}^f \\sim 10.6 > \\sqrt{m_r}\/2 = 5 $.\nAgain, the solutions of the dispersion relation for this case (black curves in Figure~\\ref{fig:GrowthRate}) show that the sub-ion skin-depth unstable whistler wavemodes are not comoving with the upstream plasma and thus can also be easily quenched.\nFor this simulation, we increase $T_e=T_i$ such that $T_i\/m_i$, and thus the sonic speed $\\varv_s$, is unchanged.
\nConsequently, the sonic Mach number $\\mathcal{M}_s$ is also unchanged.\n\n\n\n\nA summary of the parameters of the three simulations that we present here is given in Table~\\ref{table:Sims}.\nIt is important to note that larger values of the number density at the shock front, i.e., $n > 4n_0$, increase $\\mathcal{M}_{\\rm A}^f$ in simulations. Hence, we have added a safety margin to our chosen values of $\\mathcal{M}_{\\rm A}^f$ that can in principle account for an additional factor of four enhancement in density over its MHD prediction without changing our conclusion regarding whether the intermediate-scale instability is excited in our simulations, as given in Table~\\ref{table:Sims}.\n\n\n\n\\section{Shock formation and magnetic field amplification; self-driven scattering centers}\n\\label{sec:Bamp}\n\n\\begin{figure*}\n\\includegraphics[width=18.5cm]{Plots\/Fig03.png}\n\\caption{Left: evolution of the number density normalized to the far upstream number density ($n_0$).\nRight: evolution of $B_y$ normalized to the far upstream parallel magnetic field ($B_0$). \nShown is the evolution for the simulations Ma5Mr1836 (top), Ma5Mr100 (middle), and Ma21Mr1836 (bottom).
\n\\label{fig:XT-evol}\n}\n\\end{figure*}\n\n\n\nThe interaction of the drifting and reflected plasma results in instabilities on scales both longer and shorter than the ion skin depth.\nSuch unstable modes slow down the reflected plasma and thus create an over-density ($n> 2 n_0$) behind the shock transition region\\footnote{Here, $n$ is the number density of both ions and electrons together.}, where $n_0$ is the far upstream number density.\nAfter the formation of the shock, the interaction of the incoming upstream plasma with the denser plasma behind the shock also forms an unstable plasma configuration.\nThis leads to particle scattering and constant heating and acceleration of the upstream plasma as it is swept over by the shock.\nThe left-hand side of Figure~\\ref{fig:XT-evol} shows the time evolution of the number density, $n$, normalized to $n_0$, in all the simulations.\nThe right-hand side shows the time evolution of $B_y$ normalized to the initial background (parallel) magnetic field $B_0$ (the $B_z$ evolution is very similar to that of $B_y$, albeit shifted slightly in space).\nFigure~\\ref{fig:XT-evol} shows the formation of the shock via the excitation of unstable magnetic field wavemodes in the various simulations.\nAfter the formation of the shock, wavemodes on small and large scales are unstable, as seen in Figure~\\ref{fig:GrowthRate}. These modes are continuously driven at the shock front region as the shock sweeps through the upstream plasma.\n\nIn Figure~\\ref{fig:XT-evol}, the white lines that separate the shock front and the upstream regions indicate the location of the shock in the various simulations.
While the exact position of the white line is determined visually, we note that its slope is approximately given by \n\\begin{eqnarray} \n\\frac{ \\Delta t \\Omega_i }{ \\Delta x \\omega_i\/c } =\n\\frac{ \\Omega_i\/\\omega_i }{ \\varv_{\\rm sh} \/c } \n= \\frac{ 1}{\\mathcal{M}_{\\rm A} }.\n\\end{eqnarray}\nThat is, the slope for the Ma21Mr1836 simulation is smaller by a factor of $\\sim 4$ in comparison to the other simulations, and thus the range in the $x$-direction is larger by the same factor.\nWe define the shock front (transition) to be a region that is 200 $c\/\\omega_i$ wide behind the location of the shock, i.e., it is the region between the two white lines in the various panels of Figure~\\ref{fig:XT-evol}.\n\nAs particles (especially ions) scatter back and forth across the shock front (transition) region, they are accelerated to higher energies, and thus some particles escape from this region toward the upstream plasma. \nThese escaping particles excite unstable wavemodes, which in turn scatter most of these counter-streaming energetic particles back toward the shock front region \\citep{Bell1978a}.\nThis process generates the so-called shock precursor region ahead of the shock in the upstream plasma, as seen in the right-hand panels of Figure~\\ref{fig:XT-evol}.\nIn the next sections, we closely look at the nature of these driven modes and their impact on particle acceleration and heating for both ions and electrons.\n\nThe overall density jump between the far upstream and far downstream regions ($n\/n_0$) is notably larger than expected from the MHD jump condition, which predicts $n\/n_0= 4$ (for a strong parallel shock).
Instead, our simulations with $\\mathcal{M}_{\\rm A}=5.3$ show an overall density jump of $n\/n_0 \\sim 6$.\nThis has already been seen in a number of PIC simulations before, as summarized by \\citet{Bret2020}.\nIn hybrid-PIC simulations, where the density jump is not constrained by a very stiff electron equation of state, \\citet{Colby+2019} show that for a simulation with $\\mathcal{M}_{\\rm A}=20$ the density jump grows to $n\/n_0 \\gtrsim 5.5$.\n\n\n\n\\section{Impact of intermediate-scale instability-driven wavemodes on acceleration}\n\\label{sec:nth1}\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{Plots\/Fig04.pdf}\n\\caption{Top: downstream electron (solid) and ion (dashed) momentum spectra at $t ~\\Omega_i= 260$ from the Ma5Mr1836 (red), Ma5Mr100 (black), and Ma21Mr1836 (blue) simulations.\nThe downstream is defined to be at a distance that is larger than 200 $c\/\\omega_i$ from the shock front; see Figure~\\ref{fig:XT-evol}.\nBottom: downstream perpendicular magnetic energy in Fourier space at $t ~\\Omega_i= 260$.\nThis shows that the comoving (traveling towards the downstream) unstable waves that are driven at the shock front in the Ma5Mr1836 simulation generate a much higher level of small-scale magnetic fields. These increased magnetic fluctuations imply much stronger scattering and thus more efficient acceleration of electrons in comparison to the other simulations.\n\\label{fig:spectra300}}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18.5cm]{Plots\/Fig05.png}\n\\caption{Time evolution of the average perpendicular magnetic field power, $\\langle \\delta B^k_{\\perp} \\rangle$, in the downstream (left) and shock front (right) regions.\nThe dashed lines indicate the small-scale ($k c\/\\omega_i > 2$) time evolution while solid lines show the large-scale ($k c\/\\omega_i< 2$) time evolution.
\nDifferent simulations are indicated with different colors; Ma5Mr1836 with red, Ma5Mr100 with black, and Ma21Mr1836 with blue.\n\\label{fig:KLS}\n}\n\\end{figure*}\n\nThe importance of the intermediate-scale instability for the particle acceleration process can be quantified by comparing simulations which differ in their ability to excite comoving ion-cyclotron modes via the intermediate-scale instability.\nAs seen in Figure~\\ref{fig:GrowthRate}, all simulations are predicted to have unstable wavemodes on scales smaller than the ion skin depth at the shock front (transition) region.\nHowever, only in the simulation Ma5Mr1836 are these modes comoving with the incoming flow (red, see the bottom panel of Figure~\\ref{fig:GrowthRate}), and hence their mode power can be transferred to the downstream region where they can scatter electrons.\n\nThe top panel of Figure~\\ref{fig:spectra300} shows the electron and proton spectra as a function of the dimensionless velocity $u_s\/c$. To obtain these spectra, we first transform to the frame in which the plasma is at rest before we compute the momentum spectra, from which we estimate the plasma temperature as laid out in Appendix~\\ref{app:temp}. \nAs shown in the top panel of Figure~\\ref{fig:spectra300}, the simulation Ma5Mr1836 (where the intermediate-scale instability can grow) has a much higher efficiency in converting the incoming flow kinetic energy into non-thermal energy in electrons in the downstream region.
In fact, it is more efficient (by about two orders of magnitude) in accelerating electrons in comparison to the simulation with a higher $\\mathcal{M}_{\\rm A}$ (blue), suggesting that a much more efficient process is at work that enables electrons to be accelerated in our low-$\\mathcal{M}_{\\rm A}$ model (red).\n\nMoreover, the Ma5Mr100 simulation (black) has the same $\\mathcal{M}_{\\rm A}$ and $\\mathcal{M}_{\\rm s}$ as the simulation shown in red but uses a lower mass ratio: $m_r=100$.\nAs shown in the top panel of Figure~\\ref{fig:spectra300}, this results in a much smaller electron acceleration efficiency in this simulation.\nAdditionally, the heating of electrons in the downstream region of the simulation with a reduced mass ratio (black) is much smaller compared to the simulations with a realistic mass ratio (red and blue)\\footnote{In Appendix~\\ref{app:temp}, we show that the normalized temperature, $k_{\\rm B} T_s\/m_s c^2$, of the thermal part of the plasma population of species $s$ is proportional to the value of $u\/c$ where $u^4 f(u)$ is maximized; see Equation~\\eqref{Eq:TR}.}.\nThat is, the top panel of Figure~\\ref{fig:spectra300} demonstrates that using a lower mass ratio not only results in an artificially suppressed electron acceleration efficiency but also leads to erroneous electron heating in the downstream region.\n\nAs we will now argue, the differences in the acceleration efficiency of electrons as well as in the particle heating at shocks are a direct consequence of the differences in the nature of the unstable small-scale modes.\nThe bottom panel of Figure~\\ref{fig:spectra300} shows that the small-scale power of the perpendicular magnetic field in the downstream region is much larger in the simulation Ma5Mr1836 in comparison to the simulations Ma5Mr100 (black) and Ma21Mr1836 (blue).\nIn the latter two cases, the small-scale unstable modes are not comoving with the incoming flow, and thus can be easily
quenched, which leads to a much smaller amount of small-scale perpendicular magnetic power in the downstream regions.\n\nTo demonstrate the growth of these small-scale wavemodes, in Figure~\\ref{fig:KLS} we plot the time evolution of the average perpendicular magnetic field for small (dashed) and large (solid) scale wave modes in the various simulations. To this end, we take the power of the perpendicular magnetic field as shown in the bottom panel of Figure~\\ref{fig:spectra300} and compute the average for large scales, i.e., for $kc\/\\omega_i<2$, and for small scales, i.e., for $kc\/\\omega_i>2$.\n\n\n\\section{Heating of the shocked plasmas}\n\\label{sec:thermal}\n\nIn the simulation with the reduced mass ratio, the downstream electron temperature rises to $T_e \\sim 2T_i$, as can be seen from the bottom panel of Figure~\\ref{fig:Temp}.\nSaturation temperatures where $T_e < T_i$ are observed, for example, in Balmer-dominated shocks~\\citep{van-Adelsberg+2008}.\nThat is, the ratio $T_e\/T_i>1$ found in the simulation with a low ion-to-electron mass ratio disagrees with observed saturation temperature ratios.\nSurprisingly, for the simulation with $m_r=100$, the ion and electron temperatures continue to increase from the shock front region to the downstream region.
This indicates that further heating occurs in the downstream region of this simulation, in disagreement with the results from the simulations with a realistic mass ratio.\n\n\n\\section{Formation of non-thermal particles}\n\\label{sec:nothermal}\n\n\\begin{figure*}\n\\includegraphics[width=18.5cm]{Plots\/Fig07.pdf}\n\\caption{Top: the time evolution of the acceleration efficiency (as defined in Equation~\\ref{Eq:Eff-sh}) in the downstream (left) and shock front (right) regions for ions (dashed) and electrons (solid) in various simulations.\nBottom: the time evolution of the fraction of non-thermal energy of electrons relative to that of ions ($K_{\\rm ei}$) in the downstream (left) and shock front (right) regions in various simulations.\n\\label{fig:Eff}\n}\n\\end{figure*}\n\n\n\nAs shown above, the upstream kinetic energy is channeled into magnetic energy (Section~\\ref{sec:Bamp}), non-thermal energy (Section~\\ref{sec:nth1}) and thermal energy (Section~\\ref{sec:thermal}) in the downstream and shock front regions of the system. \nHere we quantify the amount of energy carried by non-thermal ions and electrons in these regions in Figure~\\ref{fig:Eff}.\nIn Appendix~\\ref{app:temp}, we show that the fraction of particles with non-thermal energy can be quantified in various ways, and here we focus on how much energy density is carried by these particles when compared to the upstream kinetic energy density, i.e., using Equation~\\eqref{Eq:Eff-sh}.\nWe define non-thermal particles to be particles with $u>5u_m$~\\citep{Caprioli2014a,Xu2020}, where $u=\\gamma \\varv$ is the spatial part of the 4-velocity, $\\gamma$ is the Lorentz factor, and $u_m$ is defined as the value of $u$ where $u^4 f(u)$ is maximized.\nAs an example, particles to the right of the blue vertical lines in Figure~\\ref{fig:Thermal} are considered to form the non-thermal part of the momentum distribution.
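The $u>5u_m$ criterion can be applied directly to a tabulated spectrum $f(u)$. The following is an illustrative sketch with our own helper name and a synthetic Maxwellian-like test spectrum (it is not the diagnostic code used for the figures):

```python
import numpy as np

def nonthermal_energy_fraction(u, f, threshold=5.0):
    """Fraction of kinetic energy carried by particles with u > threshold * u_m.

    u : uniform grid of 4-velocities (u = gamma*v, in units of c)
    f : particle spectrum f(u) on that grid
    u_m is the location of the maximum of u^4 f(u), i.e., the thermal peak.
    """
    u_m = u[np.argmax(u**4 * f)]
    gamma = np.sqrt(1.0 + u**2)
    e_kin = (gamma - 1.0) * f          # kinetic energy per unit u (units of m c^2)
    du = u[1] - u[0]
    total = np.sum(e_kin) * du
    nonthermal = np.sum(e_kin[u > threshold * u_m]) * du
    return nonthermal / total

# A purely thermal (Maxwellian-like) spectrum has essentially no energy
# beyond 5 u_m, so the returned fraction is ~0:
u = np.linspace(1e-3, 1.0, 4000)
f = u**2 * np.exp(-u**2 / (2 * 0.05**2))
frac = nonthermal_energy_fraction(u, f)
```

Note that Equation~\\eqref{Eq:Eff-sh} instead normalizes the non-thermal energy by the upstream kinetic energy density; the sketch above only illustrates the $u>5u_m$ split of a spectrum into thermal and non-thermal parts.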
\nThis quantification is done for different regions as a function of time and was used to compute the time evolution of the efficiency shown in the top panel of Figure~\\ref{fig:Eff}.\nThe top panel shows the time evolution of the efficiency $\\epsilon_{\\rm sh}$, i.e., the percentage of the upstream kinetic energy density that is channeled into non-thermal electrons and ions in the downstream (left) and shock front (right) regions.\nBy contrast, the bottom panel shows the time evolution of the ratio of the non-thermal energy of electrons to that of ions, $K_{\\rm ei}$, in the various simulations, also in the downstream (left) and shock front (right) regions.\n\nThe non-thermal energy density of ions (dashed curves) is about 20--30\\% of the upstream kinetic energy density, and this result is roughly independent of the ion-to-electron mass ratio $m_r$ at the shock front region.\nThe simulation with a higher $\\mathcal{M}_{\\rm A}$ has a slightly higher efficiency in producing non-thermal ions at the shock front (transition) region.\nMoreover, in the downstream region, it contains a higher fraction of non-thermal ions, indicating that this simulation is more efficient in scattering ions that are accelerated in the shock front (transition) region back to the downstream region.\n\nOn the other hand, the fraction that is channeled into non-thermal electrons is strongly dependent on both $\\mathcal{M}_{\\rm A}$ and $m_r$.\nThe top panel of Figure~\\ref{fig:Eff} shows that, at the shock front, the simulation with $\\mathcal{M}_{\\rm A}=5.3$ and a realistic mass ratio efficiently accelerates electrons, leading to 0.05--0.1\\% of the initial flow kinetic energy being converted into non-thermal electrons, while in the simulation with a higher $\\mathcal{M}_{\\rm A}$ and a realistic mass ratio, a smaller fraction (0.01--0.05\\%) of the upstream kinetic energy is channeled into non-thermal electrons.\nThe simulation with a reduced mass ratio leads to about 2--3 orders of magnitude reduction in
electron acceleration efficiency.\n\nClearly, the simulation where the intermediate-scale instability is allowed to grow at the shock front region (red) is much more efficient in confining the accelerated electrons by scattering them back to the downstream regions, leading to a much larger non-thermal electron energy in the downstream region in comparison to the simulation with a higher $\\mathcal{M}_{\\rm A}$, as seen in the left-hand panel of Figure~\\ref{fig:Eff}.\nMoreover, the simulation with a low mass ratio (black) leads to a significant reduction in the energy channeled into non-thermal electrons in the downstream region.\n\nAs shown in the lower left-hand panel of Figure~\\ref{fig:Eff}, in the downstream region, the simulation that allows for the growth of comoving ion-cyclotron waves (red) has a 2--3 orders of magnitude larger non-thermal electron-to-ion energy ratio, $K_{\\rm ei}$, in comparison to that in the simulations where the condition for the intermediate-scale instability is not satisfied.\n\n\n\\section{Discussion}\n\\label{sec:dis}\n\nIn this section, we discuss the impact of a finite plasma temperature on the growth of the intermediate-scale instability, followed by a comparison of our proposed electron pre-acceleration mechanism with that proposed by \\citet{2015phrvl.114h5003p}.
Finally, we discuss how our results can be used to possibly infer a lower limit on the amplitude of the magnetic field at shocks of supernova remnants.\n\n \n\n\\subsection{Impact of finite plasma beta}\n\n\nThe dispersion relation used in Section~\\ref{sec::theory} to predict the unstable modes at the shock transition region assumes a cold background of electrons and ions.\nThus, it is of great importance to consider the impact of a finite plasma beta $\\beta_s$ on the growth of the intermediate-scale instability, where $\\beta_s \\equiv 2 \\varv_{{\\rm th},s}^2\/\\varv_{{\\rm A},s}^2$, the square of the thermal speed is $\\varv_{{\\rm th},s}^2 \\equiv k_{\\rm B} T_s\/m_s$, and $s=e,i$ for electrons and ions, respectively.\n\nThe dispersion relation studied in \\citet{sharp2} and presented in Section~\\ref{sec::theory} is computed assuming $\\beta_i =\\beta_e=0$, while the simulation presented by \\citet{sharp2} has $\\beta_e = \\beta_i = 2$. Interestingly, the growth rates of the analytical derivation in the cold-background limit and the simulations with the finite $\\beta$ values show excellent agreement, thus indicating that there is no impact of $\\beta_e$ or $\\beta_i$ on the maximum growth rate of the intermediate-scale instability.\n\n\\citet{sharp2} argue that thermal ion-cyclotron damping (at $k \\varv_{\\rm th,i} = (1$--$2) \\Omega_i$) could at most affect one peak of the instability if parameters are fine-tuned, and thus it cannot impede instability growth for typical astrophysical plasma conditions.
Similarly, it is straightforward to show that this is also true for thermal electron-cyclotron damping because the separation between the two peaks of the intermediate-scale instability does not solely depend on the ion-to-electron mass ratio.\n\nFor the shock problem studied in the present paper, there are two very different regimes of the beta factor: a low-beta regime in the pre-shock region and a high-beta regime in the shock transition zone. In the pre-shock region, we have $\\beta_e = \\beta_i = 1.28\\times10^{-4}$ ($\\mathcal{M}_{\\rm A}=5.3$) and $\\sim 2 \\times 10^{-3}$ ($\\mathcal{M}_{\\rm A} = 21.3$). The case of $\\beta_e \\gtrsim 1$ would imply a much faster growth of the intermediate-scale instability in comparison to the growth rate of the gyro-scale instability because of the dependence of the growth rate on $\\varv_\\perp$, as given in equation (6) of \\citet{sharp2}. Thus, we expect that in this case the instability will have an even larger impact on the mechanism of electron acceleration but postpone a detailed study of this to future work.\n\nAt the shock transition zone of the simulations with $m_r=1836$, the electrons are hot and the temperature is such that $k_{\\rm B} T_e \\sim m_e c^2$. Therefore, we obtain $\\beta_e = 1.74$ for the $\\mathcal{M}_{\\rm A} = 5.3$ simulation and $\\beta_e = 27.88$ for the $\\mathcal{M}_{\\rm A} = 21.3$ simulation, i.e., in both cases, the electron beta is larger than one. However, only in the simulation with $\\mathcal{M}_{\\rm A} = 5.3$ is the relative drift such that the instability condition is fulfilled, thus allowing for the growth of the intermediate-scale instability.
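The quoted pre-shock beta values follow directly from the simulation parameters. The following is a minimal sketch (the function name is ours; all speeds in units of $c$), which reproduces the quoted numbers to within rounding:

```python
def upstream_ion_beta(M_A, kT_over_mic2=4e-8, v_sh_over_c=4 * 0.1 / 3):
    """Upstream ion plasma beta, beta = 2 v_th^2 / v_A^2, with
    v_th^2 = k_B T_i / m_i and v_A = v_sh / M_A (speeds in units of c)."""
    v_A = v_sh_over_c / M_A
    return 2.0 * kT_over_mic2 / v_A**2

beta_low = upstream_ion_beta(5.3)     # ~1.3e-4 for the M_A = 5.3 runs
beta_high = upstream_ion_beta(21.3)   # ~2e-3 for the M_A = 21.3 run
```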
This allows resonant interactions between electrons and unstable modes and results in much larger acceleration efficiency as shown in the simulation.\n\n\n\\subsection{On the electron acceleration mechanism}\n\nThe proposed mechanism for pre-accelerating electrons works by exciting intermediate-scale waves at the shock transition region.\nThese sub-ion skin-depth (ion-cyclotron) waves are comoving with the incoming upstream plasma (see bottom panel of Figure~\\ref{fig:GrowthRate}) and hence are coherently transported to the downstream region (see Figure~\\ref{fig:KLS}).\nIn the downstream and shock transition regions, the hot electrons scatter off of these unstable waves and the waves reflected at the contact discontinuity (see top-left panel of Figure~\\ref{fig:XT-evol}). This leads to the high acceleration efficiency shown in the red simulation (with $\\mathcal{M}_{\\rm A} = 5.3$) as seen in Figures~\\ref{fig:spectra300} and \\ref{fig:Eff}, possibly in a first-order Fermi type process.\n\nThe simulation with $\\mathcal{M}_A=21.3$ (depicted in blue) shows excitation of sub-ion skin-depth (Whistler) waves at the shock front region. 
However, these modes are not transported to the downstream region (see Figure~\\ref{fig:KLS}), which results in a much lower electron acceleration efficiency.\nThat is, this simulation shows that for a parallel shock geometry, electron pre-acceleration due to scattering with Whistler waves has a much lower efficiency in comparison to that in the red simulation (see Figure~\\ref{fig:Eff}).\n\n\nWe emphasize that our proposed mechanism for electron pre-acceleration does not depend on ion acceleration.\nThis is clearly manifested by the large fraction of pre-accelerated electrons before we observe any significant ion acceleration in the simulation with $\\mathcal{M}_{\\rm A} = 5.3$, as shown in the top panel (red curves) of Figure~\\ref{fig:spectra300}.\nOn the other hand, the mechanism proposed by~\\citet{2015phrvl.114h5003p} for electron pre-acceleration relies on the amplification of Bell modes in the shock precursor driven by the propagation of accelerated ions, and thus the combination of SDA and scattering with Bell modes results in electron pre-acceleration and injection into DSA cycles.\n\n\\subsection{Applications to supernova remnants}\n\\label{sec:SNR}\nWhile we have clearly demonstrated the importance of the intermediate-scale instability for thermalizing and accelerating electrons in our PIC simulations, we now turn to the potential relevance of this instability in observations of astrophysical shocks. Perhaps the cleanest astrophysical object is the supernova remnant SN~1006, which enables testing our ideas of the prevailing plasma instabilities that are responsible for electron scattering and acceleration at the quasi-parallel shock conditions we encounter at the polar cap regions of that remnant shock. \\citet{Winner+2020} perform three-dimensional MHD simulations of the Sedov--Taylor explosion and evolve the electron distribution function while accounting for magnetic obliquity-dependent electron acceleration. 
To this end, their method finds and characterizes the MHD shocks, injects a pre-defined electron power-law spectrum into the post-shock region \\citep{Winner+2019}, and explores the effects of varying the magnetic amplification factor on the surface brightness maps as well as the multi-frequency spectrum of SN~1006.\n\nMatching the radial emission profiles and the gamma-ray spectrum requires a volume-filling, turbulently amplified magnetic field of $B\\approx35~\\mu$G in the downstream region of the parallel shock and that the Bell-amplified magnetic field \\citep{Bell2004} is starting to get damped in the further post-shock region \\citep[see figure~2 of][]{Winner+2020}. The exact value of the Bell amplification factor $f_\\mathrm{B}$ barely changes the multi-frequency spectrum so that we obtain a post-shock Alfv\\'en velocity of $\\varv_\\mathrm{A}=B f_\\mathrm{B}\/\\sqrt{\\mu_0\\mu m_p n}\\approx200~f_\\mathrm{B}~\\mathrm{km~s}^{-1}$, where $\\mu=1.25$ is the mean molecular weight of the warm interstellar medium, $m_p$ is the proton rest mass, and $n=0.12~\\mathrm{cm}^{-3}$. 
While the Bell-amplified field is maximized in the immediate post-shock regime \\citep{Caprioli2014b}, the turbulently amplified magnetic field keeps rising in the post-shock region \\citep{Ji+2016}, so that it is appropriate to set $f_\\mathrm{B}=1$ while noting that the turbulently amplified field, on its route to saturation, replaces the Bell-amplified field as the latter is damped further downstream.\n\nAdopting the lab-frame shock velocity of SN~1006 at the current epoch, $\\varv_s=3000~\\mathrm{km~s}^{-1}$, we obtain a post-shock Alfv\\'enic Mach number of\n\\begin{align}\n \\mathcal{M}_\\mathrm{A}=\\frac{\\varv_s}{\\varv_\\mathrm{A}}=15.2<\\frac{\\sqrt{m_r}}{2},\n\\end{align}\nwhich obeys the condition for exciting the intermediate-scale instability and thus should enable efficient electron acceleration at the polar cap regions of the parallel shocks of SN~1006 \\citep[see figure~4 of][]{Winner+2020}.\n\nIn fact, efficient electron acceleration at parallel shocks enables us to put a lower limit on the post-shock magnetic field, \n\\begin{align}\n \\label{eq:Bmin}\n B&>2 \\varv_s \\sqrt{\\frac{\\mu_0\\rho}{m_r A}}=B_\\mathrm{min}\\\\\n &= 22.7~\\left(\\frac{\\varv_s}{3000~\\mathrm{km~s}^{-1}}\\right)\n \\left(\\frac{n}{A\\, 0.1~\\mathrm{cm}^{-3}}\\right)^{1\/2}\\,\\mu\\mathrm{G},\n\\end{align}\nwhere $m_r$ is the proton-to-electron mass ratio and $A$ is the atomic mass number of the element responsible for driving the intermediate-scale instability. 
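As a quick numerical cross-check of these estimates, the following Python sketch (illustrative only, not part of the published analysis pipeline) evaluates the Alfv\'en speed, the post-shock Alfv\'enic Mach number and $B_\mathrm{min}$ from the values quoted above ($B\approx35~\mu$G, $\mu=1.25$, $n=0.12~\mathrm{cm}^{-3}$, $\varv_s=3000~\mathrm{km~s}^{-1}$, $m_r=1836$), using SI constants:

```python
import math

# Numerical cross-check of the SN 1006 estimates (SI units throughout).
mu0 = 4e-7 * math.pi       # vacuum permeability [T m / A]
m_p = 1.6726e-27           # proton rest mass [kg]
m_r = 1836                 # proton-to-electron mass ratio

B   = 35e-10               # 35 microgauss in tesla (1 G = 1e-4 T)
mu  = 1.25                 # mean molecular weight of the warm ISM
n   = 0.12e6               # downstream number density [m^-3]
v_s = 3000e3               # lab-frame shock velocity [m/s]

rho = mu * m_p * n                      # mass density [kg/m^3]
v_A = B / math.sqrt(mu0 * rho)          # post-shock Alfven speed (f_B = 1)
M_A = v_s / v_A                         # post-shock Alfvenic Mach number

# Lower limit B_min, evaluated at the reference density n = 0.1 cm^-3, A = 1,
# to reproduce the 22.7 muG prefactor of the equation above.
rho_ref   = mu * m_p * 0.1e6
B_min_ref = 2 * v_s * math.sqrt(mu0 * rho_ref / (m_r * 1))

print(round(v_A / 1e3))                 # ~197 km/s, i.e. ~200 f_B km/s
print(round(M_A, 1))                    # ~15.2 < sqrt(1836)/2 ~ 21.4
print(round(B_min_ref * 1e10, 1))       # ~22.7 muG
print(round(B_min_ref * 1e10 / math.sqrt(56), 1))  # ~3.0 muG for Fe-56
```

The numbers land on the values quoted in the text; note that $B\approx35~\mu$G comfortably exceeds $B_\mathrm{min}$, consistent with efficient electron acceleration at the polar caps.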
Hence, if the plasma is composed of abundant ${}^{56}\\mathrm{Fe}$ ions, the minimum post-shock magnetic field is lowered to $3~\\mu$G for the same shock parameters.\nFor heavy ions to dominate the growth of the intermediate-scale instability, we require them to be very abundant because the growth rate, $\\Gamma$, of the instability depends on the number density of ions via $\\Gamma \\propto n_{\\rm Fe}^{1\/3}$.\n\n\n\\section{Conclusions}\n\n\\label{sec:concl}\n\n\nIn collisionless shocks, electrons are heated to energies much larger than the kinetic energy of upstream electrons impinging on the shock. However, in non-relativistic shocks, their Larmor radii fall short of those of ions so that another acceleration mechanism is needed to boost their energies to the point where they can actively participate in the process of DSA.\nPreviously suggested mechanisms, which were based on driving whistler waves unstable by hot downstream or cold upstream electrons, require high values of the {\\alf}ic Mach number and were shown not to work in PIC simulations \\citep{Niemiec+2012}. \nIn this paper we consider a new mechanism for electron pre-acceleration in quasi-parallel, non-relativistic electron-ion shocks that is based on driving ion-cyclotron waves unstable by means of ions that are drifting with their upstream velocity through the shock transition zone. 
The corresponding intermediate-scale instability \\citep{sharp2} only works for low values of the {\\alf}ic Mach number at the shock front, $\\mathcal{M}_{\\rm A}^f < \\sqrt{m_i\/m_e}\/2 \\approx 21$, which is the condition for the instability to operate.\n\nWe present results from three 1D3V PIC simulations for parallel electron-ion shocks with sonic Mach number $\\mathcal{M}_{\\rm s} \\sim 365$:\nthe first (red) simulation uses a realistic ion-to-electron mass ratio of $m_r=1836$ and provides favorable conditions for exciting the intermediate-scale instability.\nThe second (black) simulation employs identical physical initial conditions but has an artificially lower value for the ion-to-electron mass ratio, $m_r=100$, which violates the instability condition.\nIn the third (blue) simulation, the condition is also violated by using a higher value of $\\mathcal{M}_{\\rm A}$ and $m_r=1836$.\nHighlights of our results include:\n\\begin{itemize}\n\n\\item Only the simulation that grows the intermediate-scale instability scatters hot electrons in the downstream off of driven ion-cyclotron waves. We demonstrate that this efficiently pre-accelerates electrons to energies that enable them to participate in DSA cycles and yields a non-thermal power-law distribution of electrons (see Figure~\\ref{fig:spectra300}).\n\n\\item This effective electron acceleration comes about because the excited ion-cyclotron waves at the shock front are comoving with the upstream plasma and hence survive advection into the downstream. 
In consequence, the amplitude of perpendicular magnetic field fluctuations, which are resonantly scattering hot electrons, is substantially increased in comparison to the other simulations that preclude instability growth (see Figures~\\ref{fig:spectra300} and \\ref{fig:KLS}).\n\n\\item The simulation with the higher value of $\\mathcal{M}_{\\rm A}$ shows a reduction, by more than two orders of magnitude, in the efficiency of electron acceleration (see Figure~\\ref{fig:Eff}). However, the electrons in the downstream are heated to the same temperature as in the red simulation (see Figure~\\ref{fig:Temp}).\n \n\\item The simulation with the lower mass ratio ($m_r=100$) results in much lower heating of electrons in the downstream region and thus an even lower electron acceleration efficiency (see Figures~\\ref{fig:Eff} and \\ref{fig:Temp}). We conclude that accurate PIC simulations of collisionless electron-ion shocks require realistic mass ratios.\n \n\\end{itemize}\n\nThese findings put the magnetic amplification processes at shocks into the limelight because of the strict instability criterion that favors low values of $\\mathcal{M}_{\\rm A}$ and, as such, relates efficient electron acceleration to a lower limit on the downstream magnetic field strength (see Equation~\\ref{eq:Bmin}). Most importantly, this paper provides an important cornerstone in our understanding of diffusive shock acceleration of electrons, and potentially enables us to construct an analytical subgrid model for including electron acceleration in modern MHD models.\n\n\n\n\n\\section*{Acknowledgements}\n\nWe would like to thank Anatoly Spitkovsky for discussions on various aspects of the simulations. We also acknowledge comments by Damiano Caprioli and the anonymous referee.\nM.S., R.L., T.T., and C.P. acknowledge support by the European Research Council under ERC-CoG grant CRAGSMAN-646955.\nThe authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. 
(www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). This research was in part supported by the Munich Institute for Astro- and Particle Physics (MIAPP), which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2094 -- 390783311.\n\n\n\\begin{appendix}\n\n\n\n\n\\section{Distribution of relativistic particles}\n\\label{app:temp}\n\nHere we review the shape of the momentum distribution of charged particles in thermal equilibrium, and the impact of the existence of a non-thermal tail.\nWe discuss how it can be characterized in such cases, and different ways to quantify the fraction of density and energy in its non-thermal tail.\n\n\\subsection{Average thermal energy}\n\nIn thermal equilibrium, the rest-frame (R) isotropic momentum distribution of relativistic particles (with rest mass $m_s$) is given by the Maxwell-J{\\\"u}ttner distribution~\\citep{Wright1975}\n\\begin{eqnarray}\n\\label{Eq:distribution}\nf_{\\rm th}(\\vec{u}) = \\frac{ n_R e^{-\\gamma\/\\tilde{T}_R} }{4 \\pi \\tilde{T}_R K_2(1\/\\tilde{T}_R)} ,\n\\end{eqnarray}\nwhere $K_2$ is the modified Bessel function of the second kind, the Lorentz factor is $\\gamma = \\sqrt{1+ \\vec{u} \\cdot \\vec{u}}$, and $\\tilde{T}_R$ is the thermodynamical equilibrium temperature in the rest frame\\footnote{\nIt is worth mentioning here that because the phase-space volume is invariant under Lorentz transformation, this distribution takes the same form in any other frame, e.g., in the laboratory frame \\citep{Wright1975}.} normalized by $m_s c^2\/k_{\\rm B}$, all velocities are normalized with the speed of light $c$, and $n_R$ is the rest-frame number density of the particles.\nTherefore, the average (thermal + rest-mass) energy per particle, normalized by $m_s c^2$, is\n\n\\begin{eqnarray}\n\\langle \\bar{E} \\rangle &=& 
\\int d^3u \\gamma \\frac{f_{\\rm th}(\\vec{u})}{n_R} = 4 \\pi \\int_0^{\\infty} du \\frac{u^2 \\gamma e^{-\\gamma\/\\tilde{T}_R} }{4 \\pi \\tilde{T}_R K_2(1\/\\tilde{T}_R)}\n\\nonumber \\\\\n&=&\n\\frac{1}{ \\tilde{T}_R K_2(1\/\\tilde{T}_R)} \\int_0^{\\infty} du u^2 \\gamma e^{-\\gamma\/\\tilde{T}_R} \n\\nonumber \\\\\n&=&\n\\frac{-1}{ \\tilde{T}_R K_2(1\/\\tilde{T}_R)} \\frac{d}{d(1\/\\tilde{T}_R)} \\int_0^{\\infty} du u^2 e^{-\\gamma\/\\tilde{T}_R}\n\\nonumber \\\\\n&=&\n\\frac{-1}{ \\tilde{T}_R K_2(1\/\\tilde{T}_R)} \\frac{d ( \\tilde{T}_R K_2(1\/\\tilde{T}_R) )}{d(1\/\\tilde{T}_R)} \n=3 \\tilde{T}_R +\\frac{K_1 \\left(1\/\\tilde{T}_R \\right)}{K_2\\left(1\/\\tilde{T}_R \\right)}.\n~~~~~~~ \n\\end{eqnarray}\nThat is, the average thermal energy per particle is\n\\begin{eqnarray}\n\\label{Eq:Eth}\n\\langle E_{\\rm th} \\rangle = \\left( 3 \\tilde{T}_R +\\frac{K_1 \\left(1\/\\tilde{T}_R \\right)}{K_2\\left(1\/\\tilde{T}_R \\right)} - 1 \\right) m_s c^2.\n\\end{eqnarray}\n\n\n\\subsection{Equilibrium rest-frame temperature}\nFor a distribution of particle momenta, we can find $\\tilde{T}_R$ (assuming a thermal equilibrium of most low energy particles) as follows:\n\\begin{eqnarray}\n\\label{Eq:du4fu}\n\\frac{d }{du} \\left[ u^4 f_{\\rm th} \\right] = \\frac{ n_R u^3 \\left(4 \\sqrt{u^2+1} \\tilde{T}_R-u^2\\right) }{4 \\pi \\tilde{T}_R^2 K_2(1\/\\tilde{T}_R) \\sqrt{u^2+1}} e^{-\\frac{\\sqrt{u^2+1}}{\\tilde{T}_R}} .\n\\end{eqnarray}\nThat is, using the value of $u$ for which $u^4 f(u)$ is maximum ($u_m$), we can determines $\\tilde{T}_R$ using\n\\begin{eqnarray}\n\\label{Eq:TR}\n\\tilde{T}_R = \\frac{ u_m^2 }{ 4 \\sqrt{1+u_m^2}}.\n\\end{eqnarray} \n\n\n\\subsection{Momentum distribution with non-thermal particles}\n\n\n\nHere, we consider the full momentum distribution $f_{\\rm full}$ that contains non-thermal and high-energy particles. 
\nThe main assumption adopted here is that {\\it{all low-energy particles are in thermal equilibrium}}.\nThat is, all non-thermal particles are such that $u \\gg u_m$, which means that the value of $u$ at which the derivative in Equation~\\eqref{Eq:du4fu} is zero is always outside the range in which the non-thermal part of the distribution is non-zero.\nTherefore, the value of $u_m$, and hence $\\tilde{T}_R$, computed from Equation~\\eqref{Eq:TR}, is still the same.\nAn example of such a case is shown in Figure~\\ref{fig:Thermal}.\n\nAssuming that the non-thermal part of the distribution follows a power law with a typical index of $-4$, the full distribution can be approximated as follows,\n\\begin{eqnarray}\n\\label{Eq:Ffull}\nf_{\\rm full}(u) = \\frac{n_R}{1+\\alpha}\n\\left[\n\\frac{e^{-\\gamma\/\\tilde{T}_R}}{4 \\pi \\tilde{T}_R K_2 (1\/\\tilde{T}_R)}\n+\n\\alpha\n\\frac{u_1 u_2 u^{-4}}{4 \\pi (u_2 - u_1) }\\theta_1 \\theta_2\n\\right],\n\\end{eqnarray}\nwhere $\\alpha$ is the fractional number density of non-thermal particles, which is implicitly assumed to be small, i.e., $\\alpha \\ll 1$, $\\theta_1 = \\theta(u-u_1)$, $\\theta_2 = \\theta(u_2 -u)$, and $u_1$ and $u_2$ ($u_2>u_1$) delimit the range in which the distribution takes a power-law form.\nThe distribution in Equation~\\eqref{Eq:Ffull} is discontinuous at $u=u_1$. However, in reality the thermal and $u^{-4}$ parts of the distribution are connected by a suprathermal distribution (with a logarithmic slope steeper than $-4$) that starts at $u \\sim 4 u_m$ to $5 u_m$ \\citep{Caprioli2014a}.\nTherefore, the suprathermal distribution does not change the relation between $u_m$ and $\\tilde{T}_R$ (Equation~\\ref{Eq:TR}).\n\nThe inclusion of the non-thermal part leads to a lower fraction of particles in thermal equilibrium and hence proportionately lowers the value of $\\langle E_{\\rm th} \\rangle$ at a given $\\tilde{T}_R$. 
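A quick numerical check (a sketch with hypothetical cutoffs $u_1$, $u_2$, not values from the simulations) confirms that the power-law component carries unit weight, taking the positive difference $u_2-u_1$ in the normalization, i.e. $4\pi\int_{u_1}^{u_2} u^2\, u_1 u_2 u^{-4}/[4\pi(u_2-u_1)]\,du = 1$, so that $\alpha/(1+\alpha)$ is indeed the non-thermal number fraction:

```python
# Midpoint-rule integration of the power-law component of f_full.
u1, u2 = 2.0, 40.0      # hypothetical power-law range with u2 > u1
steps = 500_000
du = (u2 - u1) / steps
# sum of u^2 * (u1*u2*u^-4) du, then the 1/(u2-u1) normalization
weight = sum((u1 + (i + 0.5) * du) ** -2 for i in range(steps)) * du * u1 * u2 / (u2 - u1)
print(weight)  # ~ 1.0 (up to integration error)
```

The exact antiderivative gives $u_1u_2/(u_2-u_1)\,(1/u_1 - 1/u_2) = 1$ for any $u_2>u_1$, which the numerical result reproduces.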
Namely\n\\begin{eqnarray}\n\\label{Eq:Eth2}\n\\langle E_{\\rm th} \\rangle_{\\rm full} \\approx \\frac{1}{1+\\alpha} \\left( 3 \\tilde{T}_R +\\frac{K_1 \\left(1\/\\tilde{T}_R \\right)}{K_2\\left(1\/\\tilde{T}_R \\right)} - 1 \\right) m_s c^2.\n\\end{eqnarray}\n\nThe value of $\\alpha$ can be determined by comparing the value of $f(u_m)$ and $f_{\\rm full}(u_m)$, where the value of \n$f(u_m)$ is computed from Equation~\\eqref{Eq:distribution} at $u_m$ where the expression $u^4 f_{\\rm full}(u)$ is maximized, and\n$f_{\\rm full}(u_m)$ is computed from the normalized histogram of the particles' normalized momenta ($u$) in the simulation.\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{Plots\/Fig08.pdf}\n\\caption{\nMomentum distribution for ions (red dashed) and electrons (red solid) at $t~ \\Omega_i=325$ for the Ma5Mr1836 simulation, where we first transform momenta of particles to the plasma rest frame before we compute the momentum spectra.\nSolid black lines show the analytical Maxwell-J{\\\"u}ttner distribution scaled appropriately, $u^4 f(u)$ (as given by Equation~\\ref{Eq:distribution}). The normalized temperature (given by Equation~\\ref{Eq:TR}) is determined from the location of the peaks ($u_m^i$ for ions and $u_m^e$ for electrons).\nParticle energies with $u>5u_m$ (to the right of blue lines) are considered to be non-thermal.\n\\label{fig:Thermal}\n}\n\\end{figure}\n\n\n\\subsection{Acceleration efficiency}\n\n\n\nWe note here that $\\alpha = n_{\\rm non-th}\/n_{\\rm th}$, where $n_{\\rm th}$ is the number density of particles in thermal equilibrium, and $n_{\\rm non-th}$ is the number density of non-thermal particles. 
Thus, \nthe rest-frame total number density is $n_R = n_{\\rm th} + n_{\\rm non-th}$.\nThat is, we can define the efficiency of acceleration as the fraction of non-thermal particles ($\\epsilon_n$) measured in per cent via\n\\begin{eqnarray}\n\\label{Eq:EffN}\n\\epsilon_n \\equiv \\frac{n_{\\rm non-th}}{n_R} \\times 100 = \\frac{\\alpha}{1+\\alpha} \\times 100 .\n\\end{eqnarray}\n\nWe can also define the acceleration efficiency by the fractional energy carried by non-thermal particles ($\\epsilon_E$) in per cent as\n\\begin{eqnarray}\n\\label{Eq:EffE}\n\\epsilon_E & \\equiv & \n\\frac{ E_{\\rm tot} - \\langle E_{\\rm th} \\rangle_{\\rm full} }{ E_{\\rm tot} } \\times 100 \n=\n\\frac{E_{\\rm non-th}}{E_{\\rm tot}} \\times 100 ,\n\\end{eqnarray}\nwhere $ E_{\\rm tot} = \\langle \\gamma -1 \\rangle m_s c^2$ is the average kinetic energy per particle, with the average of $(\\gamma-1)$ taken over all thermal and non-thermal particles.\n\nAn equally important definition of the acceleration efficiency is how much of the upstream plasma kinetic energy is channeled into non-thermal energy. In this case, the efficiency of acceleration in per cent is defined as\n\\begin{eqnarray}\n\\label{Eq:Eff-sh}\n\\epsilon_{\\rm sh}\n& \\equiv &\n\\frac{ E_{\\rm non-th} }{ \n0.5 ~ (m_e + m_i) \\varv_{\\rm sh}^2 } \\times 100 ,\n\\end{eqnarray}\nwhere $\\varv_{\\rm sh}$ is the upstream plasma drifting speed in the shock rest frame.\nWe compute the non-thermal energy per particle, $E_{\\rm non-th}$, as the average energy of particles with $u>5u_m$ \\citep[][]{Xu2020}. 
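As a toy illustration of the number-fraction efficiency in Equation~\eqref{Eq:EffN} (with a hypothetical $\alpha$, not taken from the simulations):

```python
# For a non-thermal-to-thermal number ratio alpha = n_nonth / n_th = 0.02,
# the per-cent fraction of non-thermal particles among all particles is
alpha = 0.02
eps_n = alpha / (1 + alpha) * 100
print(round(eps_n, 2))  # -> 1.96
```

Note that $\epsilon_n < 100\,\alpha$ because the denominator counts all particles, not only the thermal ones.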
The vertical blue lines in Figure~\\ref{fig:Thermal} show such cuts.\nThat is, particles with velocities $u>5u_m$ are assumed to form the non-thermal part of the distribution function.\n\n\n\n\\end{appendix}\n\n\n\\section*{Introduction}\n\nThe classification problem for group actions on operator algebras has a long history dating back to the hallmark results of Connes \\cite{Connes73, Connes74, Connes75, Connes76, Connes77} about injective factors that employed ideas of a dynamical nature in a crucial way.\nThe subsequent pioneering work by Jones \\cite{Jones80}, Ocneanu \\cite{Ocneanu85} and others \\cite{SutherlandTakesaki89, KawahigashiSutherlandTakesaki92, KatayamaSutherlandTakesaki98, Masuda13} completely unraveled the structure of countable amenable group actions on injective factors and paved the way for related further developments in subfactor theory \\cite{Jones83, Popa94, Popa10, Masuda99}.\nWhen the acting group is nonamenable, the structure of its actions on the hyperfinite II$_1$-factor is already less manageable, as a theorem of Jones \\cite{Jones83oc} and other subsequent stronger \\emph{no-go theorems} such as \\cite{Popa06, BrothierVaes15} later demonstrated.\n\nConnes' seminal paper \\cite{Connes76} was the first of many influential works in operator algebras to make use of a kind of touchstone object (in his case the hyperfinite II$_1$-factor $\\mathcal{R}$) and to begin the classification approach by showing that every object to be classified absorbs this touchstone object.\nMore specifically, Connes' approach begins with a structural result asserting that every injective II$_1$-factor is \\emph{McDuff} --- i.e., tensorially absorbs $\\mathcal R$; see \\cite{McDuff70} --- which is then used further to show that each such factor is isomorphic to $\\mathcal R$.\nIn Ocneanu's pioneering work to classify outer actions of an arbitrary countable amenable group $G$ on $\\mathcal{R}$, he 
likewise proves at an early point in the theory that each such action (even without assuming outerness) absorbs the trivial $G$-action on $\\mathcal R$ tensorially, which he then exploits for his more sophisticated classification theorem.\nAlthough one would generally need injectivity of a factor to arrive at a satisfactory classification theory about (outer) $G$-actions on it, this early part of Ocneanu's theory works in remarkable generality.\nThe precise statement and a (comparably) self-contained proof using the methods of this article, which we include for the reader's convenience, are given in Theorem~\\ref{theorem:model-absorption}.\n\nIf one is concerned with C$^*$-algebraic considerations related to the \\emph{equivariant Jiang--Su stability problem} (see \\cite[Conjecture A]{Szabo21si}), the current methods always find a way to exploit Ocneanu's aforementioned theorem in one form or another, usually involving to some degree Matui--Sato's property (SI) technique \\cite{MatuiSato12acta, MatuiSato12, MatuiSato14, Sato19}.\nLooking at the state of the art at present \\cite{GardellaHirshbergVaccaro, Wouters21}, the key difficulties arise from pushing these methods to the case where a group action $G\\curvearrowright A$ on a C$^*$-algebra induces a complicated $G$-action on the traces of $A$.\nIn particular, it is generally insufficient for such considerations to only understand $G$-actions on $\\mathcal{R}$; one rather needs to have control over $G$-actions on more general tracial von Neumann algebras.\nThis C$^*$-algebraically motivated line of investigation led us to ask the following question that is intrinsic to von Neumann algebras:\n\n\\begin{question}\nLet $G$ be a countable amenable group and $M$ a separably acting finite von Neumann algebra with $M\\cong M\\bar{\\otimes}\\mathcal R$.\nIs it true that every action $\\alpha: G\\curvearrowright M$ is cocycle conjugate to $\\alpha\\otimes\\operatorname{id}_{\\mathcal R}: G\\curvearrowright 
M\\bar{\\otimes}\\mathcal R$?\n\\end{question}\n\nAlthough Ocneanu's original results confirm this in full generality when $M$ is a factor,\\footnote{While Ocneanu's work \\cite{Ocneanu85} only contains an explicit proof for so-called centrally free actions $\\alpha$, his comment following \\cite[Theorem 1.2]{Ocneanu85} suggests an alternative approach to avoid this assumption. In several papers, the more general version without central freeness is also attributed to Ocneanu.} it turned out to be not so straightforward to resolve this question, despite common folklore wisdom in the subject suggesting that the factor case ought to imply the general case.\nSome classification results in the literature \\cite{JonesTakesaki84, SutherlandTakesaki85} imply that the above has a positive answer when $M$ is injective, but relying on this has two drawbacks.\nFirstly, the question we are trying to answer is by design much weaker than a hard classification result, so it would be desirable to have a proof not relying on such a powerful theorem, in particular when an assumption such as injectivity may not even be needed.\nSecondly, there is a subtle gap in the proof of \\cite[Lemma 4.2]{SutherlandTakesaki85}.\nWe are indebted to Stefaan Vaes for pointing this out to us in the context of the above question and for outlining a sketch of proof on how to correct this, which became a sort of blueprint for the main result of the fourth section.\n\nIn contemporary research by C$^*$-algebraists, the aforementioned results by Sutherland--Takesaki are still used to provide a partial answer to the above question, for example in \\cite{GardellaHirshbergVaccaro}. 
In light of the previous discussion, the present article aims to give a self-contained and --- dare we say also relatively elementary --- approach to answer this question instead.\nIn fact, we can treat it in greater generality than posed above, without restrictions on the type of $M$ and in the setting of amenable actions of arbitrary discrete groups.\nThe following can be viewed as our main result; see Theorem~\\ref{thm:general-amenable-McDuff}.\n\n\\begin{theoremi} \\label{theorem-A}\nLet $G$ be a countable discrete group and $M$ a von Neumann algebra with separable predual such that $M \\cong M\\bar{\\otimes} \\mathcal{R}$. \nThen every amenable action $\\alpha\\colon G \\curvearrowright M$ is cocycle conjugate to $\\alpha \\otimes \\mathrm{id}_\\mathcal{R}\\colon G\\curvearrowright M\\bar{\\otimes} \\mathcal{R}$.\n\\end{theoremi}\n\nAlong the way, our methodology employs dynamical variants of McDuff-type properties analogous to the theory of strongly self-absorbing C$^*$-dynamics \\cite{Szabo18ssa}, which can be and is treated in the more general setting of continuous actions of locally compact groups; see Definitions~\\ref{def:strong-absorption} and \\ref{def:ssa-action}.\n\n\\begin{defii}\nLet $G$ be a second-countable locally compact group.\nAn action $\\delta: G\\curvearrowright\\mathcal R$ is called \\emph{strongly self-absorbing} if there exist an isomorphism $\\Phi: \\mathcal R\\to\\mathcal R\\bar{\\otimes}\\mathcal R$, a $(\\delta\\otimes\\delta)$-cocycle $\\mathbb{U}: G\\to\\mathcal{U}(\\mathcal R\\bar{\\otimes}\\mathcal R)$ and a sequence of unitaries $v_n\\in\\mathcal{U}(\\mathcal R\\bar{\\otimes}\\mathcal R)$ such that\n\\[\nv_n(x\\otimes 1_{\\mathcal R})v_n^* \\to \\Phi(x) \\quad\\text{and}\\quad v_n(\\delta\\otimes\\delta)_g(v_n)^* \\to \\mathbb{U}_g\n\\]\nin the strong operator topology for all $x\\in\\mathcal R$ and $g\\in G$, the latter uniformly over compact subsets in $G$.\n\\end{defii}\n\nFor such actions we prove the following dynamical 
generalization of McDuff's famous theorem \\cite{McDuff70}; see Theorem~\\ref{thm:general-McDuff}.\n\n\\begin{theoremi} \\label{theorem-C}\nLet $G$ be a second-countable locally compact group.\nLet $\\alpha: G \\curvearrowright M$ be an action on a von Neumann algebra with separable predual and let $\\delta: G \\curvearrowright \\mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.\nThen $\\alpha$ is cocycle conjugate to $\\alpha\\otimes\\delta$ if and only if there exists a unital equivariant $*$-homomorphism $(\\mathcal R,\\delta) \\to (M_{\\omega,\\alpha},\\alpha_\\omega)$, where the latter denotes the induced $G$-action on the asymptotic centralizer algebra of $M$.\n\\end{theoremi}\n\nOur initial methodology, inspired by the theory of C$^*$-dynamics, is only well-suited to build all the aforementioned (and other related) theory in the setting of (actions on) semi-finite von Neumann algebras.\nAfter the first preliminary section, the second and third sections are dedicated to proving Theorem~\\ref{theorem-C} in the special case of semi-finite von Neumann algebras.\nThe fourth section then builds on some of this theory, combined with the original ideas by Sutherland--Takesaki \\cite{SutherlandTakesaki85} related to disintegrating a $G$-action into an action of its transformation groupoid induced on the center. This culminates in our main technical result of that section --- a kind of measurable local-to-global principle for absorbing a given strongly self-absorbing action, Theorem~\\ref{thm:main_technical} --- which is then used to prove a stronger version of Theorem~\\ref{theorem-A} for actions on semi-finite von Neumann algebras. 
\n \nThe general main results are then obtained in the fifth section with the help of Tomita--Takesaki theory.\nIt is in this step that it becomes obvious why we want to treat Theorem~\\ref{theorem-C} beyond the case of discrete groups.\nNamely, if $\\alpha\\colon G\\curvearrowright M$ is an action as in Theorem~\\ref{theorem-A} on a von Neumann algebra that is not semi-finite, we may consider the extended action $\\tilde{\\alpha}\\colon G\\curvearrowright\\tilde{M}$ on the (semi-finite) continuous core.\nHowever, in order to conclude that $\\alpha$ absorbs $\\delta$ with the help of Tomita--Takesaki theory, it is not sufficient to argue that $\\tilde{\\alpha}$ absorbs $\\delta$, but one actually needs to verify this absorption for certain enlargements of these actions to continuous actions of $G\\times\\mathbb{R}$, which in any case requires Theorem~\\ref{theorem-C} for non-discrete groups.\nFortunately this can all be arranged to work and we end the last section with the proofs of our main results.\n\n\n\\section{Preliminaries}\n\nThroughout the paper, $\\omega$ denotes a fixed free ultrafilter on $\\mathbb{N}$ and $G$ denotes a second-countable, locally compact group.\nLet $M$ be a von Neumann algebra with predual $M_*$. For $x \\in M$ and $\\phi \\in M_*$ we define elements $x\\phi, \\phi x$ and $[x,\\phi] \\in M_*$ by $(x\\phi)(y) = \\phi(yx)$, $(\\phi x)(y) = \\phi(xy)$ for all $y \\in M$ and $[x,\\phi] = x\\phi - \\phi x$. Moreover, for $x \\in M$ and $\\phi \\in M_*$ we set $\\|x\\|_{\\phi} = \\phi(x^*x)^{1\/2}$ and $\\|x\\|_{\\phi}^\\# = \\phi(x^*x + xx^*)^{1\/2}$. 
When $\\phi$ is a faithful normal state, the norms $\\|\\cdot\\|_\\phi$ and $\\|\\cdot\\|_\\phi^\\#$ induce the strong and strong-$*$ topology on bounded sets, respectively.\nMore generally, when $\\phi$ is a normal weight on $M$, we define $\\|x\\|_{\\phi}:= \\phi(x^*x)$ for all $x$ contained in the left-ideal\n\\[\\{x \\in M \\mid \\phi(x^*x) < \\infty\\}.\\]\nRecall that $M$ is called $\\sigma$-finite if it admits a faithful normal state.\n\\subsection{Ultrapowers of von Neumann algebras}\n\nWe start with a reminder on the Ocneanu ultrapower of a von Neumann algebra and related concepts.\nThis originates in \\cite[Section 2]{Connes74} and \\cite[Section 5]{Ocneanu85}, but the reader is also referred to \\cite{AndoHaagerup14} for a thorough exposition on ultrapower constructions.\n\n\\begin{definition}\nLet $M$ be a $\\sigma$-finite von Neumann algebra. We define the subset $\\mathcal{I}_\\omega(M) \\subset \\ell^\\infty(M)$ by\n\\begin{align*}\n\\mathcal{I}_\\omega(M) &= \\{(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M) \\mid x_n \\rightarrow 0 \\text{ $*$-strongly as } n \\rightarrow \\omega\\}\\\\\n&= \\{(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M) \\mid \\lim_{n \\rightarrow \\omega}\\|x_n\\|_{\\phi}^\\# =0 \\text{ for some faithful normal state } \\phi \\text{ on } M\\}.\n\\end{align*}\nDenote \\[\\mathcal{N}_\\omega(M)=\\{(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M) \\mid (x_n)_{n \\in \\mathbb{N}} \\mathcal{I}_\\omega(M) \\subset \\mathcal{I}_\\omega(M), \\text{ and } \\mathcal{I}_\\omega(M)(x_n)_{n \\in \\mathbb{N}} \\subset \\mathcal{I}_\\omega(M)\\},\\] \n\\[\\mathcal{C}_\\omega(M) =\\{(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M) \\mid \\lim_{n \\rightarrow \\omega}\\|[x_n,\\phi]\\| = 0 \\text{ for all } \\phi \\in M_*\\}.\\]\nThen \\[{\\mathcal{I}_\\omega(M) \\subset \\mathcal{C}_\\omega(M) \\subset \\mathcal{N}_\\omega(M)}.\\] \nThe \\emph{Ocneanu} ultrapower $M^\\omega$ of $M$ is defined as\n\\[M^\\omega := 
\\mathcal{N}_\\omega(M)\/\\mathcal{I}_\\omega(M),\\]\nand the \\emph{asymptotic centralizer} $M_\\omega$ of $M$ is defined as\n\\[M_\\omega := \\mathcal{C}_\\omega(M)\/\\mathcal{I}_\\omega(M).\\]\nThese are both von Neumann algebras. \nAny faithful normal state $\\phi$ on $M$ induces a faithful normal state $\\phi^\\omega$ on $M^\\omega$ via the formula\n\\[\\phi^\\omega((x_n)_{n \\in \\mathbb{N}}) = \\lim_{n \\rightarrow \\omega} \\phi(x_n).\\]\nThe restriction of $\\phi^\\omega$ to $M_\\omega$ is a tracial state.\n\\end{definition}\n\n\\begin{remark}\nSince the constant sequences are easily seen to be contained in $\\mathcal{N}_\\omega(M)$, one considers $M$ as a subalgebra of $M^\\omega$.\nIf $\\lim_{n \\rightarrow \\omega}\\|[x_n,\\phi]\\| = 0$ for all $\\phi \\in M_*$, then $\\lim_{n \\rightarrow \\omega}\\|[x_n,y]\\|_\\phi^\\# = 0$ for all $y \\in M$ by \\cite[Proposition 2.8]{Connes74}.\n In this way we get a natural inclusion $M_\\omega \\subset M^\\omega \\cap M'$.\n That same proposition also shows that in order to check whether a sequence $(x_n)_n$ in $\\ell^\\infty(M)$ satisfies $\\lim_{n \\rightarrow \\omega}\\|[x_n,\\psi]\\| = 0$ for all $\\psi \\in M_*$, it suffices to check if this is true for just a single faithful normal state $\\phi$ and to check if $\\lim_{n \\rightarrow \\omega}\\|[x_n,y]\\|^\\#_\\phi =0$ for all $y \\in M$.\n This shows that $M_\\omega = M^\\omega \\cap M'$ whenever $M$ admits a faithful normal tracial state. 
The same is then true for all semi-finite von Neumann algebras with separable predual (for example by \\cite[Lemma~2.8]{MasudaTomatsu16}).\n\\end{remark}\n\n\\begin{definition}\nA continuous action $\\alpha\\colon G \\curvearrowright M$ of a second-countable locally compact group on a von Neumann algebra is a homomorphism $G \\to \\mathrm{Aut}(M)$, $g \\mapsto \\alpha_g$ such that\n\\[\n\\lim_{g \\rightarrow 1_G}\\|\\varphi \\circ \\alpha_g - \\varphi\\|=0 \\text{ for all } \\varphi \\in M_*.\n\\]\nBy \\cite[Proposition~X.1.2]{Takesaki03}, this is equivalent to the map being continuous for the point-weak-$*$ (or equivalently, point-weak, point-strong,$\\hdots$) topology.\nIn most contexts we omit the word ``continuous'', as continuity will be implicitly understood.\nIn contrast, we will explicitly talk of an algebraic $G$-action when we are considering an action of $G$ viewed as a discrete group.\n\\end{definition}\n\nGiven an action $\\alpha\\colon G \\curvearrowright M$, the induced algebraic actions $\\alpha^\\omega\\colon G \\rightarrow \\mathrm{Aut}(M^\\omega)$ and $\\alpha_\\omega \\colon G \\rightarrow \\mathrm{Aut}(M_\\omega)$ are usually not continuous.\nThe remainder of this subsection is devoted (for lack of a good literature reference) to fleshing out the construction of their `largest' von Neumann subalgebras where the action is sufficiently well-behaved for our needs, called the \\emph{$(\\alpha, \\omega)$-equicontinuous parts} (see Definition \\ref{def:equicontinuous_parts}).\nThese constructions are based on \\cite[Section 3]{MasudaTomatsu16}, where the special case $G=\\mathbb{R}$ is considered. \n\n\\begin{definition}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with an action $\\alpha\\colon G \\curvearrowright M$.\nFix a faithful normal state $\\phi$ on $M$. 
An element $(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M)$ is called \\emph{$(\\alpha, \\omega)$-equicontinuous} if for every $\\varepsilon>0$, there exists a set $W \\in \\omega$ and an open neighborhood $1_G\\in U \\subset G$ such that\n\\[\n\\sup_{n\\in W} \\sup_{g\\in U} \\|\\alpha_g(x_n) - x_n\\|_\\phi^\\# < \\varepsilon .\n\\]\nWe denote the set of $(\\alpha, \\omega)$-equicontinuous sequences by $\\mathcal{E}^\\omega_\\alpha$.\n\\end{definition}\n\n\\begin{remark}\nThe definition above does not depend on the faithful normal state chosen.\nWhenever $\\phi$ and $\\psi$ are two faithful normal states on $M$, one has for every $\\varepsilon>0$ some $\\delta>0$ such that for all $x \\in (M)_1$, $\\|x\\|_\\phi^\\# < \\delta$ implies $\\|x\\|_\\psi^\\# < \\varepsilon$. \n\\end{remark}\n\n\\begin{lemma}\\label{lemma:epsilon_delta_sequences}\nLet $M$ be a von Neumann algebra with faithful normal state $\\phi$. For all $(x_n)_{n \\in \\mathbb{N}} \\in \\mathcal{N}_\\omega(M)$ the following holds: \nFor any $\\varepsilon >0$ and compact set $\\Psi \\subset M_*^+$ there exists a $\\delta >0$ and $W \\in \\omega$ such that if $y \\in (M)_1$ and $\\|y\\|_\\phi^\\#< \\delta$, then $\\sup_{\\psi\\in \\Psi} \\|x_n y\\|_{\\psi}^\\# <\\varepsilon$ and $\\sup_{\\psi\\in \\Psi} \\|y x_n\\|_{\\psi}^\\# <\\varepsilon$ for all $n \\in W$. \n\\end{lemma}\n\\begin{proof} We prove this by contradiction. \nSuppose that there exists $\\varepsilon >0$ and a compact set $\\Psi \\subset M_*^+$ such that for any $k \\in \\mathbb{N}$ there exists a $y_k \\in (M)_1$ with $\\|y_k\\|_\\phi^\\# < 1\/k$ but the following set belongs to $\\omega$: \n\\[\nA_k := \\Big\\{ n \\in\\mathbb{N} \\ \\Big| \\ \\sup_{\\psi \\in \\Psi}\\|x_ny_k\\|_\\psi^\\#\\geq \\varepsilon \\text{ or } \\sup_{\\psi \\in \\Psi}\\|y_k x_n\\|_\\psi^\\# \\geq \\varepsilon \\Big\\}.\n\\]\nDefine $W_0 := \\mathbb{N}$ and $W_k := A_1 \\cap \\hdots \\cap A_k \\cap [k, \\infty)$ for $k \\geq 1$. \nThese all belong to $\\omega$. 
\nFor each $n \\in \\mathbb{N}$ define $k(n)$ as the unique number $k \\geq 0$ with $n \\in W_k \\setminus W_{k+1}$. Put $z_n := y_{k(n)}$ if $k(n) \\geq 1$, else put $z_n:=1_M$. \nNote that for all $n \\in W_m$ with $m \\geq 1$ we get that $\\|z_n\\|_\\phi^\\# = \\|y_{k(n)}\\|_\\phi^\\# <\\frac{1}{k(n)} \\leq \\frac{1}{m}$. \nSince every $W_m$ belongs to $\\omega$, it follows that $(z_n)_{n\\in \\mathbb{N}} \\in \\mathcal{I}_\\omega(M)$. \nSince $(x_n)_{n \\in \\mathbb{N}} \\in \\mathcal{N}_\\omega(M)$, it follows that also $(x_nz_n)_{n\\in \\mathbb{N}}$ and $(z_nx_n)_{n\\in \\mathbb{N}}$ belong to $\\mathcal{I}_\\omega(M)$. \nHence we get that for all $\\psi \\in \\Psi$ \n\\[\\lim_{n \\rightarrow \\omega} \\left(\\|x_nz_n\\|_\\psi^\\# + \\|z_nx_n\\|_\\psi^\\#\\right) = 0.\\]\nSince $\\Psi$ is compact, we also have \n\\[\\lim_{n \\rightarrow \\omega} \\sup_{\\psi \\in \\Psi}\\left(\\|x_nz_n\\|_\\psi^\\# + \\|z_nx_n\\|_\\psi^\\#\\right) = 0.\\]\nThis gives a contradiction, since our choice of $z_n$ implies that for all $n \\in W_1$ \n\\[\\sup_{\\psi \\in \\Psi}\\left(\\|x_nz_n\\|_\\psi^\\# + \\|z_nx_n\\|_\\psi^\\#\\right)\\geq \\varepsilon.\\]\n\\end{proof}\n\n\\begin{lemma}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with action $\\alpha\\colon G \\curvearrowright M$.\nFor any two sequences $(x_n)_{n \\in \\mathbb{N}}, (y_n)_{n \\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega \\cap \\mathcal{N}_\\omega(M)$ it follows that $(x_ny_n)_{n\\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega$. \n\\end{lemma}\n\\begin{proof}\nWithout loss of generality we may assume $\\sup_{n \\in \\mathbb{N}} \\|x_n\\| \\leq \\frac{1}{2}$ and $\\sup_{n \\in \\mathbb{N}} \\|y_n\\| \\leq \\frac{1}{2}$.\nFix a faithful normal state $\\phi$ on $M$. 
Let $K \\subset G$ be a compact neighbourhood of the neutral element.\nTake $\\varepsilon >0$ arbitrarily.\nBy Lemma \\ref{lemma:epsilon_delta_sequences} there exists $\\delta>0$ and $W_1 \\in \\omega$ such that for every $z \\in (M)_1$ with $\\|z\\|_\\phi^\\# < \\delta$ one has \n\\[\n\\sup_{g \\in K}\\|x_n z\\|_{\\phi \\circ \\alpha_g}^\\#< \\frac{\\varepsilon}{2} \\text{ and } \\|zy_n\\|_\\phi^\\# < \\frac{\\varepsilon}{2} \\text{ for all } n \\in W_1.\n\\] \nSince $(x_n)_{n\\in \\mathbb{N}}$ and $(y_n)_{n\\in \\mathbb{N}}$ both belong to $\\mathcal{E}_\\alpha^\\omega$, we can find an open $U \\subset K$ containing the neutral element, and a $W_2 \\in \\omega$ such that\n\\[\n\\sup_{n \\in W_2}\\sup_{ g \\in U}\\|\\alpha_g(x_n) - x_n\\|_\\phi^\\# < \\delta \\text{, and}\n\\]\n\\[\\sup_{n \\in W_2}\\sup_{ g \\in U}\n\\|\\alpha_{g^{-1}}(y_n) - y_n\\|_\\phi^\\# < \\delta.\n\\]\nThen for $g \\in U$ and $n \\in W_1 \\cap W_2$ we have\n\\begin{align*}\n\\|\\alpha_g(x_n) \\alpha_g(y_n) - x_ny_n\\|_\\phi^\\# &\\leq \\|\\alpha_g(x_n)(\\alpha_g(y_n) - y_n)\\|_\\phi^\\# + \\|(\\alpha_g(x_n) - x_n)y_n\\|_\\phi^\\#\\\\\n&= \\|x_n(\\alpha_{g^{-1}}(y_n) - y_n)\\|_{\\phi \\circ \\alpha_g}^\\# + \\|(\\alpha_g(x_n) - x_n)y_n\\|_\\phi^\\#\\\\\n& < \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon.\n\\end{align*}\nThis ends the proof.\n\\end{proof}\n\n\\begin{lemma}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with an action $\\alpha: G \\curvearrowright M$.\nThen:\n\\begin{enumerate}[label=\\textup{(\\arabic*)},leftmargin=*]\n\\item \\label{lem:equicont-algebra:1}\nSuppose $(x_n)_{n\\in \\mathbb{N}}, (y_n)_{n\\in \\mathbb{N}} \\in \\ell^\\infty(M)$ satisfy $(x_n-y_n)_{n\\in \\mathbb{N}} \\in \\mathcal{I}_\\omega(M)$.\nThen $(x_n)_{n\\in \\mathbb{N}} \\in \\mathcal{E}^\\omega_\\alpha$ if and only if $(y_n)_{n\\in \\mathbb{N}} \\in \\mathcal{E}^\\omega_\\alpha$.\n\\item \\label{lem:equicont-algebra:2}\n$\\mathcal{E}_\\alpha^\\omega \\cap \\mathcal{N}_\\omega(M)$ 
is an $\\alpha$-invariant C$^*$-subalgebra of $\\ell^\\infty(M)$. \n\\end{enumerate}\n\\end{lemma}\n\\begin{proof} Fix a faithful normal state $\\phi$ on $M$.\nWe first prove \\ref{lem:equicont-algebra:1}.\nLet $\\varepsilon>0$.\nWe can choose $W \\in \\omega$ and an open neighborhood $1_G \\in U \\subset G$ such that \n\\[\n\\sup_{n\\in W} \\sup_{g\\in U} \\|\\alpha_g(x_n) - x_n\\|_\\phi^\\# < \\frac{\\varepsilon}{2}.\n\\] \nWithout loss of generality we may assume that $K=\\overline{U}$ is compact.\nConsider $s_n := \\sup_{g \\in K} \\|x_n - y_n\\|_{\\phi \\circ \\alpha_g}^\\#$. Since $K$ is compact and ${(x_n-y_n)_{n\\in \\mathbb{N}} \\in \\mathcal{I}_\\omega(M)}$, we have $\\lim_{n \\rightarrow \\omega} s_n = 0$.\nHence, after possibly replacing $W$ by a smaller set in the ultrafilter, we can assume that $s_n < \\varepsilon\/4$ for all $n \\in W$.\nWe may conclude for all $g \\in U$ and $n \\in W$ that\n\\begin{align*}\n\\|\\alpha_g(y_n) - y_n\\|_\\phi^\\# &\\leq \\|\\alpha_g(y_n) - \\alpha_g(x_n)\\|_\\phi^\\# + \\|\\alpha_g(x_n)- x_n\\|_\\phi^\\# + \\|x_n - y_n\\|_\\phi^\\#\\\\\n&\\leq 2s_n + \\frac{\\varepsilon}{2} < \\varepsilon.\n\\end{align*}\nSince $\\varepsilon>0$ was arbitrary, $(y_n)_{n\\in \\mathbb{N}}$ belongs to $\\mathcal{E}_\\alpha^\\omega$.\n\nLet us prove \\ref{lem:equicont-algebra:2}.\nClearly $\\mathcal{E}_\\alpha^\\omega$ is a $*$-closed, norm-closed linear subspace of $\\ell^\\infty(M)$. \nThe previous lemma shows that $\\mathcal{E}_\\alpha^\\omega \\cap \\mathcal{N}_\\omega(M)$ is closed under multiplication. \nTo see that $\\mathcal{E}_\\alpha^\\omega$ is $\\alpha$-invariant, take $(x_n)_{n \\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega$ and $h \\in G$. Take $\\varepsilon >0$. 
\nWe can find an open neighborhood $1_G \\in U \\subset G$ and $W \\in \\omega$ such that one has\n\\[\\sup_{n \\in W}\\sup_{g \\in U}\\|\\alpha_g(x_n) - x_n\\|_{\\phi \\circ \\alpha_h}^\\# < \\varepsilon.\\] \nThen for all $g \\in hUh^{-1}$ and $n \\in W$ we observe\n\\[\n\\|\\alpha_g(\\alpha_h(x_n)) - \\alpha_h(x_n)\\|_\\phi^\\# = \\|\\alpha_{h^{-1}gh}(x_n) - x_n\\|_{\\phi \\circ \\alpha_h}^\\# < \\varepsilon.\n\\]\nThis shows that $(\\alpha_h(x_n))_{n\\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega$.\n\\end{proof}\n\n\\begin{definition}\\label{def:equicontinuous_parts}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with an action $\\alpha\\colon G \\curvearrowright M$. \nWe define ${M_\\alpha^\\omega := (\\mathcal{E}_\\alpha^\\omega \\cap \\mathcal{N}_\\omega(M))\/\\mathcal{I}_\\omega(M)}$ and ${M_{\\omega, \\alpha}:= M_\\alpha^\\omega \\cap M_\\omega}$. \nWe call them the \\emph{$(\\alpha,\\omega)$-equicontinuous parts} of $M^\\omega$ and $M_\\omega$, respectively. \n\\end{definition}\n\n\\begin{lemma}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with an action $\\alpha\\colon G \\curvearrowright M$.\nThen $M_\\alpha^\\omega$ and $M_{\\omega, \\alpha}$ are von Neumann algebras. \n\\end{lemma}\n\\begin{proof}\nWe show that $M_\\alpha^\\omega$ is a von Neumann algebra by showing that its unit ball is closed with respect to the strong operator topology in $M^\\omega$. \nThen it automatically follows that $M_{\\omega, \\alpha} = M_\\alpha^\\omega \\cap M_\\omega$ is also a von Neumann algebra. \nTake a sequence $(X_k)_{k \\in \\mathbb{N}}$ in $(M_\\alpha^\\omega)_1$ that strongly converges to $X \\in (M^\\omega)_1$. Fix a faithful normal state $\\phi$ on $M$ and a compact neighbourhood of the neutral element $K \\subset G$. \nThen the function $K \\rightarrow (M^\\omega)_*$ given by $g \\mapsto \\phi^\\omega \\circ \\alpha_g^\\omega$ is continuous (because $\\phi^\\omega \\circ \\alpha_g^\\omega = (\\phi \\circ \\alpha_g)^\\omega$). 
\nHence, the set $\\{\\phi^\\omega \\circ \\alpha_g^\\omega\\}_{g \\in K}$ is compact and thus $\\lim_{k \\rightarrow \\infty} \\sup_{g \\in K} \\|X_k - X\\|^\\#_{\\phi^\\omega \\circ \\alpha_g^\\omega} = 0$.\nFix $\\varepsilon >0$. \nPick representing sequences $(x_k(n))_{n\\in \\mathbb{N}}$ and $(x(n))_{n\\in \\mathbb{N}}$ for the elements $X_k$ and $X$, respectively, such that $\\|x_k(n)\\| \\leq 1$, $\\|x(n)\\| \\leq 1$, for all $k,n \\in \\mathbb{N}$.\nThen we can find $k_0 \\in \\mathbb{N}$ and $W_1 \\in \\omega$ such that \n\\[\n\\sup_{n\\in W_1} \\sup_{g\\in K} \\|x_{k_0}(n) - x(n)\\|_{\\phi\\circ \\alpha_g}^\\# < \\frac{\\varepsilon}{3}.\n\\]\nSince $(x_{k_0}(n))_{n\\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega$, we can find an open neighborhood $1_G \\in U \\subset K$ and $W_2 \\in \\omega$ such that\n\\[\n\\sup_{n\\in W_2} \\sup_{g\\in U} \\|\\alpha_g(x_{k_0}(n)) - x_{k_0}(n)\\|_\\phi^\\# < \\frac{\\varepsilon}{3}.\n\\]\nThen for all $g \\in U$ and $n \\in W_1 \\cap W_2$ it holds that\n\\begin{align*}\n\\|\\alpha_g(x(n)) - x(n)\\|_\\phi^\\# & \\leq \\|x(n) - x_{k_0}(n)\\|_{\\phi \\circ \\alpha_g}^\\# + \\|\\alpha_g(x_{k_0}(n)) - x_{k_0}(n)\\|_\\phi^\\# + \\|x_{k_0}(n) - x(n)\\|_\\phi^\\#\\\\\n&< \\frac{\\varepsilon}{3}+\\frac{\\varepsilon}{3}+\\frac{\\varepsilon}{3} = \\varepsilon. \n\\end{align*} \nThis shows that $(x(n))_{n \\in \\mathbb{N}} \\in \\mathcal{E}_\\alpha^\\omega$, or in other words $X \\in M_\\alpha^\\omega$.\n\\end{proof}\n\n\\begin{lemma}\nLet $M$ be a $\\sigma$-finite von Neumann algebra with an action $\\alpha\\colon G \\curvearrowright M$ of a second-countable locally compact group. \nThen $\\alpha^\\omega$ restricted to $M_\\alpha^\\omega$ and $\\alpha_\\omega$ restricted to $M_{\\omega,\\alpha}$ are continuous $G$-actions.\n\\end{lemma}\n\\begin{proof}\nFix a faithful normal state $\\phi$ on $M$. Since $\\phi^\\omega$ is faithful, $\\{a\\phi^\\omega \\mid a \\in M_\\alpha^\\omega\\}$ is dense in $(M_\\alpha^\\omega)_*$. 
For $a \\in M_\\alpha^\\omega$ and $g \\in G$ one has\n\\begin{align*}\n\\|(a\\phi^\\omega) \\circ \\alpha^\\omega_g - a \\phi^\\omega \\|_{(M_\\alpha^\\omega)_*} & \\leq \\|\\alpha_{g^{-1}}^\\omega(a)(\\phi^\\omega \\circ \\alpha_g^\\omega - \\phi^\\omega)\\|_{(M_\\alpha^\\omega)_*} + \\|(\\alpha_{g^{-1}}^\\omega(a) - a)\\phi^\\omega\\|_{(M_\\alpha^\\omega)_*}\\\\\n& \\leq \\|a\\| \\, \\|\\phi \\circ \\alpha_g - \\phi\\|_{M_*} + \\|\\alpha_{g^{-1}}^\\omega(a) - a\\|_{\\phi^\\omega}.\n\\end{align*}\nWhen $g \\rightarrow 1_G$, this expression converges to zero because $\\alpha$ is a continuous $G$-action and $a \\in M_\\alpha^\\omega$. \nThis shows that $\\alpha^\\omega$ restricts to a genuine continuous $G$-action on $M_\\alpha^\\omega$, so the same is true for the restriction of $\\alpha_\\omega$ to $M_{\\omega, \\alpha}$. \n\\end{proof}\n\n\\begin{lemma} \\label{lemma:lifting_invariance_compact_sets}\nLet $M$ be a von Neumann algebra with a faithful normal state $\\phi$ and an action $\\alpha\\colon G \\curvearrowright M$. 
\nLet $z \\in M_\\alpha^\\omega$, $\\varepsilon >0$, $K \\subset G$ a compact set and suppose that $\\| \\alpha_g^\\omega(z)-z\\|_{\\phi^\\omega}^\\# \\leq \\varepsilon$ for all $g \\in K$.\nIf $(z_n)_{n\\in \\mathbb{N}}$ is any bounded sequence representing $z$, then \n\\[\n\\lim_{n \\rightarrow \\omega} \\max_{g \\in K} \\| \\alpha_g(z_n)-z_n\\|_\\phi^\\# \\leq \\varepsilon.\n\\]\n\\end{lemma}\n\\begin{proof}\nLet $\\delta>0$.\nThen for each $g \\in K$ there exists an open neighborhood $g \\in U \\subset G$ and $W_g \\in \\omega$ such that\n\\[\\sup_{n \\in W_g}\\sup_{h \\in U}\\|\\alpha_h(z_n) - \\alpha_g(z_n)\\|_\\phi^\\# < \\delta.\\]\nSince this obviously yields an open cover of $K$ and $K$ is compact, we can find finitely many elements $g_1, \\hdots, g_N \\in K$ and an open covering $K \\subset \\cup_{j=1}^N U_j$ with $g_j \\in U_j$ and some $W_1 \\in \\omega$ such that for $j=1, \\hdots, N$ we have\n\\[\\sup_{n \\in W_1}\\sup_{g \\in U_j}\\|\\alpha_{g}(z_n) - \\alpha_{g_j}(z_n)\\|_\\phi^\\# < \\delta.\\]\nSince $\\max_{g \\in K}\\| \\alpha_g^\\omega(z)-z\\|_{\\phi^\\omega}^\\# \\leq \\varepsilon$, there exists $W_2 \\in \\omega$ such that for all $n \\in W_2$ and $j=1, \\hdots, N$ \n\\[\\|\\alpha_{g_j}(z_n) - z_n\\|_\\phi^\\# \\leq \\varepsilon+\\delta.\\]\nHence, for an arbitrary $g \\in K$, there is some $j \\in \\{1, \\hdots, N\\}$ such that $g \\in U_j$ and\n\\[\\| \\alpha_g(z_n)-z_n\\|_\\phi^\\# \\leq \\|\\alpha_g(z_n) - \\alpha_{g_j}(z_n)\\|_\\phi^\\# + \\|\\alpha_{g_j}(z_n) - z_n\\|_\\phi^\\# \\leq 2\\delta+ \\varepsilon\\]\nfor all $n \\in W_1 \\cap W_2.$\nSince $\\delta$ was arbitrary, this proves the claim. \n\\end{proof}\n\n\\subsection{Cocycle morphisms}\n\n\\begin{definition}[cf.\\ {\\cite[Definition 1.10]{Szabo21cc}}]\nLet $\\alpha\\colon G \\curvearrowright M$ and $\\beta\\colon G \\curvearrowright N$ be two actions of a second-countable locally compact group on von Neumann algebras. 
\nA \\emph{cocycle morphism} from $(M,\\alpha)$ to $(N,\\beta)$ is a pair $(\\phi,\\mathbbm{u})$, where $\\phi\\colon M \\rightarrow N$ is a unital normal $*$-homomorphism and $\\mathbbm{u}:G\\rightarrow \\mathcal{U}(N)$ is a continuous map (in the strong operator topology) such that for all $g,h \\in G$ we have\n\\[\n\\mathrm{Ad}(\\mathbbm{u}_g) \\circ \\beta_g \\circ \\phi = \\phi \\circ \\alpha_g \\quad\\text{and}\\quad \\mathbbm{u}_g \\beta_g(\\mathbbm{u}_h) = \\mathbbm{u}_{gh}.\n\\]\nIn the special case where $\\mathbbm{u}$ is the trivial map, we identify $\\phi$ with $(\\phi,1)$ and call $\\phi$ equivariant.\n\\end{definition}\n\n\\begin{remark} \\label{rem:cocycle-category}\nAs the arguments in \\cite[Subsection 1.3]{Szabo21cc} show, the above endows the class of continuous $G$-actions on von Neumann algebras with a categorical structure, whereby the Hom-sets are given by cocycle morphisms.\nThe composition is given via\n\\[\n(\\psi,\\mathbbm{v}) \\circ (\\phi,\\mathbbm{u}) := (\\psi \\circ \\phi, \\psi(\\mathbbm{u}) \\mathbbm{v})\n\\]\nfor any pair of cocycle morphisms\n\\[\n(M, \\alpha) \\overset{(\\phi,\\mathbbm{u})}{\\longrightarrow} (N, \\beta) \\quad \\text{ and } \\quad (N, \\beta) \\overset{(\\psi,\\mathbbm{v})}{\\longrightarrow} (L, \\gamma).\n\\]\nWe see furthermore that a cocycle morphism $(\\phi, \\mathbbm{u})\\colon(M, \\alpha) \\rightarrow (N, \\beta)$ is invertible if and only if $\\phi$ is a $*$-isomorphism of von Neumann algebras, in which case we have $(\\phi, \\mathbbm{u})^{-1} = (\\phi^{-1}, \\phi^{-1}(\\mathbbm{u})^*)$.\nIf this holds, we call $(\\phi, \\mathbbm{u})$ a \\emph{cocycle conjugacy}.\nWe call two actions $\\alpha$ and $\\beta$ \\emph{cocycle conjugate}, denoted as $\\alpha \\simeq_{\\mathrm{cc}} \\beta$, if there exists a cocycle conjugacy between them.\n\\end{remark}\n\n\\begin{example} \\label{ex:inner-cc}\nLet $\\alpha\\colon G \\curvearrowright M$ be an action.\nThen every unitary $v \\in \\mathcal{U}(M)$ gives rise to a 
cocycle conjugacy \n\\[\n\\big( \\mathrm{Ad}(v), (v \\alpha_g(v)^*)_{g \\in G} \\big) \\colon (M, \\alpha) \\rightarrow (M, \\alpha).\\]\nWe will also write this simply as $\\mathrm{Ad}(v)$ when it is clear from context that we are talking about cocycle morphisms. \nWhen $\\beta\\colon G \\curvearrowright N$ is another action and ${(\\phi, \\mathbbm{u})\\colon (M, \\alpha) \\rightarrow (N, \\beta)}$ is a cocycle conjugacy, then\n\\[ (\\phi,\\mathbbm{u}) \\circ \\mathrm{Ad}(v) = \\mathrm{Ad}(\\phi(v)) \\circ (\\phi, \\mathbbm{u}).\\] \n\\end{example}\n\n\n\\begin{definition}\nLet $\\alpha\\colon G \\curvearrowright M$ and $\\beta\\colon G \\curvearrowright N$ be two actions on finite von Neumann algebras $M$ and $N$.\nLet $\\tau_N$ be a faithful normal tracial state on $N$.\nLet $(\\phi,\\mathbbm{u})$ and $(\\psi,\\mathbbm{v})$ be two cocycle morphisms from $(M,\\alpha)$ to $(N,\\beta)$. We say that $(\\phi,\\mathbbm{u})$ and $(\\psi,\\mathbbm{v})$ are \\emph{approximately unitarily equivalent} if there exists a net of unitaries $w_\\lambda \\in \\mathcal{U}(N)$ such that $\\|w_\\lambda\\phi(x)w_\\lambda^*-\\psi(x)\\|_{\\tau_N} \\to 0$ for all $x\\in M$ and $\\max_{g\\in K} \\| w_\\lambda \\mathbbm{u}_g \\beta_g(w_\\lambda)^* - \\mathbbm{v}_g\\|_{\\tau_N} \\rightarrow 0$ for every compact set $K\\subseteq G$.\nWe denote the relation of approximate unitary equivalence by $\\approx_{\\mathrm{u}}$.\n\\end{definition}\n\n\\section{One-sided intertwining}\nIn this section we prove a version of \\cite[Lemma~2.1]{Szabo18ssa} for group actions on semi-finite von Neumann algebras. First we prove the following intermediate lemma:\n\n\\begin{lemma}\\label{lemma:point_strong_dense_subset}\nLet $M, N$ be von Neumann algebras, and let $\\tau_N$ be a faithful, normal, semi-finite trace on $N$. 
\nConsider a sequence of $*$-homomorphisms $(\\theta_n\\colon M \\rightarrow N)_{n \\in \\mathbb{N}}$ and a $*$-isomorphism $\\theta:M \\rightarrow N$ such that $\\tau_N \\circ \\theta = \\tau_N \\circ \\theta_n$ for all $n \\in \\mathbb{N}$. \nLet $X \\subset (M)_1$ be a dense subset in the strong operator topology that contains a sequence of projections $(p_n)_{n \\in \\mathbb{N}}$ converging strongly to $1_M$ with $\\tau_N (\\theta(p_n)) < \\infty$. \nIf $\\theta_n (x) \\rightarrow \\theta(x)$ strongly as $n \\rightarrow \\infty$ for every $x \\in X$, then $\\theta_n \\rightarrow \\theta$ in the point-strong topology as $n \\rightarrow \\infty$.\n\\end{lemma}\n\\begin{proof}\nTake $y \\in (M)_1$. Since the sequence $(\\theta(p_n))_{n \\in \\mathbb{N}}$ converges strongly to $1_N$, it suffices to show that for all $k \\in \\mathbb{N}$\n\\[ (\\theta(y)-\\theta_n(y)) \\theta(p_k) \\rightarrow 0 \\text{ strongly as } n \\rightarrow \\infty.\\]\nFix $k \\in \\mathbb{N}$ and $a \\in N$ such that $\\tau_N(a^*a) < \\infty$. 
Given $\\varepsilon>0$, there exists $x \\in X$ such that \n\\[\\|\\theta(x-y)\\theta(p_k)\\|_{\\tau_N} < \\frac{\\varepsilon}{4\\|a\\|}.\\]\n Then there exists $n_0 \\in \\mathbb{N}$ such that for all $n \\geq n_0$ \n \\[\\|(\\theta(x p_k) - \\theta_n ( x p_k))a\\|_{\\tau_N} < \\frac{\\varepsilon}{4} \\text{ and } \\|(\\theta(p_k)-\\theta_n(p_k))a\\|_{\\tau_N} < \\frac{\\varepsilon}{4}.\\] \nFor all $n \\geq n_0$ we then get that\n\\begin{align*}\n\\|(\\theta(y) - \\theta_n(y))\\theta(p_k)a\\|_{\\tau_N} &\\leq \\|\\theta(x-y)\\theta(p_k)a\\|_{\\tau_N} + \\|\\theta_n(x-y)\\theta_n(p_k)a\\|_{\\tau_N}\\\\\n&\\quad + \\|(\\theta(xp_k) - \\theta_n(xp_k))a\\|_{\\tau_N} + \\|\\theta_n(y)(\\theta(p_k) - \\theta_n(p_k))a\\|_{\\tau_N}\\\\\n&< 2\\|a\\| \\|\\theta(x-y)\\theta(p_k)\\|_{\\tau_N} +\\varepsilon\/4 + \\|(\\theta(p_k)-\\theta_n(p_k))a\\|_{\\tau_N} \\\\\n&< \\varepsilon.\n\\end{align*}\nAs $k$ and $a$ were arbitrary, this proves the claim.\n\\end{proof}\n\n\\begin{lemma}\\label{lemma:one-sided_intertwining} \nLet $M$ and $N$ be two von Neumann algebras with separable predual and faithful normal semi-finite traces $\\tau_{M}$ and $\\tau_{N}$, respectively. 
\nLet $\\alpha\\colon G \\curvearrowright M$ and $\\beta\\colon G \\curvearrowright N$ be two actions.\nLet $\\rho\\colon (M, \\alpha) \\rightarrow (N, \\beta)$ be a unital equivariant normal $*$-homomorphism with $\\tau_N \\circ \\rho = \\tau_M$.\nSuppose there exists a faithful normal state $\\phi$ on $N$ and a sequence of unitaries $(w_n)_{n \\in \\mathbb{N}}$ in $\\mathcal{U}(N)$ satisfying\n\\begin{enumerate}[leftmargin=*,label=$\\bullet$]\n\\item $\\mathrm{Ad}(w_n) \\circ \\rho \\to \\rho$ in the point-strong topology; \n\\item For all $y \\in (N)_1$ there exists a sequence $(x_n)_{n \\in \\mathbb{N}} \\subset (M)_1$ such that $y - w_n\\rho(x_n)w_n^* \\rightarrow 0$ in the strong operator topology;\n\\item $\\max_{g \\in K} \\|\\beta_g(w_n^*) - w_n^*\\|_\\phi \\rightarrow 0$ for every compact subset $K \\subseteq G$.\n\\end{enumerate}\nThen $\\rho(\\mathcal{Z}(M))=\\mathcal{Z}(N)$ and there exists a cocycle conjugacy $(\\theta,\\mathbbm{v})$ between $\\alpha$ and $\\beta$ with $\\theta|_{\\mathcal{Z}(M)}=\\rho|_{\\mathcal{Z}(M)}$.\nIn case $\\tau_N$ is finite, the existence of such a sequence of unitaries for $\\phi = \\tau_N$ is equivalent to the condition that $\\rho$ is approximately unitarily equivalent to a cocycle conjugacy. \n\\end{lemma}\n\\begin{proof}\nWe note right away that the first two conditions above can always be tested on self-adjoint elements, hence one can equivalently state them with the strong-$*$ topology. \nDenote\n\\[\\mathfrak{m} := \\{x \\in M \\mid \\tau_M(x^*x) < \\infty\\} \\subset M.\\]\nWe let $L^2(M,\\tau_M)$ denote the GNS-Hilbert space of $M$ with respect to $\\tau_M$.\nSimilarly, we use the notation $L^2(N, \\tau_N)$.\n\nChoose a countable subset $X = \\{x_n\\}_{n \\in \\mathbb{N}}$ in $(M)_1$ such that $X \\cap \\mathfrak{m}$ is $\\|\\cdot\\|_{\\tau_M}$-dense in $(\\mathfrak{m})_1$. 
\nTake a strongly dense sequence $\\{y_n\\}_{n \\in \\mathbb{N}}$ in $(N)_1$.\nChoose an increasing sequence of compact subsets $K_n \\subseteq G$ such that the union is all of $G$. \n\nWe are going to create a map $\\theta: M\\to N$ via an inductive procedure. \nFor the first step, we choose $x_{1,1} \\in (M)_1$ and $z_1 \\in \\mathcal{U}(N)$ such that\n\\begin{itemize}\n\\item $ \\|z_1 \\rho(x_1) z_1^* - \\rho(x_1)\\|^\\#_\\phi \\leq 1\/2$;\n\\item $\\|y_1 - z_1\\rho(x_{1,1})z_1^*\\|_\\phi \\leq 1\/2$;\n\\item $\\max_{g \\in K_1} \\|\\beta_g(z_1^*) - z_1^*\\|_\\phi \\leq 1\/2$.\n\\end{itemize}\nNow assume that after the $n$-th step of the induction we have found $z_1, \\hdots, z_n \\in \\mathcal{U}(N)$ and ${\\{x_{l,j}\\}_{j \\leq l \\leq n} \\subset (M)_1}$ such that\n\\begin{enumerate}\n\\item $\\|z_n \\rho(x_j) z_n^* - \\rho(x_j)\\|^\\#_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_{n-1})} \\leq 2^{-n}$ for $j= 1, \\hdots, n$; \\label{eq:commutation_dense}\n\\item $\\|z_n \\rho(x_{l,j}) z_n^* - \\rho(x_{l,j})\\|^\\#_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_{n-1})} \\leq 2^{-n}$ for $l=1, \\hdots, n-1$ and $j = 1, \\hdots, l$; \\label{eq:commutation_close_elements}\n\\item $ \\| z_{n-1}^* \\hdots z_1^* y_j z_1 \\hdots z_{n-1} - z_n\\rho(x_{n,j})z_n^*\\|_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_{n-1})} \\leq 2^{-n}$ for $j=1, \\hdots, n$; \\label{eq:close_elements}\n\\item \n$\\max_{g \\in K_n} \\|\\beta_g(z_n^*) - z_n^*\\|_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_{n-1})} \\leq 2^{-n}$ and\n\n$\\max_{g \\in K_n} \\|\\beta_g(z_n^*) - z_n^*\\|_{\\phi \\circ \\mathrm{Ad}(\\beta_g(z_1 \\hdots z_{n-1}))} \\leq 2^{-n}$.\n \\label{eq:invariance}\n\n\\label{eq:approximate_fixedness}\n\\end{enumerate}\nThen by our assumptions we can find $z_{n+1} \\in \\mathcal{U}(N)$ and $\\{x_{n+1,j}\\}_{j \\leq n+1}\\subset (M)_1$ such that\n\\begin{itemize}\n\\item $\\|z_{n+1} \\rho(x_j)z_{n+1}^* - \\rho(x_j)\\|^\\#_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_n)} \\leq 2^{-(n+1)}$ for $j= 1, 
\\hdots, n+1$;\n\\item $\\|z_{n+1} \\rho(x_{l,j})z_{n+1}^* - \\rho(x_{l,j})\\|^\\#_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_n)} \\leq 2^{-(n+1)}$ for $l=1, \\hdots, n$ and $j = 1, \\hdots, l$;\n\\item $ \\|z_{n}^* \\hdots z_1^*y_jz_1 \\hdots z_{n} - z_{n+1}\\rho(x_{n+1,j})z_{n+1}^*\\|_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_n)} \\leq 2^{-(n+1)}$ for $j=1, \\hdots, n+1$;\n\\item $\\max_{g \\in K_{n+1}} \\|\\beta_g(z_{n+1}^*)- z_{n+1}^*\\|_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_n)} \\leq 2^{-(n+1)}$ and\n\n$\\max_{g \\in K_{n+1}} \\|\\beta_g(z_{n+1}^*)- z_{n+1}^*\\|_{\\phi \\circ \\mathrm{Ad}(\\beta_g(z_1 \\hdots z_n))} \\leq 2^{-(n+1)}$. \n\\end{itemize}\nWe carry on inductively and obtain a sequence of unitaries $(z_n)_{n \\in \\mathbb{N}}$ in $\\mathcal{U}(N)$ and a family $\\{ x_{n,j} \\}_{n\\in\\mathbb{N}, j\\leq n}\\subset (M)_1$.\nFor each $n \\in \\mathbb{N}$, we define $u_n=z_1 \\hdots z_n$ and the normal $*$-homomorphism ${\\theta_n\\colon M \\rightarrow N}$ by $\\theta_n = \\mathrm{Ad}(u_n) \\circ \\rho$.\n\nFor $n > m$ and $j=1, \\hdots, m+1$ we get\n\\begin{align*}\n\\|\\theta_n(x_j) - \\theta_m(x_j)\\|^\\#_{\\phi} &\\leq \n\\sum_{k=m}^{n-1} \\|\\theta_{k+1}(x_j) - \\theta_k(x_j)\\|^\\#_\\phi\\\\\n&= \\sum_{k=m}^{n-1} \\|z_{k+1}\\rho(x_j)z_{k+1}^* - \\rho(x_j)\\|^\\#_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_k)}\\\\\n\\overset{\\ref{eq:commutation_dense}}&{\\leq} \\sum_{k=m}^{n-1} 2^{-k-1}.\n\\end{align*}\nWe see that for all $j\\in \\mathbb{N}$ the sequence $(\\theta_n(x_j))_{n \\in \\mathbb{N}}$ is norm-bounded and Cauchy with respect to $\\|\\cdot\\|_\\phi^\\#$.\n This means that it converges to some element in $N$ in the strong-$*$-operator topology.\n A similar calculation using \\ref{eq:commutation_close_elements} shows that\nfor $n > m \\geq l \\geq j$ \n\\begin{equation}\\label{eq:convergence_close_elements}\\|\\theta_n(x_{l,j}) - \\theta_m(x_{l,j})\\|^\\#_{\\phi} < \\sum_{k=m}^{n-1} 2^{-k-1}, \\end{equation}\nso the sequence 
$(\\theta_n(x_{l,j}))_{n \\in \\mathbb{N}}$ also converges in the strong-$*$-operator topology for all $j \\leq l$.\n Since $\\theta_n$ is a $*$-homomorphism for all $n \\in \\mathbb{N}$, we conclude that, restricted to the C$^*$-algebra $A\\subset M$ generated by $\\{x_n\\}_{n \\in \\mathbb{N}} \\cup \\{x_{l,j}\\}_{j \\leq l}$, the sequence $(\\theta_n)_{n \\in \\mathbb{N}}$ converges point-$*$-strongly to a $*$-homomorphism $\\theta'\\colon A \\rightarrow N$.\nSince $A$ contains a $\\|\\cdot\\|_{\\tau_M}$-dense subset of $\\mathfrak{m}$, and clearly $\\tau_N\\circ\\theta'=\\tau_M|_A$, there is a unique isometry $T\\colon L^2(M, \\tau_M) \\rightarrow L^2(N, \\tau_N)$ induced from the formula $T[a]=[\\theta'(a)]$ for all $a\\in A\\cap\\mathfrak{m}$.\n Then the normal $*$-homomorphism\n\\[\\theta \\colon M \\rightarrow N \\colon x \\mapsto T x T^*\\]\n extends $\\theta'$ and $\\left(\\theta_n \\big \\lvert_\\mathfrak{m}\\right)_{n \\in \\mathbb{N}}$ converges point-strongly to $\\theta\\lvert_\\mathfrak{m}$.\n\nWe claim that $\\theta$ is an isomorphism.\nClearly $\\tau_N\\circ\\theta=\\tau_M$ and so $\\theta$ is injective.\nBy applying \\ref{eq:close_elements} we find for all $m \\geq j$ that\n\\begin{equation*}\n\\|\\theta_m(x_{m,j}) - y_j\\|_\\phi = \\|z_m\\rho(x_{m,j})z_m^* - z_{m-1}^* \\hdots z_1^* y_j z_1 \\hdots z_{m-1}\\|_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_{m-1})} < 2^{-m}.\n\\end{equation*}\nCombining this with \\eqref{eq:convergence_close_elements} for $l=m$ and $n \\rightarrow \\infty$ we find that\n\\[\\|\\theta(x_{m,j}) - y_j\\|_\\phi \\leq \\|\\theta'(x_{m,j}) - \\theta_m(x_{m,j})\\|_\\phi + \\|\\theta_m(x_{m,j}) - y_j\\|_\\phi \\leq 2^{-m} + 2^{-m} = 2^{-m+1}.\\]\nSince the $y_j$ are strongly dense in the unit ball of $N$ and $\\theta$ is normal, this implies surjectivity of $\\theta$.\nBy Lemma~\\ref{lemma:point_strong_dense_subset} it then follows that $\\theta_n\\to \\theta$ point-strongly as $n\\to\\infty$.\nSince $\\theta_n$ is a unitary 
perturbation of $\\rho$ for each $n$, this implies $\\rho|_{\\mathcal{Z}(M)}=\\theta_n|_{\\mathcal{Z}(M)}\\to\\theta|_{\\mathcal{Z}(M)}$ and in particular $\\rho(\\mathcal{Z}(M))=\\theta(\\mathcal{Z}(M))=\\mathcal{Z}(N)$.\n\nFor $n > m$ and $g \\in K_{m+1}$ we have\n\\begin{align*}\n&\\|z_1 \\hdots z_n \\beta_g(z_n^* \\hdots z_1^*) - z_1 \\hdots z_m \\beta_g(z_m^*\\hdots z_1^*)\\|_{\\phi}^\\#\\\\\n&\\leq \\sum_{k=m}^{n-1} \\|z_1 \\hdots z_k(z_{k+1}\\beta_g(z_{k+1}^*) - 1) \\beta_g(z_k^*\\hdots z_1^*)\\|_{\\phi}^\\#\\\\\n&=\\sum_{k=m}^{n-1} \\big( \\|\\beta_g(z_{k+1}^*) - z_{k+1}^*\\|^2_{\\phi \\circ \\mathrm{Ad}(\\beta_g(z_1 \\hdots z_k))} + \\|\\beta_g(z_{k+1}^*) - z_{k+1}^*\\|^2_{\\phi \\circ \\mathrm{Ad}(z_1 \\hdots z_k)} \\big)^{1\/2}\\\\\n\\overset{\\ref{eq:invariance}}&{\\leq} \\sqrt{2} \\sum_{k=m}^{n-1} 2^{-(k+ 1)}.\n\\end{align*}\nFrom this calculation we see that for every $g \\in G$ the sequences $(z_1 \\hdots z_n \\beta_g(z_n^*\\hdots z_1^*))_{n \\in \\mathbb{N}}$ are Cauchy with respect to $\\|\\cdot\\|^\\#_\\phi$, with uniformity on compact sets. \nIt follows that for every $g \\in G$, the strong-$*$ limit $\\mathbbm{v}_g = \\lim_{n \\rightarrow \\infty} u_n \\beta_g(u^*_n)$ exists in $\\mathcal{U}(N)$ and that this convergence is uniform (w.r.t.\\ $\\|\\cdot\\|^\\#_\\phi$) on compact sets. \nSince $\\beta$ is point-strong continuous, this implies the continuity of the assignment $g \\mapsto \\mathbbm{v}_g$. 
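In particular, $\\mathbbm{v}$ satisfies the cocycle identity: for all $g,h \\in G$ one has, with limits taken in the strong-$*$ topology,\n\\[\n\\mathbbm{v}_g \\beta_g(\\mathbbm{v}_h) = \\lim_{n \\rightarrow \\infty} u_n\\beta_g(u_n^*)\\, \\beta_g\\big(u_n\\beta_h(u_n^*)\\big) = \\lim_{n \\rightarrow \\infty} u_n\\beta_{gh}(u_n^*) = \\mathbbm{v}_{gh}.\n\\]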
\n\nMoreover, for each $g \\in G$ and $x \\in M$ we have the equalities of limits with respect to the strong operator topology:\n\\begin{align*}\n(\\theta \\circ \\alpha_g) (x) &= \\lim_{n \\rightarrow \\infty} (\\mathrm{Ad}(u_n) \\circ \\rho \\circ \\alpha_g)(x)\\\\\n&= \\lim_{n \\rightarrow \\infty} (\\mathrm{Ad}(u_n) \\circ \\beta_g \\circ \\rho) (x)\\\\\n&= \\lim_{n \\rightarrow \\infty} u_n \\beta_g(u_n^*)\\beta_g(u_n \\rho(x) u_n^*) \\beta_g(u_n) u_n^*\\\\\n&= (\\mathrm{Ad}(\\mathbbm{v}_g) \\circ \\beta_g \\circ \\theta) (x).\n\\end{align*}\nIt follows that $(\\theta,\\mathbbm{v})$ is a cocycle conjugacy. \n\nFor the last part of the statement, assume that $\\tau_N$ is finite.\nThen our previous calculations show that in the above situation, $\\rho$ is approximately unitarily equivalent to $\\theta$.\nConversely, suppose $\\rho$ is approximately unitarily equivalent to a cocycle conjugacy $(\\theta,\\mathbbm{v})$.\nIn particular, there exists a sequence $(u_n)_{n \\in \\mathbb{N}} \\in \\mathcal{U}(N)$ such that $\\|u_n\\rho(x)u_n^* - \\theta(x)\\|_{\\tau_N} \\rightarrow 0$ for all $x \\in M$ and $\\|u_n \\beta_g(u_n^*) - \\mathbbm{v}_g\\|_{\\tau_N} \\rightarrow 0$ uniformly over compact subsets of $G$.\nChoose a sequence $\\{y_n\\}_{n \\in \\mathbb{N}} \\subset (N)_1$ that is strongly dense in $(N)_1$. For all $k,n \\in \\mathbb{N}$ define $x_{n,k} = \\theta^{-1}(u_ny_ku_n^*)$. Then choose an increasing sequence $(m(n))_{n \\in \\mathbb{N}} \\subset \\mathbb{N}$ such that\n \\begin{equation*}\n \\lim_{n \\rightarrow \\infty} \\| \\theta(x_{n,k}) - u_{m(n)} \\rho (x_{n,k}) u_{m(n)}^*\\|_{\\tau_N} = 0 \\quad \\text{for } k \\in \\mathbb{N}.\\end{equation*}\nDefine $w_n := u_n^*u_{m(n)}$. One can check that these satisfy the assumptions in the lemma. 
\n\\end{proof}\n\n\\section{Strongly self-absorbing actions}\n\n\\begin{definition}[cf.\\ {\\cite[Definition 5.1]{Szabo21cc}}] \\label{def:strong-absorption}\nLet $\\alpha:G \\curvearrowright M$ and $\\delta\\colon G \\curvearrowright N$ be two actions of a second-countable locally compact group on finite von Neumann algebras $M$ and $N$ with separable predual.\nWe say that $\\alpha$ \\emph{strongly absorbs} $\\delta$ if the equivariant embedding\n\\[\\mathrm{id}_M \\otimes 1_N\\colon (M, \\alpha) \\rightarrow (M \\bar{\\otimes} N, \\alpha \\otimes \\delta)\\]\nis approximately unitarily equivalent to a cocycle conjugacy.\n\\end{definition}\n\n\\begin{definition} \\label{def:ssa-action}\nLet $\\delta\\colon G \\curvearrowright \\mathcal{R}$ be an action on the hyperfinite II$_1$-factor.\nWe say that $\\delta$ is \\emph{strongly self-absorbing}, if $\\delta$ strongly absorbs $\\delta$.\n\\end{definition}\n\n\\begin{definition}\nLet $\\alpha\\colon G \\curvearrowright \\mathcal{R}$ be an action on the hyperfinite II$_1$-factor.\nWe say $\\alpha$ has \\emph{approximately inner half-flip} if the two equivariant embeddings\n\\[\n\\mathrm{id}_\\mathcal{R} \\otimes 1_\\mathcal{R}, 1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R}\\colon (\\mathcal{R}, \\alpha) \\rightarrow (\\mathcal{R} \\bar{\\otimes} \\mathcal{R}, \\alpha \\otimes \\alpha)\n\\]\nare approximately unitarily equivalent (as cocycle morphisms).\n\\end{definition}\n\n\\begin{remark}\nIt is well-known that any type II$_1$ von Neumann algebra $N$ with approximately inner half-flip in the above sense (with $G=\\{1\\}$) must be isomorphic to $\\mathcal R$. Indeed, it is clear that $N$ must have trivial center. 
Then $N \\cong \\mathcal{R}$ follows from \\cite[Theorem 5.1]{Connes76} under the stronger condition that the flip automorphism on $N\\bar{\\otimes} N$ is approximately inner, but the weaker condition is seen to be enough via Connes' theorem and the obvious modification of the proof of \\cite[Proposition 2.8]{EffrosRosenberg78} that shows the semi-discreteness of $N$.\n\\end{remark}\n\n\\begin{example}\\label{example:trivial_action_R}\nFor any second-countable locally compact group $G$, the trivial action $\\mathrm{id}_\\mathcal{R}\\colon G \\curvearrowright \\mathcal{R}$ has approximately inner half-flip as a consequence of the flip automorphism on a tensor product of matrix algebras $M_n(\\mathbb{C}) \\otimes M_n(\\mathbb{C})$ being inner. \nIt is also seen to be a strongly self-absorbing action.\n\\end{example}\n\n\\begin{theorem} \\label{theorem:sufficient_criterium_strong_absorption}\nLet $\\alpha\\colon G \\curvearrowright M$ be an action on a semi-finite von Neumann algebra with separable predual.\nSuppose that $\\delta\\colon G \\curvearrowright \\mathcal{R}$ is an action with approximately inner half-flip such that there exists a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow (M_{\\omega,\\alpha}, \\alpha_\\omega)$.\nThen there exists a cocycle conjugacy $(\\theta,\\mathbbm{v}): (M,\\alpha) \\to (M\\bar{\\otimes}\\mathcal{R},\\alpha \\otimes \\delta)$ with $\\theta|_{\\mathcal{Z}(M)}=\\operatorname{id}_{\\mathcal{Z}(M)}\\otimes 1_{\\mathcal R}$.\nIf $M$ is finite, then $\\alpha$ strongly absorbs $\\delta$.\n\\end{theorem}\n\\begin{proof}\nFix a faithful normal state $\\phi$ on $M$.\nLet $\\pi\\colon (\\mathcal{R},\\delta) \\rightarrow (M_{\\omega,\\alpha},\\alpha_{\\omega})$ be a unital equivariant $*$-homomorphism.\nWe obtain an induced map on the algebraic tensor product\n\\[\n\\mathcal{R} \\odot M \\rightarrow M^\\omega_\\alpha \\text{ via } x \\otimes m \\mapsto \\pi(x) m.\n\\]\nSince for each $m \\in M_+$ the map $x 
\\mapsto \\phi^\\omega(\\pi(x)m)$ defines a positive tracial functional on $\\mathcal{R}$,\n we see that it must be equal to some multiple of the unique tracial state $\\tau$ on $\\mathcal{R}$ and hence,\n we get for each $x \\in \\mathcal{R}$ and $m \\in M$ that \n\\[\\phi^\\omega(\\pi(x) m) = \\tau(x) \\phi(m) = (\\tau \\otimes \\phi)(x \\otimes m).\\]\nSo we see that the map sends the faithful normal state $\\tau \\otimes \\phi$ to $\\phi^\\omega$ and hence,\n it extends to a unital normal $*$-homomorphism $\\mathcal{R} \\bar{\\otimes} M \\rightarrow M^\\omega_\\alpha$,\n which moreover is $(\\delta \\otimes \\alpha)$-to-$\\alpha^\\omega$ equivariant. \nIn this way we get a unital equivariant normal $*$-homomorphism\n\\[(\\mathcal{R} \\bar{\\otimes} \\mathcal{R} \\bar{\\otimes}M, \\delta \\otimes \\delta \\otimes \\alpha) \\rightarrow (\\mathcal{R} \\bar{\\otimes} M^\\omega_\\alpha, \\delta \\otimes \\alpha^\\omega),\\]\ngiven by $x_1 \\otimes x_2 \\otimes m \\mapsto x_1 \\otimes (\\pi(x_2) m)$. 
\nComposing with the canonical inclusion map $\\iota\\colon \\mathcal{R} \\bar{\\otimes} M^\\omega_\\alpha \\rightarrow (\\mathcal{R} \\bar{\\otimes} M)^\\omega_{\\delta\\otimes\\alpha}$ we get a unital and equivariant normal $*$-homomorphism\n\\[\\Phi\\colon (\\mathcal{R} \\bar{\\otimes} \\mathcal{R} \\bar{\\otimes} M, \\delta \\otimes \\delta \\otimes \\alpha) \\rightarrow ((\\mathcal{R} \\bar{\\otimes} M)^\\omega_{\\delta \\otimes \\alpha}, (\\delta \\otimes \\alpha)^\\omega)\\]\nsuch that\n\\[\n\\Phi(x \\otimes 1_\\mathcal{R} \\otimes m) = x \\otimes m \\text{ for all } x \\in \\mathcal{R}, m \\in M,\n\\]\nand\n\\[\n\\Phi(1_\\mathcal{R} \\otimes \\mathcal{R} \\otimes M) \\subset \\iota(1_\\mathcal{R} \\otimes M^\\omega_\\alpha).\n\\]\nSince $\\delta$ has approximately inner half-flip,\n we can choose a sequence of unitaries $(v_n)_{n \\in \\mathbb{N}}$ in $\\mathcal{R} \\bar{\\otimes} \\mathcal{R}$ such that \n $\\max_{g \\in K}\\|v_n - (\\delta \\otimes \\delta)_g(v_n)\\|_{\\tau \\otimes \\tau} \\rightarrow 0$ for all compact subsets $K \\subseteq G$ and \n ${\\|x \\otimes 1_\\mathcal{R} - v_n (1_\\mathcal{R} \\otimes x)v_n^*\\|_{\\tau \\otimes \\tau} \\rightarrow 0}$ for all $x \\in \\mathcal{R}$. \nDefine $u_n := \\Phi(v_n \\otimes 1_M) \\in (\\mathcal{R} \\bar{\\otimes} M)^\\omega_{\\delta \\otimes \\alpha}$. 
This sequence of unitaries satisfies\n\\begin{itemize} \n\\item $[u_n, 1_\\mathcal{R} \\otimes m] = \\Phi([v_n \\otimes 1_M, 1_{\\mathcal{R} \\bar{\\otimes} \\mathcal{R}} \\otimes m]) = 0$ for all $m \\in M$;\n\\item $\\Phi(1_\\mathcal{R} \\otimes x \\otimes m) \\in \\iota(1_\\mathcal{R} \\otimes M^\\omega_\\alpha)$ and\n\\begin{align*}\n\\lim_{n \\rightarrow \\infty} u_n \\Phi(1_\\mathcal{R} \\otimes x \\otimes m) u_n^* &=\\lim_{n \\rightarrow \\infty} \\Phi((v_n \\otimes 1_M)(1_\\mathcal{R} \\otimes x \\otimes m)(v_n^* \\otimes 1_M))\\\\\n&=\\Phi(x \\otimes 1_\\mathcal{R} \\otimes m) \\\\&= x \\otimes m\n\\end{align*}\nwhere the limit is taken with respect to the strong operator topology;\n\\item $\\displaystyle \\max_{g \\in K} \\|u_n^* - (\\delta \\otimes \\alpha)^\\omega_g(u_n^*)\\|_{(\\tau \\otimes \\phi)^\\omega} = \\max_{g \\in K} \\|(v_n^* - (\\delta \\otimes \\delta)_g(v_n^*)) \\otimes 1_M\\|_{\\tau \\otimes \\tau \\otimes \\phi} \\to 0$ for all compact $K \\subseteq G$.\n\\end{itemize}\n\n Each $u_n$ can be lifted to a sequence of unitaries $(z_n^{(k)})_{k\\in \\mathbb{N}}$ in $\\mathcal{E}_{\\delta \\otimes \\alpha}^\\omega \\cap \\mathcal{N}_\\omega(\\mathcal{R} \\bar{\\otimes} M)$. 
Applying a diagonal sequence argument to the $(z_n^{(k)})_{k\\in \\mathbb{N}}$ and using Lemma~\\ref{lemma:lifting_invariance_compact_sets}, we can obtain a sequence of unitaries $(w_n)_{n \\in \\mathbb{N}}$ in $\\mathcal{R} \\bar{\\otimes} M$ such that\n\\begin{itemize}\n\\item $\\mathrm{Ad}(w_n)(1_\\mathcal{R} \\otimes m) - 1_\\mathcal{R} \\otimes m \\rightarrow 0$ strongly for all $m \\in M$.\n\\item $\\inf_{m \\in (M)_1}\\|x - w_n(1_\\mathcal{R} \\otimes m)w_n^*\\|_{\\tau \\otimes \\phi} \\rightarrow 0$ for $x \\in (\\mathcal{R} \\bar{\\otimes} M)_1$.\n\\item $\\max_{g \\in K} \\|w_n^* - (\\delta \\otimes \\alpha)_g(w_n^*)\\|_{\\tau \\otimes \\phi} \\rightarrow 0$ for every compact subset $K \\subseteq G$.\n\\end{itemize}\nWe conclude that the map $1_\\mathcal{R} \\otimes \\mathrm{id}_M\\colon (M, \\alpha) \\rightarrow (\\mathcal{R} \\bar{\\otimes} M, \\delta \\otimes \\alpha)$ satisfies all the necessary conditions to apply Lemma~\\ref{lemma:one-sided_intertwining}. This completes the proof.\n\\end{proof}\n\n\\begin{theorem} \\label{theorem:equivalence_ssa}\nLet $\\delta:G \\curvearrowright \\mathcal{R}$ be an action on the hyperfinite II$_1$-factor.\nThen $\\delta$ is strongly self-absorbing if and only if it has approximately inner half-flip and there exists a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow (\\mathcal{R}_{\\omega,\\delta}, \\delta_\\omega)$. \n\\end{theorem}\n\\begin{proof}\nThe `if' direction follows immediately from Theorem~\\ref{theorem:sufficient_criterium_strong_absorption}.\nTo prove the other direction, we assume that $\\delta$ is strongly self-absorbing and reproduce an argument analogous to \\cite[Proposition 1.4]{TomsWinter07} and \\cite[Proposition 5.5]{Szabo21cc}.\nDenote the unique tracial state on $\\mathcal{R}$ by $\\tau$. 
Let $(\\phi,\\mathbbm{u})\\colon (\\mathcal{R}, \\delta) \\rightarrow (\\mathcal{R} \\bar{\\otimes} \\mathcal{R}, \\delta \\otimes \\delta)$ be a cocycle conjugacy and let $(u_n)_{n \\in \\mathbb{N}}$ be a sequence of unitaries in $\\mathcal{U}(\\mathcal{R} \\bar{\\otimes} \\mathcal{R})$ such that\n\\begin{equation}\\label{eq:approx_coboundary} \\lim_{n \\rightarrow \\infty} \\max_{g \\in K} \\| u_n(\\delta \\otimes \\delta)_g(u_n^*) - \\mathbbm{u}_g\\|_{\\tau \\otimes \\tau} = 0 \\text{ for every compact } K \\subseteq G, \\text{ and }\\end{equation} \n\\[\n\\lim_{n \\rightarrow \\infty} \\|\\phi(x) - u_n(x \\otimes 1) u_n^*\\|_{\\tau \\otimes \\tau} = 0 \\text{ for all } x \\in \\mathcal{R}.\n\\]\nNote that \n\\[\\mathrm{Ad}(u_n^*) \\circ \\phi \\circ \\delta_g = \\mathrm{Ad}(u_n^*\\mathbbm{u}_g (\\delta \\otimes \\delta)_g(u_n)) \\circ (\\delta \\otimes \\delta)_g \\circ \\mathrm{Ad}(u_n^*) \\circ \\phi.\\]\nAs a consequence of \\eqref{eq:approx_coboundary}, for every compact $K \\subseteq G$ one has \n\\begin{equation}\\label{eq:approximate_invariance_perturbation}\\lim_{n \\rightarrow \\infty} \\max_{g \\in K} \\sup_{x \\in (\\mathcal{R})_1} \\|(\\mathrm{Ad}(u_n^*) \\circ \\phi \\circ \\delta_g)(x) - ((\\delta \\otimes \\delta)_g \\circ \\mathrm{Ad}(u_n^*) \\circ \\phi)(x)\\|_{\\tau \\otimes \\tau} = 0.\\end{equation}\nIn particular, applying this to $x = \\phi^{-1}(u_n)$ and using that $(\\tau \\otimes \\tau) = \\tau \\circ \\phi^{-1}$ yields\n\\[\\lim_{n \\rightarrow \\infty} \\max_{g \\in K}\\|(\\mathrm{Ad}(\\phi^{-1}(u_n^*)) \\circ \\delta_g) (\\phi^{-1}(u_n)) -(\\phi^{-1}\\circ (\\delta \\otimes \\delta)_g)(u_n)\\|_\\tau = 0.\\]\nCombining this with \\eqref{eq:approx_coboundary} again, one gets\n\\begin{equation}\\label{eq:approx_coboundary_inverse}\n \\lim_{n \\rightarrow \\infty} \\max_{g \\in K} \\| \\phi^{-1}(u_n^*)\\, \\delta_g(\\phi^{-1}(u_n)) - \\phi^{-1}(\\mathbbm{u}^*_g)\\|_{\\tau} = 0 \\text{ for every compact } K \\subseteq G.\n\\end{equation}\nFirst, we 
prove that $\\delta$ has approximately inner half-flip. \nDefine the cocycle morphism $(\\psi, \\mathbbm{v}) := (\\phi, \\mathbbm{u})^{-1} \\circ (1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R})$. Note that\n\\begin{align*}\n1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R} \n&= (\\phi, \\mathbbm{u}) \\circ (\\psi, \\mathbbm{v})\\\\\n&\\approx_{\\mathrm{u}} (\\mathrm{id}_\\mathcal{R} \\otimes 1_\\mathcal{R}) \\circ (\\psi, \\mathbbm{v})\\\\\n&= (\\psi \\otimes 1_\\mathcal{R}, \\mathbbm{v} \\otimes 1).\n\\end{align*} \nApplying the equivariant flip automorphism to both sides of this equivalence, we get that\n\\begin{equation}\\label{eq:approx_equivalence_first_factor_embedding} \\mathrm{id}_\\mathcal{R} \\otimes 1_\\mathcal{R} \\approx_\\mathrm{u} (1_\\mathcal{R} \\otimes \\psi, 1 \\otimes \\mathbbm{v}) .\\end{equation}\nWe also get\n\\begin{align*}\n(\\psi \\otimes 1_\\mathcal{R}, \\mathbbm{v} \\otimes 1) &= (\\phi^{-1} \\otimes \\mathrm{id}_\\mathcal{R}, \\phi^{-1}(\\mathbbm{u})^* \\otimes 1) \\circ (1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R} \\otimes 1_\\mathcal{R})\\\\\n\\overset{\\eqref{eq:approx_equivalence_first_factor_embedding}}&{\\approx_{\\mathrm{u}}} (\\phi^{-1} \\otimes \\mathrm{id}_\\mathcal{R}, \\phi^{-1}(\\mathbbm{u})^* \\otimes 1) \\circ (1_\\mathcal{R} \\otimes 1_\\mathcal{R} \\otimes \\psi, 1\\otimes 1 \\otimes \\mathbbm{v})\\\\\n&=(1_\\mathcal{R} \\otimes \\psi, \\phi^{-1}(\\mathbbm{u})^* \\otimes \\mathbbm{v})\\\\\n\\overset{\\eqref{eq:approx_coboundary_inverse}}&{\\approx_{\\mathrm{u}}} (1_\\mathcal{R} \\otimes \\psi, 1 \\otimes \\mathbbm{v}).\n\\end{align*}\nBy transitivity we get that ${1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R}\\approx_\\mathrm{u} \\mathrm{id}_\\mathcal{R} \\otimes 1_\\mathcal{R}}$.\n\nNext we prove the existence of a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow (\\mathcal{R}_{\\omega,\\delta}, \\delta_\\omega)$. 
Define the sequence of trace-preserving $*$-homomorphisms \n\\[\\chi_n = \\phi^{-1} \\circ \\mathrm{Ad}(u_n) \\circ (1_\\mathcal{R} \\otimes \\mathrm{id}_\\mathcal{R}).\\] \nWe conclude from \\eqref{eq:approximate_invariance_perturbation} that for all $x \\in \\mathcal{R}$ and every compact $K \\subseteq G$\n\\[\\lim_{n \\rightarrow \\infty} \\max_{g \\in K} \\|\\delta_g(\\chi_n(x)) - \\chi_n(\\delta_g(x))\\|_{\\tau} =0.\\]\nFrom this and the fact that all $\\chi_n$ are trace-preserving it also follows that $(\\chi_n(x))_{n \\in \\mathbb{N}}$ belongs to $\\mathcal{E}^\\omega_\\delta$.\nMoreover, for any $x,y \\in \\mathcal{R}$\n\\begin{align*}\n\\lim_{n \\rightarrow \\infty} \\|[x, \\chi_n(y)] \\|_\\tau\n&= \\lim_{n \\rightarrow \\infty}\\|[\\phi(x), u_n(1 \\otimes y) u_n^*]\\|_{\\tau \\otimes \\tau}\\\\\n&= \\lim_{n \\rightarrow \\infty}\\|u_n[x \\otimes 1, 1 \\otimes y] u_n^*\\|_{\\tau \\otimes \\tau}\\\\\n&=0. \n\\end{align*}\nSo the $\\chi_n$ induce a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow (\\mathcal{R}_{\\omega,\\delta}, \\delta_\\omega)$.\n\\end{proof}\n\nThe following can be seen as a direct generalization of the famous McDuff theorem \\cite{McDuff70} to actions on semi-finite von Neumann algebras.\n\n\\begin{corollary} \\label{cor:equivalence_equivariant_McDuff}\nLet $\\alpha:G \\curvearrowright M$ be an action on a semi-finite von Neumann algebra with separable predual and let $\\delta:G \\curvearrowright \\mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.\nThen the following are equivalent:\n\\begin{enumerate}[leftmargin=*,label=\\textup{(\\arabic*)}]\n\\item There exists a cocycle conjugacy $(\\theta,\\mathbbm{v})\\colon (M,\\alpha) \\to (M\\bar{\\otimes}\\mathcal{R},\\alpha \\otimes \\delta)$ with $\\theta|_{\\mathcal{Z}(M)}=\\operatorname{id}_{\\mathcal{Z}(M)}\\otimes 1_{\\mathcal R}$; \\label{prop:McDuff:1}\n\\item $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\delta$; \\label{prop:McDuff:2}\n\\item There exists a 
unital equivariant $*$-homomorphism $(\\mathcal{R},\\delta) \\rightarrow (M_{\\omega,\\alpha}, \\alpha_\\omega)$. \\label{prop:McDuff:3}\n\\end{enumerate}\n\\end{corollary}\n\\begin{proof}\nThe implication \\ref{prop:McDuff:1}$\\Rightarrow$\\ref{prop:McDuff:2} is tautological.\nSince strong self-absorption implies approximately inner half-flip by Theorem~\\ref{theorem:equivalence_ssa}, the implication \\ref{prop:McDuff:3}$\\Rightarrow$\\ref{prop:McDuff:1} follows from Theorem~\\ref{theorem:sufficient_criterium_strong_absorption}.\n\nIn order to prove \\ref{prop:McDuff:2}$\\Rightarrow$\\ref{prop:McDuff:3}, it is enough to show that there exists a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow ((M \\bar{\\otimes} \\mathcal{R})_{\\omega,\\alpha \\otimes \\delta}, (\\alpha \\otimes \\delta)_\\omega)$.\nWe know there exists a unital equivariant $*$-homomorphism $(\\mathcal{R}, \\delta) \\rightarrow (\\mathcal{R}_{\\omega,\\delta}, \\delta_\\omega)$ by Theorem~\\ref{theorem:equivalence_ssa}.\nSince the latter is unitally and equivariantly contained in $((M \\bar{\\otimes} \\mathcal{R})_{\\omega,\\alpha \\otimes \\delta}, (\\alpha \\otimes \\delta)_\\omega)$, this finishes the proof.\n\\end{proof}\n\nThe following lemma is a straightforward application of the noncommutative Rokhlin Theorem of Masuda \\cite[Theorem 4.8]{Masuda13}.\\footnote{This Rokhlin Theorem is actually a variant of Ocneanu's noncommutative Rokhlin Theorem \\cite[Theorem 6.1]{Ocneanu85}, and the proof of Masuda's version is essentially the same as Ocneanu's proof. 
While it is possible to deduce what we need from Ocneanu's Theorem, here we cite Masuda's version for convenience of the reader, as it is directly applicable and there is no need to deal with $\\varepsilon$-paving families of $G$.}\n\n\\begin{lemma}\\label{lem:approx-central-embeddings}\nLet $\\alpha\\colon G \\curvearrowright M$ be an action of a countable discrete group on a McDuff factor (i.e., $M \\cong M \\bar{\\otimes} \\mathcal{R}$) with separable predual.\nLet $N\\subseteq G$ be the normal subgroup consisting of all elements $g\\in G$ such that $\\alpha_{\\omega,g} \\in\\operatorname{Aut}(M_\\omega)$ is trivial.\nSuppose that the quotient group $G_0=G\/N$ is amenable with quotient map $\\pi: G\\to G_0$.\nLet $\\delta: G_0\\curvearrowright\\mathcal R$ be an action with induced $G$-action $\\delta_\\pi=\\delta\\circ\\pi$. \nThen there exists an equivariant unital $*$-homomorphism $(\\mathcal R,\\delta_\\pi)\\to (M_\\omega,\\alpha_\\omega)$.\n\\end{lemma}\n\\begin{proof}\nConsider the induced faithful action $\\gamma: G_0\\curvearrowright M_\\omega$ via $\\gamma_{gN}=\\alpha_{\\omega,g}$.\nThen clearly the claim is equivalent to finding a $G_0$-equivariant unital $*$-homomorphism $(\\mathcal R,\\delta)\\to (M_\\omega,\\gamma)$.\nLet us introduce some notation.\nLet $(x_n)_{n \\in \\mathbb{N}} \\in \\ell^\\infty(M)$ be a sequence representing an element $X \\in M_\\omega$. Then we set $\\tau_\\omega(X) = \\lim_{n \\rightarrow \\omega} x_n$, where the limit is taken in the $\\sigma$-weak topology.\nSince $M$ is a factor and $\\tau_\\omega(X)$ is central, this limit belongs to $\\mathbb{C}$. 
For any state $\\phi \\in M_*$ we have\n\\[\\phi^\\omega(X) = \\lim_{n \\rightarrow \\omega}\\phi(x_n) = \\phi(\\tau_\\omega(X)) = \\tau_\\omega(X).\\]\nIn particular, $\\tau_\\omega$ defines a normal faithful tracial state on $M_\\omega$ and we denote $\\|X\\|_1 = \\tau_\\omega(|X|)$.\n\nSince $M$ is McDuff we can find a unital $*$-homomorphism $\\Phi: \\mathcal{R} \\rightarrow M_\\omega$. Fix $\\varepsilon >0$ and a symmetric finite subset $F \\subset\\joinrel\\subset G_0$ containing the neutral element.\nBy \\cite[Lemmas 5.6 and 5.7]{Ocneanu85} we are allowed to apply \\cite[Theorem 4.8]{Masuda13} to the action $\\gamma\\colon G_0\\curvearrowright M_\\omega$.\nSo if $S \\subset\\joinrel\\subset G_0$ is a finite $(F,\\varepsilon)$-invariant subset, then there exists a partition of unity of projections $\\{E_s\\}_{s \\in S} \\subset M_\\omega$ such that\n\\begin{align}\n\\sum_{s \\in g^{-1}S \\cap S} \\|\\gamma_{g}(E_s) - E_{gs}\\|_1 &< 4\\varepsilon^{1\/2} \\text{ for all } g \\in F; \\label{eq:r-lemma1}\\\\\n\\sum_{s \\in S\\setminus g^{-1}S} \\|E_s\\|_1 &< 3\\varepsilon^{1\/2} \\text{ for all } g \\in F;\\label{eq:r-lemma2}\\\\\n[E_s, \\gamma_h(X)] &= 0 \\text{ for all } s \\in S,\\ h\\in G_0,\\ X \\in \\Phi(\\mathcal{R}).\\label{eq:commutation_image}\n\\end{align}\nDefine \n\\[\n\\Psi: \\mathcal{R} \\rightarrow M_\\omega \\text{ via } \\Psi(x) = \\sum_{s \\in S} \\gamma_{s}(\\Phi(\\delta_{s}^{-1}(x))) E_s.\n\\]\nThis is a unital trace-preserving $*$-homomorphism because the projections $E_s$ form a partition of unity and because of condition \\eqref{eq:commutation_image}.\nFor $g \\in F$ and $x \\in \\mathcal{R}$ we use conditions \\eqref{eq:r-lemma1} and \\eqref{eq:r-lemma2} to observe\n\\begin{align*}\n\\|\\gamma_{g}(\\Psi(x)) - \\Psi(\\delta_g(x))\\|_1 &= \\Big\\| \\sum_{s \\in S} \\gamma_{gs}(\\Phi(\\delta_s^{-1}(x))) \\gamma_{g}(E_s) - \\sum_{s \\in g^{-1}S} \\gamma_{gs}(\\Phi(\\delta_{s}^{-1}(x))) E_{gs} \\Big\\|_1\n\\\\\n&\\leq \\sum_{s \\in S \\cap g^{-1}S} \\left\\| 
\\gamma_{gs}(\\Phi(\\delta_s^{-1}(x))) ( \\gamma_{g}(E_s) - E_{gs}) \\right\\|_1\\\\\n& \\qquad + \\sum_{s \\in S\\setminus g^{-1}S} \\|\\gamma_s(\\Phi(\\delta_s^{-1}(x))) E_s\\|_1 + \\sum_{s \\in S \\setminus gS} \\|\\gamma_{s}(\\Phi(\\delta_{g^{-1}s}^{-1}(x)))E_s\\|_1 \\\\\n&< 10\\varepsilon^{1\/2}\\|x\\|.\n\\end{align*}\nSince we can do this for arbitrary $\\varepsilon >0$ and $F \\subset\\joinrel\\subset G_0$, the claim follows via a standard reindexing trick.\n\\end{proof}\n\nThe following result recovers a famous theorem of Ocneanu \\cite[Theorem 1.2 and following remark]{Ocneanu85}, as well as his uniqueness theorem for outer actions of amenable groups on $\\mathcal R$.\nWe include the proof for the reader's benefit, as it is comparatively elementary given the methods established so far.\n\n\\begin{theorem} \\label{theorem:model-absorption}\nLet $G$ and $G_1$ be countable discrete groups with $G_1$ amenable.\nLet $\\delta: G_1\\curvearrowright\\mathcal R$ be an outer action and $\\alpha\\colon G \\curvearrowright M$ an action on a semi-finite McDuff factor (i.e.\\ $M \\cong M \\bar{\\otimes} \\mathcal{R}$) with separable predual.\nThen:\n\\begin{enumerate}[label=\\textup{(\\roman*)},leftmargin=*]\n\\item $\\delta$ is strongly self-absorbing and cocycle conjugate to any other outer action $G_1\\curvearrowright\\mathcal R$.\\label{theorem:model-absorption:1}\n\\item Suppose $H\\subseteq G$ is a normal subgroup containing all elements $g\\in G$ such that $\\alpha_{\\omega,g}$ is trivial.\nSuppose $G_1=G\/H$ with quotient map $\\pi: G\\to G_1$.\nThen $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\delta_\\pi$. 
\\label{theorem:model-absorption:2}\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\\ref{theorem:model-absorption:1}:\nLet $\\tau$ be the unique tracial state on $\\mathcal R$, which we may use to define the 1-norm $\\|\\cdot\\|_1=\\tau(|\\cdot|)$ on $\\mathcal R$.\nSet $\\delta^{(2)}=\\delta\\otimes\\delta: G_1\\curvearrowright \\mathcal R\\bar{\\otimes}\\mathcal R=:\\mathcal R^{(2)}$, which is also an outer action.\nSince the flip automorphism $\\sigma$ on $\\mathcal R^{(2)}$ is known to be approximately inner, we may pick a unitary $U\\in\\mathcal{U}(\\mathcal R^{(2)\\omega})$ with $UxU^*=\\sigma(x)$ for all $x\\in\\mathcal R^{(2)}$.\n\nBy \\cite[Theorem 3.2]{Connes77}, the induced action $\\delta^{(2)\\omega}: G_1\\curvearrowright\\mathcal R^{(2)}_\\omega$ is faithful.\nWe may hence argue exactly as in the proof of Lemma~\\ref{lem:approx-central-embeddings} and apply Masuda's noncommutative Rokhlin lemma.\nSo let $F\\subset\\joinrel\\subset G_1$ be a symmetric finite set and $\\varepsilon>0$.\nIf $S \\subset\\joinrel\\subset G_1$ is a finite $(F,\\varepsilon)$-invariant subset, then there exists a partition of unity of projections $\\{E_s\\}_{s \\in S} \\subset \\mathcal R^{(2)}_\\omega$ such that\n\\begin{align}\n\\sum_{s \\in g^{-1}S \\cap S} \\|\\delta^{(2)\\omega}_{g}(E_s) - E_{gs}\\|_1 &< 4\\varepsilon^{1\/2} \\text{ for all } g \\in F; \\label{eq:r-lemma1-}\\\\\n\\sum_{s \\in S\\setminus g^{-1}S} \\|E_s\\|_1 &< 3\\varepsilon^{1\/2} \\text{ for all } g \\in F;\\label{eq:r-lemma2-}\\\\\n[E_s, x] &= 0 \\text{ for all } s \\in S,\\ x\\in\\{ \\delta^{(2)\\omega}_h(U)\\}_{h\\in G_1}.\\label{eq:commutation_image-}\n\\end{align}\nDefine $W = \\sum_{s \\in S} \\delta^{(2)\\omega}_{s}(U) E_s$.\nThis is also a unitary in $\\mathcal R^{(2)\\omega}$ implementing the flip $\\sigma$ because the projections $E_s$ form a partition of unity and condition \\eqref{eq:commutation_image-}.\nFor $g \\in F$ we use conditions \\eqref{eq:r-lemma1-} and \\eqref{eq:r-lemma2-} to 
observe\n\\begin{align*}\n\\|\\delta^{(2)\\omega}_{g}(W) - W\\|_1 &= \\Big\\| \\sum_{s \\in S} \\delta^{(2)\\omega}_{gs}(U) \\delta^{(2)\\omega}_{g}(E_s) - \\sum_{s \\in g^{-1}S} \\delta^{(2)\\omega}_{gs}(U) E_{gs} \\Big\\|_1\n\\\\\n&\\leq \\sum_{s \\in S \\cap g^{-1}S} \\left\\| \\delta^{(2)\\omega}_{gs}(U) ( \\delta^{(2)\\omega}_{g}(E_s) - E_{gs}) \\right\\|_1\\\\ & \\qquad + \\sum_{s \\in S\\setminus g^{-1}S} \\| \\delta^{(2)\\omega}_{s}(U) E_s\\|_1 + \\sum_{s \\in S \\setminus gS} \\|\\delta^{(2)\\omega}_{s}(U) E_{s}\\|_1 \\\\\n&< 10\\varepsilon^{1\/2}.\n\\end{align*}\nSince we can do this for arbitrary $\\varepsilon >0$ and $F \\subset\\joinrel\\subset G_1$, we can use a reindexing trick to obtain a unitary $W\\in\\mathcal{U}((\\mathcal R^{(2)\\omega})^{\\delta^{(2)\\omega}})$ with $WxW^*=\\sigma(x)$ for all $x\\in\\mathcal R^{(2)}$.\nIn particular, $\\delta$ has approximately inner half-flip.\nIf we apply Lemma~\\ref{lem:approx-central-embeddings} for $G=G_1$, $N=\\{1\\}$ and $\\delta$ in place of $\\alpha$, it follows with Theorem~\\ref{theorem:equivalence_ssa} that $\\delta$ is strongly self-absorbing.\nIf $\\gamma\\colon G_1\\curvearrowright\\mathcal R$ is another outer action, then the same follows for $\\gamma$.\nBy applying Lemma~\\ref{lem:approx-central-embeddings} and Corollary~\\ref{cor:equivalence_equivariant_McDuff} twice, we obtain that $\\gamma$ and $\\delta$ absorb each other, hence they are cocycle conjugate.\n\n\\ref{theorem:model-absorption:2}:\nDefine $N$ to be the subgroup of all elements $g\\in G$ such that $\\alpha_{\\omega,g}$ is trivial, and set $G_0=G\/N$ with quotient map $\\pi^0: G\\to G_0$.\nBy assumption we have $N\\subseteq H$, hence $G_1$ can be viewed as a quotient of $G_0$ via a map $\\pi^{0\\to 1}: G_0\\to G_1$.\nThen $\\pi=\\pi^{0\\to 1}\\circ\\pi^0$ and the action $\\delta_{\\pi^{0\\to 1}}:=\\delta\\circ\\pi^{0\\to 1}$ is a $G_0$-action with $(\\delta_{\\pi^{0\\to 1}})_{\\pi^0}=\\delta_\\pi$.\nBy 
Lemma~\\ref{lem:approx-central-embeddings}, it follows that there exists an equivariant unital $*$-homomorphism $(\\mathcal R,\\delta_\\pi) \\to (M_\\omega,\\alpha_\\omega)$.\nSince $\\delta$ was strongly self-absorbing, so is $\\delta_\\pi$ as a $G$-action and the claim follows by Corollary~\\ref{cor:equivalence_equivariant_McDuff}.\n\\end{proof}\n\n\\section{Actions of discrete amenable groupoids}\n\nWe begin by recalling the definition of a discrete measured groupoid. This concept dates back to \\cite{Mackey63}. \n\n\\begin{definition}A discrete measured groupoid $\\mathcal{G}$ is a groupoid in the usual sense that carries the following additional structure:\n\\begin{enumerate}[label=$\\bullet$,leftmargin=*]\n\\item The groupoid $\\mathcal{G}$ is a standard Borel space and the units $\\mathcal{G}^{(0)} \\subset \\mathcal{G}$ form a Borel subset.\n\\item The source and target maps $s,t\\colon \\mathcal{G} \\rightarrow \\mathcal{G}^{(0)}$ are Borel and countable-to-one.\n\\item Define $\\mathcal{G}^{(2)}:= \\{(g,h) \\in \\mathcal{G} \\times \\mathcal{G} \\mid s(g) = t(h)\\}.$ The multiplication map $\\mathcal{G}^{(2)} \\rightarrow \\mathcal{G}\\colon (g,h) \\mapsto gh$ and the inverse map $\\mathcal{G}\\rightarrow \\mathcal{G}\\colon g \\mapsto g^{-1}$ are Borel.\n\\item $\\mathcal{G}^{(0)}$ is equipped with a measure $\\mu$ satisfying the following property.\nLet $\\mu_s$ and $\\mu_t$ denote the $\\sigma$-finite measures on $\\mathcal{G}$ obtained by integrating the counting measure over $s,t\\colon \\mathcal{G}\\rightarrow \\mathcal{G}^{(0)}$, respectively. Then $\\mu_s \\sim \\mu_t$. \n\\end{enumerate}\n\\end{definition}\n\\begin{example}\nAn important example of a discrete measured groupoid is the \\emph{transformation groupoid} associated to a non-singular action $G \\curvearrowright (X, \\mu)$ of a countable, discrete group $G$ on a standard measure space $(X, \\mu)$. 
In that case the unit space can be identified with $X$ and the measure $\\mu$ satisfies the necessary requirements. We denote this transformation groupoid by $G \\ltimes X$. \n\\end{example}\nWe assume the reader is familiar with the concept of amenability for discrete measured groupoids; see \\cite[Definition 3.2.8]{AnantharamanRenault00}. In particular, recall that a groupoid $\\mathcal{G}$ is amenable if and only if the associated equivalence relation\n\\[\n\\big\\{ \\big( s(g),t(g) \\big) \\mid g \\in \\mathcal{G} \\big\\}\n\\]\nand almost all associated isotropy groups\n\\[\\{g \\in \\mathcal{G} \\mid s(g) = t(g) = x\\} \\quad \\text{for } x \\in \\mathcal{G}^{(0)}\\]\nare amenable (see e.g.\\ \\cite[Corollary 5.3.33]{AnantharamanRenault00}). \nIn the case of a non-singular action $G \\curvearrowright (X, \\mu)$, the associated transformation groupoid $G \\ltimes X$ is amenable if and only if the action is amenable in the sense of Zimmer (\\cite{Zimmer78, Zimmer77, Zimmer77b}).\n\n\\begin{remark}\nIn this paper we work with measurable fields of all kinds of separable structures, such as Polish spaces, Polish groups, von Neumann algebras with separable predual, and fields that can be derived from these.\nFor Polish groups the definition is explicitly given in \\cite{Sutherland85}, while the other notions can be defined in an analogous way.\nWe only consider the measurable setting and hence will often implicitly discard sets of measure zero whenever needed.\nThis means all measurable fields, groupoids and isomorphisms between measure spaces are defined up to sets of measure zero.\nBecause of this, all statements should be interpreted as holding only almost everywhere whenever appropriate.\nThis also means that we may freely apply the von Neumann measurable selection theorem (see e.g.\\ \\cite[Theorem 18.1]{Kechris95}) to obtain measurable sections after deletion of a suitable null set, and we will often omit the fine details related to such arguments. 
\n\\end{remark}\n\n\\begin{definition}\nLet $\\mathcal{G}$ be a discrete measured groupoid with unit space $(X,\\mu)$. An \\emph{action} $\\alpha$ of $\\mathcal{G}$ on a measurable field $(B_x)_{x \\in X}$ of factors with separable predual is given by a measurable field of $*$-isomorphisms \\[\\mathcal{G} \\ni g \\mapsto \\alpha_g\\colon B_{s(g)} \\rightarrow B_{t(g)},\\] \nsatisfying $\\alpha_g \\circ \\alpha_h = \\alpha_{gh}$ for all $(g,h) \\in \\mathcal{G}^{(2)}$.\n\\end{definition}\n\n\\begin{definition}\\label{def:cc_groupoid_actions}\nLet $\\mathcal{G}$ be a discrete measured groupoid with unit space $(X,\\mu)$. Suppose that $\\alpha$ and $\\beta$ are actions of $\\mathcal{G}$ on the measurable fields of factors with separable predual $(B_x)_{x \\in X}$ and $(D_x)_{x \\in X}$, respectively. The actions are said to be \\emph{cocycle conjugate} if there exists a measurable field of $*$-isomorphisms $X \\ni x \\mapsto \\theta_x\\colon B_x \\rightarrow D_x$ and a measurable field of unitaries $\\mathcal{G} \\ni g \\mapsto w_g \\in \\mathcal{U}(D_{t(g)})$ satisfying \n\\begin{align*}\n\\theta_{t(g)} \\circ \\alpha_g \\circ \\theta_{s(g)}^{-1} &= \\mathrm{Ad} w_g \\circ \\beta_g \\text{ for all } g \\in \\mathcal{G}\\\\\n w_g \\beta_g(w_h) &= w_{gh} \\text{ for all } (g,h) \\in \\mathcal{G}^{(2)}.\n\\end{align*}\n\\end{definition}\n\n\\begin{example} \\label{ex:central_decomposition}\nLet $B$ be a von Neumann algebra acting on a separable Hilbert space $\\mathcal{H}$. Then we can centrally decompose $B$ as \n\\[ (B, \\mathcal{H}) = \\int_{X}^\\oplus (B_x, \\mathcal{H}_x)\\, d\\mu(x), \\]\nwhere $(X,\\mu)$ is a standard probability space such that $L^\\infty(X,\\mu) \\cong \\mathcal{Z}(B)$ (see e.g.\\ \\cite[Theorem IV.8.21]{Takesaki02}).\nIn this way we get a measurable field of factors $(B_x)_{x \\in X}$. 
When $B$ is of type I, II$_1$, II$_\\infty$ or III, every $B_x$ has the same type by \\cite[Corollary V.6.7]{Takesaki02}.\nWe claim that if $B \\cong B \\bar{\\otimes} \\mathcal{R}$, then every fibre $B_x$ is McDuff.\nPick a $*$-isomorphism $\\Phi\\colon B \\rightarrow B \\bar{\\otimes} \\mathcal{R}$.\nThen there exists (see for example \\cite[Theorem III.2.2.8]{Blackadar}) a unitary $U\\colon \\mathcal{H} \\otimes \\ell^2(\\mathbb{N}) \\rightarrow \\mathcal{H} \\otimes L^2(\\mathcal{R}, \\tau_\\mathcal{R}) \\otimes \\ell^2(\\mathbb{N})$ such that the amplification of $\\Phi$ is spatial, i.e.\\ $\\Phi(b) \\otimes 1 = U(b \\otimes 1)U^*$ for all $b \\in B$.\nWe have the decompositions\n\\[(B \\otimes \\mathbb{C}, \\mathcal{H} \\otimes \\ell^2(\\mathbb{N})) =\\int_X^\\oplus (B_x \\otimes \\mathbb{C}, \\mathcal{H}_x \\otimes \\ell^2(\\mathbb{N})) \\, d \\mu(x),\\text{ and}\\]\n\\[(B \\bar{\\otimes} \\mathcal{R} \\otimes \\mathbb{C},\\mathcal{H} \\otimes L^2(\\mathcal{R}, \\tau_\\mathcal{R}) \\otimes \\ell^2(\\mathbb{N})) = \\int_X^\\oplus \\left(B_x \\bar{\\otimes}\\mathcal{R} \\otimes \\mathbb{C}, \\mathcal{H}_x \\otimes L^2(\\mathcal{R}, \\tau_\\mathcal{R}) \\otimes \\ell^2(\\mathbb{N})\\right) \\, d\\mu(x).\\]\nAs the amplification of $\\Phi$ necessarily maps the diagonal algebras (i.e.\\ the respective centers) to each other, we can use the fact that the disintegration is unique \\cite[Theorem 8.23]{Takesaki02}. In particular, this means every $B_x$ is isomorphic to some $B_y \\bar{\\otimes} \\mathcal{R}$; since $\\mathcal{R} \\cong \\mathcal{R} \\bar{\\otimes} \\mathcal{R}$, it follows that $B_x \\cong B_y \\bar{\\otimes} \\mathcal{R} \\bar{\\otimes} \\mathcal{R} \\cong B_x \\bar{\\otimes} \\mathcal{R}$. \n\nNow suppose $\\alpha\\colon G \\curvearrowright B$ is an action of a countable discrete group. 
\nLet ${\\mathcal{G} = G \\ltimes X}$ denote the transformation groupoid associated to the action on $(X, \\mu)$ induced by $\\alpha$.\n Then $\\alpha$ can be disintegrated as an action $\\bar{\\alpha}$ of $\\mathcal{G}$ on the measurable field $(B_x)_{x \\in X}$ (see e.g.\\ \\cite[Corollary X.3.12]{Takesaki02}\\footnote{When the field of factors $(B_x)_{x \\in X}$ is constant (for example when $B$ is injective type II$_1$ and all $B_x$ are $\\mathcal{R}$), this construction dates back to \\cite{SutherlandTakesaki89}. \n There, the groupoid $\\mathcal{G}$ and action $\\bar{\\alpha}$ are also called the \\emph{ancillary groupoid} and \\emph{ancillary action} associated to $\\alpha$. }) such that given $b = \\int_X^\\oplus b_x\\, d\\mu(x)$, we have\n\\[\\alpha_g(b)_{g \\cdot x} = \\bar{\\alpha}_{(g,x)}(b_{x}) \\text{ for } (g,x) \\in \\mathcal{G}.\\]\n\nAssume $\\beta\\colon G \\curvearrowright D$ is another action on a separably acting von Neumann algebra $(D, \\mathcal{K}) = \\int_X^\\oplus (D_x, \\mathcal{K}_x)\\, d\\mu(x)$, and assume that $\\beta$ induces the same action on $(X,\\mu)$ as $\\alpha$. \nLet $\\bar{\\beta}$ denote its decomposition as an action of $\\mathcal{G}$ on $(D_x)_{x \\in X}$. 
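\nAs a quick sanity check, note that the groupoid action axiom for $\\bar{\\alpha}$ (and likewise for $\\bar{\\beta}$) is inherited fibrewise from the group law for $\\alpha$: for a composable pair $\\big( (g, h \\cdot x), (h,x) \\big) \\in \\mathcal{G}^{(2)}$ and $b = \\int_X^\\oplus b_x\\, d\\mu(x)$, we have\n\\[\n\\bar{\\alpha}_{(g, h \\cdot x)}\\big(\\bar{\\alpha}_{(h,x)}(b_x)\\big) = \\bar{\\alpha}_{(g, h \\cdot x)}\\big(\\alpha_h(b)_{h \\cdot x}\\big) = \\alpha_g\\big(\\alpha_h(b)\\big)_{gh \\cdot x} = \\alpha_{gh}(b)_{gh \\cdot x} = \\bar{\\alpha}_{(gh,x)}(b_x),\n\\]\nso that indeed $\\bar{\\alpha}_{(g, h \\cdot x)} \\circ \\bar{\\alpha}_{(h,x)} = \\bar{\\alpha}_{(gh,x)}$. 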
\nIf $\\bar{\\alpha}$ and $\\bar{\\beta}$ are cocycle conjugate in the sense of Definition~\\ref{def:cc_groupoid_actions}, then $\\alpha$ and $\\beta$ are cocycle conjugate as actions on von Neumann algebras.\nIndeed, let $X \\ni x \\mapsto \\theta_x\\colon B_x \\rightarrow D_x$ and $\\mathcal{G} \\ni (g,x) \\mapsto w_{(g,x)} \\in \\mathcal{U}(D_{g \\cdot x})$ denote the measurable fields of $*$-isomorphisms and unitaries realizing a cocycle conjugacy between $\\bar{\\alpha}$ and $\\bar{\\beta}$.\nThis gives rise to a $*$-isomorphism $\\theta\\colon B \\rightarrow D$ given by $\\theta(b)_x = \\theta_x(b_x)$ for $b = \\int_X^\\oplus b_x\\, d\\mu(x) \\in B$, and for each $g \\in G$ we get a unitary $\\mathbbm{v}_g \\in \\mathcal{U}(D)$ by\n$(\\mathbbm{v}_g)_x = w_{(g, g^{-1}\\cdot x)}$. The pair $(\\theta, \\mathbbm{v})$ is a cocycle conjugacy.\n\nConversely, one can show that every cocycle conjugacy $(\\theta, \\mathbbm{v})\\colon (B, \\alpha) \\rightarrow (D, \\beta)$ with $\\theta\\big\\lvert_{L^\\infty(X)} = \\mathrm{id}\\big\\lvert_{L^\\infty(X)}$ gives rise to a cocycle conjugacy in the sense of Definition~\\ref{def:cc_groupoid_actions}.\n\\end{example}\n\nWe will subsequently need the following lemma (albeit only for discrete groups) about strongly self-absorbing actions.\n\n\\begin{lemma} \\label{lem:special-cc}\nLet $G_j$, $j=1,2$, be two second-countable locally compact groups with a continuous group isomorphism $\\phi: G_1\\to G_2$.\nLet $\\delta^{(j)}\\colon G_j\\curvearrowright\\mathcal R$, $j=1,2$, be two strongly self-absorbing actions and choose a cocycle conjugacy $(\\Phi,\\mathbb{U})\\colon (\\mathcal R,\\delta^{(2)})\\to (\\mathcal R\\bar{\\otimes}\\mathcal R,\\delta^{(2)}\\otimes\\delta^{(2)})$ that is approximately unitarily equivalent to $\\operatorname{id}_{\\mathcal R}\\otimes 1_{\\mathcal R}$.\n(Note that $\\phi$ allows us to identify $(\\Phi,\\mathbb{U}\\circ\\phi)$ with a cocycle conjugacy between the $G_1$-action 
$\\delta_\\phi:=\\delta^{(2)}\\circ\\phi$ and its tensor square.)\nLet $\\alpha^{(j)}\\colon G_j\\curvearrowright M_j$, $j=1,2$, be two actions on separably acting von Neumann algebras.\nGiven a cocycle conjugacy\n\\[\n(\\theta,\\mathbbm{v})\\colon (M_1,\\alpha^{(1)})\\to \\big( M_2\\bar{\\otimes}\\mathcal{R}, (\\alpha^{(2)}\\otimes\\delta^{(2)})\\circ\\phi \\big),\n\\]\nand a conjugacy $\\Delta\\colon (\\mathcal R, \\delta^{(2)} \\circ \\phi)\\to(\\mathcal R,\\delta^{(1)})$,\nconsider the cocycle conjugacy of $G_1$-actions\n\\[\n(\\Psi,\\mathbb{V})= \\big( (\\theta, \\mathbbm{v})^{-1} \\otimes \\Delta \\big) \\circ \\big( \\mathrm{id}_{M_2} \\otimes (\\Phi, \\mathbb{U}\\circ\\phi) \\big) \\circ (\\theta, \\mathbbm{v})\n\\]\nbetween $(M_1,\\alpha^{(1)})$ and $(M_1\\bar{\\otimes}\\mathcal R,\\alpha^{(1)}\\otimes \\delta^{(1)} )$.\nThen there exists a sequence of unitaries $y_n\\in\\mathcal{U}(M_1\\otimes\\mathcal R)$ such that\n\\[\n\\operatorname{Ad}(y_n)\\circ (\\operatorname{id}_{M_1}\\otimes 1_{\\mathcal R})\\to \\Psi,\\quad \\operatorname{Ad}(y_n^*)\\circ\\Psi \\to \\operatorname{id}_{M_1}\\otimes 1_{\\mathcal R}\n\\]\npoint-strongly, and such that\n\\[\ny_n(\\alpha^{(1)}\\otimes\\delta^{(1)})_g(y_n)^* \\to \\mathbb{V}_g,\\quad y_n^*\\mathbb{V}_g(\\alpha^{(1)}\\otimes\\delta^{(1)})_g(y_n)\\to 1_{M_1\\otimes\\mathcal R}\n\\]\nin the strong operator topology for all $g\\in G_1$ and uniformly over compact sets.\n\\end{lemma}\n\\begin{proof}\nBy assumption, there is a sequence of unitaries $z_n\\in\\mathcal{U}(\\mathcal{R}\\bar{\\otimes}\\mathcal{R})$ such that \n\\begin{equation} \\label{eq:main-tech:ssa1}\n\\|\\Phi(x) - z_n(x\\otimes 1_{\\mathcal R})z_n^*\\|_2\\to 0\\quad\\text{for all } x\\in\\mathcal R\n\\end{equation}\nand \n\\begin{equation} \\label{eq:main-tech:ssa2}\n\\max_{h\\in K} \\|\\mathbb{U}_h - z_n(\\delta^{(2)}\\otimes\\delta^{(2)})_h(z_n)^*\\|_2\\to 0 \\quad\\text{for every compact } K\\subseteq G_2.\n\\end{equation}\nBy definition, we 
have\n\\[\n\\Psi = (\\theta^{-1}\\otimes\\Delta)\\circ (\\operatorname{id}_{M_2}\\otimes\\Phi)\\circ\\theta\n\\]\nand\n\\[\n\\mathbb{V}_g = ({\\theta}^{-1}\\otimes\\Delta)\\Big( (\\operatorname{id}_{M_2}\\otimes\\Phi)(\\mathbbm{v}_g) \\cdot (1_{M_2}\\otimes \\mathbb{U}_{\\phi(g)})\\cdot (\\mathbbm{v}_g^* \\otimes 1_{\\mathcal R}) \\Big),\\quad g\\in G_1.\n\\]\nIf we consider the sequence of unitaries \n\\[\ny_n=(\\theta^{-1}\\otimes\\Delta)(1_{M_2}\\otimes z_n),\n\\]\nthen we can observe with \\eqref{eq:main-tech:ssa1} that\n\\[\n\\operatorname{Ad}(y_n^*)\\circ\\Psi \\to (\\theta^{-1}\\otimes\\Delta)\\circ (\\operatorname{id}_{M_2}\\otimes\\operatorname{id}_{\\mathcal R}\\otimes 1_{\\mathcal R})\\circ\\theta = \\operatorname{id}_{M_1}\\otimes 1_{\\mathcal R}\n\\]\nas well as\n\\[\n\\operatorname{Ad}(y_n)\\circ(\\operatorname{id}_{M_1}\\otimes 1_{\\mathcal R}) = \\operatorname{Ad}(y_n)\\circ(\\theta^{-1}\\otimes\\Delta)\\circ (\\operatorname{id}_{M_2}\\otimes\\operatorname{id}_{\\mathcal R}\\otimes 1_{\\mathcal R})\\circ\\theta \\to \\Psi\n\\]\npoint-strongly.\nMoreover, given $g\\in G_1$, the fact that $(\\theta^{-1}, \\big( \\theta^{-1}(\\mathbbm{v}^*_g) \\big)_{g\\in G_1} )$ is the inverse of $(\\theta,\\mathbbm{v})$ leads to the equation $\\alpha^{(1)}_g\\circ\\theta^{-1}=\\theta^{-1}\\circ\\operatorname{Ad}(\\mathbbm{v}_g)\\circ(\\alpha^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}$.\nIf we combine this with \\eqref{eq:main-tech:ssa1} and \\eqref{eq:main-tech:ssa2}, we can see that\n\\[\n\\begin{array}{cl}\n\\multicolumn{2}{l}{ y_n^*\\mathbb{V}_g(\\alpha^{(1)}\\otimes\\delta^{(1)})_g(y_n) } \\\\\n=& y_n^*\\mathbb{V}_g\\cdot (\\theta^{-1}\\otimes\\Delta)\\big( \\operatorname{Ad}(\\mathbbm{v}_g\\otimes 1_{\\mathcal R})(1_{M_2}\\otimes(\\delta^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}(z_n)) \\big) \\\\\n=& (\\theta^{-1}\\otimes\\Delta)\\Big( (1_{M_2}\\otimes z_n)^*\\cdot (\\operatorname{id}_{M_2}\\otimes\\Phi)(\\mathbbm{v}_g) \\cdot (1_{M_2}\\otimes \\mathbb{U}_{\\phi(g)}) 
\\cdots \\\\\n& \\phantom{ (\\theta^{-1}\\otimes\\operatorname{id}_{\\mathcal{R}})\\Big( }\\cdots (1_{M_2}\\otimes(\\delta^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}(z_n))\\cdot (\\mathbbm{v}_g^* \\otimes 1_{\\mathcal R}) \\Big) \\\\\n=& (\\theta^{-1}\\otimes\\Delta)\\Big( \\underbrace{ \\big( \\operatorname{id}_{M_2}\\otimes ( \\operatorname{Ad}(z_n^*)\\circ\\Phi ) \\big) (\\mathbbm{v}_g) }_{\\to \\mathbbm{v}_g\\otimes 1_{\\mathcal R} } \n\\cdot \\big( 1_{M_2}\\otimes \\underbrace{ z_n^*\\mathbb{U}_{\\phi(g)}(\\delta^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}(z_n) }_{\\to 1_{\\mathcal{R}\\otimes\\mathcal{R}} } \\big)\\cdot (\\mathbbm{v}_g^* \\otimes 1_{\\mathcal R}) \\Big) \\\\\n\\to & 1_{M_1\\otimes\\mathcal R} \n\\end{array}\n\\]\nin the strong operator topology, uniformly over compact subsets.\nAnalogously, we observe the convergence\n\\[\n\\begin{array}{cl}\n\\multicolumn{2}{l}{ y_n(\\alpha^{(1)}\\otimes\\delta^{(1)})_g(y_n)^* }\\\\\n=& (\\theta^{-1}\\otimes\\Delta)\\Big( (1_{M_2}\\otimes z_n) \\cdot \\operatorname{Ad}(\\mathbbm{v}_g\\otimes 1_{\\mathcal R})(1_{M_2}\\otimes(\\delta^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}(z_n^*) ) \\Big) \\\\\n=& (\\theta^{-1}\\otimes\\Delta)\\Big( \\underbrace{ \\operatorname{Ad}(1_{M_2}\\otimes z_n)(\\mathbbm{v}_g\\otimes 1_{\\mathcal R})}_{\\to (\\operatorname{id}_{M_2}\\otimes\\Phi)(\\mathbbm{v}_g)}\n \\cdot \\big( 1_{M_2}\\otimes \\underbrace{ z_n(\\delta^{(2)}\\otimes\\delta^{(2)})_{\\phi(g)}(z_n^*) }_{\\to \\mathbb{U}_{\\phi(g)} } \\big) \\cdot (\\mathbbm{v}_g^*\\otimes 1_{\\mathcal R}) \\Big) \\\\\n\\to & \\mathbb{V}_g\n\\end{array}\n\\]\nuniformly over compact sets.\nThis finishes the proof.\n\\end{proof}\n\nIn the proof of the main technical result of this section we will make use of the following variant, due to Popa--Shlyakhtenko--Vaes, of the cohomology lemmas in \\cite[Appendix]{JonesTakesaki84} and \\cite[Theorem 5.5]{Sutherland85}.\n\n\\begin{lemma}[{\\cite[Theorem 3.5]{PopaShlyakhtenkoVaes20}}] 
\\label{lemma:cohomology_lemma}\nLet $\\mathcal{S}$ be an amenable countable nonsingular equivalence relation on the standard probability space $(X,\\mu)$.\nLet $(G_x \\curvearrowright P_x)_{x \\in X}$ be a measurable field of continuous actions of Polish groups on Polish spaces, on which $\\mathcal{S}$ is acting by conjugacies: we have measurable fields of group isomorphisms ${\\mathcal{S} \\ni (x,y) \\mapsto \\gamma_{(x,y)}\\colon G_y \\rightarrow G_x}$ and homeomorphisms $\\mathcal{S} \\ni (x,y) \\mapsto \\beta_{(x,y)}\\colon P_y \\rightarrow P_x$ satisfying \n\\[\n\\gamma_{(x,y)} \\circ \\gamma_{(y,z)} = \\gamma_{(x,z)}, \\quad \\beta_{(x,y)} \\circ \\beta_{(y,z)} = \\beta_{(x,z)}, \\quad \\beta_{(x,y)} (g \\cdot \\pi) = \\gamma_{(x,y)}(g) \\cdot \\beta_{(x,y)}(\\pi)\n\\]\nfor all $(x,y), (y,z) \\in \\mathcal{S}$ and $g \\in G_y, \\pi \\in P_y$.\nLet $X \\ni x \\mapsto \\sigma'(x) \\in P_x$ be a measurable section. Assume that for all $(x,y) \\in \\mathcal{S}$, the element $\\sigma'(x)$ belongs to the closure of ${G_x \\cdot \\beta_{(x,y)}(\\sigma'(y))}$. 
\nThen there exists a measurable family $\\mathcal{S} \\ni (x,y) \\mapsto v(x,y) \\in G_x$ and a section $X \\ni x \\mapsto \\sigma(x) \\in P_x$ satisfying:\n\\begin{itemize}\n\\item $v$ is a 1-cocycle: $v(x,y) \\gamma_{(x,y)}(v(y,z)) = v(x,z)$ for all $(x,y), (y,z)\\in \\mathcal{S}$;\n\\item $v(x,y)\\cdot \\beta_{(x,y)}(\\sigma(y)) = \\sigma(x)$ for all $(x,y) \\in \\mathcal{S}$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{remark} \\label{rem:Polish-spaces}\nBefore we state and prove our main technical result in detail, we would like to outline for what kind of input data the lemma above will be used.\nIn the situation considered below, the typical Polish space $P$ will be the space of cocycle conjugacies $(M,\\alpha)\\to (N,\\beta)$, where $\\alpha: H\\curvearrowright M$ and $\\beta: H\\curvearrowright N$ are actions of a countable discrete group $H$ on separably acting von Neumann algebras.\nHere we consider the topology defined by declaring that a net converges, $(\\theta^{(\\lambda)},\\mathbbm{v}^{(\\lambda)})\\to(\\theta,\\mathbbm{v})$, if and only if $\\mathbbm{v}^{(\\lambda)}_g\\to\\mathbbm{v}_g$ in the strong operator topology for all $g\\in H$, and $\\|\\varphi\\circ\\theta-\\varphi\\circ\\theta^{(\\lambda)}\\|\\to 0$ for all $\\varphi\\in N_*$.\nThis generalizes the well-known topology on the space of isomorphisms $M\\to N$ that is usually called the ``$u$-topology''; cf.\\ \\cite{Haagerup75, Winslow98}.\nThe typical Polish group acting on this Polish space would be the unitary group $\\mathcal{U}(N)$, which is equipped with the strong operator topology, where the action is defined via composition with the inner cocycle conjugacy as per Example~\\ref{ex:inner-cc} and Remark~\\ref{rem:cocycle-category}.\nIn other words, a unitary $w\\in\\mathcal{U}(N)$ moves the cocycle conjugacy $(\\theta,\\mathbbm{v})$ to $\\operatorname{Ad}(w)\\circ(\\theta,\\mathbbm{v}) = \\big( \\operatorname{Ad}(w)\\circ\\theta, (w\\mathbbm{v}_g\\beta_g(w)^*)_{g\\in H} 
\\big)$.\n\nIf we assume in addition that $M$ and $N$ are semi-finite, we may pick a faithful normal semi-finite tracial weight $\\tau$ on $N$.\nAssume that $(\\Psi,\\mathbb{V})\\in P$ is a cocycle conjugacy.\nThen it follows from \\cite[Proposition 3.7]{Haagerup75} that on the space of all isomorphisms $\\Psi': M\\to N$ with $\\tau\\circ\\Psi'=\\tau\\circ\\Psi$, the $u$-topology coincides with the topology of point-strong convergence.\nAs a direct consequence, we may conclude the following. \nIf $(\\Phi,\\mathbb{U})\\in P$ is another cocycle conjugacy and there exists a net of unitaries $w_\\lambda\\in\\mathcal{U}(N)$ such that $w_\\lambda\\mathbb{V}_g\\beta_g(w_\\lambda)^*\\to \\mathbb{U}_g$ for all $g\\in H$ in the strong operator topology and $\\operatorname{Ad}(w_\\lambda)\\circ\\Psi\\to\\Phi$ point-strongly, then $(\\Phi,\\mathbb{U})\\in\\overline{ \\mathcal{U}(N)\\cdot (\\Psi,\\mathbb{V}) }$.\n\\end{remark}\n\nThe following can be seen as the main technical result of this section, which we previously referred to as a kind of measurable local-to-global principle.\n\n\\begin{theorem} \\label{thm:main_technical}\nLet $G \\curvearrowright (X,\\mu)$ be an amenable action (in the sense of Zimmer) of a countable discrete group on a standard probability space.\nLet $\\alpha$ be an action of ${\\mathcal{G}:= G \\ltimes X}$ on a measurable field of semi-finite factors with separable predual $(B_x)_{x \\in X}$. \nDenote by ${X \\ni x \\mapsto H_x}$ the measurable field of isotropy groups. 
\nFor any action ${\\delta\\colon G \\curvearrowright \\mathcal{R}}$ on the hyperfinite II$_1$-factor, we define a tensor product action $\\alpha \\otimes \\delta$ of $\\mathcal{G}$ on the field of factors $(B_x \\bar{\\otimes} \\mathcal{R})_{x \\in X}$ by $(\\alpha \\otimes \\delta)_{(g,x)} = \\alpha_{(g,x)} \\otimes \\delta_g$.\nIf $\\delta$ is strongly self-absorbing, then the following are equivalent:\n \\begin{enumerate}[leftmargin=*,label=\\textup{(\\arabic*)}]\n \\item $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\delta$; \\label{prop:main_technical:1}\n \\item For every $x \\in X$ we have $\\alpha{|_{H_x}} \\simeq_{\\mathrm{cc}} (\\alpha \\otimes \\delta){|_{H_x}}$ as actions of $H_x$ on $B_x$ and $B_x \\bar{\\otimes} \\mathcal{R}$. \\label{prop:main_technical:2}\n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nWe note that by following the argument outlined in Example~\\ref{ex:central_decomposition} and by applying Corollary~\\ref{cor:equivalence_equivariant_McDuff}, we see that $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\delta$ implies that one can find a cocycle conjugacy between $\\alpha$ and $\\alpha\\otimes\\delta$ that induces the identity map on $X$.\nHence \\ref{prop:main_technical:1} implies \\ref{prop:main_technical:2}.\n\nIn order to prove the other implication, assume \\ref{prop:main_technical:2} holds. 
\nTo verify \\ref{prop:main_technical:1}, we will show the existence of a measurable field of $*$-isomorphisms ${X \\ni x \\mapsto \\theta_x \\colon B_x \\rightarrow B_x \\bar{\\otimes} \\mathcal{R}}$ and unitaries ${\\mathcal{G} \\ni (g,x) \\mapsto w_{(g,x)} \\in \\mathcal{U}(B_{g \\cdot x} \\bar{\\otimes} \\mathcal{R})}$ such that\n\\begin{align}\n\\theta_{g \\cdot x} \\circ \\alpha_{(g,x)} \\circ \\theta_{x}^{-1} &= \\mathrm{Ad} \\big( w_{(g,x)} \\big) \\circ (\\alpha \\otimes \\delta)_{(g,x)} \\text{ for all } (g,x) \\in \\mathcal{G}\\label{eq:required_cc}\\\\\n w_{(g,h \\cdot x)} (\\alpha \\otimes \\delta)_{(g,h \\cdot x)}(w_{(h,x)}) &= w_{(gh,x)} \\text{ for all } g,h \\in G, x \\in X.\\label{eq:required_cocycle}\n\\end{align}\nFor every $x \\in X$, denote by $P_x$ the Polish space of cocycle conjugacies from $(B_x, \\alpha|_{H_x})$ to $(B_x \\bar{\\otimes} \\mathcal{R}, (\\alpha \\otimes \\delta)|_{H_x})$ as per Remark~\\ref{rem:Polish-spaces}. \nIn this way, we get a measurable field of Polish spaces $X \\ni x \\mapsto P_x$. 
Note that by assumption the sets $P_x$ are all non-empty and hence, there exists some measurable section \n$ X \\ni x \\mapsto (\\theta_x, \\mathbbm{v}_x) \\in P_x$.\nDefining $w_{(g,x)} := \\mathbbm{v}_{x}(g)$ for $g \\in H_x$, we get that --- although $w$ is not defined on all of $\\mathcal{G}$ yet --- the equations \\eqref{eq:required_cc}--\\eqref{eq:required_cocycle} are satisfied whenever they make sense.\nIn the rest of the proof we will show with the help of Lemma \\ref{lemma:cohomology_lemma} that there exists a (potentially different) section for which there exists a well-defined map $w$ on all of $\\mathcal{G}$ obeying conditions \\eqref{eq:required_cc}--\\eqref{eq:required_cocycle}.\n\nDenote by $\\mathcal{S}$ the countable non-singular orbit equivalence relation associated to $\\mathcal{G}$, i.e.,\n\\[\\mathcal{S}=\\{(g \\cdot x,x) \\mid (g,x) \\in \\mathcal{G}\\}.\\]\nAs $G \\curvearrowright (X,\\mu)$ is amenable, the relation $\\mathcal{S}$ is amenable and hence it follows by the Connes--Feldman--Weiss theorem \\cite{ConnesFeldmanWeiss81} that after neglecting a set of measure zero, there exists a partition of $X$ into $\\mathcal{S}$-invariant Borel subsets $X_0\\sqcup X_1$ such that the restriction of $\\mathcal{S}$ to $X_0$ has finite orbits and the restriction to $X_1$ is induced by a free non-singular action of $\\mathbb{Z}$. \nThis implies that the map $q \\colon \\mathcal{G} \\rightarrow \\mathcal{S}\\colon (g,x) \\mapsto (g \\cdot x,x)$ admits a measurable section, i.e., a measurable groupoid morphism $s \\colon \\mathcal{S} \\rightarrow \\mathcal{G}$ such that $q \\circ s = \\mathrm{id}_\\mathcal{S}$. 
\nTherefore, we can view $\\mathcal{G}$ as the semi-direct product of the field of isotropy groups $(H_x)_{x \\in X}$ with the equivalence relation $\\mathcal{S}$, with respect to the measurable field of group isomorphisms $\\phi_{(x,y)}\\colon H_y \\rightarrow H_x$ given by $\\phi_{(x,y)}(g)= s(x,y) g s(x,y)^{-1}$.\nNote that $\\phi_{(x,y)} \\circ \\phi_{(y,z)} = \\phi_{(x,z)}$ for all $(x,y), (y,z) \\in \\mathcal{S}$.\n\nThis means that in order to define a measurable field $\\mathcal{G} \\ni (g,x) \\mapsto w_{(g,x)} \\in \\mathcal{U}(B_{g \\cdot x} \\bar{\\otimes} \\mathcal{R})$ satisfying \\eqref{eq:required_cocycle}, it suffices to find measurable families of unitaries \n${v(x,y) \\in \\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R})}$ for $(x,y) \\in \\mathcal{S}$ and \n$\\mathbbm{v}_x(g) \\in \\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R})$ for $x \\in X, g \\in H_x$ such that\n\\begin{itemize}\n\\item $v$ is a cocycle for the action of $\\mathcal{S}$ on the field of factors $(B_x\\bar{\\otimes} \\mathcal{R})_{x \\in X}$ induced by $s$, i.e.,\n$v(x,y) (\\alpha \\otimes \\delta)_{s(x,y)}(v(y,z))=v(x,z)$ for all $(x,y), (y,z) \\in \\mathcal{S}$;\n\\item for each $x \\in X$, the family $(\\mathbbm{v}_x(g))_{g \\in H_x}$ defines a cocycle for the action $H_x \\curvearrowright B_x \\bar{\\otimes} \\mathcal{R}$;\n\\item $\\mathbbm{v}_x(g) = v(x,y) (\\alpha \\otimes \\delta)_{s(x,y)}\\left(\\mathbbm{v}_y\\left({\\phi_{(y,x)}(g)}\\right)\\right) (\\alpha \\otimes \\delta)_g\\left(v(x,y)^*\\right) $ for all $(x,y) \\in \\mathcal{S}$ and $g \\in H_x$. 
\n\\end{itemize}\nIf these conditions are met, then setting $w_{g s(x,y)} := \\mathbbm{v}_x(g)(\\alpha \\otimes \\delta)_g (v(x,y))$ for $(x,y) \\in \\mathcal{S}$ and $g \\in H_x$ yields condition \\eqref{eq:required_cocycle}.\nMoreover, in order for a measurable field of $*$-isomorphisms ${X \\ni x \\mapsto \\theta_x \\colon B_x \\rightarrow B_x \\bar{\\otimes} \\mathcal{R}}$ to satisfy \\eqref{eq:required_cc}, it then suffices to check that \n\\begin{itemize}\n\\item for each $x \\in X$, the pair $(\\theta_x, \\mathbbm{v}_x)$ is a cocycle conjugacy in $P_x$;\n\\item for $(x,y) \\in \\mathcal{S}$ we have $\\theta_x \\circ \\alpha_{s(x,y)} \\circ \\theta_y^{-1} = \\mathrm{Ad}(v(x,y))\\circ (\\alpha \\otimes \\delta)_{s(x,y)}$.\n\\end{itemize}\nWe introduce some notation to rephrase this in the terminology of Lemma~\\ref{lemma:cohomology_lemma}. Consider the natural action $\\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R}) \\curvearrowright P_x$ given by composing a cocycle conjugacy with the inner one given by $\\mathrm{Ad}(u)$ for $u \\in \\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R})$ as per Remark~\\ref{rem:Polish-spaces}.\nIn this way, we get a measurable field $(\\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R}) \\curvearrowright P_x)_{x \\in X}$ of continuous actions of Polish groups on Polish spaces.\nLet us convince ourselves that in the terminology of Lemma~\\ref{lemma:cohomology_lemma}, the equivalence relation $\\mathcal{S}$ acts by conjugacies on this field of actions.\nFirstly, we have a measurable field of group isomorphisms \n\\[\n{\\mathcal{S} \\ni (x,y) \\mapsto \\gamma_{(x,y)} = (\\alpha \\otimes \\delta)_{s(x,y)}|_{\\mathcal{U}(B_y \\bar{\\otimes} \\mathcal{R})} \\colon \\mathcal{U}(B_y \\bar{\\otimes} \\mathcal{R})\\rightarrow \\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R})}\n\\]\nsuch that $\\gamma_{(x,y)} \\circ \\gamma_{(y,z)} = \\gamma_{(x,z)}$ for all $(x,y), (y,z) \\in \\mathcal{S}$.\nThe latter formula holds as $\\alpha\\otimes\\delta$ 
is a $\\mathcal G$-action and $s: \\mathcal{S}\\to\\mathcal{G}$ is a section.\nSecondly, we have an action of $\\mathcal{S}$ on $(P_x)_{x \\in X}$ as follows.\nGiven $(x,y) \\in \\mathcal{S}$ and $(\\theta, \\mathbbm{v}) \\in P_y$, we define $\\beta_{(x,y)}(\\theta, \\mathbbm{v}) := (\\tilde{\\theta}, \\tilde{\\mathbbm{v}})$, where\n\\[\n\\tilde{\\theta} = (\\alpha \\otimes \\delta)_{s(x,y)} \\circ \\theta \\circ \\alpha_{s(y,x)} \\quad\\text{and}\\quad\n\\tilde{\\mathbbm{v}}(h) = (\\alpha \\otimes \\delta)_{s(x,y)}(\\mathbbm{v}({\\phi_{(y,x)}(h)})) \\text{ for } h \\in H_x.\n\\]\nThis construction yields a well-defined cocycle conjugacy in $P_x$, and we get a well-defined map $\\beta_{(x,y)}\\colon P_y \\rightarrow P_x$.\nTogether these maps combine to form a measurable field of homeomorphisms $\\mathcal{S} \\ni (x,y) \\mapsto \\beta_{(x,y)}\\colon P_y \\to P_x$ such that $\\beta_{(x,y)} \\circ \\beta_{(y,z)} = \\beta_{(x,z)}$ for all $(x,y), (y,z) \\in \\mathcal{S}$. \nThis formula holds once again because $\\alpha\\otimes\\delta$ and $\\alpha$ are $\\mathcal G$-actions and $s\\colon \\mathcal{S}\\to\\mathcal{G}$ is a section.\nMoreover, the maps $\\beta$ and $\\gamma$ are compatible with the measurable field of actions ${(\\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R}) \\curvearrowright P_x)_{x \\in X}}$ (as required for Lemma~\\ref{lemma:cohomology_lemma}), since for any $(x,y) \\in \\mathcal{S}, u \\in \\mathcal{U}(B_y \\bar{\\otimes} \\mathcal{R})$ and $(\\theta,\\mathbbm{v}) \\in P_y$ we may simply compare definitions and observe\n\\[\n\\beta_{(x,y)}(u \\cdot (\\theta,\\mathbbm{v})) = \\gamma_{(x,y)}(u) \\cdot \\beta_{(x,y)}(\\theta,\\mathbbm{v}).\n\\]\nHaving introduced all this data, our previous discussion can be rephrased. 
\nIn order to complete the proof, it suffices to find a measurable section $X \\ni x \\mapsto \\sigma(x) \\in P_x$ and a measurable family $\\mathcal{S} \\ni (x,y) \\mapsto v(x,y) \\in \\mathcal{U}(B_x \\bar{\\otimes} \\mathcal{R})$ such that\n\\begin{itemize}\n\\item $v(x,y) \\gamma_{(x,y)}(v(y,z)) = v(x,z)$ for all $(x,y), (y,z) \\in \\mathcal{S}$;\n\\item $v(x,y) \\cdot \\beta_{(x,y)}(\\sigma(y)) = \\sigma(x)$ for all $(x,y) \\in \\mathcal{S}$.\n\\end{itemize}\nBy Lemma~\\ref{lemma:cohomology_lemma}, such maps exist if we can merely show that\nthere exists a measurable section $X \\ni x \\mapsto (\\theta_x, \\mathbbm{v}_x) \\in P_x$ such that for all $(x,y) \\in \\mathcal{S}$, the element $(\\theta_x, \\mathbbm{v}_x)$ belongs to the closure of $\\mathcal{U}(B_x \\bar{\\otimes}\\mathcal{R}) \\cdot \\beta_{(x,y)}(\\theta_y,\\mathbbm{v}_y)$.\nWe claim that this is indeed the case.\n\nConsider any measurable section $X \\ni x \\mapsto (\\theta'_x, \\mathbbm{v}'_x) \\in P_x$. \nAs $\\delta$ is strongly self-absorbing, we can fix a cocycle conjugacy $(\\Phi, \\mathbb{U})$ from $(\\mathcal{R}, \\delta)$ to $(\\mathcal{R}\\bar{\\otimes} \\mathcal{R}, \\delta \\otimes \\delta)$ that is approximately unitarily equivalent to $\\operatorname{id}_{\\mathcal R}\\otimes 1_{\\mathcal R}$.\nFor each $x \\in X$ we can define the map\n\\[\n\\Lambda_x\\colon P_x \\rightarrow P_x,\\quad (\\theta, \\mathbbm{v}) \\mapsto \\big( (\\theta, \\mathbbm{v})^{-1} \\otimes \\mathrm{id}_\\mathcal{R} \\big) \\circ \\big( \\mathrm{id}_{B_x} \\otimes (\\Phi, \\mathbb{U}) \\big) \\circ (\\theta, \\mathbbm{v}).\n\\]\nThen we get a new measurable section\n\\[\nX \\ni x \\mapsto (\\theta_x, \\mathbbm{v}_x) := \\Lambda_x(\\theta'_x, \\mathbbm{v}'_x) \\in P_x.\n\\]\nWe claim that this section does the trick. Fix $(x,y) \\in \\mathcal{S}$. 
\nIf we denote $(\\tilde{\\theta}_x, \\tilde{\\mathbbm{v}}_x):= \\beta_{(x,y)}(\\theta_y, \\mathbbm{v}_y)$, we need to convince ourselves that the cocycle conjugacy $(\\theta_x, \\mathbbm{v}_x)$ is in the closure of $\\mathcal{U}(B_x \\bar{\\otimes}\\mathcal{R})\\cdot (\\tilde{\\theta}_x, \\tilde{\\mathbbm{v}}_x)$.\nFirst of all, we observe that the construction of $(\\theta_x, \\mathbbm{v}_x)=\\Lambda_x(\\theta_x', \\mathbbm{v}_x')$ can be seen as a special case of Lemma~\\ref{lem:special-cc} for $M_1=M_2=B_x$, $G_1=G_2=H_x$, $\\phi=\\operatorname{id}_{H_x}$ and $\\Delta=\\operatorname{id}_{\\mathcal R}$.\nHence we can find a sequence of unitaries $y_n\\in\\mathcal{U}(B_x\\otimes\\mathcal R)$ such that\n\\begin{equation} \\label{eq:main-tech:yn}\ny_n^*\\theta_x(b)y_n \\to b\\otimes 1_{\\mathcal R},\\quad y_n^*\\mathbbm{v}_x(h)(\\alpha\\otimes\\delta)_h(y_n) \\to 1_{B_x\\otimes\\mathcal R}\n\\end{equation}\nin the strong operator topology for all $b\\in B_x$ and $h\\in H_x$.\n\nNext, by our previous notation, the group isomorphism $\\phi_{(x,y)}\\colon H_y\\to H_x$ is exactly the one for which the isomorphism of von Neumann algebras $\\alpha_{s(x,y)}\\colon B_y\\to B_x$ can be viewed as a (genuine) conjugacy between the $H_x$-actions $(\\alpha|_{H_y})_{\\phi_{(y,x)}}$ and $\\alpha|_{H_x}$.\nMoreover, if $s(x,y)=(k,y)$ for $k\\in G$, then $(\\alpha\\otimes\\delta)_{s(x,y)}=\\alpha_{s(x,y)}\\otimes\\delta_k$ and $\\delta_k$ can be seen as a conjugacy between the $H_x$-actions $(\\delta|_{H_y})_{\\phi_{(y,x)}}$ and $\\delta|_{H_x}$.\n\nBy definition of $\\beta_{(x,y)}$, we have\n\\[\n\\begin{array}{ccl}\n\\tilde{\\theta}_x &=& (\\alpha \\otimes \\delta)_{s(x,y)} \\circ \\theta_y \\circ \\alpha_{s(y,x)} \\\\\n&=& (\\alpha \\otimes \\delta)_{s(x,y)} \\circ \\big( {\\theta_y'}^{-1} \\otimes \\mathrm{id}_\\mathcal{R} \\big) \\circ \\big( \\mathrm{id}_{B_y} \\otimes \\Phi \\big) \\circ \\theta_y' \\circ \\alpha_{s(y,x)} \\\\\n&=& \\big( ({\\theta_y'}\\circ\\alpha_{s(y,x)})^{-1} 
\\otimes \\delta_k \\big) \\circ \\big( \\mathrm{id}_{B_y} \\otimes \\Phi \\big) \\circ (\\theta_y' \\circ \\alpha_{s(y,x)})\n\\end{array}\n\\]\nand for all $h\\in H_x$ with $g=\\phi_{(y,x)}(h)$, one has\n\\[\n\\begin{array}{ccl}\n\\tilde{\\mathbbm{v}}_x(h) \n&=& (\\alpha \\otimes \\delta)_{s(x,y)}\\Big( ({\\theta_y'}^{-1}\\otimes\\operatorname{id}_{\\mathcal R})\\Big( (\\operatorname{id}_{B_y}\\otimes\\Phi)(\\mathbbm{v}_y'(g)) \\cdot (1_{B_y}\\otimes \\mathbb{U}_g)\\cdot (\\mathbbm{v}_y'(g)^* \\otimes1_{\\mathcal R}) \\Big) \\Big) \\\\\n&=& \\big( ({\\theta_y'}\\circ\\alpha_{s(y,x)})^{-1}\\otimes\\delta_k \\big)\\Big( (\\operatorname{id}_{B_y}\\otimes\\Phi)(\\mathbbm{v}_y'(g)) \\cdot (1_{B_y}\\otimes \\mathbb{U}_g)\\cdot (\\mathbbm{v}_y'(g)^* \\otimes1_{\\mathcal R}) \\Big)\n\\end{array}\n\\]\nWe conclude that Lemma~\\ref{lem:special-cc} is applicable to the cocycle conjugacy $(\\tilde{\\theta}_x,\\tilde{\\mathbbm{v}}_x)$, where we insert $G_1=H_x$, $G_2=H_y$, $\\phi=\\phi_{(y,x)}$, $M_1=B_x$, $M_2=B_y$, $\\Delta=\\delta_k$ and the cocycle conjugacy $(\\theta'_y\\circ\\alpha_{s(y,x)}, \\mathbbm{v}'_y\\circ\\phi_{(y,x)})$ in place of $(\\theta,\\mathbbm{v})$.\nThis allows us to find a sequence of unitaries $w_n\\in\\mathcal{U}(B_x\\otimes\\mathcal R)$ satisfying\n\\begin{equation} \\label{eq:main-tech:wn}\nw_n(b\\otimes 1_{\\mathcal R})w_n^* \\to \\tilde{\\theta}_x(b),\\quad w_n(\\alpha\\otimes\\delta)_h(w_n)^* \\to \\tilde{\\mathbbm{v}}_x(h)\n\\end{equation}\nin the strong operator topology for all $b\\in B_x$ and $h\\in H_x$.\nIf we consider both of the conditions \\eqref{eq:main-tech:yn} and \\eqref{eq:main-tech:wn} and keep in mind that $G$ is countable and $B_x$ is separable and semi-finite, with the help of Lemma \\ref{lemma:point_strong_dense_subset} it is possible to find an increasing sequence of natural numbers $m_n$ such that the resulting sequence of unitaries $z_n=w_ny_{m_n}^*$ satisfies\n\\[\nz_n\\theta_x(b)z_n^* \\to \\tilde{\\theta}_x(b),\\quad 
z_n\\mathbbm{v}_x(h)(\\alpha\\otimes\\delta)_h(z_n)^* \\to \\tilde{\\mathbbm{v}}_x(h)\n\\]\nin the strong operator topology for all $b\\in B_x$ and $h\\in H_x$.\nThen it follows from Remark~\\ref{rem:Polish-spaces} that $(\\theta_x, \\mathbbm{v}_x)$ indeed belongs to the closure of $\\mathcal{U}(B_x \\bar{\\otimes}\\mathcal{R})\\cdot (\\tilde{\\theta}_x, \\tilde{\\mathbbm{v}}_x)$.\nThis finishes the proof.\n\\end{proof}\n\n\\begin{definition}[see {\\cite[Definition~3.4]{Delaroche79}}]\nAn action $\\alpha \\colon G \\curvearrowright B$ of a countable discrete group on a von Neumann algebra is called \\emph{amenable} if there exists an equivariant conditional expectation \n\\[\nP\\colon (\\ell^\\infty(G) \\bar{\\otimes} B, \\tau \\otimes \\alpha) \\rightarrow (B, \\alpha),\n\\]\nwhere $\\tau$ denotes the left translation action $G \\curvearrowright \\ell^\\infty(G)$.\n\\end{definition}\n\nBy \\cite{Delaroche82}, an action $\\alpha$ as above is amenable if and only if its restriction to $\\mathcal{Z}(B)$ is amenable, which is equivalent to the action on the measure-theoretic spectrum of the center being amenable in the sense of Zimmer.\nRecall that an automorphism $\\alpha\\in\\operatorname{Aut}(M)$ of a separably acting von Neumann algebra $M$ is \\emph{properly centrally non-trivial} if for every non-zero projection $p\\in\\mathcal{Z}(M)$, the restriction of $\\alpha_\\omega$ to $pM_\\omega$ is non-trivial.\n\nThe following result contains Theorem~\\ref{theorem-A} for actions on semi-finite von Neumann algebras as a special case via $H=G$.\n\n\\begin{corollary} \\label{cor:McDuff-passes}\nLet $G$ be a countable discrete group and $B$ a semi-finite von Neumann algebra with separable predual such that $B \\cong B \\bar{\\otimes} \\mathcal{R}$. 
\nLet $\\alpha: G \\curvearrowright B$ be an amenable action.\nSuppose $H\\subseteq G$ is a normal subgroup such that for every $g\\in G\\setminus H$, the automorphism $\\alpha_g$ is properly centrally non-trivial.\nLet $G_1=G\/H$ with quotient map $\\pi: G\\to G_1$ and let $\\delta: G_1\\curvearrowright\\mathcal R$ be a strongly self-absorbing action.\nThen $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\delta_\\pi$.\n\\end{corollary}\n\\begin{proof}\nAdopt the notation from Example~\\ref{ex:central_decomposition} and Theorem~\\ref{thm:main_technical}.\nWe identify $\\alpha$ with an action of ${\\mathcal{G}:= G \\ltimes X}$ on a measurable field of semi-finite factors with separable predual $(B_x)_{x \\in X}$. \nDenote by ${X \\ni x \\mapsto H_x}$ the measurable field of isotropy groups.\nAmenability of the action $\\alpha$ implies that the action on $(X,\\mu)$ is amenable in the sense of Zimmer, \nwhich in turn implies amenability of the associated transformation groupoid. \nIn particular all isotropy groups $H_x$ are amenable. 
\n\nBy assumption on $H$ and \\cite[Theorem 9.14]{MasudaTomatsu16}, it follows for every $g\\in G\\setminus H$ and $\\mu$-almost all $x\\in X$ that either $g\\notin H_x$ or the automorphism $(\\alpha|_{H_x})_g$ on the McDuff factor $B_x$ is centrally non-trivial.\nIn other words, after discarding a null set from $X$, we may assume for all $x\\in X$ that for all $h\\in H_x\\setminus(H_x\\cap H)$, the automorphism $(\\alpha|_{H_x})_h$ on $B_x$ is centrally non-trivial.\nBy Theorem~\\ref{theorem:model-absorption}, we get that $(\\alpha|_{H_x})$ is cocycle conjugate to $(\\alpha\\otimes\\delta_\\pi)|_{H_x}$.\nThe claim then follows via Theorem~\\ref{thm:main_technical}.\n\\end{proof}\n\n\n\\section{Actions on arbitrary von Neumann algebras}\n\nIn this section we shall generalize some of the main results we obtained so far, namely Corollaries~\\ref{cor:equivalence_equivariant_McDuff} and \\ref{cor:McDuff-passes}, to the context of group actions on not necessarily semi-finite von Neumann algebras.\nThis uses standard results in Tomita--Takesaki theory, which allow us to reduce the general case to the semi-finite case considered in the previous sections.\nWe will henceforth assume that the reader is familiar with the basics of Tomita--Takesaki theory as well as the theory of crossed products (for a thorough treatment the reader should consult the book \\cite{Takesaki03}), although we recall below the specific facts about the former that are needed in this section.\n\n\\begin{remark}[see {\\cite[Chapters VIII, XII]{Takesaki03}}] \\label{rem:tt-basics}\nLet $M$ be a separably acting von Neumann algebra.\nGiven a faithful normal state $\\varphi$ on $M$, we denote by $\\sigma^\\varphi: \\mathbb{R}\\curvearrowright M$ its associated \\emph{modular flow}.\nIf $\\psi$ is another faithful normal state on $M$, we denote by $(D\\psi: D\\varphi): \\mathbb{R}\\to\\mathcal{U}(M)$ the associated \\emph{Connes cocycle}, which is a $\\sigma^\\varphi$-cocycle satisfying 
$\\operatorname{Ad}(D\\psi: D\\varphi)_t\\circ\\sigma^\\varphi_t=\\sigma^\\psi_t$ for all $t\\in\\mathbb{R}$.\nThe crossed product von Neumann algebra $\\tilde{M}=M\\rtimes_{\\sigma^\\varphi}\\mathbb{R}$ is called the \\emph{continuous core} of $M$ and does not depend on the choice of $\\varphi$ up to canonical isomorphism.\nWith slight abuse of notation, we will consider $M$ as a von Neumann subalgebra in $\\tilde{M}$, and denote by $\\lambda^{\\sigma^\\varphi}: \\mathbb{R}\\to\\mathcal{U}(\\tilde{M})$ the unitary representation implementing the modular flow on $M$.\nThe continuous core $\\tilde{M}$ is always a semi-finite von Neumann algebra.\nGiven any automorphism $\\alpha\\in\\operatorname{Aut}(M)$, there is an induced \\emph{extended automorphism} $\\tilde{\\alpha}\\in\\operatorname{Aut}(\\tilde{M})$ uniquely determined by\n\\[\n\\tilde{\\alpha}|_M = \\alpha \\quad\\text{and}\\quad \\tilde{\\alpha}(\\lambda^{\\sigma^\\varphi}_t)=(D \\varphi\\circ\\alpha^{-1} : D\\varphi)_t\\cdot\\lambda^{\\sigma^\\varphi}_t,\\quad t\\in\\mathbb R.\n\\]\nThe assignment $\\alpha\\mapsto\\tilde{\\alpha}$ defines a continuous homomorphism of Polish groups.\nTherefore, given more generally a continuous action $\\alpha: G\\curvearrowright M$ of a second-countable locally compact group, we may induce the \\emph{extended action} $\\tilde{\\alpha}: G\\curvearrowright\\tilde{M}$.\nEvery extended automorphism on $\\tilde{M}$ has the property that it commutes with the dual flow $\\hat{\\sigma}^\\varphi: \\mathbb{R}\\curvearrowright\\tilde{M}$.\n\\end{remark}\n\n\n\\begin{prop} \\label{prop:cp-centralizer}\nLet $G$ be a second-countable locally compact group.\nLet $\\alpha: G\\curvearrowright M$ be an action on a separably acting von Neumann algebra.\nThen the normal inclusion $M\\subset M\\rtimes_\\alpha G$ induces (via componentwise application of representing sequences) a unital $*$-homomorphism\n$(M_{\\omega,\\alpha})^{\\alpha^\\omega} \\to (M\\rtimes_\\alpha 
G)_\\omega$.\n\\end{prop}\n\\begin{proof}\nAssume $M$ is represented faithfully on a Hilbert space $\\mathcal{H}$, and consider the canonical inclusion $\\pi\\colon M \\rtimes_\\alpha G \\rightarrow \\mathcal{B}(L^2(G)) \\bar{\\otimes} M \\subset \\mathcal{B}(L^2(G, \\mathcal{H}))$, which on $x \\in M \\subset M \\rtimes_\\alpha G$ is given by\n\\[ [\\pi(x) \\xi](g) = \\alpha_g^{-1}(x)\\xi(g), \\quad \\xi \\in L^2(G, \\mathcal{H}), g \\in G.\\]\nIf $(x_n)_{n \\in \\mathbb{N}}$ is any bounded sequence in $M$ representing an element $x \\in (M_{\\omega, \\alpha})^{\\alpha_\\omega}$, then the invariance of $x$ will guarantee that\n$\\pi(x_n) - (1 \\otimes x_n) \\rightarrow 0$ in the strong-$*$ operator topology as $n \\rightarrow \\omega$. Since $(1 \\otimes x_n)_{n \\in \\mathbb{N}}$ represents an element in $(\\mathcal{B}(L^2(G)) \\bar{\\otimes} M)_\\omega$, it follows that $(\\pi(x_n))_{n \\in \\mathbb{N}}$ represents an element in $(\\pi(M\\rtimes_\\alpha G))_\\omega$, so $x \\in (M\\rtimes_\\alpha G)_\\omega$. 
This finishes the proof.\n\\end{proof}\n\n\\begin{prop} \\label{prop:Takai-duality}\nLet $\\alpha: G \\curvearrowright M$ be an action on a von Neumann algebra with separable predual.\nLet $\\varphi$ be a faithful normal state on $M$.\nLet $\\tilde{\\alpha}: G\\curvearrowright\\tilde{M}=M\\rtimes_{\\sigma^\\varphi}\\mathbb R$ be the extended action on the continuous core as in Remark~\\ref{rem:tt-basics}.\nWith some abuse of notation, denote by $\\tilde{\\alpha}$ also the induced action $G\\curvearrowright\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R$ by acting trivially on the canonical unitary representation implementing $\\hat{\\sigma}^\\varphi$.\nThen under the Takesaki--Takai duality isomorphism $\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R \\cong \\mathcal B(L^2(\\mathbb R))\\bar{\\otimes} M$, the action $\\tilde{\\alpha}$ is cocycle conjugate to $\\operatorname{id}_{\\mathcal{B}(L^2(\\mathbb R))}\\otimes\\alpha$.\n\\end{prop}\n\\begin{proof}\nDenote $\\alpha'=\\operatorname{id}_{\\mathcal{B}(L^2(\\mathbb R))}\\otimes\\alpha$.\nWe apply Takesaki--Takai duality \\cite[Chapter X, Theorem 2.3(iii)]{Takesaki03} to understand the $G$-action $\\tilde{\\alpha}$.\nIf $M$ is represented faithfully on a Hilbert space $\\mathcal H$, then the natural isomorphism \n\\[\n\\Theta: \\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R = (M\\rtimes_{\\sigma^\\varphi}\\mathbb R)\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R \\to \\mathcal{B}(L^2(\\mathbb R)) \\bar{\\otimes} M\\subseteq \\mathcal{B}(L^2(\\mathbb R,\\mathcal H))\n\\]\nhas the following properties.\nLet $\\xi\\in L^2(\\mathbb R,\\mathcal H)$.\nFor $x\\in M$ and $s,t\\in\\mathbb R$ we have\n\\[\n[\\Theta(x)\\xi ](s) = \\sigma^\\varphi_{-s}(x)\\xi(s) \\quad\\text{and}\\quad [\\Theta(\\lambda^{\\sigma^\\varphi}_t)\\xi](s) = \\xi(s-t).\n\\]\nFurthermore, if the dual flow is given via the convention \n$\\hat{\\sigma}^\\varphi_t(\\lambda_s^{\\sigma^{\\varphi}}) = 
e^{its}\\lambda^{\\sigma^{\\varphi}}_s$, then we also have\n\\[\n[\\Theta(\\lambda^{\\hat{\\sigma}^\\varphi}_t)\\xi](s)=e^{its}\\xi(s).\n\\]\nThen we obtain an $\\alpha'$-cocycle $\\mathbb{W}$ via\n\\[\n[\\mathbb{W}_g\\xi](s) = (D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{-s}\\xi(s),\\quad \\xi\\in L^2(\\mathbb R,\\mathcal H).\n\\]\nGiven how $\\tilde{\\alpha}$ acts on the domain of $\\Theta$, we can observe (using \\cite[Corollary VIII.1.4]{Takesaki03}) for any $g\\in G$, $x \\in M$, $\\xi \\in L^2(\\mathbb{R}, \\mathcal{H})$ and $s \\in \\mathbb{R}$ that\n\\[\n\\begin{array}{ccl}\n[\\Theta(\\tilde{\\alpha}_g(x))\\xi ](s) &=& \\sigma^\\varphi_{-s}(\\alpha_g(x))\\xi(s) \\\\\n&=& \\operatorname{Ad}(D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{-s}\\Big( \\sigma_{-s}^{\\varphi\\circ\\alpha_g^{-1}}(\\alpha_g(x)) \\Big)\\xi(s) \\\\\n&=& \\Big(\\operatorname{Ad}(D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{-s}\\circ\\alpha_g\\Big) \\big( \\sigma^\\varphi_{-s}(x) \\big)\\xi(s) \\\\\n&=& \\Big[ \\Big( \\operatorname{Ad}(\\mathbb{W}_g)\\circ\\alpha'_g\\circ\\Theta\\Big)(x)\\, \\xi \\Big](s).\n\\end{array}\n\\]\nMoreover, using the cocycle identity and the chain rule for the Connes cocycle (\\cite[Theorem VIII.3.7]{Takesaki03}), we can see for any $g \\in G$, $\\xi \\in L^2(\\mathbb{R},\\mathcal{H})$ and $s,t \\in \\mathbb{R}$ that\n\\[\n\\begin{array}{ccl}\n[\\Theta(\\tilde{\\alpha}_g(\\lambda^{\\sigma^\\varphi}_t))\\xi](s) &=& [\\Theta\\big( (D \\varphi\\circ\\alpha_g^{-1} : D\\varphi)_t\\cdot\\lambda^{\\sigma^\\varphi}_t \\big)\\xi](s) \\\\\n&=& \\sigma^{\\varphi}_{-s}(D \\varphi\\circ\\alpha_g^{-1} : D\\varphi)_t [ \\Theta(\\lambda^{\\sigma^\\varphi}_t)\\xi ](s) \\\\\n&=& \\sigma^{\\varphi}_{-s}(D \\varphi\\circ\\alpha_g^{-1} : D\\varphi)_t \\xi(s-t) \\\\\n&=& (D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{-s} (D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{t-s}^* \\xi(s-t) \\\\\n&=& (D\\varphi: D\\varphi\\circ\\alpha_g^{-1})_{-s} \\Big[ \\Theta(\\lambda_t^{\\sigma^\\varphi}) 
\\mathbb{W}_g^* \\xi \\Big](s) \\\\\n&=& \\Big[ \\mathbb{W}_g \\Theta(\\lambda_t^{\\sigma^\\varphi}) \\mathbb{W}_g^* \\xi \\Big](s) \\\\\n&=& \\Big[ \\Big( \\operatorname{Ad}(\\mathbb{W}_g)\\circ\\alpha'_g\\circ\\Theta\\Big)(\\lambda_t^{\\sigma^\\varphi}) \\xi \\Big](s)\n\\end{array}\n\\]\nHere we used that $\\alpha'_g$ fixes the shift operator given by $\\Theta(\\lambda^{\\sigma^\\varphi}_t)$ for all $g\\in G$ and $t\\in\\mathbb{R}$.\nLastly, it is trivial to see that $\\alpha'$ fixes operators of the form $\\Theta(\\lambda^{\\hat{\\sigma}^\\varphi}_t)$, which in turn also commute pointwise with the cocycle $\\mathbb{W}$.\nIn conclusion, all of these observations culminate in the fact that the isomorphism $\\Theta$ extends to the cocycle conjugacy $(\\Theta,\\mathbb{W})$ between $\\tilde{\\alpha}$ and $\\alpha'$.\n\\end{proof}\n\nThe following represents our most general McDuff-type absorption theorem:\n\n\\begin{theorem} \\label{thm:general-McDuff}\nLet $G$ be a second-countable locally compact group.\nLet $\\alpha: G \\curvearrowright M$ be an action on a von Neumann algebra with separable predual and let $\\delta: G \\curvearrowright \\mathcal{R}$ be a strongly self-absorbing action on the hyperfinite II$_1$-factor.\nThen $\\alpha\\simeq_{\\mathrm{cc}}\\alpha\\otimes\\delta$ if and only if there exists a unital equivariant $*$-homomorphism $(\\mathcal R,\\delta) \\to (M_{\\omega,\\alpha},\\alpha_\\omega)$.\n\\end{theorem}\n\\begin{proof}\nThe ``only if'' part follows exactly as in the proof of Corollary~\\ref{cor:equivalence_equivariant_McDuff}, so we need to show the ``if'' part.\nSince Corollary~\\ref{cor:equivalence_equivariant_McDuff} already covers the case when $M$ is finite, we may split off the largest finite direct summand of $M$ and assume without loss of generality that $M$ is properly infinite.\n\nLet $\\varphi$ be a faithful normal state on $M$.\nAs in Remark~\\ref{rem:tt-basics}, we consider the (semi-finite) continuous core $\\tilde{M}$ and the 
extended $G$-action $\\tilde{\\alpha}: G\\curvearrowright\\tilde{M}$.\nSince the image of $\\tilde{\\alpha}$ commutes with the dual flow $\\hat{\\sigma}^{\\varphi}$, we have a continuous action $\\beta=\\tilde{\\alpha}\\times\\hat{\\sigma}^\\varphi: G\\times\\mathbb{R}\\curvearrowright\\tilde{M}$ via $\\beta_{(g,t)}=\\tilde{\\alpha}_g\\circ\\hat{\\sigma}^\\varphi_t$ for all $g\\in G$ and $t\\in\\mathbb{R}$.\nLet us also consider the action $\\delta^{\\mathbb R}: G\\times\\mathbb{R}\\curvearrowright\\mathcal R$ given by $\\delta^{\\mathbb R}_{(g,t)}=\\delta_g$ for all $g\\in G$ and $t\\in\\mathbb{R}$, which is evidently also strongly self-absorbing.\n\nWe apply Proposition~\\ref{prop:cp-centralizer} to $\\mathbb{R}$ in place of $G$ and to the modular flow in place of $\\alpha$.\nIn this context, recall that the ultrapower flow $(\\sigma^\\varphi)^\\omega$ already acts continuously on the ultrapower $M^\\omega$ (see \\cite[Theorem 4.1]{AndoHaagerup14}), and its restriction to $M_\\omega$ is trivial.\nSo $M_\\omega=(M_{\\omega,\\sigma^\\varphi})^{(\\sigma^\\varphi)^\\omega}$ and Proposition~\\ref{prop:cp-centralizer} implies that the inclusion $M\\subset\\tilde{M}$ induces an embedding $M_\\omega\\to \\tilde{M}_\\omega$.\nSince by definition, one has $\\tilde{\\alpha}|_M=\\alpha$ as $G$-actions, it is clear that bounded $(\\alpha,\\omega)$-equicontinuous sequences in $M$ become $(\\tilde{\\alpha},\\omega)$-equicontinuous sequences in $\\tilde{M}$.\nKeeping in mind that the dual flow $\\hat{\\sigma}^\\varphi$ acts trivially on $M$ by definition, the aforementioned inclusion therefore induces an equivariant unital $*$-homomorphism\n\\[\n(M_{\\omega,\\alpha},\\alpha_\\omega) \\to ( (\\tilde{M}_{\\omega,\\beta})^{(\\hat{\\sigma}^\\varphi)^\\omega},\\tilde{\\alpha}_\\omega ).\n\\]\nIf we compose this $*$-homomorphism with a given unital equivariant $*$-homomorphism $(\\mathcal R,\\delta)\\to (M_{\\omega,\\alpha},\\alpha_\\omega)$, we can view the resulting map as a 
$(G\\times\\mathbb{R})$-equivariant unital $*$-homomorphism $(\\mathcal R,\\delta^{\\mathbb R}) \\to (\\tilde{M}_{\\omega,\\beta},\\beta_\\omega)$.\nSince $\\tilde{M}$ is semi-finite, it follows from Corollary~\\ref{cor:equivalence_equivariant_McDuff} that $\\beta$ and $\\beta\\otimes\\delta^{\\mathbb R}$ are cocycle conjugate as $(G\\times\\mathbb R)$-actions, say via $(\\Psi,\\mathbb{V}): (\\tilde{M},\\beta)\\to (\\tilde{M}\\bar{\\otimes}\\mathcal{R},\\beta\\otimes\\delta^{\\mathbb R})$.\nRemembering $\\beta=\\tilde{\\alpha}\\times\\hat{\\sigma}^\\varphi$, we consider the $\\hat{\\sigma}^\\varphi\\otimes \\mathrm{id}_\\mathcal{R}$-cocycle $\\mathbbm{w}_t=\\mathbb{V}_{(1_G,t)}$ and the $\\tilde{\\alpha}\\otimes \\delta$-cocycle $\\mathbbm{v}_g=\\mathbb{V}_{(g,0)}$.\nThe cocycle identity for $\\mathbb{V}$ then implies the relation \n\\begin{equation} \\label{eq:cocycle-interaction}\n\\mathbbm{w}_t(\\hat{\\sigma}^\\varphi_t\\otimes\\operatorname{id}_{\\mathcal R})(\\mathbbm{v}_g)=\\mathbbm{v}_g(\\tilde{\\alpha}\\otimes\\delta)_g(\\mathbbm{w}_t)\n\\end{equation}\nfor all $g\\in G$ and $t\\in\\mathbb R$.\nThe cocycle conjugacy of flows $(\\Psi,\\mathbbm{w})$ induces an isomorphism\n\\[\n\\Lambda:=(\\Psi,\\mathbbm{w})\\rtimes\\mathbb R: \\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R \\to (\\tilde{M}\\bar{\\otimes}\\mathcal R)\\rtimes_{\\hat{\\sigma}^\\varphi\\otimes\\operatorname{id}_{\\mathcal R}}\\mathbb R \\cong \\big(\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R\\big)\\otimes\\mathcal R\n\\]\nvia\n\\[\n\\Lambda|_{\\tilde{M}} = \\Psi \\quad\\text{and}\\quad \\Lambda(\\lambda^{\\hat{\\sigma}^\\varphi}_t)=\\mathbbm{w}_t(\\lambda^{\\hat{\\sigma}^\\varphi}_t\\otimes 1_{\\mathcal R}),\\quad t\\in\\mathbb R.\n\\]\nWith slight abuse of notation (as in Proposition~\\ref{prop:Takai-duality}), we also denote by $\\tilde{\\alpha}$ the obvious induced $G$-action on the crossed product $\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R$.\nUsing that 
$(\\Psi,\\mathbb{V})$ was a cocycle conjugacy, we observe for all $g\\in G$ and $t\\in\\mathbb{R}$ that \n\\[\n\\operatorname{Ad}(\\mathbbm{v}_g)\\circ(\\tilde{\\alpha}\\otimes\\delta)_g\\circ\\Lambda|_{\\tilde{M}} = \\operatorname{Ad}(\\mathbb{V}_{(g,0)})\\circ(\\beta\\otimes\\delta^{\\mathbb R})_{(g,0)}\\circ\\Psi=\\Psi\\circ\\beta_{(g,0)}=\\Psi\\circ\\tilde{\\alpha}_g\n\\]\nand\n\\[\n\\begin{array}{ccl}\n\\big( \\operatorname{Ad}(\\mathbbm{v}_g)\\circ(\\tilde{\\alpha}\\otimes\\delta)_g\\circ\\Lambda \\big)(\\lambda^{\\hat{\\sigma}^\\varphi}_t) \n&=& \\mathbbm{v}_g \\big( (\\tilde{\\alpha}\\otimes\\delta)_g(\\mathbbm{w}_t) (\\lambda^{\\hat{\\sigma}^\\varphi}_t\\otimes 1_{\\mathcal R}) \\big) \\mathbbm{v}_g^* \\\\\n&\\stackrel{\\eqref{eq:cocycle-interaction}}{=}& \\mathbbm{w}_t(\\hat{\\sigma}^\\varphi_t\\otimes\\operatorname{id}_{\\mathcal R})(\\mathbbm{v}_g) (\\lambda^{\\hat{\\sigma}^\\varphi}_t\\otimes 1_{\\mathcal R}) \\mathbbm{v}_g^* \\\\\n&=& \\mathbbm{w}_t(\\lambda^{\\hat{\\sigma}^\\varphi}_t\\otimes 1_{\\mathcal R}) \\ = \\ \\Lambda(\\lambda^{\\hat{\\sigma}^\\varphi}_t).\n\\end{array}\n\\]\nIn conclusion, the pair $(\\Lambda,\\mathbbm{v})$ defines a cocycle conjugacy between the $G$-actions $\\tilde{\\alpha}$ on $\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R$ and $\\tilde{\\alpha}\\otimes\\delta$ on $\\big(\\tilde{M}\\rtimes_{\\hat{\\sigma}^\\varphi}\\mathbb R\\big)\\otimes\\mathcal R$.\nBy Proposition~\\ref{prop:Takai-duality}, the action $\\tilde{\\alpha}$ is cocycle conjugate to $\\operatorname{id}_{\\mathcal B(\\ell^2(\\mathbb{N}))}\\otimes\\alpha$.\nSince we assumed $M$ to be properly infinite, it furthermore follows that $\\operatorname{id}_{\\mathcal B(\\ell^2(\\mathbb{N}))}\\otimes\\alpha$ is cocycle conjugate to $\\alpha$.\\footnote{Although this appears to be well-known, we could not find a good literature source for this precise claim and in this generality. 
We note, however, that the recent proof of the analogous C$^*$-algebraic statement \\cite[Proposition 1.4]{GabeSzabo22kp} carries over in an obvious way to this setting.}\nCombining all these cocycle conjugacies yields one between $\\alpha$ and $\\alpha\\otimes\\delta$.\nThis finishes the proof.\n\\end{proof}\n\nThe following consequence is our last main result, which generalizes Corollary~\\ref{cor:McDuff-passes} to actions on arbitrary von Neumann algebras.\n\n\\begin{theorem} \\label{thm:general-amenable-McDuff}\nLet $G$ be a countable discrete group and $M$ a von Neumann algebra with separable predual such that $M \\cong M\\bar{\\otimes} \\mathcal{R}$. \nThen for every amenable action $\\alpha: G \\curvearrowright M$, one has $\\alpha \\simeq_{\\mathrm{cc}} \\alpha \\otimes \\mathrm{id}_\\mathcal{R}$.\n\\end{theorem}\n\\begin{proof}\nChoose a faithful normal state $\\varphi$ on $M$.\nRecall that the induced faithful normal state $\\varphi^\\omega$ on $M^\\omega$ restricts to a tracial state on $M_\\omega$. 
We denote by $\\|\\cdot\\|_2=\\|\\cdot\\|_{\\varphi^\\omega}$ the induced tracial 2-norm on $M_\\omega$.\nSince we assumed $M$ to be McDuff, it follows that $M_\\omega$ is locally McDuff in the following sense:\nGiven any $\\|\\cdot\\|_2$-separable $*$-subalgebra $B\\subset M_\\omega$, there exists a unital $*$-homomorphism $\\mathcal R\\to M_\\omega\\cap B'$.\n\nNow we choose $N_1=\\mathcal{Z}(M)$ as a subalgebra of $M_\\omega$.\nWe may then choose a unital $*$-homomorphism $\\psi_1: \\mathcal{R}\\to M_\\omega$, and define $N_2$ to be the $\\|\\cdot\\|_2$-closed $*$-subalgebra generated by $N_1$ and the range of $\\alpha_{\\omega,g}\\circ\\psi_1$ for all $g\\in G$.\nAfter that, we may choose a unital $*$-homomorphism $\\psi_2: \\mathcal{R}\\to M_\\omega\\cap N_2'$, and define $N_3$ to be the $\\|\\cdot\\|_2$-closed $*$-subalgebra generated by $N_2$ and the range of $\\alpha_{\\omega,g}\\circ\\psi_2$ for all $g\\in G$.\nCarrying on inductively, we obtain an increasing sequence of $\\alpha_{\\omega}$-invariant separable von Neumann subalgebras $N_k\\subseteq M_\\omega$.\nThe $\\|\\cdot\\|_2$-closure $N$ of the union $\\bigcup_{k\\geq 1} N_k$ is then a separably acting finite von Neumann subalgebra, which is clearly McDuff and $\\alpha_{\\omega}$-invariant.\nFurthermore, $N$ contains the center of $M$ equivariantly, which implies that the action $\\alpha_\\omega$ is amenable on $N$.\n\nBy Corollary~\\ref{cor:McDuff-passes} (with $H=G$), it follows that $\\alpha_{\\omega}|_N$ is cocycle conjugate to $(\\alpha_\\omega|_N)\\otimes\\operatorname{id}_{\\mathcal R}$.\nIn particular we may find some unital $*$-homomorphism $\\mathcal R\\to (N_\\omega)^{\\alpha_\\omega}$.\nApplying a standard reindexation trick, we may use this to obtain a unital $*$-homomorphism $\\mathcal R\\to (M_\\omega)^{\\alpha_\\omega}$, which finishes the proof by 
Theorem~\\ref{thm:general-McDuff}.\n\\end{proof}\n\n\n\\addcontentsline{toc}{section}{References}\n\\bibliographystyle{gabor}\n\n\\section{Introduction}\nAutonomous, biomimetic robots can serve as tools in animal behavioural studies. Robots are used in ethology and behavioural studies to untangle the multimodal modes of interactions and communication between animals~\\cite{mondada2013general}. When they are socially integrated in a group of animals, they are capable of sending calibrated stimuli to test the animal responses in a social context~\\cite{halloy2007social}. Moreover, animal and autonomous robot interactions represent an interesting challenge for robotics. Confronting robots with animals is a difficult task because specific behavioural models have to be designed and the robots have to be socially accepted by the animals. The robots have to engage in social behaviour and somehow convince the animals that they can be social companions. In this context, the capabilities of the robots and their intelligence are put to a harsh test and often demonstrate the huge gap that still exists between autonomous robots and animals, not only in terms of motion and coping with the environment but also in terms of intelligence. It is a direct comparison of artificial and natural collective intelligence.\nMoreover, the design of such social robots is challenging as it involves both a luring capability, including appropriate robot behaviours, and the social acceptation of the robots by the animals. 
We have shown that the social integration of robots into groups of fish can be improved by refining the behavioural models used to build their controllers~\\cite{cazenille2017acceptation}. The models also have to be calibrated to accurately replicate the animal collective behaviours in complex environments~\\cite{cazenille2017acceptation}.\n\nResearch on animal and robot interactions also needs biomimetic formal models as behavioural controllers of the robots if the robots are to behave as congeners~\\cite{Bonnet2016IJARS,bonnet2017cats}. Robot controllers have to deal with a whole range of behaviours to allow them to take into account not only the other individuals but also the environment, and in particular the walls \\cite{cazenille2017acceptation,cazenille2017automated}. However, most biological collective behaviour models deal with only one sub-part of fish behaviours at a time, in unbounded environments. Controllers based on neural networks, such as multilayer perceptrons (MLP)~\\cite{king1989neural} or echo state networks (ESN)~\\cite{jaeger2007echo}, have the advantage of being easier to implement and can deal with a larger range of events.\n\n\\subsection*{Objectives}\nWe aim at building models that accurately generate zebrafish trajectories of one individual within a small group of 5 agents. The trajectories are the result of social interactions in a bounded environment. Zebrafish are a classic animal model in the research fields of genetics and neurosciences of individual and collective behaviours. Building models that correctly reproduce the individual trajectories of fish within a group is still an open question~\\cite{herbert2015turing}. We explore MLP and ESN models, optimised by evolutionary computation, to generate individual trajectories. MLP and ESN are black-box models that need little \\textit{a priori} information provided by the modeller. 
They are optimised on the experimental data and as such represent a model of the complex experimental collective trajectories. However, they are difficult to calibrate on the zebrafish experimental data due to the complexity of the fish trajectories.\nHere, we consider the design and calibration by evolutionary computation of neural network models, MLP and ESN, that can become robot controllers. We test two evolutionary optimisation methods, CMA-ES~\\cite{auger2005restart} and NSGA-III~\\cite{yuan2014improved} and show that the latter gives better results. We show that such MLP and ESN behavioural models could be useful in animal robot interactions and could make the robots accepted by the animals by reproducing their behaviours and trajectories as in~\\cite{cazenille2017acceptation}. \n\n\n\\section{Materials and Methods} \\label{sec:methods}\n\n\\subsection{Experimental set-up} \\label{sec:setup}\nWe use the same experimental procedure, fish handling, and set-up as in~\\cite{cazenille2016automated,bonnet2017cats,seguret2017loose,cazenille2017acceptation,collignon2017collective,bonnet2018closed}. The experimental arena is a square white plexiglass aquarium of $1000\\times1000\\times100$~mm. An overhead camera captures frames at 15 FPS, with a $500\\times500$px resolution, that are then tracked to find the fish positions. 
We use 10 groups of 5 adult wild-type AB zebrafish (\\textit{Danio rerio}) in 10 trials, each lasting 30 minutes, as in~\\cite{cazenille2016automated,bonnet2017cats,seguret2017loose,cazenille2017acceptation,collignon2017collective,bonnet2018closed}.\nThe experiments performed in this study were conducted under the authorisation of the Buffon Ethical Committee (registered to the French National Ethical Committee for Animal Experiments \\#40) after submission to the French state ethical board for animal experiments.\n\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=0.85\\textwidth]{workflow3.pdf}\n\\caption{Methodology workflow. An evolutionary algorithm is used to evolve the weights of an MLP (1 hidden layer, 100 neurons) or an ESN (100 reservoir neurons) neural network that serves as the controller of a simulated robot interacting with 4 fish described by the experimental data. Only the connections represented by dotted arrows are evolved (for MLP: all connections; for ESN: connections from inputs to reservoir, from reservoir to outputs and from outputs to outputs and to reservoir). The fitness function is computed through data-analysis of these simulations and represents the biomimetism metric of the simulated robot behaviour compared to the behaviour exhibited by real fish in experiments. Two evolutionary algorithms are tested: CMA-ES (mono-objective) and NSGA-III (multi-objective).}\n\\label{fig:workflow}\n\\end{center}\n\\end{figure*}\n\n\n\\subsection{Artificial neural network model} \\label{sec:model}\nBlack-box models, like artificial neural networks (ANN), can be used to model phenomena with little \\textit{a priori} information. 
Although they are not used yet to model fish collective behaviours based on experimental data, here we show that they are relevant to model zebrafish collective behaviour.\nWe propose a methodology (Fig.~\\ref{fig:workflow}) where either a multilayer perceptron (MLP)~\\cite{king1989neural} artificial neural network, or an echo state network (ESN)~\\cite{jaeger2007echo}, is calibrated through the use of evolutionary algorithms to model the behaviour of a simulated fish in a group of 5 individuals. The 4 other individuals are described by the experimental data obtained with 10 different groups of 5 fish for trials lasting 30 minutes.\n\nMLP are a type of feedforward artificial neural networks that are very popular in artificial intelligence to solve a large variety of real-world problems~\\cite{norgaard2000neural}. Their capability to universally approximate functions~\\cite{cybenko1989approximation} makes them suitable to model control and robotic problems~\\cite{norgaard2000neural}. We consider MLP with only one hidden layer of $100$ neurons (using a hyperbolic tangent function as activation function).\n\nESN are recurrent neural networks often used to model temporal processes, like time-series, or robot control tasks~\\cite{polydoros2015advantages}. 
They are sufficiently expressive to model complex non-linear temporal problems that non-recurrent MLP cannot capture.\n\nFor the considered focal agent, the neural network model takes the following parameters as input:\n (i) the \\textit{direction vector} (angle and distance) from the focal agent towards each other agent;\n (ii) the \\textit{angular distance} between the focal agent direction and each other agent direction (alignment measure);\n (iii) the \\textit{direction vector} (angle and distance) from the focal agent towards the nearest wall;\n (iv) the \\textit{instant linear speed} of the focal agent at the current time-step, and at the previous time-step;\n (v) the \\textit{instant angular speed} of the focal agent at the current time-step, and at the previous time-step.\nThis set of inputs is typically used in multi-agent modelling of animal collective behaviour~\\cite{deutsch2012collective,sumpter2012modelling}. As a first step, we consider this set of inputs sufficient to model fish behaviour with neural networks.\n\nThe neural network has two outputs corresponding to the changes in linear and angular speeds to apply from the current time-step to the next time-step. Here, we limit our approach to modelling fish trajectories resulting from social interactions in a homogeneous environment bounded by walls. 
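As an illustration, the input encoding and the one-hidden-layer MLP controller described above can be sketched in a few lines. This is a minimal sketch under our own assumptions: the function names, array shapes, and the simplified nearest-wall computation (a unit square arena) are ours, not the authors' implementation.

```python
import numpy as np

N_AGENTS = 5          # one focal (simulated robot) agent + 4 replayed fish
N_HIDDEN = 100        # hidden layer size used in the paper

def focal_inputs(pos, heading, speeds, arena=1.0):
    """Assemble the input vector for the focal agent (index 0).

    pos: (5, 2) positions; heading: (5,) orientations in radians;
    speeds: (lin, prev_lin, ang, prev_ang) of the focal agent.
    The wall feature below is a crude stand-in for the real tracking geometry.
    """
    feats = []
    for j in range(1, N_AGENTS):
        d = pos[j] - pos[0]
        feats += [np.arctan2(d[1], d[0]) - heading[0],  # (i) angle to agent j
                  np.hypot(*d),                         # (i) distance to agent j
                  heading[j] - heading[0]]              # (ii) alignment measure
    # (iii) direction vector towards the nearest of the 4 walls of a square arena
    dists = [pos[0][0], arena - pos[0][0], pos[0][1], arena - pos[0][1]]
    angles = [np.pi, 0.0, -np.pi / 2, np.pi / 2]
    k = int(np.argmin(dists))
    feats += [angles[k] - heading[0], dists[k]]
    feats += list(speeds)                               # (iv) and (v)
    return np.asarray(feats)                            # 4*3 + 2 + 4 = 18 inputs

def mlp_step(x, W1, b1, W2, b2):
    """One tanh hidden layer; outputs the change in (linear, angular) speed."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
x = focal_inputs(rng.random((N_AGENTS, 2)),
                 rng.uniform(0, 2 * np.pi, N_AGENTS),
                 (0.10, 0.09, 0.05, 0.04))
W1, b1 = rng.normal(size=(N_HIDDEN, x.size)), np.zeros(N_HIDDEN)
W2, b2 = rng.normal(size=(2, N_HIDDEN)), np.zeros(2)
delta_speeds = mlp_step(x, W1, b1, W2, b2)   # shape (2,)
```

In the evolutionary setting described below, the flattened weights $(W_1, b_1, W_2, b_2)$ would constitute the genome optimised by CMA-ES or NSGA-III.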
Very few models of fish collective behaviours take into account the presence of walls~\\cite{collignon2016stochastic,calovi2018disentangling}.\n\n\n\\subsection{Data analysis} \\label{sec:dataAnalysis}\nFor each trial $e$, in experiments and simulations, we compute several behavioural metrics using the tracked positions of agents: (i) the distribution of \\textit{inter-individual distances} between agents ($D_e$); (ii) the distributions of \\textit{instant linear speeds} ($L_e$); (iii) the distributions of \\textit{instant angular speeds} ($A_e$); (iv) the distribution of \\textit{polarisation} of the agents in the group ($P_e$); and (v) the distribution of \\textit{distances of agents to their nearest wall} ($W_e$).\nThe polarisation of an agent group measures how aligned the agents in a group are, and is defined as the absolute value of the mean agent heading: $P = \\frac{1}{N} \\bigl\\lvert \\sum^{N}_{i=1} u_i \\bigr\\rvert$ where $u_i$ is the unit direction of agent $i$ and $N=5$ is the number of agents~\\cite{tunstrom2013collective}.\n\nWe define a similarity measure (ranging from $0.0$ to $1.0$) to measure the biomimetism of the simulated robot behaviour by comparing the behaviour of the group of agents in simulations where the robot is present (experiment $e_r$: four fish and one robot) to the behaviour of the experimental fish groups (experiment $e_c$: five fish):\n\\begin{equation}\nS(e_r, e_c) = \\sqrt[5]{I(D_{e_r}, D_{e_c}) I(L_{e_r}, L_{e_c}) I(A_{e_r}, A_{e_c}) I(P_{e_r}, P_{e_c}) I(W_{e_r}, W_{e_c})}\n\\end{equation}\nThe function $I(X, Y)$ is defined as $I(X, Y) = 1 - H(X, Y)$.\nThe $H(X, Y)$ function is the Hellinger distance between two histograms~\\cite{deza2006dictionary}. 
It is defined as: $H(X, Y) = \\frac{1}{\\sqrt{2}} \\sqrt{ \\sum_{i=1}^{d} (\\sqrt{X_i} - \\sqrt{Y_i} )^2 }$ where $X_i$ and $Y_i$ are the bin frequencies.\n\nThis score measures the social acceptation of the robot by the fish, as defined in~\\cite{cazenille2017acceptation,cazenille2017automated}. Compared to the similarity measure defined in these articles, we added a measure of the polarisation of the agents. This was motivated by the tendency of our evolved neural models, without a polarisation factor, to generate agents with unnatural looping behaviour to catch up with the group.\n\n\n\\subsection{Optimisation} \\label{sec:optim}\nWe calibrate the ANN models presented here to match as closely as possible the behaviour of one fish in a group of 5 individuals in 30-minute simulations (at $15$ time-steps per second, \\textit{i.e.}\\ $27000$ steps per simulation). This is achieved by optimising the connection weights of the ANN through evolutionary computation, which iteratively performs global optimisation (inspired by biological evolution) on a defined fitness function so as to find its maxima~\\cite{salimans2017evolution,jiang2008supervised}. \n\nWe consider two optimisation methods (as in~\\cite{cazenille2017automated}), for MLP and ESN networks.\nIn the \\textbf{Sim-MonoObj-MLP} case, we use the CMA-ES~\\cite{auger2005restart} mono-objective evolutionary algorithm to optimise an MLP, with the task of maximising the $S(e_r, e_c)$ function.\nIn the \\textbf{Sim-MultiObj-MLP} and \\textbf{Sim-MultiObj-ESN} cases, we use the NSGA-III~\\cite{yuan2014improved} multi-objective algorithm with three objectives to maximise. The first objective is a performance objective corresponding to the $S(e_r, e_c)$ function. 
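The Hellinger-based fitness just described can be sketched directly from the formulas above; the function names and the dictionary layout are our own, and we assume the five behavioural histograms are given as normalised bin-frequency arrays.

```python
import numpy as np

def hellinger(x, y):
    """Hellinger distance H(X, Y) between two histograms (bin frequencies)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((np.sqrt(x) - np.sqrt(y)) ** 2)) / np.sqrt(2.0)

def similarity(robot_hists, control_hists):
    """S(e_r, e_c): geometric mean of I = 1 - H over the five features."""
    scores = [1.0 - hellinger(robot_hists[k], control_hists[k])
              for k in ("D", "L", "A", "P", "W")]
    return float(np.prod(scores) ** (1.0 / len(scores)))

# identical distributions give H = 0 for every feature, hence S = 1
h = {"D": [0.5, 0.5], "L": [0.2, 0.8], "A": [1.0, 0.0],
     "P": [0.3, 0.7], "W": [0.6, 0.4]}
print(similarity(h, h))  # → 1.0
```

Using the geometric mean means that a single feature with similarity near zero drags the whole score towards zero, so the optimiser cannot trade one behavioural feature off entirely against the others.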
We also consider two other objectives used to guide the evolutionary process: one that promotes genotypic diversity~\\cite{mouret2012encouraging} (defined by the mean Euclidean distance of the genome of an individual to the genomes of the other individuals of the current population), the other encouraging behavioural diversity (defined by the Euclidean distance between the $D_{e}$, $L_{e}$, $A_{e}$, $P_{e}$ and $W_{e}$ scores of an individual). The NSGA-III algorithm was used with a crossover probability of $0.80$ and a mutation probability of $0.20$ (we also tested this algorithm with only mutations and obtained similar results).\nThe NSGA-III algorithm~\\cite{yuan2014improved} is considered instead of the NSGA-II algorithm~\\cite{deb2002fast} employed in~\\cite{cazenille2017automated} because it is known to converge faster than NSGA-II on problems with more than two objectives~\\cite{ishibuchi2016performance}.\n\nIn both methods, we use populations of 60 individuals and 300 generations. Each case is repeated in 10 different trials.\nWe use an NSGA-III implementation based on the DEAP python library~\\cite{fortin2012deap}.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.99\\textwidth]{AllScores1.pdf}\n\\includegraphics[width=0.99\\textwidth]{AllScores2.pdf}\n\\caption{Similarity scores between the behaviour of the experimental fish groups (control) and the behaviour of the best-performing simulated individuals of the MLP models optimised by CMA-ES or NSGA-III. Results are obtained over 10 different trials (experiments for fish-only groups, and simulations for NN models). We consider five behavioural features to characterise exhibited behaviours. \\textbf{Inter-individual distances} corresponds to the similarity in distribution of inter-individual distances between all agents and measures the capabilities of the agents to aggregate. 
\\textbf{Linear and Angular speeds distributions} correspond to the distributions of linear and angular speeds of the agents. \\textbf{Polarisation} measures how aligned the agents are in the group. \\textbf{Distances to nearest wall} corresponds to the similarity in distribution of agent distances to their nearest wall, and assesses their capability to follow the walls. The \\textbf{Biomimetic score} corresponds to the geometric mean of the other scores.}\n\\label{fig:scores}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.90\\textwidth]{hists}\n\\caption{Comparison between 30-minute trials involving 5 fish (control, biological data) and simulations involving 4 fish and 1 robot, over 10 trials and across 5 behavioural features: inter-individual distances (\\textbf{A}), linear (\\textbf{B}) and angular (\\textbf{C}) speeds distributions, polarisation (\\textbf{D}), and distances to nearest wall (\\textbf{E}).}\n\\label{fig:plotsHists}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Results} \\label{sec:results}\nWe analyse the behaviour of one simulated robot in a group of 4 fish and compare it to the behaviour of fish-only groups (\\textbf{Control} case). The robot is driven by an ANN (either MLP or ESN) evolved with CMA-ES (\\textbf{Sim-MonoObj-MLP} case) or with NSGA-III (\\textbf{Sim-MultiObj-MLP} and \\textbf{Sim-MultiObj-ESN} cases). We only consider the best-evolved ANN controllers. In the simulations, the simulated robot does not influence the fish because the fish are described by their experimental data that is replayed.\n\nExamples of agent trajectories obtained in the three tested cases are found in Fig.~\\ref{fig:plotsTraces}A. In the \\textbf{Sim-MonoObj-MLP} and \\textbf{Sim-MultiObj-*} cases, they correspond to the trajectory of the simulated robot agent.\nIn all these cases, we can see that the robot follows the walls like the fish, and is often part of the fish group, as natural fish are. 
However, the robot trajectories can incorporate patterns not found in the fish trajectories. For example, small circular loops occur when the robot performs a U-turn to catch up with the fish group. This is particularly present in the \\textbf{Sim-MonoObj-MLP} case, and seldom appears in the \\textbf{Sim-MultiObj-*} cases.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.80\\textwidth]{traj}\n\\caption{Agent trajectories observed after 30-minute trials in a square ($1m$) aquarium, for the 4 considered cases: \\textbf{Control} reference experimental fish data obtained as in~\\cite{collignon2016stochastic,seguret2017loose}, \\textbf{Sim-MonoObj-MLP} MLP optimised by CMA-ES, \\textbf{Sim-MultiObj-MLP} MLP optimised by NSGA-III, \\textbf{Sim-MultiObj-ESN} ESN optimised by NSGA-III. \\textbf{A} Examples of an individual trajectory of one agent among the 5 making the group (fish or simulated robot) during 1-minute out of a 30-minute trial. \\textbf{B} Presence probability density of agents in the arena.}\n\\label{fig:plotsTraces}\n\\end{center}\n\\end{figure}\n\n\nWe compute the presence probability density of agents in the arena (Fig.~\\ref{fig:plotsTraces}B): it shows that the robot tends to follow the walls, as the fish do naturally.\n\nFor the three tested cases, we compute the statistics presented in Sec.~\\ref{sec:dataAnalysis} (Fig.~\\ref{fig:plotsHists}). The corresponding similarity scores are shown in Fig.~\\ref{fig:scores}. The results of the \\textbf{Control} case show sustained aggregative and wall-following behaviours of the fish group. Fish also seldom pass through the centre of the arena, possibly in small short-lived sub-groups. 
There is group behavioural variability, especially in aggregative tendencies (measured by inter-individual distances) and wall-following behaviour (measured by the distance to the nearest wall), because each one of the 10 groups is composed of different fish, \\textit{i.e.}\\ 50 fish in total.\n\nThe similarity scores of the \\textbf{Sim-MultiObj-*} cases are often within the variance domain of the \\textbf{Control} case, except for the inter-individual score. This suggests that groups incorporating the robot driven by an MLP evolved by NSGA-III exhibit dynamics relatively similar to those of a fish-only group, at least according to our proposed measures. However, it is still perfectible: the robot is sometimes at the tail of the group, possibly because of gaps created between the robot and the fish group by small trajectory errors (\\textit{e.g.}\\ small loops shown in robot trajectories in Fig.~\\ref{fig:plotsTraces}A).\n\nThe \\textbf{Sim-MonoObj-MLP} case sacrifices biomimetism to focus mainly on group-following behaviour: this translates into a higher inter-individual score than in the \\textbf{Sim-MultiObj-*} cases, and the robot tends to closely follow the fish group. With \\textbf{Sim-MonoObj-MLP}, the robot goes faster than the fish, and quickly goes back towards the centroid of the group if it is too far ahead of the group: this explains the large presence of loops in Fig.~\\ref{fig:plotsTraces}A. Unlike the \\textbf{Sim-MultiObj-*} cases, \\textbf{Sim-MonoObj-MLP} does not take behavioural diversity into account, but focuses on the behaviour that is easiest to find (namely, the group-following behaviour) and stays stuck in this local optimum.\n\nThere are few differences between the results of the \\textbf{Sim-MultiObj-MLP} and the \\textbf{Sim-MultiObj-ESN} cases, the latter often showing slightly lower scores than the former. 
However, the \\textbf{Sim-MultiObj-ESN} case displays a large variability of inter-individual scores, which suggests that its expressivity could be sufficient to model agents with more biomimetic behaviours if the correct connection weights were found by the optimiser.\n\n\n\n\n\\section{Discussion and Conclusion} \\label{sec:conclusion}\nWe evolved artificial neural networks (ANN) to model the behaviour of a single fish in a group of 5 individuals. This ANN controller was used to drive the behaviour of a robot agent in simulations so that it integrates into the group of fish by exhibiting biomimetic behavioural capabilities. Our methodology is similar to the calibration methodology developed in~\\cite{cazenille2017automated}, but employs artificial neural networks instead of an expert-designed behavioural model. Artificial neural networks are black-box models that require little \\textit{a priori} information about the target tasks.\n\nWe design a biomimetism score from behavioural measures to assess the biomimetism of the robot behaviour. In particular, we measure the aggregative tendencies of the agents (inter-individual distances), their disposition to follow walls, their alignment with the rest of the group (polarisation), and their distributions of linear and angular speeds.\n\nHowever, finding ANNs displaying behaviours with appropriate levels of biomimetism is a challenging issue, as fish behaviour is inherently multi-level (tail-beats as motor response vs individual trajectories vs collective dynamics), multi-modal (several kinds of behavioural patterns, and input\/output sources), context-dependent (different behaviours depending on the spatial position and proximity to other agents) and stochastic (leading to individual and collective choices and action selection)~\\cite{collignon2016stochastic,sumpter2018using}. 
More specifically, fish dynamics involve trade-offs between social tendencies (aggregation, group formation), and response to the environment (wall-following, zone occupation); they also follow distinct movement patterns that allow them to move in a polarised group and react collectively to environmental and social cues.\n\nWe show that these artificial neural models can be optimised by using evolutionary algorithms, using the biomimetism score of the robot behaviour as a fitness function. The best-performing evolved ANN controllers show competitive biomimetism scores compared to fish group behavioural variability. We demonstrate that taking into account genotypic and behavioural diversity in the optimisation process (through the use of the global multi-objective optimiser NSGA-III) improves the biomimetic scores of the evolved best-performing controllers.\nThe ANN models evolved through mono-objective optimisation tend to focus more on evolving a group-following behaviour rather than a biomimetic agent.\n\nOur approach is still perfectible; in particular, we only evolve the behaviour of a single agent in a group, rather than all agents of the group. This choice was motivated by the large increase in difficulty in evolving ANN models for the entire group, which would also involve additional behavioural trade-offs: \\textit{e.g.}\\ individual free-will and autonomous dynamics, individuals leaving or re-joining the group. However, it also means that here the fish do not react to the robot in simulations, because the fish behaviour is a replay of fish experimental trajectories recorded without a robot.\n\nAdditionally, it may be possible to improve the performance (in terms of biomimetism) of the multi-objective optimisation process by combining additional selection pressures as objectives (\\textit{i.e.}\\ not just genotypic and behavioural diversity)~\\cite{doncieux2014beyond}. 
We already include behavioural and phenotypic diversities as selection pressures to guide the optimisation process; however, taking into account phenotypic diversity can bias the optimisation algorithm towards exploration rather than exploitation, which can prevent some desired phenotypes from being considered by the optimisation algorithm. An alternative would be to use angular diversity instead~\\cite{szubert2016reducing}.\n\nThis study shows that ANNs are good candidates to model individual and collective fish behaviours, in particular in the context of social bio-hybrid systems composed of animals and robots. They can be calibrated on experimental data by evolutionary computation. This approach requires less \\textit{a priori} knowledge than equation- or agent-based modelling techniques. Although they are black-box models, they could also produce interesting results from a biological point of view. Thus, ANN collective behaviour models can be an interesting approach for designing animal and robot social interactions.\n\n\n\n\\section*{Acknowledgement}\n{\\small\nThis work was funded by EU-ICT project 'ASSISIbf', no 601074.}\n\n\n\\FloatBarrier\n\n\\section{\\texorpdfstring{$k$}{k}-Fold Matroid Union} \\label{appendix:matroid-union-fold}\n\nIn this section, we study the special case of matroid union where we take the $k$-fold union of the same matroid.\nThat is, a basis of the $k$-fold union of $\\cM = (U, \\cI)$ is the largest subset $S \\subseteq U$ which can be partitioned into $k$ disjoint independent sets $S_1, \\ldots, S_k$ of $\\cI$.\nMany of the prominent applications of matroid union fall into this special case, particularly the $k$-disjoint spanning tree problem.\nAs a result, here we show an optimized version of the algorithm presented in \\cref{sec:matroid-union} with better and explicit dependence on $k$ that works in this regime. 
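To make the definition of the $k$-fold union concrete, the following brute-force sketch (an illustration only, not the algorithm of this section) computes its rank for the graphic matroid of a tiny graph, i.e., the size of the largest edge set that can be partitioned into $k$ forests; the function names and the vertex labelling $0, \ldots, n-1$ are our own conventions:

```python
from itertools import combinations, product

def is_forest(edges, n):
    # Union-find acyclicity check on vertices 0..n-1.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:        # adding (u, v) would close a cycle
            return False
        parent[ru] = rv
    return True

def k_fold_union_rank(edges, n, k):
    # Largest |S|, S a subset of edges, partitionable into k forests.
    # Exponential-time brute force; only sensible for tiny instances.
    for size in range(len(edges), -1, -1):
        for subset in combinations(edges, size):
            for labels in product(range(k), repeat=size):
                classes = [[e for e, l in zip(subset, labels) if l == i]
                           for i in range(k)]
                if all(is_forest(c, n) for c in classes):
                    return size
    return 0
```

For instance, a triangle has rank 2 for $k = 1$ (a single forest cannot contain the cycle) but rank 3 for $k = 2$, and the six edges of $K_4$ decompose into two edge-disjoint spanning trees.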
\n\n\\matroidunionfold*\n\nNote that in the breadth-first-search and blocking-flow algorithms in \\cref{sec:matroid-union}, there is an $O(k)$ overhead where we have to spend $O(k)$ time iterating through the $k$ binary search trees in order to explore the out-neighbors of the $O(kr)$ elements.\nOur goal in this section is thus to show that it is possible to further ``sparsify'' the exchange graphs so that they contain essentially only a basis, hence reducing their size from $\\Theta(kr)$ to $O(r)$.\nWe start with a slight modification to the BFS \\cref{alg:bfs-union} which reduces the running time by a factor of $O(k)$. The idea is that if we visit an element $u$ in the BFS which does not increase the rank of the set of elements visited so far, we can skip searching out-edges from $u$. Indeed, if $(u,v)$ is an edge of the exchange graph, then there must have been some element $u'$ visited earlier in the BFS which also has the edge $(u',v)$.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{BFS in a $k$-fold union exchange graph}\n \\label{alg:bfs-union-fold}\n \\KwData{$S \\subseteq U$ with partition $S_1, \\ldots, S_k$ of independent sets and a basis $B$ of $U \\setminus S$}\n \\KwResult{The $(s, v)$-distance $d(v)$ in $H(S)$ for each $v \\in S \\cup \\{t\\}$}\n\n $\\mathsf{queue} \\gets B$ and $R \\gets \\emptyset$\\;\n $d(v) \\gets \\infty$ for each $v \\in S \\cup \\{t\\}$, and $d(v) \\gets 1$ for each $v \\in B$\\;\n $\\cT_i \\gets \\textsc{Initialize}(\\cM, S_i, S_i)$ (\\cref{thm:bst} with $\\beta = 1$)\\;\n \\While{$\\mathsf{queue} \\neq \\emptyset$} {\n $u \\gets \\mathsf{queue}.\\textsc{Pop}()$\\;\n \\If{$R + u \\in \\cI$} {\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$} {\n \\While{$v := \\cT_i.\\textsc{Find}(u) \\neq \\bot$} {\n $d(v) \\gets d(u) + 1$ and $\\mathsf{queue}.\\textsc{Push}(v)$\\;\n $\\cT_i.\\textsc{Delete}(v)$\\;\n }\n \\lIf{$S_i + u \\in \\cI$ and $d(t) = \\infty$} {\n $d(t) \\gets d(u) 
+ 1$\n }\n }\n $R \\gets R + u$\\;\n }\n }\n \\textbf{return} $d(v)$ for each $v \\in S \\cup \\{t\\}$.\n\\end{algorithm}\n\n\\begin{lemma}\n Given $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$ and a basis $B$ of $U \\setminus S$, it takes $\\tO(kr)$ time to construct the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$.\n \\label{lemma:bfs-union-fold}\n\\end{lemma}\n\n\\begin{proof}\n The algorithm is presented as \\cref{alg:bfs-union-fold}.\n It performs a BFS from a basis $B$ of the first layer and only explores out-edges from the first basis $R$ it finds.\n It takes\n (i) $\\tO(|S|)$ time to construct the $\\cT_i$'s,\n (ii) $\\tO(1)$ time to discover each of the $O(kr)$ elements, and\n (iii) an additional $\\tO(k \\cdot |R|)$ time to iterate through all $k$ binary search trees $\\cT_i$'s for each $u \\in R$.\n The total running time is thus bounded by $\\tO(kr)$.\n \n We have shown in \\cref{lemma:bfs-union} that it is feasible to replace $U \\setminus S$ with simply $B$.\n It remains to show that exploring only the out-neighbors of $u \\in R$ does not affect the correctness.\n Consider a $v \\in S \\setminus R$ (we know that $B \\subseteq R$, so it suffices to consider elements in $S$) with an out-neighbor $x \\in S_i$, i.e., $S_i - x + v \\in \\cI$.\n It then follows that $\\mathsf{rank}((S_i - x) + R_v) = \\mathsf{rank}((S_i - x) + (R_v + v)) \\geq \\mathsf{rank}(S_i)$ by \\cref{obs:exchange} and \\cref{lemma:basis-rank}, where $R_v$ is the set $R$ when $v$ is popped out of the queue (in other words, $R_v + v \\not\\in \\cI$).\n This implies that there is a $u \\in R_v$, visited before $v$, which also has the out-neighbor $x$.\n The modification is therefore correct.\n\\end{proof}\n\nOur blocking-flow algorithm for $k$-fold matroid union is presented as \\cref{alg:blocking-flow-union-fold}.\nIt is essentially a specialization of \\cref{alg:blocking-flow-union} to the case where all the $k$ matroids are the same, except that we skip exploring the 
out-neighbors of $a_{\\ell}$ and remove it directly if it is ``spanned'' by the previous layers and the set $R_{\\ell} \\subseteq L_{\\ell}$ of elements that are \\emph{not} on any augmenting path of length $d_t$.\nWith this optimization, we obtain the following lemma analogous to \\cref{lemma:blocking-flow-union}.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \\SetKwInput{KwGuarantee}{Guarantee}\n \n \\caption{Blocking flow in a $k$-fold union exchange graph}\n \\label{alg:blocking-flow-union-fold}\n \\KwData{$S \\subseteq U$ which partitions into $S_1, \\ldots, S_k$ of independent sets and a dynamic-basis data structure $\\cD$ of $U \\setminus S$}\n \\KwResult{$S^\\prime \\in \\cI_{\\text{part}} \\cap \\cI_k$ with $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$}\n \\KwGuarantee{$\\cD$ maintains a basis of $U \\setminus S^\\prime$ at the end of the algorithm}\n \n Build the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$ with \\cref{lemma:bfs-union-fold}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $B \\gets$ the basis maintained by $\\cD$ and $L_1 \\gets B$\\;\n $A_\\ell \\gets L_\\ell$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell}^{(i)} \\gets \\textsc{Initialize}(\\cM_i, S_i, Q_{S_i}, A_{\\ell} \\cap S_i)$ for each $2 \\leq \\ell < d_t$ and $1 \\leq i \\leq k$ (\\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d_t$)\\;\n $D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n $R_{\\ell} \\gets \\emptyset$ for each $2 \\leq \\ell < d_t$\\;\n $\\ell \\gets 0$ and $a_0 \\gets s$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n \\lIf{$\\ell \\geq 2$ and $\\mathsf{rank}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell} \\cup \\{a_{\\ell}\\}) = \\mathsf{rank}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell})$} {\\label{line:check-not-spanned}\n $A_{\\ell} \\gets A_{\\ell} - a_{\\ell}$ and 
\\textbf{continue}\n }\n \\lIf{$\\ell > 0$} {\n Find an $a_{\\ell + 1} := \\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(a_\\ell) \\neq \\bot$ for some $1 \\leq i \\leq k$\n }\n \\lElse {\n $a_{\\ell + 1} \\gets$ an arbitrary element in $A_1$\n }\n \\If{such an $a_{\\ell + 1}$ does not exist} {\n \\lIf{$\\ell \\geq 2$} {\n $R_{\\ell} \\gets R_{\\ell} + a_{\\ell}$ and $\\cT_{\\ell}^{(j)}.\\textsc{Delete}(a_{\\ell})$ where $a_{\\ell} \\in S_j$\n }\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\\label{line:remove}\n }\n \\lElse {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots, a_\\ell$}\n $B \\gets B - a_1$, $A_1 \\gets A_1 - a_1$, and $D_1 \\gets D_1 + a_1$\\;\n \\If{$\\cD.\\textsc{Delete}(a_1)$ returns a replacement $x$} {\n $B \\gets B + x$ and $A_1 \\gets A_1 + x$\\;\n }\n \\For{$i \\in \\{2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}^{(j)}.\\textsc{Delete}(a_i)$ and $\\cT_{i}^{(j)}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$ where $a_i \\in S_j$\\;\n }\n Augment $S$ along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$\\;\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\n\\begin{lemma}\n Given an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S)}(s, t) = d_t$ together with a data structure $\\cD$ of \\cref{thm:decremental-basis} that maintains a basis of $U \\setminus S$, it takes\n \\begin{equation}\n \\tO\\left(\\underbrace{kr}_{\\ref{item:term1}} + \\underbrace{\\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}}_{\\ref{item:term2}} + \\underbrace{\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right) \\cdot k}_{\\ref{item:term3}} + \\underbrace{\\left(kr + (|S^\\prime| - |S|)\\right) \\cdot \\frac{\\sqrt{r}}{d_t}}_{\\ref{item:term4}}\\right)\n \\label{eq:complexity}\n \\end{equation}\n time to obtain an $S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_t$, with an additional 
guarantee that $\\cD$ now maintains a basis of $U \\setminus S^\\prime$.\n \\label{lemma:blocking-flow-union-fold}\n\\end{lemma}\n\nWe need the following observation to bound the running time of \\cref{alg:blocking-flow-union-fold}.\n\n\\begin{observation}\n In \\cref{alg:blocking-flow-union-fold}, it holds that $B \\cup R_2 \\cup R_{3} \\cup \\cdots \\cup R_{d_t - 1} \\in \\cI$.\n \\label{claim:is-basis}\n\\end{observation}\n\n\n\\begin{proof}[Proof of \\cref{lemma:blocking-flow-union-fold}]\n We analyze the running time of \\cref{alg:blocking-flow-union-fold} first.\n In particular, there are four terms in \\cref{eq:complexity} which come from the following.\n \\begin{enumerate}[label=(\\roman*)]\n \\item\\label{item:term1} $\\tO(kr)$: It takes $\\tO(kr)$ time to compute the distance layers using \\cref{lemma:bfs-union-fold} and to initialize all the binary search trees $\\cT_{\\ell}^{(i)}$'s. Computing the rank of $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ also takes $\\tO(kr)$ time in total since we can pre-compute query-sets of the form $L_1 \\cup \\cdots \\cup L_{j}$ for each $j$ in $\\tO(kr)$ time, and each insertion to $R_{\\ell}$ takes $\\tO(1)$ time.\n \\item\\label{item:term2} $\\tO\\left(\\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}\\right)$: For each of the $O(|S^\\prime| - |S|)$ augmentations, it takes $\\tO(r \\cdot \\frac{d_t}{\\sqrt{r}})$ time to update the binary search trees.\n \\item\\label{item:term3} $\\tO(\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right) \\cdot k)$: The number of elements whose out-edges are explored is bounded by $O\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right)$. 
This is because for each such element $u$, either $u$ is included in an augmenting path of length $d_t$, or $u$ is removed in Line \\ref{line:remove}.\n There are $O((|S^\\prime| - |S|) \\cdot d_t)$ such $u$'s in the augmenting paths.\n For $u$ removed in Line \\ref{line:remove}, if $\\ell = 1$, then the number of such $u$'s is $O(|S^\\prime| - |S| + r)$ because there are initially $O(r)$ elements in $A_1$, and we add at most one to it every augmentation.\n If $\\ell \\geq 2$, then we insert it into $R_{\\ell}$, and by Line \\ref{line:check-not-spanned}, the rank of $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ increases after including $u$ into $R_{\\ell}$.\n By \\cref{claim:is-basis}, the number of such $u$'s is bounded by $O(r)$.\n The term then comes from spending $O(k)$ time iterating through the $k$ binary search trees for each of the $O\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right)$ elements whose out-neighbors are explored.\n \\item\\label{item:term4} $\\tO(\\left(kr + (|S^\\prime| - |S|)\\right) \\cdot \\frac{\\sqrt{r}}{d_t})$: The number of elements that are once in some $A_{\\ell}$ is bounded by $O(kr + |S^\\prime| - |S|)$. 
Initially, there are $O(kr)$ elements ($A_1$ plus all the $A_\\ell$ for $\\ell \\geq 2$), and each augmentation adds at most one element to $A_1$.\n Each of these elements is discovered by $\\cT_{\\ell}^{(i)}.\\textsc{Find}(\\cdot)$ at most once, and thus we can charge the $\\tO(\\frac{\\sqrt{r}}{d_t})$ cost to it, resulting in the fourth term of \\cref{eq:complexity}.\n \\end{enumerate}\n Note that for each element whose out-neighbors are explored, any failed attempt of $\\cT_{\\ell}^{(i)}.\\textsc{Find}(\\cdot)$ costs only $\\tO(1)$ instead of $\\tO(\\frac{\\sqrt{r}}{d_t})$ according to \\cref{thm:bst}.\n The $\\tO(\\frac{\\sqrt{r}}{d_t})$ cost of a successful search is charged to term \\ref{item:term4} instead of \\ref{item:term3}.\n \n As for correctness, it suffices to show that each of the $a_{\\ell}$ removed from $A_{\\ell}$ because it is spanned by $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ in Line \\ref{line:check-not-spanned} is not in any augmenting path of length $d_t$.\n Consider its out-neighbor $a_{\\ell + 1}$ with respect to the current $S$, and we would like to argue that $a_{\\ell + 1}$ is not on any augmenting path of length $d_t$ anymore.\n This is because we have already explored all the out-neighbors of elements in $R_{\\ell}$.\n Since $a_{\\ell} \\in \\mathsf{span}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell})$, by \\cref{lemma:basis-rank}, there must exist some $u \\in L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ with a directed edge $(u, a_{\\ell + 1})$.\n We consider two cases:\n \\begin{itemize}\n \\item $u\\in R_{\\ell}$. This means that we have already explored $a_{\\ell+1}$, as we finished exploring all out-neighbors of $u$ already.\n \\item $u\\in L_{1}\\cup \\cdots \\cup L_{\\ell-1}$. 
We know that by \\cref{lemma:monotone}, both $d_{H(S)}(s,v)$ and\n $d_{H(S)}(v,t)$ can only increase after augmentations for all elements $v$.\n Hence $a_{\\ell+1}$ cannot be part of an augmenting path of length $d_t$ anymore, since if it were, its distance to $t$ would have to be $d_t-(\\ell+1)$, but then the distance from $u$ to $t$ would be at most $d_t-\\ell$ (which is smaller than its initial distance to $t$ at the beginning of the phase).\n \\end{itemize}\n \n As a result, all of $a_{\\ell}$'s out-neighbors have either already been explored or do not belong to any augmenting path of length $d_t$.\n This implies that $a_{\\ell}$ is not on any such path either, and thus it is correct to skip and remove it from $A_{\\ell}$.\n This concludes the proof of \\cref{lemma:blocking-flow-union-fold}.\n\\end{proof}\n\n\n\\cref{thm:dynamic-matroid-union-fold} now follows from analyzing the total running time of $O(\\sqrt{\\min(n, kr)})$ runs of \\cref{lemma:blocking-flow-union-fold}.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-union-fold}]\n We initialize the dynamic-basis data structure $\\cD$ of \\cref{thm:decremental-basis} on $U$ in $\\tO(n)$ time.\n Let $p = \\min(n, kr)$ be the rank of the $k$-fold matroid union.\n Using $\\cD$, we then run $O(\\sqrt{p})$ iterations of \\cref{lemma:blocking-flow-union-fold} until $d_{H(S)}(s, t) > \\sqrt{p}$.\n Summing the first two terms of \\cref{eq:complexity} over these $O(\\sqrt{p})$ iterations gives (recall that \\cref{lemma:augmenting-path-lengths} guarantees that $\\sum_{d = 1}^{\\sqrt{p}}d \\cdot (|S_d| - |S_{d - 1}|) = \\tO(p)$)\n \\[\n \\tO\\left(kr\\sqrt{p} + \\sqrt{r} \\cdot \\sum_{d = 1}^{\\sqrt{p}}d \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) = \\tO\\left(kr\\sqrt{p}\\right)\n \\]\n since $p\\sqrt{r} \\leq kr\\sqrt{p}$.\n The third term of \\cref{eq:complexity} contributes a total running time of\n \\[\n \\tO\\left(\\left(\\sum_{d = 1}^{\\sqrt{p}}dk \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) + kr\\sqrt{p}\\right) = 
\\tO\\left(kr\\sqrt{p} + kp\\right),\n \\]\n while the fourth term of \\cref{eq:complexity} sums up to\n \\[\n \\tO\\left(\\left(\\sum_{d = 1}^{\\sqrt{p}}kr\\frac{\\sqrt{r}}{d}\\right) + kr\\sqrt{r}\\right) = \\tO\\left(kr\\sqrt{r}\\right).\n \\]\n We finish the algorithm by finding the remaining $O(\\sqrt{p})$ augmenting paths one at a time with \\cref{lemma:bfs-union-fold} in a total of $\\tO(kr\\sqrt{p})$ time.\n The $k$-fold matroid union algorithm thus indeed runs in $\\tO\\left(n + kr\\sqrt{\\min(n, kr)} + k\\min(n, kr)\\right)$ time, concluding the proof of \\cref{thm:dynamic-matroid-union-fold}.\n\\end{proof}\n\n\n\n\n\n\\section{Dynamic Oracles for Specific Matroids \\& Applications} \\label{appendix:applications}\n\n\nIn this appendix, we show how to leverage known dynamic algorithms to implement the dynamic rank oracle (\\cref{def:dyn-oracle}) efficiently for many important matroids.\nWhat we need are data structures that can maintain the rank of a set dynamically under insertions and deletions in \\emph{worst-case} update time (converting a \\emph{worst-case} data structure to \\emph{fully-persistent} can be done by the standard technique of \\cite{DriscollSST86,Dietz89}, paying an overhead of $O(\\log{n})$). Additionally, note that the data structures do not need to work against an adaptive adversary since we only ever use the \\emph{rank} of the queried sets, which is not affected by internal randomness.\n\nIn particular, for \\emph{partition, graphic, bicircular, convex transversal}, and \\emph{simple job scheduling matroids} it is possible to maintain the rank with polylog$(n)$ update-time, and for \\emph{linear matroids} in $O(n^{1.529})$ update-time.\n\nTogether with our matroid intersection (\\cref{sec:matroid-intersection}) and matroid union (\\cref{sec:matroid-union}) algorithms, this leads to a black-box approach to solving many different problems. 
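As a toy illustration of the kind of dynamic rank oracle discussed above, the following sketch maintains the rank of a partition matroid under insertions and deletions in $O(1)$ worst-case time per update; the class name and interface are our own, for illustration only:

```python
from collections import defaultdict

class PartitionRankOracle:
    # Maintains rank(S) = sum over colors c of min(x_c, d_c),
    # where x_c counts the elements of color c currently in S.
    def __init__(self, capacities):
        self.d = dict(capacities)   # color -> capacity d_c
        self.x = defaultdict(int)   # color -> current count x_c
        self.rank = 0

    def insert(self, color):
        self.x[color] += 1
        # The new element contributes to the rank only while the
        # color's quota d_c is not yet exhausted.
        if self.x[color] <= self.d.get(color, 0):
            self.rank += 1

    def delete(self, color):
        # The removed element was counted in the rank only if the
        # color was within its quota before the deletion.
        if self.x[color] <= self.d.get(color, 0):
            self.rank -= 1
        self.x[color] -= 1
```

For example, with capacities $\{$red$: 1$, blue$: 2\}$, inserting red, red, blue yields rank $\min(2,1) + \min(1,2) = 2$, and deleting one red element leaves the rank unchanged.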
In fact, we can solve matroid intersection and union on any combination of the above matroids, leading to improved or matching running times for many problems (see the introduction \\cref{sec:intro} with \\cref{tab:intro:dyn-oracle-implies-fast-algorithms,tab:intro:implications} for a more thorough discussion). For completeness, we define these problems in \\cref{appendix:problems}.\nThe same algorithms are powerful enough to also solve new problems which have not been studied before.\n\n\n\\paragraph{Example Application: Tree Scheduling (or Maximum Forest with Deadlines).} We give an example of a reasonably natural combinatorial scheduling problem, which---to our knowledge---has not been studied before. Suppose we are given a graph $G = (V,E)$ where each edge $e\\in E$ has two numbers associated with it: a release date $\\ell_e$ and a deadline $r_e$. Consider the problem where we want to pick exactly one edge per day (say, to build\/construct), with the constraint that edge $e$ can only be picked between days $\\ell_e$ and $r_e$. Now the task is to come up with a scheduling plan to build a spanning tree of the graph, if possible.\n\nThis problem is exactly a matroid intersection problem between a graphic matroid and a convex transversal matroid. Hence, by a black-box reduction, we know that we can solve this problem in $\\tO(|E|\\sqrt{|V|})$ time.\n\n\n\\subsection{Partition Matroids}\nIn a partition matroid $\\cM = (U,\\cI)$, each element $u\\in U$ is assigned a color $c_u$. We are also, for each color $c$, given a non-negative integer $d_c$, and we define a set of elements $S\\subseteq U$ to be independent if, for each color $c$, $S$ includes at most $d_c$ elements of this color.\nImplementing the dynamic oracle for the partition matroid is easy:\n\n\\begin{lemma}\nOne can maintain the rank of a partition matroid in $O(1)$ update time.\n\\end{lemma}\n\\begin{proof}\nFor each color $c$ we maintain a counter $x_c$ of how many elements we have of color $c$. 
We also maintain $r = \\sum_c \\min(x_c, d_c)$, which is the rank of the current set.\n\\end{proof}\n\n\\begin{remark}\nBipartite matching can be modeled as a matroid intersection problem of two partition matroids. So our matroid intersection algorithm together with the above lemma matches (up to poly-logarithmic factors induced by full persistence)---in a black-box fashion---the $O(|E|\\sqrt{|V|})$-time bound of the best \\emph{combinatorial} algorithm for bipartite matching \\cite{HopcroftK73}.\n\\end{remark}\n\n\\subsection{Graphic and Bicircular Matroids}\n\\label{appendix:applications-graphic}\nGiven a graph $G = (V,E)$, the \\emph{graphic} and \\emph{bicircular} matroids are matroids capturing the connectivity structure of the graph.\n\n\n\\paragraph{Graphic Matroid.}\nIn the graphic matroid $\\cM = (E,\\cI)$, a subset of edges $E'\\subseteq E$ is independent if and only if it does not contain a cycle.\nWe use the following result to implement the dynamic oracle for this matroid.\n\n\\begin{lemma}[\\cite{KapronKM13,GibbKKT15}]\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of edges $e$ into\/from $E$ in worst case $O(\\log^4{|V|})$ time and query of the connectivity between $u$ and $v$ in worst case $O(\\log{|V|}\/\\log{\\log{|V|}})$ time.\n The data structure works with high probability against an oblivious adversary.\n \\label{lemma:worst-case-dynamic-conn-appendix}\n\\end{lemma}\n\nWith a simple and standard extension, we can maintain the number of connected components as well, and hence also the rank (since $\\mathsf{rank}(E') = |V|- \\#\\text{connected components in }G[E']$).\n\n\\begin{corollary}\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of $e$ into\/from $E$ in worst-case $O(\\log^4{|V|})$ time.\n After each operation, the data structure also returns the number of connected components in $G$.\n The data structure 
works with high probability against an oblivious adversary.\n \\label{cor:worst-case-dynamic-components-appendix}\n\\end{corollary}\n\n\\begin{proof}\n We maintain the data structure $\\cC$ of \\cref{lemma:worst-case-dynamic-conn-appendix} and a counter $c := |V|$ representing the number of connected components.\n For insertion of $e = (u, v)$, we first query the connectivity of $u$ and $v$ before inserting $e$ into $\\cC$.\n If they are not connected before the insertion, decrease $c$ by one.\n For deletion of $e = (u, v)$, after deleting $e$ from $\\cC$, we check if $u$ and $v$ are still connected.\n If not, then we increase $c$ by one.\n\\end{proof}\n\n\n\\paragraph{Bicircular Matroid.}\nIn the bicircular matroid $\\cM = (E,\\cI)$, a subset of edges $E'\\subseteq E$ is independent if and only if each connected component in $G[E']$ has at most one cycle. Similar to the graphic matroid, dynamic connectivity algorithms can be used to implement the dynamic rank oracle for bicircular matroids too.\n\n\\begin{corollary}\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of $e$ into\/from $E$ in worst-case $O(\\log^4{|V|})$ time.\n After each operation, the data structure also returns the rank of $E$ in the bicircular matroid.\n The data structure works with high probability against an oblivious adversary.\n \\label{cor:worst-case-dynamic-components-appendix-bicircular}\n\\end{corollary}\n\n\\begin{proof}\nThe dynamic connectivity data structure of \n\\cite{KapronKM13,GibbKKT15} (\\cref{lemma:worst-case-dynamic-conn-appendix}) can be adapted to also keep track of the number of edges and vertices in each connected component. Using this, the data structure can, for each connected component $c$, keep track of a number $x_c$ defined as the minimum of the number of edges and the number of vertices in this component. 
Then the rank of the bicircular matroid is just the sum of the $x_c$ (since in an independent set, each component is either a tree or a tree with one extra edge). In each update, two components can merge, a component can be split into two, or the edge count of a component may simply change. \n\\end{proof}\n\n\\begin{remark}[Deterministic Dynamic Connectivity]\nThe above dynamic connectivity data structures are randomized.\nThere are also deterministic connectivity data structures, but with slightly less efficient sub-polynomial $|V|^{o(1)}$ update time~\\cite{ChuzhoyGLNPS20}.\n\\end{remark}\n\n\\subsection{Convex Transversal and Scheduling Matroids}\n\nConvex transversal and scheduling matroids are special cases of the \\emph{transversal} matroid, with applications in scheduling algorithms.\n\n\\begin{definition}[Transversal Matroid~\\cite{edmonds1965transversals}]\n A \\emph{transversal matroid} with respect to a bipartite graph $G = (L, R, E)$ is defined over the ground set $L$, where each $S \\subseteq L$ is independent if and only if there is a perfect matching in $G$ between $S$ and a subset of $R$.\n\\end{definition}\n\nA bipartite graph $G = (L, R, E)$ is \\emph{convex} if $R$ has a linear order $R = \\{r_1, r_2, \\ldots, r_n\\}$ and each $\\ell \\in L$ corresponds to an interval $1 \\leq s(\\ell) \\leq t(\\ell) \\leq n$ such that $(\\ell, r_i) \\in E$ if and only if $s(\\ell) \\leq i \\leq t(\\ell)$, i.e., the neighbors of each $\\ell$ form an interval.\n\n\\begin{definition}[Convex Transversal Matroid and Simple Job Scheduling Matroid]\n A \\emph{convex transversal matroid} is a transversal matroid with respect to a convex bipartite graph.\n A \\emph{simple job scheduling matroid} is a special case of convex transversal matroids in which $s(\\ell) = 1$ for each $\\ell \\in L$.\n\\end{definition}\n\nOne intuitive way to think about the simple job scheduling matroid is that there is a machine capable of finishing one job per day.\nThe ground set of the matroid 
consists of $n$ jobs, where the $i$-th job must be done before its deadline $d_i$.\nA subset of jobs forms an independent set if it is possible to schedule these jobs on the machine so that every job is finished before its deadline.\n\n\\begin{lemma}[Dynamic Convex Bipartite Matching \\cite{BrodalGHK07}]\n There is a data structure which, given a convex bipartite graph $G = (L, R, E)$, after $\\tO(|L|+|R|)$ initialization, maintains the size of the maximum matching of $G[A \\cup R]$ where $A \\subseteq L$ is a dynamically changing subset of $L$ that is initially empty.\n The data structure supports insertion\/deletion of an $x \\in L$ to\/from $A$ in worst-case $O(\\log^{2}(|L|+|R|))$ update time.\n\\end{lemma}\n\n\\begin{remark}\n The exact data structure presented in \\cite{BrodalGHK07} is different from the stated one.\n In particular, they support insertion\/deletion of an \\emph{unknown} job, i.e., we do not know beforehand what the starting date and deadline of the job are, nor do we know its relative position among the current set of jobs.\n As a result, they used a rebalancing-based or rebuilding-based binary search tree~\\cite{NievergeltR72,Andersson89,Andersson91}, resulting in their amortized bound.\n For our use case, all the possible jobs are known and we are just activating\/deactivating them, hence a static binary tree with a worst-case guarantee over these jobs suffices.\n\\end{remark}\n\n\n\\subsection{Linear Matroid}\nIn a linear matroid $\\cM = (U, \\cI)$, $U$ is a set of $n$ vectors (of dimension $r$) in some vector space and the notion of independence is just that of \\emph{linear independence}. 
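To make this oracle concrete, here is a minimal sketch (not one of the paper's data structures) of a static rank computation for a linear matroid. It assumes, purely for illustration, vectors over GF(2) encoded as integer bitmasks; the names `rank_gf2` and `is_independent` are ours.

```python
def rank_gf2(vectors):
    """Rank of a set of GF(2) vectors, each encoded as an int bitmask."""
    basis = {}  # leading-bit position -> basis vector with that leading bit
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v  # v extends the span; record it
                break
            v ^= basis[lead]  # eliminate the leading bit of v
    return len(basis)

def is_independent(vectors):
    """A set of vectors is linearly independent iff its rank equals its size."""
    return rank_gf2(vectors) == len(vectors)
```

For instance, `rank_gf2([0b011, 0b101, 0b110])` is 2, since the three vectors sum to zero over GF(2).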
The dynamic matrix rank maintenance algorithm of \\cite{BrandNS19} can be used without modification as the dynamic oracle.\n\n\\begin{lemma}[Dynamic Matrix Rank Maintenance~\\cite{BrandNS19}]\n There is a data structure which, given an $n \\times n$ matrix $M$, maintains the rank of $M$ under row updates in worst-case $O(n^{1.529})$ update time.\n\\end{lemma}\n\n\\subsection{Problems} \\label{appendix:problems}\nFor completeness, here we define the problems we discuss in the introduction, and explain why they reduce to matroid union or intersection.\n\n\\paragraph{$k$-Forest.} In this problem we are given a graph $G = (V,E)$ and asked to find $k$ edge-disjoint forests of the graph, of the maximum total size.\nIt can be modeled as the $k$-fold matroid union of the graphic matroid of $G$.\n\n\\paragraph{$k$-Disjoint Spanning Trees.} This problem is a special case of the above $k$-forest problem where we ask to find $k$ edge-disjoint spanning trees of the graph. Clearly, if such trees exist, the $k$-forest problem will find them.\n\n\\paragraph{$k$-Pseudoforest.} Similar to above, in this problem we are given a graph $G = (V,E)$ and asked to find $k$ edge-disjoint \\emph{pseudoforests} of the graph, of the maximum total size. A pseudoforest is an undirected graph in which every component has at most one cycle.\nThe problem can be modeled as the $k$-fold matroid union of the bicircular matroid of $G$.\n\n\\paragraph{$(f,p)$-Mixed Forest-Pseudoforest.} Again, we are given a graph $G = (V,E)$ and asked to find $f$ forests and $p$ pseudoforests (all edge-disjoint), of the maximum total size. \nThe problem can be modeled as the matroid union of $f$ graphic matroids and $p$ bicircular matroids.\n\n\\paragraph{Tree Packing.} In the tree packing problem, we are given a graph $G = (V,E)$ and are asked to find the maximum $k$ such that we can find $k$ edge-disjoint spanning trees in the graph. 
This number $k$ is sometimes called the \\emph{tree-pack-number} or \\emph{strength} of the graph. The problem can be solved with the $k$-disjoint spanning trees problem, by binary searching for $k$ in the range $[0,|E|\/(|V|-1)]$, and is an example of a \\emph{matroid packing} problem.\n\n\\paragraph{Arboricity and Pseudoarboricity.} The arboricity (respectively pseudoarboricity) of a graph $G = (V,E)$ is the least integer $k$ such that we can partition the edges into $k$ edge-disjoint forests (respectively pseudoforests). \nThis can be solved with the $k$-forest (respectively $k$-pseudoforest) problem with a binary search over $k$. It is well known that for a simple graph the (pseudo-)arboricity is at most $\\sqrt{|E|}$, so we need only search for $k$ in the range $[0,\\sqrt{|E|}]$. These problems are examples of \\emph{matroid covering} problems.\n\n\\paragraph{Shannon Switching Game.} The Shannon switching game is a game played on a graph $G = (V,E)$ between two players, ``Short'' and ``Cut''. They alternate turns with Short playing first, and all edges are initially colored white. On Short's turn, he may color an edge of the graph black. On Cut's turn, he picks a remaining non-black edge and removes it from the graph. Short wins if he connects the full graph with black edges, and Cut wins if he manages to disconnect the graph. It can be shown that Short wins if and only if there exist two disjoint spanning trees in the graph (and these two spanning trees describe a winning strategy for Short). Hence solving this game is a special case of the $k$-disjoint spanning trees problem with $k = 2$.\n\n\\paragraph{Graph $k$-Irreducibility.} \nA (multi-)graph $G = (V,E)$ is called $k$-irreducible (\\cite{Whiteley88rigidity}) if and only if $|E| = k(|V|-1)$ and for any vertex-induced nonempty, proper subgraph $G[V']$ it holds that $|E(G[V'])| < k(|V'|-1)$.\nThe motivation behind this definition comes from the rigidity of \\emph{bar-and-body frameworks}. 
In a bar-and-body framework, rigid bars are attached to rigid bodies with joints (represented by the graph $G$).\nAny stress put on a $k$-irreducible structure will then propagate to all the bars (i.e.\\ edges).\n\\cite{gabow1988forests} show how one can decide if a graph is $k$-irreducible by first determining if its edges can be partitioned into $k$ edge-disjoint trees, and then performing an additional $\\tO(k|V|)$ work.\n\n\\paragraph{Bipartite Matching.} In the bipartite matching problem, we are given a bipartite graph $G = (L\\cup R,E)$, and the goal is to find a \\emph{matching} (a set of edges which share no vertices) of maximum size. Bipartite matching can be modeled as a matroid intersection problem over two partition matroids $M_L = (E,\\cI_L)$ and $M_R = (E,\\cI_R)$. $M_L$ specifies that no two edges share the same vertex on the left side $L$ (and $M_R$ is defined similarly on the right set of vertices $R$).\n\n\\paragraph{Colorful Spanning Tree.} In this problem\\footnote{sometimes also called \\emph{rainbow spanning tree}.}, we are given a graph $G = (V,E)$ together with colors on the edges $c:E \\to \\bbZ$. We are tasked to find a spanning tree of $G$ such that no two edges in our spanning tree have the same color. This problem can be modeled by the matroid intersection of the graphic matroid of $G$ (ensuring we pick a forest),\nand a partition matroid of the coloring $c$ (ensuring that we pick no duplicate colors). We also note that this problem is more difficult than bipartite matching since any bipartite matching instance can be converted to a colorful spanning tree instance on a star-multi-graph.\n\n\\paragraph{Graphic Matroid Intersection.} In graphic matroid intersection, we are given two graphs $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ and a bijection of the edges $\\phi : E_1 \\to E_2$. The task is to find a forest in $G_1$ of the maximum size, which also maps to a forest in $G_2$. 
By definition, this is a matroid intersection problem over two graphic matroids. Again, this problem is a further generalization of the colorful spanning tree problem.\n\n\\paragraph{Convex Transversal and Simple Job Scheduling Matroid Intersection.} In these problems, we are given a set of unit-size jobs $V$, where each job $v$ has two release times $\\ell_1(v)$, $\\ell_2(v) \\ge 1$ (in simple job scheduling, $\\ell_i(v) = 1$) and two deadlines $r_1(v), r_2(v) \\le \\mu$. The task is to find a set of jobs $S$ of the maximum size such that they can be scheduled on two machines as follows: each job needs to be scheduled on both machines, and at machine $i$ it must be scheduled at a time $t \\in [\\ell_i(v), r_i(v)]$.\n\n\\paragraph{Linear Matroid Intersection.} In this problem, we are given two $n\\times r$ matrices $M_1$ and $M_2$ over some field.\nThe task is to find a set of indices $S\\subseteq \\{1,2,\\ldots,n\\}$ of maximum cardinality, such that the rows of $M_1$ and the rows of $M_2$ indexed by $S$ are simultaneously linearly independent. 
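As a concrete (toy) illustration of the feasibility condition in this problem, the following sketch checks whether a given index set is a common independent set of the two linear matroids; it assumes, for illustration only, rows over GF(2) encoded as integer bitmasks, and it is a verification routine, not an intersection algorithm.

```python
def rank_gf2(rows):
    # Gaussian elimination over GF(2); each row is an int bitmask.
    basis = {}  # leading-bit position -> basis row
    for v in rows:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def is_common_independent(indices, M1, M2):
    """Check that the rows of M1 and of M2 indexed by `indices` are each
    linearly independent, i.e., the index set is independent in both matroids."""
    return (rank_gf2([M1[i] for i in indices]) == len(indices)
            and rank_gf2([M2[i] for i in indices]) == len(indices))
```

For example, with `M1 = [0b001, 0b010, 0b011]` and `M2 = [0b001, 0b011, 0b010]`, the set `[0, 1]` is common independent, while `[0, 1, 2]` is not (the rows of `M1` are dependent).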
We note that partition, graphic, and transversal matroids are special cases of linear matroids.\n\n\n\\section{Independence-Query Matroid Intersection Algorithm} \\label{appendix:dynamic-matroid-intersection-ind}\n\nIn this section, we show that we can obtain an $\\tO(nr^{3\/4})$-time matroid intersection algorithm in the dynamic-independence-oracle model.\nThis matches the state-of-the-art traditional independence-query algorithm of Blikstad \\cite{blikstad21}.\nWe will only provide a proof sketch here because our algorithm is mostly an implementation of \\cite{blikstad21} in the new model with the help of (circuit) binary search trees.\n\nUsing the same construction as \\cref{thm:bst} and \\cref{obs:exchange}, circuit binary search trees work verbatim in the dynamic-independence-oracle model (however, co-circuit binary search trees do not).\nIn particular, \\cref{obs:exchange}\\ref{item:circuit-exchange} can be checked with a single independence query.\n\n\\begin{corollary}\n For any integer $\\beta \\geq 1$, there exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, the query-set $Q_S$ that corresponds to $S$, and $X \\subseteq S$ or $X = \\{t\\}$, initialize the data structure in $\\tO(|X|)$ time. 
The data structure also maintains $S$.\n \\item $\\textsc{Find}(y)$: Given $y \\in \\bar{S}$,\n \\begin{itemize}\n \\item if $X \\subseteq S$, then return an $x \\in X$ such that $S - x + y \\in \\cI$, or\n \\item if $X = \\{t\\}$, then return the only element $x = s$ or $x = t$ in $X$ if $S + y \\in \\cI$ and $\\bot$ otherwise.\n \\end{itemize}\n The procedure returns $\\bot$ if such an $x$ does not exist.\n The procedure takes $\\tO(\\beta)$ time if the result is not $\\bot$, and $\\tO(1)$ time otherwise.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, if $x \\not\\in \\{s, t\\}$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ in $X$ by $y$ in $O(\\log{n})$ time.\n \\item $\\textsc{Update}(\\Delta)$: Update $S$ to $S \\oplus (\\Delta \\setminus \\{s, t\\})$ in amortized $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ time.\n \\end{itemize}\n \\label{cor:bst-ind}\n\\end{corollary}\n\n\\paragraph{Framework.}\nThe algorithm of \\cite{blikstad21} consists of the following three phases.\n\n\\begin{enumerate}\n \\item First, obtain a $(1 - \\epsilon)$-approximate solution $S$ using augmenting sets in $\\tO(\\frac{n\\sqrt{r}}{\\epsilon})$ time.\n \\item Eliminate all augmenting paths in $G(S)$ of length at most $d$ using Cunningham's algorithm as implemented by \\cite{chakrabarty2019faster} in $\\tO(nd + nr\\epsilon)$ time.\n \\item Find the remaining $O(r\/d)$ augmenting paths one at a time, using $\\tO(n\\sqrt{r})$ time each.\n\\end{enumerate}\n\nWith $\\epsilon = r^{-1\/4}$ and $d = r^{3\/4}$, the total running time is $\\tO(nr^{3\/4})$.\nWe briefly sketch how to implement the above three steps in the same running time also for the dynamic-independence-oracle model.\n\nNote that the primary difficulty independence-query algorithms face is that we are only capable of checking \\cref{obs:exchange}\\ref{item:circuit-exchange} (using \\cref{cor:bst-ind}), which means that we can only 
explore the neighbors of $u \\in \\bar{S}$.\nThe aforementioned rank-query algorithms for building distance layers and finding blocking-flow style augmenting paths are thus inapplicable in the dynamic-independence-oracle model.\n\n\\paragraph{Approximation Algorithm.}\nThe $O(n\\sqrt{r}\/\\epsilon)$-query $(1-\\epsilon)$-approximation algorithm of \\cite{blikstad21} needs to first compute distance layers up to distance $O(\\frac{1}{\\epsilon})$. This is done similarly to what is sketched below under ``Eliminating Short Augmenting Paths''.\n\nOtherwise, the approximation algorithm works through a series of ``refine'' operations\n(algorithms \\texttt{RefineAB}, \\texttt{RefineBA}, and \\texttt{RefineABA} in \\cite{chakrabarty2019faster,blikstad21}) to build a partial augmenting set. In each such operation, we only need to be able to do the following for some pair of sets $(P,Q)$: start from the set $Q$ and find a maximal set $X\\subseteq P$ such that\n$Q+X$ is independent. This can be performed with a greedy algorithm in $O(|P|)$ time (and dynamic queries), given that we have already queried the set $Q$ before (which will be the case). \n\nFinally, the approximation algorithm falls back to finding a special type of augmenting paths \\emph{with respect to} the current augmenting set, in the \\texttt{RefinePath} algorithm of \\cite{blikstad21}, with $\\tO(n)$ queries for each such path. This algorithm can also be implemented in the dynamic-oracle model with the same query complexity. \\texttt{RefinePath} relies on the \\texttt{RefineAB} and \\texttt{RefineBA} algorithms (which we already covered), in addition to a binary search trick to find feasible exchange pairs. 
This binary search can be implemented with the circuit trees (\\cref{cor:bst-ind}), and it takes $\\tO(n)$ total time to build them (since we can keep track of a queried set for $S$, and then we only need to build a circuit tree statically once for each layer at a cost proportional to the size of the layer---which sums up to $\\tO(n)$).\n\n\\paragraph{Eliminating Short Augmenting Paths.}\nUsing \\cite{chakrabarty2019faster}'s implementation of Cunningham's algorithm, we can eliminate all augmenting paths of length at most $d$, thereby obtaining a $(1 - 1\/d)$-approximate solution.\nThe algorithm relies on \\cref{lemma:monotone} to ``fix'' the distance layers after each augmentation.\nInitially, all elements have distance $1$ or $2$ from $s$ depending on whether they belong to $S$ (the common independent set obtained by the above approximation algorithm) or not.\nBefore the first and after each of the remaining $O(\\epsilon r)$ augmentations, we can fix the distance layers as follows.\n\n\\begin{itemize}\n \\item For each $1 \\leq \\ell \\leq d$ and $u \\in L_{\\ell}$, if $u$ is not of distance $\\ell$ from $s$, i.e., there is no in-edge from $L_{\\ell - 1}$ to $u$ anymore, move $u$ from $L_{\\ell}$ to $L_{\\ell + 2}$. 
This check is done as follows, depending on the parity of $\\ell$.\n \\begin{itemize}\n \\item If $\\ell$ is even, then for each $v \\in L_{\\ell - 1}$, we find all the unmarked $u \\in L_{\\ell}$ that $v$ has an edge to and mark $u$.\n In the end, all the unmarked $u \\in L_{\\ell}$ do not belong to $L_{\\ell}$ and should be moved to $L_{\\ell + 2}$.\n \\item If $\\ell$ is odd, then we simply check if there is an in-edge from $L_{\\ell - 1}$ to $u$ to decide whether $u$ should be moved to another layer.\n \\end{itemize}\n\\end{itemize}\n\nBoth cases can be implemented efficiently with the circuit binary search trees of \\cref{cor:bst-ind}: each time, we spend $\\tO(1)$ time to either confirm that $u$ has distance $\\ell$ from $s$ with respect to the current $S$ (in which case it will not be moved anymore in this iteration), or increase the distance estimate of $u$.\nThe total running time is thus $\\tO(nd + nr\\epsilon)$, where $\\tO(nd)$ comes from increasing the distance estimate of each element to at most $d$, and $\\tO(nr\\epsilon)$ comes from confirming that each element belongs to its distance layer in the $O(\\epsilon r)$ iterations.\n\nA caveat here is that we need to support insertion\/deletion into the binary search trees.\nThis can be made efficient by doubling the size of a binary search tree (and re-initializing it) whenever there are not enough leaf nodes left in it.\nThe cost of re-building will be amortized to $\\tO(1)$ time per update (i.e., per movement of an element to another layer).\n\n\\paragraph{Finding a Single Augmenting Path.}\nWith the $(1 - 1\/d)$-approximate solution obtained in the first two steps, \\cite{blikstad21} then finds the remaining $O(r\/d)$ augmenting paths one at a time, using the \\emph{reachability} algorithm of \\cite{quadratic2021}.\nThe reachability algorithm roughly goes as follows.\nFirst, we initialize two circuit binary search trees (\\cref{cor:bst-ind}) over the two matroids for discovering out-edges and in-edges of 
elements in $\\bar{S}$.\nWe then repeatedly run the following three steps until either an $(s, t)$-path is found (an arbitrary $(s, t)$-path suffices since such a path can be converted into a chordless one, along which augmentation is valid, in $\\tO(r)$ time) or we conclude that $t$ is unreachable from $s$.\nWe keep track of a set of visited vertices $F$ which we know are reachable from $s$.\n\n\\begin{enumerate}[label=(\\roman*)]\n \\item Identify the set of unvisited \\emph{heavy} vertices in $\\bar{S}$ that have at least $\\sqrt{r}$ unvisited out-neighbors or have a direct edge toward $t$. This is done by sampling a set $R$ of unvisited vertices in $S$ and then computing for each vertex $u$ whether $R \\cap \\mathsf{OutNgh}(u) = \\emptyset$, or equivalently, whether $S - R + u \\in \\cI$.\n Intuitively, vertices with more out-neighbors are more likely to fail the test.\n This can be tested for a single $R$ and all $u$ in $O(n)$ time in the dynamic-independence-oracle model.\n With $O(\\log{n})$ samples, heavy vertices can be successfully identified with high probability.\n \\item Discover all the out-neighbors of each light vertex, taking a total of $\\tO(n\\sqrt{r})$ time using the circuit binary search tree over the whole run of the algorithm. (Each vertex turns from heavy to light at most once.)\n \\item Perform a reverse breadth-first search from all the heavy vertices simultaneously. 
We can assume that every vertex on the path is light (i.e., we find a ``closest'' heavy vertex reachable from $s$), and thus all its out-neighbors have already been discovered.\n That is, going backward from $S$ to $\\bar{S}$, we use the out-edges of light vertices.\n From $\\bar{S}$ to $S$, we use the circuit binary search tree.\n This takes $\\tO(n)$ time, and we either find a heavy vertex reachable from $s$ (in which case we make progress by visiting at least $\\sqrt{r}$ vertices in $S$), or we conclude that all heavy vertices are unreachable from $s$ (in which case $t$ is unreachable as well).\n\\end{enumerate}\n\nThe number of iterations is bounded by $O(\\sqrt{r})$ since we discover at least $\\sqrt{r}$ unvisited vertices in $S$ every time.\nThe total running time of finding a single augmenting path is thus $\\tO(n\\sqrt{r})$.\n\nUsing the same parameters $\\epsilon$ and $d$ as in \\cite{blikstad21} to combine the three phases, we obtain the following matroid intersection algorithm in the dynamic-independence-oracle model.\n\n\\begin{theorem}\n For two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$, it takes $\\tO(nr^{3\/4})$ time to obtain the largest $S \\in \\cI_1 \\cap \\cI_2$ with high probability in the dynamic-independence-oracle model.\n \\label{thm:dynamic-matroid-intersection-ind-main}\n\\end{theorem}\n\n\\section{Binary Search Tree} \\label{sec:bst}\n\nIn this section, we give the core data structure of our algorithms, which allows us to do binary searches and find free elements (elements $x$ such that $S + x \\in \\cI$) and exchange pairs (pairs $(x,y)$ such that $S-x+y\\in \\cI$, corresponding to edges in the exchange graph) efficiently.\nWe also support updating the common independent set $S$ that the exchange relationship is based upon.\nFor a matroid $\\cM = (U, \\cI)$, the data structure has the following guarantee ($s, t \\not\\in U$ denote the two distinguished vertices of the exchange graph as defined in 
\\cref{def:exchange-graph}).\n\n\\begin{theorem}\n For any integer $\\beta \\geq 1$, there exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, the query-set $Q_S$ that corresponds to $S$, and $X \\subseteq \\bar{S}$ (respectively, $X \\subseteq S$ or $X = \\{t\\}$), initialize the data structure in $\\tO(|X|)$ time. The data structure also maintains $S$.\n \\item $\\textsc{Find}(y)$: Given $y \\in S \\cup \\{s\\}$ (respectively, $y \\in \\bar{S}$),\n \\begin{itemize}\n \\item if $y \\in S$ (respectively, $X \\subseteq S$), then return an $x \\in X$ such that $S - y + x \\in \\cI$ (respectively, $S - x + y \\in \\cI$), or\n \\item if $y = s$ (respectively, $X = \\{t\\}$), then return an $x \\in X$ such that $S + x \\in \\cI$ (respectively, return the only element $x = s$ or $x = t$ in $X$ if $S + y \\in \\cI$ and $\\bot$ otherwise).\n \\end{itemize}\n The procedure returns $\\bot$ if such an $x$ does not exist.\n The procedure takes $\\tO(\\beta)$ time if the result is not $\\bot$, and $\\tO(1)$ time otherwise.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, if $x \\not\\in \\{s, t\\}$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ in $X$ by $y$ in $O(\\log{n})$ time.\n \\item $\\textsc{Update}(\\Delta)$: Update $S$ to $S \\oplus (\\Delta \\setminus \\{s, t\\})$ in amortized $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ time.\n \\end{itemize}\n \\label{thm:bst}\n\\end{theorem}\n\n\\begin{remark}\n To make sense of the seemingly complicated input and casework of \\cref{thm:bst}, one should focus on the first item of $\\textsc{Find}(\\cdot)$.\n We will use \\cref{thm:bst} to explore the exchange graphs, and thus we need to find an exchange element $x$ of $y$ as in the first case.\n The additional complication is included solely because we also have to deal with edges incident to $s$ or 
$t$.\n For instance, if $X \\subseteq \\bar{S}$, then $\\textsc{Find}(s)$ finds an edge in $G(S)$ directed from $s$ to $X$.\n This makes our algorithms presented later cleaner (see \\cref{alg:blocking-flow} for example).\n \\label{remark:bst}\n\\end{remark}\n\n\nSometimes, we will omit the $Q_S$ parameter of $\\textsc{Initialize}$, meaning that we explicitly build the query-set $Q_S$ from $S$ in $O(r)$ time before running the actual initialization.\nIn such cases, $|X|$ will be $\\Omega(r)$, and thus this incurs no overhead.\n\nWe will later refer to the case of $X \\subseteq \\bar{S}$ as the \\emph{co-circuit binary search tree} and the case of $X \\subseteq S$ as the \\emph{circuit binary search tree}.\nThe data structure follows from the binary search algorithm of \\cite[Lemma 10]{chakrabarty2019faster}, which is based on the following observation.\n\n\\begin{observation}[\\cite{chakrabarty2019faster}]\n \\label{obs:exchange}\n To find free elements and exchange pairs, we can use the following observations.\n\\begin{enumerate}[label=(\\roman*)]\n\\item\\label{item:free-element}\n\\emph{Free element}:\n There exists an $x \\in X$ such that $S + x \\in \\cI$ if and only if $\\mathsf{rank}(S + X) > |S|$.\n\\item\\label{item:cocircuit-exchange}\n\\emph{Co-circuit exchange}:\n Given $y\\in S$, there exists an $x \\in X$ such that $S - y + x \\in \\cI$ if and only if $\\mathsf{rank}(S - y + X) \\geq |S|$.\n \\item\\label{item:circuit-exchange}\n\\emph{Circuit exchange}:\n Given $y\\not\\in S$,\n there exists an $x \\in X$ such that $S - x + y \\in \\cI$ if and only if $\\mathsf{rank}(S - X + y) = |S-X+y|$.\n \\end{enumerate}\n\\end{observation}\n\nThe data structure of \\cref{thm:bst} is built upon the following similar data structure whose independent set $S$ is ``static'' in the sense that its update will be specified for each query.\nWe construct the data structure of \\cref{lemma:bst} first, and then use it for \\cref{thm:bst} in 
\\cref{sec:periodic-rebuild}.\n\n\\begin{lemma}\n There exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, a query-set $Q_S$ corresponding to $S$, and $X \\subseteq \\bar{S}$ (respectively, $X \\subseteq S$ or $X = \\{t\\}$), initialize the data structure in $\\tO(|X|)$ time.\n \\item $\\textsc{Find}(y, \\Delta)$: Let $S^\\prime := S \\oplus \\Delta$. It is guaranteed that $S^\\prime \\in \\cI$. Given $y \\in S^\\prime \\cup \\{s\\}$ (respectively, $y \\in \\bar{S^\\prime}$),\n \\begin{itemize}\n \\item if $y \\in S^\\prime$ (respectively, $X \\subseteq S^\\prime$), then return an $x \\in X$ such that $S^\\prime - y + x \\in \\cI$ (respectively, $S^\\prime - x + y \\in \\cI$), otherwise\n \\item if $y = s$ (respectively, $X = \\{t\\}$), then return an $x \\in X$ such that $S^\\prime + x \\in \\cI$ (respectively, return the only element $x = s$ or $x = t$ in $X$ if $S^\\prime + y \\in \\cI$ and $\\bot$ otherwise),\n \\end{itemize}\n in $\\tO(\\beta)$ time.\n The procedure returns $\\bot$ if such an $x$ does not exist.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ by $y$ in $X$ in $O(\\log{n})$ time.\n \\end{itemize} \n \\label{lemma:bst}\n\\end{lemma}\n\nWe present the co-circuit version of the data structure, as the circuit version is analogous (their difference is essentially stated in the two cases \\ref{item:cocircuit-exchange} and \\ref{item:circuit-exchange} of \\cref{obs:exchange}).\nThe data structure of \\cref{lemma:bst} is a balanced binary tree in which every node $v$ corresponds to a subset $X_v$ of $X$.\nThe subsets corresponding to nodes at the same level form a disjoint partition of $X$.\nThere are $|X|$ leaf nodes, each of which corresponds to a single-element subset of $X$.\nAn internal node $v$ with children $u_1$ and $u_2$ has 
$X_v = X_{u_1} \\sqcup X_{u_2}$.\nEach node $v$ is also associated with a query-set $Q_v := S + X_v$, for which we have prepared a dynamic oracle (see \\cref{def:dyn-oracle}).\n\n\\paragraph{Initialization.}\nIn the initialization stage, we first compute the query-set of the root node $Q_r := S + X$ from $Q_S$ in $O(|X|)$ time.\nAs long as the current node $v$ has $|X_v| > 1$, we split $X_v$ into two equally-sized subsets $X_{u_1}, X_{u_2}$, compute $Q_{u_1}, Q_{u_2}$ from $Q_v$, and then recurse on the two newly created nodes $u_1$ and $u_2$.\nComputing $Q_{u_1}$ and $Q_{u_2}$ from $Q_v$ takes $O(|X_v|)$ time in total, and thus the overall running time for initialization is $\\tO(|X|)$.\n\n\\paragraph{Query.}\nTo find an exchange element of $y \\in S^\\prime$, we perform a binary search on the tree.\nFor each node $v$, we can test whether such an element exists in $X_v$ via \\cref{obs:exchange}\\ref{item:cocircuit-exchange} by computing the query-set $Q^\\prime_v := S^\\prime - y + X_v$ from $Q_v := S + X_v$ in $1+|\\Delta|$ dynamic-oracle queries.\nIf such an element does not exist for the root $X_r = X$, then we return $\\bot$.\nOtherwise, starting from the root $v = r$, there must be a child node $u_i$ of $v$ such that an exchange element $x$ exists in $X_{u_i}$.\nWe then recurse on $u_i$ until we reach a leaf node, at which point we simply return the corresponding element.\nSimilarly, to find a free element, we compute the rank of $Q^\\prime_v := S^\\prime + X_v$ instead (see \\cref{obs:exchange}\\ref{item:free-element}).\nSince we need to compute $Q^\\prime_v$ for each of the visited nodes, the running time is $O((1 + |\\Delta|)\\log{n})$.\n\n\\paragraph{Update.}\nFor deletion of $x$, we simply walk up from the leaf node corresponding to $x$ to the root node and remove $x$ from each of the $X_v$ and $Q_v$.\nThis takes time proportional to the depth of the tree, which is $O(\\log{n})$.\nReplacement of $x$ by $y$ follows similarly from deletion of $x$: 
instead of simply removing $x$ from $X_v$ and $Q_v$, we add $y$ to them as well.\n\n\n\\begin{remark}\n Note that the above binary search tree is static in the sense that we only deactivate elements from a fixed initial set.\n We can extend this data structure to support a dynamically changing input set $X$ by using a dynamic binary search tree based on partial rebuilding~\\cite{Andersson89,Andersson91} instead.\n The amortized time complexity remains the same since rebuilding a subtree takes time proportional to the number of nodes in it.\n\\end{remark}\n\n\\subsection{Periodic Rebuilding} \\label{sec:periodic-rebuild}\n\nHere we extend \\cref{lemma:bst} to prove \\cref{thm:bst}. Recall that the difference between the two data structures is that we need to support a dynamically changing independent set in \\cref{thm:bst} (which we will need since $S$ changes after each augmentation in our matroid algorithms).\nWe achieve this by essentially employing a batch-and-rebuild approach on top of the binary search tree of \\cref{lemma:bst}.\n\n\n\\begin{proof}\n We maintain a binary search tree $\\cT := \\textsc{Initialize}(\\cM, S, Q_S, X)$ of \\cref{lemma:bst} and a collection of ``batched'' updates $\\tilde{\\Delta}$ of size at most $\\beta$.\n Throughout the updates, we also maintain the query-set corresponding to the current $S$ starting from the given $Q_S$ and the query-set corresponding to $S + X$, which initially can be computed from $Q_S$ in $O(|X|)$ time.\n Each call to $\\textsc{Find}(y)$ is delegated to $\\cT.\\textsc{Find}(y, \\tilde{\\Delta})$, which runs in $\\tO(|\\tilde{\\Delta}|) = \\tO(\\beta)$ time.\n Note that we can test whether the result of $\\cT.\\textsc{Find}(\\cdot)$ will be $\\bot$ in $\\tO(1)$ time by simply checking if \\cref{obs:exchange}\\ref{item:cocircuit-exchange} (or \\ref{item:free-element} if $y = s$) holds with the query-set corresponding to $S + X$ we maintain.\n \n Each call to $\\textsc{Delete}(x)$ and $\\textsc{Replace}(x, y)$ 
translates simply to $\\cT.\\textsc{Delete}(x)$ and $\\cT.\\textsc{Replace}(x, y)$.\n For an update to $S$ with $\\Delta$, we set $\\tilde{\\Delta} \\gets \\tilde{\\Delta} \\cup \\Delta$ and update $S$ and the query-sets accordingly.\n If the size of $\\tilde{\\Delta}$ exceeds $\\beta$, then we rebuild the binary search tree with the input common independent set being the up-to-date $S$ we maintain.\n Note that we will pass query-set $Q_S$ to $\\textsc{Initialize}$ to not pay the extra $O(r)$ factor.\n Finally, since the binary search tree is now up-to-date, we set $\\tilde{\\Delta}$ to be $\\emptyset$.\n The rebuilding takes $\\tO(|X|)$ time and is amortized to $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ per update operation with $|\\Delta|$ changes.\n\\end{proof}\n\n\\section{Dynamically Maintaining a Basis of a Matroid} \\label{sec:decremental-basis}\n\nIn this section, we construct a data structure that allows us to maintain a basis of a matroid under element deletions.\nThe data structure is used for obtaining an $\\tO_k(n + r\\sqrt{r})$ running time for matroid union, but it may be of independent interest as well.\nSpecifically, our data structure has the following guarantees.\n\n\\begin{theorem}\n For a (weighted) matroid $\\cM = (U, \\cI)$, there exists a data structure supporting the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(X)$: Given a set $X \\subseteq U$, initialize the data structure and return a (min-weight) basis $S$ of $X$ in $\\tO(n)$ time.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, remove $x$ from $X$ and return a new (min-weight) basis of $X$ in $\\tO(\\sqrt{r})$ time. 
Specifically, the new basis will contain at most one element (the replacement element of $x$) not in the old basis, and this procedure returns such an element if any.\n \\end{itemize}\n \\label{thm:decremental-basis}\n\\end{theorem}\n\n\n\n\nOur data structure for \\cref{thm:decremental-basis} will consist of two parts.\nThe first part, introduced in \\cref{sec:baseline}, is a baseline, unsparsified data structure that supports the $\\textsc{Delete}$ operation in $\\tO(\\sqrt{n})$ time, and the second one is a sparsification structure which brings the complexity down to $\\tO(\\sqrt{r})$, as presented in \\cref{sec:sparsification}.\n\nAs hinted by the statement of \\cref{thm:decremental-basis}, to make things simpler, we will assign an arbitrary but unique weight $w(x)$ to each $x \\in X$.\nNow, instead of maintaining an arbitrary basis of $X$, we maintain the \\emph{min-weight} basis instead.\nThe min-weight basis is well-known to be unique (as long as the weights are) and can be obtained greedily as shown in \\cref{alg:greedy} (see, e.g.,~\\cite{edmonds1971}).\n\n\\begin{algorithm}[!ht]\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{Greedy algorithm for computing the min-weight basis}\n \\label{alg:greedy}\n \\KwData{A set $X \\subseteq U$ of size $k$}\n \\KwResult{The min-weight basis $S$ of $X$}\n Order $X = (x_1, x_2, \\ldots, x_{k})$ so that $w(x_1) < w(x_2) < \\cdots < w(x_{k})$\\;\n $S \\gets \\emptyset$\\;\n \\For{$i \\in [1, k]$} {\n \\If{$\\mathsf{rank}(S + x_i) > \\mathsf{rank}(S)$} {\\label{line:check}\n $S \\gets S + x_i$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\nMoreover, suppose we remove $x \\in S$ from the set $X$. 
Then the new min-weight basis is either\n(i) $S - x + y$ where $y$ is the minimum weight element in $X - x$ that makes $S - x + y$ independent or\n(ii) simply $S - x$ if such a $y$ does not exist.\nIn case (i), $y$ is called the \\emph{replacement} element of $x$.\nNote that $w(y) > w(x)$ must hold.\n\nIt is useful to note that the $S$ in Line~\\ref{line:check} of \\cref{alg:greedy} is interchangeable with $X_{i - 1} = \\{x_1, \\ldots, x_{i - 1}\\}$, since\n$\\mathsf{span}(X_{i - 1}) = \\mathsf{span}(S \\cap X_{i - 1})$, so the sets $X_{i-1}$ and $S\\cap X_{i-1}$ have the same rank.\nIn other words, in each iteration $i$, we can imagine that \\cref{alg:greedy} has chosen every element before $x_i$.\n\n\\begin{observation}\n In \\cref{alg:greedy}, $x_i \\in S$ if and only if $\\mathsf{rank}(X_{i}) > \\mathsf{rank}(X_{i - 1})$.\n \\label{obs:greedy}\n\\end{observation}\n\n\n\\subsection{Baseline Data Structure} \\label{sec:baseline}\n\nOur baseline data structure supports the operations of \\cref{thm:decremental-basis}, except in time $\\tO(\\sqrt{k})$ where $k = |X|$ instead of $\\tO(\\sqrt{r})$.\n\n\\begin{lemma}\n For a weighted matroid $\\cM = (U, \\cI)$, there exists a data structure supporting the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(X)$: Given a set $X \\subseteq U$ with $|X| = k$, initialize the data structure and return the min-weight basis $S$ of $X$ in $\\tO(k)$ time.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, remove $x$ from $X$ and return the new min-weight basis of $X$ in $\\tO(\\sqrt{k})$ time. Specifically, the new basis will contain at most one element (the replacement element of $x$) not in the old basis, and this procedure returns such an element if any.\n \\item $\\textsc{Insert}(x)$: Given $x \\not\\in X$, add $x$ to $X$. 
It is guaranteed that $x$ is not in the min-weight basis of the new $X$ and the size of $X$ does not exceed $2k$.\n \\end{itemize}\n \\label{lemma:decremental-basis_baseline}\n\\end{lemma}\n\n\\paragraph{Initialization.}\nIn the initialization stage, we order $X$ by the weights and split the sequence into $\\sqrt{k}$ blocks $X_1, X_2, \\ldots, X_{\\sqrt{k}}$ from left to right, where each block has roughly the same size $O(\\sqrt{k})$.\nThat is, $X_1$ contains the $\\sqrt{k}$ elements with the smallest weights while $X_{\\sqrt{k}}$ contains elements with the largest weights.\nWe also compute the basis $S$ of $X$ from left to right as in \\cref{alg:greedy} together with $\\sqrt{k}$ query-sets $Q_1, Q_2, \\ldots, Q_{\\sqrt{k}}$, where $Q_j = \\bigcup_{i = 1}^{j}X_i$ is the union of the first $j$ blocks.\nThis takes $\\tO(k)$ time in total.\n\n\\paragraph{Deletion.}\nFor each deletion of $x$ located in the block $X_i$, we first update the query-sets $Q_i, \\ldots, Q_{\\sqrt{k}}$ by removing $x$ from them.\nLet $Q_1^\\prime, \\ldots, Q_{\\sqrt{k}}^\\prime$ denote the old query-sets before removing $x$.\nIf $x$ is not in the basis $S$ we currently maintain, then $S$ remains the min-weight basis of the new $X$ and nothing further needs to be done.\nOtherwise, we would like to find the min-weight replacement element $y$ of~$x$.\nWe know that such a $y$, if it exists, can only be located in blocks $X_i, X_{i + 1}, \\ldots, X_{\\sqrt{k}}$.\nAs such, we find the first $j \\geq i$ with $\\mathsf{rank}(Q_j) = \\mathsf{rank}(Q_j^\\prime)$ and recompute the portion of $S$ inside $X_j$.\nThis can be done by running \\cref{alg:greedy} with the initial set $S$ being $Q_{j - 1}$, the union of the first $j - 1$ blocks (see \\cref{obs:greedy}).\nThus, the deletion takes $\\tO(\\sqrt{k})$ time.\n\n\\paragraph{Insertion.}\nFor insertion of $x$, we simply add $x$ to a block where it belongs (according to $w(x)$) and then update $Q_i$'s appropriately.\nThis takes $\\tO(\\sqrt{k})$ as 
well.\n\n\\paragraph{Rebalancing.}\nTo maintain an update time of $\\tO(\\sqrt{k})$, whenever the size of a block $X_i$ grows larger than $2\\sqrt{k}$, we split it into two blocks and recompute $Q_i$ and $Q_{i+1}$.\nSimilarly, to avoid having too many blocks, whenever the size of a block $X_i$ goes below $\\sqrt{k} \/ 2$, we merge it with an adjacent block and remove $Q_i$.\nEach of the above operations takes $\\tO(\\sqrt{k})$ time, which is subsumed by the cost of an update.\n\\\\\n\n\\noindent\nWe have shown how to implement each operation of \\cref{lemma:decremental-basis_baseline} in its desired running time, and the correctness of the data structure is manifest as we always follow the greedy basis algorithm (\\cref{alg:greedy}).\n\n\\subsection{Sparsification} \\label{sec:sparsification}\n\nIn this section, we prove \\cref{thm:decremental-basis} by ``sparsifying'' the input set of the data structure for \\cref{lemma:decremental-basis_baseline} in a recursive manner, similar to what \\cite{EppsteinGIN97} did to improve \\cite{Frederickson85}'s $O(\\sqrt{|E|})$ dynamic MST algorithm to $O(\\sqrt{|V|})$.\nThe following claim asserts that such sparsification is valid.\n\n\\begin{claim}\n Let $S_X$ and $S_Y$ be the min-weight basis of $X$ and $Y$, respectively, where $w(x) < w(y)$ holds for each $x \\in X$ and $y \\in Y$.\n Then, the min-weight basis of $S_X + S_Y$ is also the min-weight basis of $X + Y$.\n \\label{claim:sparsification}\n\\end{claim}\n\n\\begin{proof}\n Consider running the greedy \\cref{alg:greedy} on the set $X + Y$ to obtain the min-weight basis $S$ of it.\n Clearly, we have $S_X \\subseteq S$ since $X$ contains the elements of smaller weights (in fact $S \\cap X = S_X$).\n Assume for contradiction that $S \\cap Y \\not\\subseteq S_Y$, i.e., there exists a $y^{*} \\in S \\cap Y$ which does not belong to $S_Y$.\n Then, it must be the case that there exists a $y \\in S_Y$ with $w(y) > w(y^{*})$, as otherwise (i.e., $y^{*}$ is ordered after 
everything in $S_Y$) by \\cref{lemma:basis-rank} the greedy algorithm stops before seeing $y^{*}$.\n We claim that the greedy algorithm on $Y$ chooses $y^{*}$ before all such $y$'s, thereby contradicting the fact that $S_Y$ is the min-weight basis of $Y$.\n This is true by the diminishing returns property\\footnote{The diminishing returns property of submodular functions states that $f(z+Y)-f(Y) \\geq f(z+X)-f(X)$ holds for each $Y \\subseteq X \\subseteq U$ and $z \\not\\in X$.} of the rank function: Let $Y^{*}$ be the elements in $Y$ with weights smaller than $w(y^{*})$.\n Since $y^{*} \\in S$, it follows that $\\mathsf{rank}(X + Y^{*} + y^{*}) > \\mathsf{rank}(X + Y^{*})$, implying $\\mathsf{rank}(Y^{*} + y^{*}) > \\mathsf{rank}(Y^{*})$ and the greedy algorithm run on $Y$ picks $y^{*}$.\n\\end{proof}\n\nWe are now ready to present our sparsification data structure.\n\n\\begin{proof}[Proof of \\cref{thm:decremental-basis}]\n Our data structure is a balanced binary tree where the leaf nodes correspond to elements in $X$ and each internal node corresponds to the set consisting of elements in leaf nodes of its subtree.\n We will abuse notation and use a node $v$ to also refer to the elements contained in the subtree rooted at $v$.\n \n We first build the binary tree top-down, starting with the root node containing $X$ and recursively splitting the current set into two subsets of roughly the same size and recursing on them.\\footnote{Note that unlike in \\cref{sec:bst}, we are not building query-sets here.}\n We then build the min-weight basis of each node in a bottom-up manner, starting from the leaves.\n For each node $v$ with children $u_1$ and $u_2$, we initialize the data structure $\\cD_v$ for \\cref{lemma:decremental-basis_baseline} with input set $S_{u_1} + S_{u_2}$, the min-weight bases of $u_1$ and $u_2$ which are obtained from $\\cD_{u_1}$ and $\\cD_{u_2}$.\n By \\cref{claim:sparsification}, the basis $\\cD_v$ maintains is the min-weight basis of $v$.\n Thus, 
by induction, the basis maintained in the root node is indeed the min-weight basis of the whole set $X$.\n The data structure for \\cref{lemma:decremental-basis_baseline} takes time near-linear in the size of the input set to construct, and since the sparsified input is a subset of elements in the subtree, the initialization takes time near-linear in the sum of sizes of the subtrees, which is $\\tO(n)$ (indeed, every element occurs in at most $\\log n$ nodes).\n \n To delete an element $x \\in X$, we first identify the leaf node $v_x$ of the binary tree which corresponds to $x$.\n Going upward, for each ancestor $p$ of $v_x$, we delete $x$ from $\\cD_p$.\n If we find a replacement element $y$ for $x$, we insert $y$ into $\\cD_{q}$, where $q$ is $p$'s parent, before proceeding to $q$ ($y$ is not in $\\cD_q$, so such an insertion is valid by \\cref{claim:sparsification}).\n Since $x$ will be removed from $\\cD_q$ shortly, the input set of $\\cD_q$ remains the union of the min-weight bases of $q$'s children.\n This takes $\\tO(\\sqrt{r})$ time since $\\cD_p$ is of size $O(r)$.\n Inductively, since the min-weight bases of the child nodes are updated, by \\cref{claim:sparsification}, the min-weight basis of each of the affected nodes (hence the min-weight basis of $X$) is correctly maintained.\n\\end{proof}\n\n\\section{Introduction} \\label{sec:intro}\n\n\n\nVia reductions to the max-flow and min-cost flow problems, exciting progress has recently been made for many graph problems such as maximum matching, vertex connectivity, directed cut, and Gomory-Hu trees \\cite{Madry13,LeeS14,Madry16,\nBrandLNPSSSW20,KathuriaLS20,LiP20,\nAbboudKT21stoc,BrandLLSS0W21-maxflow,LiNPSY21,\nAbboudKT21focs,AxiotisMV21,Cen0NPSQ21,0006PS21,GaoLP21,\nAbboudKT22,CenLP22,\nBrandGJLLPS22,\nAbboudK0PST22,ChenKLPGS22,\nCenHLP23}. \nHowever, many basic problems have seen no progress in decades. 
These problems include $k$-disjoint spanning trees \\cite{gabow1988forests,Gabow91}, colorful spanning tree \\cite{GabowS85}, arboricity \\cite{Gabow95}, spanning tree packing \\cite{gabow1988forests}, graphic matroid intersection \\cite{GabowS85,GabowX89}, and simple job scheduling matroid intersection \\cite{XuG94}. \nFor example, in the {\\em $k$-disjoint spanning trees problem} \\cite[Chapter 51]{schrijver2003}, \nwe want to find $k$ edge-disjoint spanning trees in a given input graph $G=(V,E)$. When $k=1$, this is the spanning tree problem and can be solved in linear time. \nFor higher values of $k$, the best known runtime remains the \n$\\tO(k^{3\/2}|V|\\sqrt{|E|})$ bound from around 1990 \\cite{gabow1988forests,Gabow91}\\footnote{The stated bound was due to Gabow and Westermann \\cite{gabow1988forests}. Gabow \\cite{Gabow91} announced an improved bound of $O(kn\\sqrt{m+kn\\log(n)})$ but this bound was later removed from the journal version of the paper.}, which is also the best runtime for its applications such as the Shannon Switching Game \\cite{Gardner61} and graph $k$-irreducibility \\cite{Whiteley88rigidity,graver1993combinatorial}. \nNo better runtime was known even for the special case of $k=2$.\n\n{\\em Can we improve the \nbounds of $k$-disjoint spanning trees and other problems?}\nMore importantly, since it is very unclear if these problems can be reduced to max-flow or min-cost flow\\footnote{For example, the best-known number of max-flow calls to decide whether there are $k$ disjoint spanning trees and to find the $k$ spanning trees are $O(n)$ and $O(n^2)$ respectively.}, \n{\\em is there an alternative approach to designing fast algorithms for many problems simultaneously?}\nFortunately, many of the above problems can be modeled as {\\em matroid problems}, giving hope that solving matroid problems would solve many of these problems in one shot. 
Unfortunately, this is not true in the traditional model for matroid problems---even the most efficient algorithm possible for a matroid problem does not \nnecessarily give a faster algorithm for any of its special cases. \nWe discuss this more below.\n\n\n\\paragraph{Matroid Problems.} A matroid $\\cM$ is a pair $(U, \\cI)$ where $U$ is a finite set (called the ground set) and $\\cI$ is a family of subsets of $U$ (called the independent sets) satisfying some constraints (see \\cref{def:matroid}; these constraints are not important in the following discussion).\nSince $\\cI$ can be very large, problems on matroid $\\cM$ are usually modeled with {\\em oracles} that answer {\\em queries}. Given a set $S\\subseteq U$, {\\em independence queries} ask if $S\\in \\cI$ and {\\em rank queries} ask for the value of $\\max_{I\\in \\cI, I\\subseteq S} |I|.$\nTwo textbook examples of matroid problems are {\\em matroid intersection} and {\\em union}\\footnote{\\emph{Matroid union} is also sometimes called \\emph{matroid sum}.} (e.g., \\cite[Chapters~41-42]{schrijver2003}). \nWe will also consider the special case of matroid union called {\\em $k$-fold matroid union}.\n\n\\begin{definition}[Matroid intersection and ($k$-fold) matroid union]\\label{def:intro:matroid intersection union}\n(I) In matroid intersection, we are given two matroids $(U, \\cI_1)$ and $ (U, \\cI_2)$ and want to find a set of maximum size in $\\cI_1\\cap \\cI_2.$ \n(II) In matroid union, we are given $k$ matroids $(U_1, \\cI_1), (U_2, \\cI_2), \\ldots, (U_k, \\cI_k)$, and want to find the set $S_1\\cup S_2 \\cup \\cdots \\cup S_k$, where $S_i \\in \\cI_i$ for every $i$, of maximum size. \n(III) Matroid union in the special case where $U_1=U_2=\\cdots = U_k$ and $\\cI_1=\\cI_2=\\cdots = \\cI_k$ is called $k$-fold matroid union. 
\n\n{\\em Notations:} Throughout, for problems over matroids $(U_1, \\cI_1), (U_2, \\cI_2), \\ldots, (U_k, \\cI_k)$, we define $n:=\\max_i{|U_i|}$ and $r:=\\max_{i}\\max_{S \\in \\cI_i}|S|$. \\qed\n\\end{definition}\n\n\n\nMatroid problems are powerful abstractions that can model many fundamental problems. \nFor example, the $2$-disjoint spanning tree problem can be modeled as a $2$-fold matroid union problem: \n\\begin{tcolorbox}[breakable,boxrule=0pt,frame hidden,sharp corners,enhanced,borderline west={.5 pt}{0pt}{red},colback=white]\n\\vspace{-.2cm}\nGiven a graph $G=(V, E)$, let $\\cM=(U, \\cI)$ be the corresponding {\\em graphic matroid}, i.e. $U=E$ and $S\\subseteq E$ is in $\\cI$ if it is a forest in $G$. (It is a standard fact that such an $\\cM$ is a matroid.) \nThe $2$-fold matroid union problem with input $\\cM$ is the problem of finding two forests $F_1\\subseteq E$ and $F_2\\subseteq E$ in $G$ that maximize $|F_1\\cup F_2|$. This is known as the {\\em $2$-forest} problem, which clearly generalizes $2$-disjoint spanning trees (a $2$-forest algorithm will return two disjoint spanning trees in $G$ if they exist). \n\\end{tcolorbox}\nObserve that this argument can be generalized to modeling the $k$-disjoint spanning trees problem by $k$-fold matroid union. \nOther problems that can be modeled as matroid union (respectively, matroid intersection) include arboricity, spanning tree packing, $k$-pseudoforest, and mixed $k$-forest-pseudoforest (respectively, bipartite matching and colorful spanning tree).\n\nThe above fact makes matroid problems a {\\em unified} approach for showing that many problems, including those mentioned above, can be solved in polynomial time. This is because (i) matroid union, intersection, and other matroid problems can be solved in polynomial time and a polynomial number of rank\/independence queries, and (ii) for most problems these queries can be answered in polynomial time. 
For example, when we model $k$-disjoint spanning trees as $k$-fold graphic matroid union like above, the corresponding rank query is: given a set $S$ of edges, find the size of a spanning forest of $S$. This can be solved in $O(|S|)$ time. \n\nWhen it comes to more fine-grained time complexities, such as nearly linear and sub-quadratic time, matroid algorithms in the above model are not very helpful. This is because simulating a matroid algorithm in this model causes too much runtime blow-up. \nFor example, even if we can solve $k$-fold matroid union over $(U, \\cI)$ in {\\em linear} ($O(|U|)$) rank query complexity, it does not necessarily imply that we can solve its special case of $2$-disjoint spanning tree any faster.\nThis is because each query about a set $S$ of edges needs at least $O(|S|)$ time even to specify $S$, which can be as large as the number of edges in the input graph.\nIn other words, even a matroid union algorithm with linear complexities may only imply $O(|E|^2)$ time for solving 2-disjoint spanning trees on graphs $G=(V,E)$. \nThis is also the case for other problems that can be modeled as matroid union and intersection. \nBecause of this, previous works obtained improved bounds by simulating an algorithm for matroid problems and coming up with clever ideas to speed up the simulation for each of these problems one by one (e.g., \\cite{GabowT79,RoskindT85,GabowS85,gabow1988forests,FredericksonS89,GabowX89,Gabow91,XuG94}).\nIt cannot be guaranteed that recent and future improved algorithms for matroid problems (e.g., \\cite{chakrabarty2019faster,quadratic2021,blikstad21}) would imply improved bounds for any of these problems. 
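The $O(|S|)$ cost of a single from-scratch rank query can be made concrete for the graphic matroid. The following minimal sketch (our illustration, not code from the paper; the names `DSU` and `graphic_rank` are ours) answers one traditional rank query, i.e., computes the size of a spanning forest of an edge set $S$, in time near-linear in $|S|$ using union-find:

```python
class DSU:
    """Union-find with path halving, used to compute graphic-matroid ranks."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False  # u and v already connected; edge would close a cycle
        self.parent[ru] = rv
        return True


def graphic_rank(num_vertices, edges):
    """Rank of an edge set S in the graphic matroid: the size of a spanning
    forest of S.  Each call pays time near-linear in |S|, which is exactly
    the per-query simulation cost discussed above."""
    dsu = DSU(num_vertices)
    return sum(1 for u, v in edges if dsu.union(u, v))
```

Each fresh query pays this near-linear cost again, which is precisely the blow-up that motivates the dynamic-oracle model introduced below.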
\n\n\n\\paragraph{Dynamic Oracle.}\nThe main conceptual contribution of this paper is the introduction of a new matroid model called {\\em dynamic oracle}\nand an observation that, using dynamic algorithms, \nsolving a matroid problem efficiently in our model immediately implies efficient algorithms for many problems it can model. \nIn contrast to the traditional model, where a query can be made with an arbitrary set $S$, our model only allows queries made by slightly modifying previous queries.\\footnote{The ``cost'' of a query in our dynamic model is the distance (size of the symmetric difference) from some (not necessarily the last) previous query.} More precisely, the dynamic-rank-oracle model, which is the focus of this work, is defined as follows.\\footnote{One can also define the dynamic-independence-oracle model where $\\textsc{Query}(i)$ returns only the independence of $S_i$.} \n\n\n\n\\begin{definition}[Dynamic-rank-oracle model]\n\\label{def:dyn-oracle}\n For a matroid $\\cM = (U, \\cI)$, starting from $S_0 = \\emptyset$ and $k = 0$, the algorithm can access the oracle via the following three operations.\n \\begin{itemize}[noitemsep]\n \\item $\\textsc{Insert}(v, i)$: Create a new set $S_{k + 1} := S_i \\cup \\{v\\}$ and increment $k$ by one.\n \\item $\\textsc{Delete}(v, i)$: Create a new set $S_{k + 1} := S_i \\setminus \\{v\\}$ and increment $k$ by one.\n \\item $\\textsc{Query}(i)$: Return the rank of $S_i$, i.e., the size of the largest independent subset of $S_i$.\n \\end{itemize}\n We say that a matroid algorithm takes $t$ time and dynamic-rank-query complexities if its time complexity and required number of operations are both at most $t$. 
\\qed\n \\end{definition}\n\nWe emphasize that a query can be obtained from \\emph{any} previous query, not just the last one.\n\n\\begin{table}[ht]\n \\centering\n\n{\\small\n \\begin{tabular}{c|l}\n {\\bf matroid problems} &\n \\begin{tabular}{l}\n {\\bf special cases}\n \\end{tabular}\n \\\\\\hline\n \\begin{tabular}{c}\n $k$-fold matroid union in $T(n, r, k)$ \\\\\n \\end{tabular} & \\begin{tabular}{l}\n \\\\\n $k$-forest in $\\tT(|E|, |V|, k)$ \\\\\n $k$-pseudoforest in $\\tT(|E|, |V|, k)$ \\\\\n $k$-disjoint spanning tree in $\\tT(|E|, |V|, k)$ (randomized) \\\\\n \\phantom{$k$-disjoint spanning tree in} $\\hat{T}(|E|, |V|, k)$ (deterministic) \\\\\n arboricity in $\\tT(|E|, |V|, \\sqrt{|E|})$ \\\\\n tree packing in $\\tT(|E|, |V|, |E|\/|V|)$ \\\\\n Shannon Switching Game in $\\tT(|E|, |V|, 2)$ \\\\\n graph $k$-irreducibility in $\\tT(|E|, |V|, k)$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\begin{tabular}{c}\n matroid union in $T(n, r, k)$ \\\\\n \\end{tabular}& \\begin{tabular}{l}\n \\\\\n $(f, p)$-mixed forest-pseudoforest in $\\tT(|E|, |V|, f + p)$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\begin{tabular}{c}\n matroid intersection in $T(n, r)$ \\\\\n \\end{tabular} & \\begin{tabular}{l}\n \\\\\n bipartite matching in $\\tT(|E|, |V|)$ \\\\\n colorful spanning tree in $\\tT(|E|, |V|)$\\\\\n graphic matroid intersection in $\\tT(|E|, |V|)$ \\\\\n simple job scheduling matroid intersection in $\\tT(n, r)$ \\\\\n convex transversal matroid intersection in $\\tT(|V|, \\mu)$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\end{tabular}\n}\n \\caption{Examples of implications of dynamic-rank-oracle matroid algorithms. The complexities in the first column are in terms of time and dynamic-rank-query complexities. 
Notations $n$, $r$, and $k$ are as in \\Cref{def:intro:matroid intersection union}.\n In the second column, the $\\mathrm{polylog}(n)$ factors are hidden in $\\tT$ and\n subpolynomial factors are hidden in $\\hat{T}$. Details are in \\cref{sec:packing,appendix:applications}.}\n \\label{tab:intro:dyn-oracle-implies-fast-algorithms}\n\\end{table}\n\n\n\n\n\\begin{observation}[Details in \\cref{sec:packing,appendix:applications}]\\label{thm:intro:dyn-oracle implies fast alg}\nAlgorithms for the $k$-fold matroid union, matroid union, and matroid intersection problems imply algorithms for a number of problems with time complexities shown in \\Cref{tab:intro:dyn-oracle-implies-fast-algorithms}.\n\\end{observation}\n\\begin{proof}[Proof Idea]\nAs an example, we sketch the proof that if $k$-fold matroid union can be solved in $T(n, r, k)$ then $k$-disjoint spanning trees can be found in $T(|E|, |V|, k)\\cdot\\mathrm{polylog}(|V|)$ time. \nRecall that in the traditional rank-oracle model, the algorithm can ask an oracle for the size of a spanning forest in an arbitrary set of edges $S$, which takes $O(|S|)$ time to simulate. \nIn our dynamic-rank-oracle model, an algorithm needs to modify some set $S_i$ to the desired set $S$ using the {\\sc Insert} and {\\sc Delete} operations before asking for the size of a spanning forest in $S$. We can use a spanning forest data structure to keep track of the size of the spanning forest under edge insertions and deletions. This takes $\\mathrm{polylog}(|V|)$ time per operation \\cite{KapronKM13,GibbKKT15}.\\footnote{The dynamic spanning forest algorithms of \\cite{KapronKM13,GibbKKT15} are randomized and assume the so-called oblivious adversary (as opposed to, e.g., \\cite{NanongkaiS17,Wulff-Nilsen17} which work against adaptive adversaries). This is not a problem because we only need to report the size of the spanning forest and not an actual forest. 
We can also use a deterministic algorithm from \\cite{ChuzhoyGLNPS20,NanongkaiSW17} which requires $|V|^{o(1)}$ time per operation.}\nSo, if $k$-fold matroid union can be solved in $T(n, r, k)$ time and dynamic rank queries, then $k$-disjoint spanning trees can be solved in $\\tO(T(n, r, k))=\\tO(T(|E|, |V|, k))$ time, where the equality is because the ground set size is the number of edges ($|U|=|E|$) and the rank $r$ is equal to the size of a spanning forest (thus at most $|V|$).\\footnote{Note that we also need a fully-persistent data structure \\cite{DriscollSST86,Dietz89} to maintain the whole change history in our argument.}\n\\end{proof}\n\nObserve that designing efficient algorithms in our dynamic-oracle model is {\\em not} easier than in the traditional model: a dynamic-oracle matroid algorithm can be simulated in the traditional model within the same time and query complexities. Naturally, the first challenge of the new model is this question: {\\em Can we get matroid algorithms in the new model whose performance matches that of the state-of-the-art algorithms in the traditional model?}\nMoreover, for the new model to provide a unified approach for solving many problems simultaneously, one can further ask: {\\em Would these new matroid algorithms imply state-of-the-art bounds for many problems?}\n\n\n\n\\paragraph{Algorithms.} \nIn this paper, we provide algorithms in the new model whose complexities not only match those in the traditional model but sometimes even improve them. These lead to new bounds for some problems and, for other problems, a unified algorithm whose performance matches previous results developed in various papers for various problems. 
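To make the interface of the dynamic-rank-oracle model defined above concrete, here is a toy Python sketch for the graphic matroid (our own illustration, not from the paper; the class name is hypothetical). It stores every version $S_0, S_1, \ldots$ explicitly and recomputes each rank from scratch; an efficient implementation would instead combine a dynamic spanning-forest structure with persistence, as in the proof idea above. The point here is only the interface, in particular that Insert/Delete may branch off any earlier version:

```python
class ToyDynamicRankOracle:
    """Toy dynamic-rank oracle for a graphic matroid.  Versioned sets
    S_0, S_1, ... are stored explicitly, and query() recomputes the rank
    (spanning-forest size) from scratch -- this is for illustrating the
    interface only, not the polylog-per-operation implementation."""

    def __init__(self, num_vertices):
        self.n = num_vertices
        self.versions = [frozenset()]  # S_0 = empty set

    def insert(self, edge, i):
        """Insert(v, i): create S_{k+1} := S_i + edge; return its index."""
        self.versions.append(self.versions[i] | {edge})
        return len(self.versions) - 1

    def delete(self, edge, i):
        """Delete(v, i): create S_{k+1} := S_i - edge; return its index."""
        self.versions.append(self.versions[i] - {edge})
        return len(self.versions) - 1

    def query(self, i):
        """Query(i): rank of S_i = size of a spanning forest of S_i."""
        parent = list(range(self.n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        rank = 0
        for u, v in self.versions[i]:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                rank += 1
        return rank
```

Note how a caller may return to an old version index and extend it, mirroring the fact that the cost of a query is its distance to some (not necessarily the last) previous query.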
\n\n\nMore precisely, the best time and rank-query complexities for matroid intersection on input $(U, \\cI_1)$ and $(U, \\cI_2)$ were $\\tO(n\\sqrt{r})$ by Chakrabarty, Lee, Sidford, Singla, and Wong \\cite{chakrabarty2019faster} (improving the previous $\\tO(nr)$ bound based on Cunningham's classic algorithm \\cite{cunningham1986improved,LeeSW15,nguyen2019note}).\nDue to a known reduction, this implies $\\tO(k^2\\sqrt{k}n\\sqrt{r})$ bound for $k$-fold matroid union and matroid union. \nIn this paper, we present algorithms in the dynamic-oracle model that imply improved bounds in the traditional model for $k$-fold matroid union and matroid union and match the bounds for matroid intersection.\n\nHere, we only state our dynamic-rank-query complexities as they are the main focus of this paper, and for all the applications we have, answering (and maintaining dynamically) independence queries does not seem to be significantly easier.\nNote that we also obtain dynamic-independence-query algorithms that match the state-of-the-art traditional ones~\\cite{blikstad21} which we defer to \\cref{appendix:dynamic-matroid-intersection-ind}.\n\\begin{theorem} \\label{thm:main}\n(I) $k$-fold matroid union over input $(U, \\cI)$ can be solved in $\\tO(n + kr\\sqrt{\\min(n, kr)} + k \\min(n, kr))$ time and dynamic rank queries.\n(II) Matroid union over input $(U_1, \\cI_1), (U_2, \\cI_2), \\ldots, (U_k, \\cI_k)$ can be solved in $\\tO\\left(\\left(n + r\\sqrt{r}\\right) \\cdot \\mathrm{poly}(k)\\right)$ time and dynamic rank queries.\n(III) Matroid intersection over input $(U, \\cI_1)$ and $(U, \\cI_2)$ can be solved in $\\tO(n\\sqrt{r})$ time and dynamic rank queries.\n\\end{theorem}\n\n\n\n\n\\definecolor{applegreen}{rgb}{0.55, 0.71, 0.0}\n\n\\begin{savenotes}\n\\begin{table}[ht]\n \\centering\n \\small\n \\begin{tabular}{c|l|l}\n {\\bf problems} & {\\bf our bounds} & {\\bf state-of-the-art results} \\\\\\hline\n {\\tiny (Via $k$-fold matroid union)}& & \\\\\n 
$k$-forest\\footnote{For $k$-forest and its related graph problems in the table, we can assume that $k \\leq |V|$, and thus the $k^2r$ (where $r = \\Theta(|V|)$) term in \\cref{thm:main} is dominated by the $(kr)^{3\/2}$ term.} & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n $k$-pseudoforest & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{red}\\xmark} & $|E|^{1 + o(1)}$ \\cite{ChenKLPGS22} \\\\\n $k$-disjoint spanning trees & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n arboricity\\footnote{Here we use the bound that $\\alpha \\leq \\sqrt{|E|}$~\\cite{Gabow95}.} & $\\tO(|E||V|)$ {\\color{red}\\xmark} & $\\tO(|E|^{3\/2})$ \\cite{Gabow95} \\\\\n tree packing & $\\tO(|E|^{3\/2})$ & $\\tO(|E|^{3\/2})$ \\cite{gabow1988forests} \\\\\n Shannon Switching Game & $\\tO(|E| + |V|^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n graph $k$-irreducibility & $\\tO(|E| + (k|V|)^{3\/2} + k^2|V|)$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\ & & \\\\ \\hline\n {\\tiny (Via matroid union)} & & \\\\\n $(f, p)$-mixed forest-pseudoforest & $\\tO_{f,p}(|E| + |V|\\sqrt{|V|})$ {\\color{green!50!black}\\cmark} & $\\tO((f + p)|V|\\sqrt{f|E|})$ \\cite{gabow1988forests} \\\\ & & \\\\\\hline\n {\\tiny (Via matroid intersection)} & & \\\\\n bipartite matching (combinatorial\\footnoteref{foot:combinatorial}) & $\\tO(|E|\\sqrt{|V|})$ & $O(|E|\\sqrt{|V|})$ \\cite{HopcroftK73} \\\\\n bipartite matching (continuous) & $\\tO(|E|\\sqrt{|V|})$ {\\color{red}\\xmark} & $|E|^{1 + o(1)}$ \\cite{ChenKLPGS22} \\\\\n graphic matroid intersection & $\\tO(|E|\\sqrt{|V|})$ & $\\tO(|E|\\sqrt{|V|})$ \\cite{GabowX89} \\\\\n simple job scheduling matroid intersection & $\\tO(n\\sqrt{r})$ & $\\tO(n\\sqrt{r})$ \\cite{XuG94} \\\\\n convex transversal matroid intersection & 
$\\tO(|V|\\sqrt{\\mu})$ & $\\tO(|V|\\sqrt{\\mu})$ \\cite{XuG94} \\\\\n linear matroid intersection\\footnote{Our bound is with respect to the current value of $\\omega < 2.37286$~\\cite{AlmanW21}. If $\\omega = 2$, then our bound becomes $\\tO(n^{2.5}\\sqrt{r})$.} & $\\tO(n^{2.529}\\sqrt{r})$ {\\color{red}\\xmark} & $\\tO(nr^{\\omega - 1})$ \\cite{Harvey09} \\\\\n colorful spanning tree & $\\tO(|E|\\sqrt{|V|})$ & $\\tO(|E|\\sqrt{|V|})$ \\cite{GabowS85} \\\\\n maximum forest with deadlines & $\\tO(|E|\\sqrt{|V|})$ {\\color{green!50!black}\\cmark} & (no prior work) \\\\\n \\end{tabular}\n \\caption{Implications of our matroid algorithms stated in \\cref{thm:main} in comparison with previous results. Results marked with a {\\color{green!50!black}\\cmark} improve over the previous ones. Results marked with a {\\color{red}\\xmark} are worse than the best time bounds. Other results match the currently best-known algorithms up to poly-logarithmic factors. Details can be found in \\cref{appendix:applications}.}\n \\label{tab:intro:implications}\n\\end{table}\n\\end{savenotes}\n\n\nCombined with \\Cref{thm:intro:dyn-oracle implies fast alg}, the above theorem immediately implies fast algorithms for many problems. \\Cref{tab:intro:implications} shows some of these problems. \nOne of our highlights is the improved bounds for $k$-forest and $k$-disjoint spanning trees. Even for $k=2$, there was no runtime better than the decades-old $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ runtime \\cite{gabow1988forests,Gabow91}. Our result improves this to $\\tO(|E| + (k|V|)^{3\/2})$.\nThis is nearly linear for dense input graphs and small $k$.\nThis also implies a faster runtime for, e.g., Shannon Switching Game (see \\cite{shannon1955game,gabow1988forests}) which is a special case of $2$-disjoint spanning trees. \n\n\nOur matroid intersection algorithm gives a unified approach to achieving time complexities that were previously obtained by various techniques in many papers. 
\nThus, improving this algorithm would imply breakthrough runtimes for many of these problems simultaneously.\nMoreover, in contrast to the previous approach where matroid algorithms have to be considered for each new problem one by one, our approach has the advantage that new bounds can often be derived more easily.\nFor example, say we are given a graph $G = (V, E)$, where each edge $e$ will stop functioning after day $d(e)$.\nEvery day we can ``repair'' one functioning edge. \nOur goal is to make the graph connected in the long run (an edge will work forever once it has been repaired).\nThis is the \\emph{maximum forest with deadlines} problem.\nFormally speaking, the goal is to construct a spanning tree or a forest of the maximum size at the end, by selecting an edge $e$ with $d(e) \\geq t$ in the $t^{\\scriptsize \\mbox{{\\rm th}}}$ round.\\footnote{It is tempting to believe that we can use a greedy algorithm where we always select an edge $e$ with the smallest $d(e)$ to the solution. The following example shows why this does not work: There are four vertices $V=\\{a,b,c,d\\}$. Edges $e_1$ and $e_2$ between $a$ and $b$ have $d(e_1)=1$ and $d(e_2)=3$. Edges $e_3=(b,c)$ and $e_4=(c,d)$ have $d(e_3)=d(e_4)=2$. The greedy algorithm repairs $e_1$ first and can then save only one of $e_3$ and $e_4$, whereas repairing $e_3$, $e_4$, and finally $e_2$ connects all four vertices.\n}\nOur result implies a runtime of $\\tO(|E|\\sqrt{|V|})$ for this problem. The runtime holds even for the harder case where each edge is also associated with an {\\em arrival time} (edges cannot be selected before they arrive). 
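The greedy pitfall in the footnote is easy to check by machine. The following Python sketch (our illustration, not part of the paper; the helper names and the brute-force optimum are ours) pits the smallest-deadline greedy rule against an exhaustive search on that four-vertex instance:

```python
from itertools import permutations

class DSU:
    """Union-find for cycle detection in a growing forest."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # would create a cycle
        self.p[ra] = rb
        return True

# The footnote's example: vertices a, b, c, d = 0..3; entries are (u, v, deadline).
EDGES = [(0, 1, 1), (0, 1, 3), (1, 2, 2), (2, 3, 2)]

def greedy_forest_size(edges, n=4):
    """Each day t = 1, 2, ..., repair the not-yet-expired edge with the
    smallest deadline that does not close a cycle."""
    dsu, size, t = DSU(n), 0, 1
    remaining = sorted(edges, key=lambda e: e[2])
    while True:
        pick = next((e for e in remaining
                     if e[2] >= t and dsu.find(e[0]) != dsu.find(e[1])), None)
        if pick is None:
            return size
        remaining.remove(pick)
        dsu.union(pick[0], pick[1])
        size, t = size + 1, t + 1

def optimal_forest_size(edges, n=4):
    """Brute force over all repair orders (fine for tiny instances)."""
    best = 0
    for order in permutations(edges):
        dsu, size = DSU(n), 0
        for t, e in enumerate(order, start=1):
            if e[2] >= t and dsu.union(e[0], e[1]):
                size += 1
        best = max(best, size)
    return best

# greedy saves only 2 edges here, while 3 (a spanning tree) is achievable
```

Running this confirms the footnote: the greedy rule burns day 1 on $e_1$ and loses one of $e_3, e_4$, while the optimal schedule $e_3, e_4, e_2$ repairs a full spanning tree.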
Our dynamic-oracle matroid intersection algorithm implies a runtime that matches that of the best combinatorial algorithm, due to Hopcroft and Karp \\cite{HopcroftK73}; the overall runtime for bipartite matching has since been improved via continuous optimization techniques (e.g., \\cite{Madry13,Madry16,CohenMSV17,AxiotisMV20,BrandLNPSSSW20,BrandLLSS0W21-maxflow,ChenKLPGS22}).\\footnote{\\label{foot:combinatorial}The term ``combinatorial'' is vague and varies in different contexts. Here, an algorithm is ``combinatorial'' if it does not use any of the continuous optimization techniques such as interior-point methods (IPMs).}\nThere are barriers to using continuous optimization even to solve some special cases of matroid intersection (e.g.\\ the linear program for colorful spanning tree requires exponentially many constraints). Thus, improving our matroid intersection algorithm requires either a new way to use continuous optimization techniques or a breakthrough idea in designing combinatorial algorithms that would improve the Hopcroft-Karp algorithm.\n\n\\paragraph{Lower Bounds.} \nAnother advantage of our dynamic-oracle matroid model is that proving lower bounds can be easier than in the traditional model. As a showcase, we show a simple super-linear rank-query lower bound in our new model. In fact, our argument also implies the first super-linear independence-query lower bound in the traditional model.\nThe latter result might be of independent interest. 
\n\\begin{theorem}\\label{thm:intro:lowerbound}\n(I) Any deterministic algorithm requires $\\Omega(n\\log n)$ dynamic rank queries to solve the matroid union and matroid intersection problems.\n(II) Any deterministic algorithm requires $\\Omega(n\\log n)$ (traditional) independence queries to solve the matroid union and matroid intersection problems.\n\\end{theorem}\nOur first lower bound suggests that the dynamic-oracle model might at best give nearly linear (and not linear) time algorithms.\nPrior to this paper, only a $\\log_2(3)n - o(n)$ lower bound for deterministic algorithms was known in the traditional independence-query model, due to Harvey \\cite{harvey2008matroid}.\\footnote{To the best of our knowledge, this lower bound does not hold for rank queries.}\nOur lower bound in the traditional model improves this decade-old bound. \nMoreover, showing super-linear independence-query lower bounds in the traditional model for matroid intersection is a long-standing open problem considered since 1976 (e.g. \\cite{Welsh1976matroid_book,chakrabarty2019faster}).\\footnote{As noted by Harvey, Welsh asked about the number of queries\nneeded to solve the matroid partition problem, which is equivalent to matroid union and intersection.} \nOur lower bound in the traditional model answers this open problem for deterministic algorithms. The case of randomized algorithms would be resolved too if an $\\omega(|V|)$ lower bound were proved for the communication complexity of computing connectivity of an input graph $G=(V,E)$. (It was conjectured to be $\\Omega(n\\log n)$ in \\cite{ApersEGLMN22}.)\n\n\\subsection{Techniques}\nIn this section, we briefly discuss our technical contributions. 
For a more in-depth overview of our algorithms, see the technical overview (\\cref{sec:overview}).\n\n\\paragraph{Exchange Graph \\& Blocking Flow.} Our algorithms and lower bounds are based on the notion of finding \\emph{augmenting paths} in the \\emph{exchange graph}, due to \\cite{edmonds1970submodular,Lawler75,aignerD}.\nGiven a common independent set $S\\in \\cI_1\\cap \\cI_2$, the exchange graph $G(S)$ is a directed graph where finding an $(s,t)$-path corresponds to increasing the size of $S$ by one. Starting with the work of Cunningham \\cite{cunningham1986improved}, modern matroid intersection algorithms (including the state-of-the-art \\cite{chakrabarty2019faster,blikstad21}) are based on a ``Blocking Flow'' idea inspired by the Hopcroft-Karp \\cite{HopcroftK73} bipartite matching and Dinic's \\cite{dinic1970algorithm} max-flow algorithms.\n\n\\paragraph{Matroid Intersection with Dynamic Oracle.} \nOur matroid intersection algorithms are implementations of the state-of-the-art $\\tO(n\\sqrt{r})$ rank-query algorithm of \\cite{chakrabarty2019faster} and\nthe $\\tO(n r^{3\/4})$ independence-query algorithm of \\cite{blikstad21}. Our contribution here is to show that versions of them can also be implemented in the \\emph{dynamic}-oracle model. \n\nThese algorithms explore the exchange graph efficiently in the classic non-dynamic models by performing binary searches with the oracle queries to find useful edges. However, such a binary search is very expensive in the dynamic-oracle model (as consecutive queries can differ in many elements): a single such binary search might cost up to $O(n)$ in the dynamic-oracle model instead of just $O(\\log n)$.\n\nWe therefore design a binary-tree data structure that supports finding these useful edges efficiently also in the dynamic-oracle model.\nNote that after each augmentation the underlying exchange graph changes, so the data structure must also support these dynamic updates efficiently. 
Some updates can just be propagated up the tree, while others are handled by batching them and rebuilding the tree periodically.\nWe also rely on a structural result (``Augmenting Sets'' of \\cite{chakrabarty2019faster}) stating that the updates to the exchange graph are local; this helps us reduce the number of updates we need to make to our data structure and achieve the final time bound.\n\n\\paragraph{Matroid Union with Dynamic Oracle.} \nOur $\\tO(n + r\\sqrt{r})$ matroid union algorithm with dynamic rank oracle is based on our $\\tO(n \\sqrt{r})$ matroid intersection algorithm (indeed, matroid union is a special case of matroid intersection).\nWe are able to obtain a more efficient algorithm by taking advantage of the additional structure of the exchange graph in the case of matroid union.\nThe main idea is to run the blocking flow algorithm only on a dynamically-changing subgraph of size $\\Theta(r)$, instead of on the full exchange graph of size $\\Theta(n)$.\n\nA crucial observation is that all but $O(r)$ elements will be directly connected to the source vertex~$s$. To ``sparsify'' this first layer in the breadth-first-search tree, we argue that one only needs to consider a basis of it (this basis will have size at most $r$ as opposed to $n$). 
After an augmentation, this first layer changes, so we design a \\emph{dynamic algorithm to maintain a basis of a matroid}\\footnote{For example, maintaining a spanning forest in a dynamically changing graph.}, with $\\tO(\\sqrt{r})$ update time and $O(n)$ pre-computation.\nOur algorithm to maintain this basis dynamically is inspired by the\n dynamic minimum spanning tree algorithm of \\cite{Frederickson85} ($O(\\sqrt{|E|})$ update time), in combination with the sparsification trick of \\cite{EppsteinGIN97} ($\\tO(\\sqrt{|V|})$ update time).\n We believe that our dynamic algorithm to maintain a (min-weight) basis of a matroid might also be of independent interest.\n\n\n\n\\paragraph{Lower Bounds.} Our super-linear $\\Omega(n \\log n)$ query lower bound comes from studying the communication complexity of matroid intersection. The matroids $\\cM_1$ and $\\cM_2$ are given to two parties, Alice and Bob, respectively, and they are asked to solve the matroid intersection problem using as few bits of communication as possible. We show that even if Alice and Bob know some common independent set $S\\in \\cI_1\\cap \\cI_2$, they need to communicate $\\Omega(n\\log n)$ bits to see if $S$ is optimal. Essentially, they need to determine if there is an augmenting path in the exchange graph. 
Using a class of matroids called \\emph{gammoids} (see e.g.~\\cite{perfect1968applications,mason1972class}), we show a reduction from the $(s,t)$-connectivity problem, which has a deterministic $\\Omega(n\\log n)$ communication lower bound \\cite{HajnalMT88}.\n\n\\subsection{Organization}\n\nThe rest of the paper is organized as follows.\nWe first give a high-level overview of how we obtain our algorithms in \\cref{sec:overview}.\nIn \\cref{sec:prelim}, we provide the necessary preliminaries.\nWe then construct the binary search tree data structure in \\cref{sec:bst}, followed in \\cref{sec:matroid-intersection} by how we use it to implement our $\\tO(n\\sqrt{r})$ matroid intersection algorithm in the new dynamic-rank-oracle model (the dynamic-\\emph{independence}-oracle algorithm is in \\cref{appendix:dynamic-matroid-intersection-ind}).\nIn \\cref{sec:decremental-basis} we describe our data structure to maintain a basis of a matroid dynamically, and then we use this in our $\\tO_k(n+r\\sqrt{r})$ matroid union algorithm in \\cref{sec:matroid-union} (the special case of $k$-fold matroid union is in \\cref{appendix:matroid-union-fold}).\nWe show our super-linear lower bound in \\cref{sec:lowerbound}. We end our paper with a discussion of open problems in \\cref{sec:openproblems}.\nIn \\cref{appendix:applications} we mention how to implement different matroid oracles in the dynamic-oracle model, and discuss some problems we can solve with our algorithms.\n\n\\section{Super-Linear Query Lower Bounds} \\label{sec:lowerbound}\n\nLower bounds for matroid intersection have been notoriously difficult to prove. 
The current highest lower bound is due to Harvey \\cite{harvey2008matroid}, which says that $(\\log_2 3) n - o(n)$ queries are necessary for any deterministic independence-query algorithm solving matroid intersection.\nObtaining an $\\omega(n)$ lower bound has been called a challenging open question \\cite{chakrabarty2019faster}.\n\nIn this section, we show the first super-linear query lower bound for matroid intersection, both in our \\emph{new dynamic-rank-oracle} model (\\cref{def:dyn-oracle}), and also for the \\emph{traditional independence-oracle} model, thus answering the above-mentioned open question and improving on the bounds of \\cite{harvey2008matroid}.\nWe obtain our lower bounds by studying the communication complexity of matroid intersection.\n\n\\begin{theorem}\nIf Alice is given a matroid $\\cM_1 = (U,\\cI_1)$\nand Bob a matroid $\\cM_2 = (U,\\cI_2)$, any deterministic communication protocol needs $\\Omega(n \\log n)$ bits of communication to solve the matroid intersection problem.\n\\label{thm:comm-lb}\n\\end{theorem}\n\nThe communication lower bound of \\cref{thm:comm-lb} implies a similar lower bound for the number of independence queries needed. \nWe argue that any independence-query algorithm can be simulated by Alice and Bob in the communication setting by exchanging a single bit per query asked. Whenever they want to ask an independence query ``Is $S \\in \\cI_{i}$?'', Alice or Bob will check this locally and share the answer with the other party by sending one bit of communication.\n\nUnfortunately, this argument does not extend to the traditional rank-oracle model (since each rank query can in fact reveal $\\Theta(\\log n)$ bits of information, which need to be sent to the other party). 
However, for the new \\emph{dynamic}-rank-oracle model, the $\\Omega(n\\log n)$ lower bound holds as now each new query only reveals a constant number of bits of information: either the rank remains the same, increases by one, or decreases by one (and Alice or Bob can send which is the case to the other party with a constant number of bits). Our discussion proves the following corollaries, given \\cref{thm:comm-lb}.\n\n\\begin{corollary}\nAny deterministic (traditional) independence-query algorithm solving matroid intersection requires $\\Omega(n \\log n)$ queries.\n\\end{corollary}\n\n\\begin{corollary}\nAny deterministic dynamic-rank-query algorithm solving matroid intersection requires $\\Omega(n \\log n)$ queries.\n\\end{corollary}\n\n\\begin{remark}\nWe note that our lower bounds are also valid for the \\emph{matroid union} problem, due to the standard reductions\\footnote{See \\cref{sec:reduction} for a reduction from matroid union to matroid intersection. To reduce from matroid intersection to matroid union, consider $\\cM = \\cM_1 \\vee \\cM_2^{*}$, where $\\cM_2^{*}$ is the \\emph{dual matroid} of $\\cM_2$ ($S \\subseteq U$ is independent in $\\cM_2^{*}$ if and only if $U - S$ contains a basis). 
It is easy to show that the basis $B$ of $\\cM$ in $\\cM_1$ will be of the form $B = S \\cup (U \\setminus R)$, where $S$ is the solution to the intersection between $\\cM_1$ and $\\cM_2$ and $R$ is an arbitrary basis of $\\cM_2$ that contains $S$.} between matroid intersection and union.\n\\end{remark}\n\n\\subsection{Communication Setting}\n\nWe study the following communication game,\nwhich we call \\textsf{Matroid-Intersection-with-Candidate}.\nAlice and Bob are given matroids $\\cM_1 = (U,\\cI_1)$ and $\\cM_2 = (U,\\cI_2)$, respectively.\nSuppose they are also both given a common independent set $S\\in \\cI_1\\cap \\cI_2$, and they wish to determine\nwhether $S$ is a maximum-cardinality common independent\nset.\nClearly \n\\textsf{Matroid-Intersection-with-Candidate} is an easier version of\nthe matroid intersection problem, as Alice and Bob can just ignore the candidate $S$.\n\nOur idea is that in order to solve\n\\textsf{Matroid-Intersection-with-Candidate}, Alice and Bob need to determine if there exists an \\emph{augmenting path}---that is, an $(s,t)$-path---in the \\emph{exchange graph} $G(S)$ (see \\cref{def:exchange-graph} and \\cref{lemma:augmenting-path}). It is known that \\textsf{$(s,t)$-connectivity} in a\ngraph requires $\\Omega(n\\log n)$ bits of communication (\\cref{lem:lb-conn}, \\cite{HajnalMT88}). 
Using \\emph{strict gammoids} as our matroids, we argue that we can choose exactly what the underlying exchange graph looks like, and hence that matroid intersection admits the same lower bound.\n\n\\begin{definition}[Strict Gammoid, see~\\cite{perfect1968applications,mason1972class}]\nLet $H = (V,E)$ be a directed graph and $X\\subseteq V$ a subset of vertices.\nThen $(H,X)$ defines a matroid $\\cM = (V,\\cI)$ called a \\emph{strict gammoid}, where a set of vertices $Y\\subseteq V$ is independent if and only if\n there exists a set of vertex-disjoint directed paths (some of which might just consist of single vertices) in $H$ whose starting points all belong to $X$ and whose ending points are exactly $Y$.\n\\end{definition}\n\n\\begin{claim}\n\\label{clm:lb-matroids}\nSuppose $G=(L,R,E)$ is a directed bipartite graph and $a, b \\in R$ are two distinguished vertices such that $a$ has zero in-degree and $b$ has zero out-degree.\nThen there exist two matroids $\\cM_1, \\cM_2$ over the ground set $L\\cup R$ such that\n$L$ is independent in both matroids and\nthe \\emph{exchange graph} $G(L)$ is exactly $G$ plus two extra vertices ($s$ and $t$) and two extra edges ($s\\to a$ and $b \\to t$).\n\\end{claim}\n\n\\begin{proof}\nLet $F_{1} = \\{(u,v) | (u,v)\\in E, u\\in L, v\\in R\\}$ be the directed edges from $L$ to $R$ in $G$,\nand\n$F_{2} = \\{(u,v) | (v,u)\\in E, u\\in L, v\\in R\\}$ be the (reversed) directed edges from $R$ to $L$ in $G$.\nAlso let $H_1 = (L\\cup R,F_1)$ and $H_2 = (L\\cup R,F_2)$ be the directed graphs with these edges, respectively.\n\nWe let $\\cM_1$ and $\\cM_2$ be the strict gammoids defined by $(H_1, L+a)$ and $(H_2, L+b)$, respectively. Now $L$ is independent in both matroids. It is straightforward to verify that the exchange graph $G(L)$ is exactly as described in the claim. 
We certify that this is the case for the edges defined by $\\cM_1$ ($\\cM_2$ is similar):\n\n\\begin{enumerate}\n \\item $G(L)$ will have an edge from $s$ to $a$, since\n $L+a$ is independent in $\\cM_1$. \n Additionally note that $a$ has in-degree zero in $G$ (and hence is an isolated vertex in $H_1$).\n \n \\item\n For any $x\\in L, y\\in R$, the edge $(x,y)$ exists in $G(L)$\n if and only if $L - x + y$ is independent in $\\cM_1$. By definition, this holds if and only if there exist vertex-disjoint paths in $H_1$ starting in $L$ and ending exactly at $L-x+y$, or equivalently if the edge $(x,y)$ exists in $H_1$ (indeed, all vertices in $L-x$ must be both starts and ends of paths, so the path to $y$ must have started at $x$).\n \\qedhere{}\n \n\\end{enumerate}\n\n\\end{proof}\n\nWe now proceed to reduce an instance of \\textsf{$(s,t)$-connectivity} to that of \\textsf{Matroid-Intersection-with-Candidate}, which concludes the proof of \\cref{thm:comm-lb}.\n\n\\begin{definition}[\\textsf{$(s,t)$-connectivity}]\nSuppose $G = (V, E_A \\cup E_B)$ is an undirected graph on $n=|V|$ vertices,\nwhere Alice knows edges $E_A$ and Bob knows edges $E_B$.\nThey are also both given vertices $s$ and $t$, and want to determine\nif $s$ and $t$ are connected in $G$.\n\\label{def:st-conn}\n\\end{definition}\n\n\\begin{lemma}[\\cite{HajnalMT88}]\n\\label{lem:lb-conn}\nThe deterministic communication complexity of\n\\textsf{$(s,t)$-connectivity} is $\\Omega(n\\log n)$.\n\\end{lemma}\n\n\\begin{proof}[Proof of \\cref{thm:comm-lb}.]\nWe show that an instance of \\textsf{$(s,t)$-connectivity} can be converted to an instance\nof \\textsf{Matroid-Intersection-with-Candidate} of roughly the same size.\nSuppose the symbols are defined as in \\cref{def:st-conn}.\nLet $\\bar{V} = \\{\\bar{v} : v\\in V\\}$ be a copy of $V$.\nWe construct a directed bipartite graph $G' = (V, \\bar{V}, E'_A \\cup E'_B)$\nas follows:\n\n\\begin{itemize}\n \\item $(v,\\bar{v}) \\in E'_A$ for all $v\\in V$.\n 
\\item $(\\bar{v},v) \\in E'_B$ for all $v\\in V$.\n \\item $(v,\\bar{u}), (u,\\bar{v}) \\in E'_A$ for all $\\{u,v\\}\\in E_A$.\n \\item $(\\bar{v},u), (\\bar{u},v) \\in E'_B$ for all $\\{u,v\\}\\in E_B$.\n \\item No other edges exist.\n\\end{itemize}\n\nAlice knows $E'_A$, and Bob knows $E'_B$. $G'$ has $2n$ vertices and $2n+2m$ edges.\n\nNow let $G''$ be $G'$ with all incoming edges of $\\bar{s}$ and\nall outgoing edges of $\\bar{t}$ removed, so that we can apply \\cref{clm:lb-matroids} to $G''$ with $a = \\bar{s}$ and $b = \\bar{t}$.\nSay we get matroids $\\cM_1$ and $\\cM_2$. Note that \nAlice knows $\\cM_1$ and Bob knows $\\cM_2$ by construction.\n\nNow $s$ and $t$ are connected in $G$ if and only if there is a directed $(\\bar{s},\\bar{t})$-path in $G''$. This happens if and only if $V$ is not a maximum-cardinality common independent set of $\\cM_1$ and $\\cM_2$ (i.e.\\ exactly when there is an augmenting path for $V$).\n\nHence if there is a (deterministic) communication protocol for matroid intersection using $c$ bits of communication, there is also one for \\textsf{$(s,t)$-connectivity} using $O(c)$ bits of communication. 
\\cref{lem:lb-conn} then implies the\n$\\Omega(n\\log n)$ communication lower bound for matroid intersection.\n\\end{proof}\n\n\n\\section{Matroid Intersection} \\label{sec:matroid-intersection}\n\nIn this section, we present a matroid intersection algorithm in the dynamic-rank-oracle model that matches the state-of-the-art algorithm~\\cite{chakrabarty2019faster} in the traditional model.\n\n\\begin{restatable}{theorem}{matroidintersection}\n For two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$, it takes $\\tO(n\\sqrt{r})$ time to obtain the largest $S \\in \\cI_1 \\cap \\cI_2$ in the dynamic-rank-oracle model.\n \\label{thm:dynamic-matroid-intersection-rank-main}\n\\end{restatable}\n\nThe algorithm follows the blocking-flow framework of \\cite{chakrabarty2019faster}, which is similar to the Hopcroft-Karp algorithm for bipartite matching~\\cite{HopcroftK73} and goes as follows.\nThey start with $S = \\emptyset$.\n\n\\begin{enumerate}\n \\item\\label{item:step1} First, they obtain a common independent set that is of size at least $(1 - \\epsilon)r$ by eliminating all augmenting paths of length $O(1\/\\epsilon)$. 
In each of the $O(1\/\\epsilon)$ iterations, they first compute the distance layers of $G(S)$, along which they find a maximal set of compatible shortest augmenting paths using an approach similar to a depth-first search from $s$.\n Whenever an element has no out-edge (with respect to the current common independent set) to the next layer, they argue that it can be safely removed as it will not be on a shortest augmenting path anymore in this iteration.\n Augmenting along these augmenting paths increases the $(s, t)$-distance of $G(S)$ by at least one.\n \\item\\label{item:step2} With the current solution, which is only an $\\epsilon$ fraction away from optimal, they find the remaining $O(\\epsilon r)$ augmenting paths one at a time.\n\\end{enumerate}\n\nA proper choice of $\\epsilon$ (here $\\epsilon = 1\/\\sqrt{r}$) balances the cost of the two steps and yields their algorithm.\n\n\\subsection{Building Distance Layers}\n\nBuilding distance layers and finding a single augmenting path in Step~\\ref{item:step2} is immediate by replacing the binary searches in \\cite[Algorithm 4]{chakrabarty2019faster} with the binary search trees of \\cref{thm:bst}.\n\n\\begin{lemma}\n It takes $\\tO(n)$ time to compute the $(s, u)$-distance for each $u \\in U$ and find the shortest $(s, t)$-path in $G(S)$ or determine that $t$ is unreachable from $s$.\n \\label{lemma:bfs}\n\\end{lemma}\n\n\\begin{proof}\n First, we build two binary search trees via \\cref{thm:bst} with $\\beta = 1$: a circuit binary search tree ${\\mathcal{T}}_1 := \\textsc{Initialize}(\\cM_1, S, X_1)$ where $X_1 = S$ for the first matroid, and a co-circuit binary search tree ${\\mathcal{T}}_2 := \\textsc{Initialize}(\\cM_2, S, X_2)$ where $X_2 = \\bar{S}$ for the second matroid. 
Initializing these takes $\\tO(n)$ time.\n These two binary search trees allow us to explore the exchange graph efficiently.\n \n Then we run the usual BFS algorithm from the source $s$ (or equivalently, all $u \\in \\bar{S}$ with $S + u \\in \\cI_1$).\n For each visited element $u$, if $u \\in S$, then we repeatedly find $x \\in X_2$ such that $S - u + x \\in \\cI_2$ using ${\\mathcal{T}}_2.\\textsc{Find}(u)$, mark $x$ as visited, and remove $x$ from $X_2$ via ${\\mathcal{T}}_2.\\textsc{Delete}(x)$ (until $\\bot$ is returned).\n Similarly, for $u \\in \\bar{S}$, we find $x \\in X_1$ with $S - x + u \\in \\cI_1$, mark $x$ as visited, and remove $x$ from $X_1$ using ${\\mathcal{T}}_1$.\n This explores all the unvisited out-neighbors of $u$ in $G(S)$.\n Since each element will be visited at most once, the total running time is $\\tO(n)$.\n\\end{proof}\n\n\\subsection{Blocking Flow}\n\nIn this section, we prove the following lemma regarding a single phase of blocking-flow computation.\n\n\\begin{lemma}\n Given an $S \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S)}(s, t) = d$, it takes $\\tO\\left(n + \\frac{n\\sqrt{r}}{d} + \\frac{(|S^\\prime| - |S|) \\cdot nd}{\\sqrt{r}}\\right)$ time to obtain an $S^\\prime \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$.\n \\label{lemma:blocking-flow}\n\\end{lemma}\n\nBefore proceeding to prove \\cref{lemma:blocking-flow}, we first use it to finish our matroid intersection algorithm.\nLike the Hopcroft-Karp bipartite matching algorithm \\cite{HopcroftK73} and the matroid intersection algorithm of \\cite{chakrabarty2019faster}, we run several iterations of blocking-flow, and then keep augmenting until we get the optimal solution.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-intersection-rank-main}]\n Starting from $S = \\emptyset$, we run the blocking-flow algorithm until $d_{G(S)}(s,t) \\geq \\sqrt{r}$.\n This, by \\cref{lemma:blocking-flow}, takes\n \\begin{equation}\n \\tO\\left(n\\sqrt{r}\\right) + 
\\tO\\left(\\sum_{d = 1}^{\\sqrt{r}}\\frac{n\\sqrt{r}}{d}\\right) + \\tO\\left(\\frac{n}{\\sqrt{r}}\\cdot\\left(\\sum_{d = 1}^{\\sqrt{r}}{d \\cdot (|S_d| - |S_{d - 1}|)}\\right)\\right)\n \\label{eq:timebound}\n \\end{equation}\n time, where $S_d$ is the common independent set we have after augmenting along paths of length $d$.\n Observe that $\\sum_{d = 1}^{\\sqrt{r}}{d \\cdot (|S_d| - |S_{d - 1}|)}$ is the sum of the lengths of the augmenting paths that we use, and thus the third term in \\cref{eq:timebound} is $\\tO(\\frac{n}{\\sqrt{r}} \\cdot r) = \\tO(n\\sqrt{r})$ by \\cref{lemma:augmenting-path-lengths}.\n The second term also sums up to $\\tO(n\\sqrt{r})$ (by a harmonic sum), and therefore the total running time of the blocking-flow phases is $\\tO(n\\sqrt{r})$.\n The current common independent set $S$ has size at least $r - O(\\sqrt{r})$ by \\cref{lemma:approx}, and thus finding the remaining $O(\\sqrt{r})$ augmenting paths one at a time takes a total of $\\tO(n\\sqrt{r})$ time via \\cref{lemma:bfs}.\n This concludes the proof of \\cref{thm:dynamic-matroid-intersection-rank-main}.\n\\end{proof}\n\n\nThe rest of this section proves \\cref{lemma:blocking-flow}.\nOur blocking-flow algorithm is a slight modification of \\cite[Algorithm 5]{chakrabarty2019faster}, as shown in \\cref{alg:blocking-flow}.\nIt takes advantage of the data structure of \\cref{thm:bst} to explore an out-edge from the current element $a_{\\ell}$ to $A_{\\ell + 1}$---the set of ``alive'' elements in the next layer---while (approximately) keeping track of the current common independent set $S$.\nAn element $u$ is ``alive'' if it has not been included in the augmenting set $\\Pi := (D_1, \\ldots, D_{d_t - 1})$ yet, nor has the algorithm determined that there cannot be any shortest augmenting path through $u$.\n\n\\begin{algorithm}[!ht]\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{Blocking flow}\n \\label{alg:blocking-flow}\n 
\\KwData{$S \\in \\cI_1 \\cap \\cI_2$}\n \\KwResult{$S^\\prime \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$}\n \n Build the distance layers $L_1, \\ldots, L_{d_t - 1}$ of $G(S)$ with \\cref{lemma:bfs}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $A_\\ell \\gets L_{\\ell}$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell} \\gets \\textsc{Initialize}(\\cM_{1}, S, Q_S, L_{\\ell})$ for each \\emph{odd} $1 \\leq \\ell \\leq d_t$ by \\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d$\\;\n $\\cT_{\\ell} \\gets \\textsc{Initialize}(\\cM_{2}, S, Q_S, L_{\\ell})$ for each \\emph{even} $1 \\leq \\ell \\leq d_t$ by \\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d$\\;\n $\\ell \\gets 0$, $a_0 \\gets s$, and $D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n $a_{\\ell + 1} \\gets \\cT_{\\ell + 1}.\\textsc{Find}(a_\\ell)$\\;\\label{line:search}\n \\If{$a_{\\ell + 1} = \\bot$} {\n $\\cT_{\\ell}.\\textsc{Delete}(a_{\\ell})$\\;\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\n }\n \\Else {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots a_\\ell$}\n \\For{$i \\in \\{1, 2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}.\\textsc{Delete}(a_i)$ and $\\cT_{i}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$\\;\\label{line:update}\n }\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S^\\prime := S \\oplus \\Pi$, where $\\Pi := (D_1, D_2, \\ldots, D_{d_t - 1})$\\;\n\\end{algorithm}\n\n\n\nWe emphasize that the difference between \\cref{alg:blocking-flow} and \\cite[Algorithm 5]{chakrabarty2019faster} is exactly in the replacement of binary searches with the data structure of \\cref{thm:bst}.\nNote that indeed by the specification stated in \\cref{thm:bst}, the binary search trees let us explore edges in the 
exchange graph (see \\cref{remark:bst}).\nAs a result, our proof will focus on showing that such a replacement does not affect the correctness.\nFor this, we need the concept of \\emph{augmenting sets} (see \\cref{def:augmenting-sets}), which characterizes a collection of ``mutually compatible'' augmenting paths---i.e.\\ a ``blocking flow''.\nThe structural results in \\cref{sec:prelim} culminate in the following lemma that is key to the correctness of our algorithm.\nIt models when we can safely ``remove'' an element since there will be no augmenting path through it in the future.\nThis is in particular required for us (as opposed to the simpler argument used in \\cite{chakrabarty2019faster}) because the set $S$ is not fully updated after each augmentation (at least in the binary search trees that we use to explore the exchange graphs).\n\n\\begin{lemma}\n Let $\\Pi \\subseteq \\Pi^\\prime$ be augmenting sets in $G(S)$ with distance layers $L_1, \\ldots, L_{d_t - 1}$ where $d_{G(S)}(s, t) = d_t$.\n For $x \\in L_\\ell$, if there is no $y \\in L_{\\ell + 1}$ such that\n \\begin{equation}\n (S \\oplus D_{\\ell} \\oplus D_{\\ell + 1}) \\oplus \\{x, y\\} \\in \\cI,\\;\\text{where $\\cI := \\cI_1$ if $\\ell$ is even and $\\cI := \\cI_2$ if $\\ell$ is odd},\n \\label{eq:condition}\n \\end{equation}\n then there is no augmenting path of length $d_t$ through $x$ in $G(S \\oplus \\Pi^\\prime)$.\n \\label{lemma:key}\n\\end{lemma}\n\n\\begin{proof}\n We claim that there is no augmenting path of length $d_t$ through $x$ in $G(S \\oplus \\Pi)$: If there is such a path $P$, then we can put $P$ into $\\Pi$ and get an augmenting set $\\tilde{\\Pi} := (\\tilde{D}_1, \\ldots, \\tilde{D}_{d_t - 1})$ by \\cref{claim:put-aug-path-in-aug-set}.\n By definition of the augmenting set, this means that there is some $y \\in \\tilde{D}_{\\ell + 1} \\setminus D_{\\ell + 1}$ satisfying \\eqref{eq:condition}, a contradiction to our assumption.\n The lemma now follows from \\cref{claim:no-path}. 
\n\\end{proof}\n\nWe are now ready to prove \\Cref{lemma:blocking-flow}.\n\n\\begin{proof}[Proof of \\cref{lemma:blocking-flow}]\n We analyze the running time of \\cref{alg:blocking-flow} first.\n Similar to \\cite[Lemma 15]{chakrabarty2019faster}, in each iteration, we use $\\cT_\\ell.\\textsc{Find}(\\cdot)$ to find an out-edge of $a_{\\ell}$, taking $\\tO(\\beta) = \\tO(\\sqrt{r}\/d)$ time by \\cref{thm:bst}.\n In each iteration, we either increase $\\ell$ and extend the current path by a new element, decrease $\\ell$ and remove one element, or find an $(s, t)$-path (then remove everything in it), and each element participates in each of these events at most once.\n Thus, there are only $O(n)$ iterations, and the total cost of $\\cT_{\\ell}.\\textsc{Find}(\\cdot)$ is consequently $\\tO(\\frac{n\\sqrt{r}}{d})$ by our choice of $\\beta$.\n For each augmenting path, $\\cT_{\\ell}.\\textsc{Update}(\\cdot)$ takes $\\tO\\left(\\frac{|L_\\ell| \\cdot d}{\\sqrt{r}}\\right)$ time, contributing to a total running time of $\\tO\\left(\\frac{(|S^\\prime| - |S|) \\cdot nd}{\\sqrt{r}}\\right)$ since the $L_\\ell$'s are disjoint.\n \n We then argue the correctness of the algorithm.\n Observe that at any point in time, $\\cT_\\ell$ is a data structure capable of finding a replacement element with respect to the independent set $S \\oplus D_{\\ell - 1} \\oplus D_\\ell$, due to the updates that we gave it.\n This means that the collection $\\Pi := (D_1, D_2, \\ldots, D_{d_t - 1})$ remains an augmenting set in $G(S)$ because $S \\oplus (D_{\\ell - 1} + a_{\\ell - 1}) \\oplus (D_{\\ell} + a_\\ell)$ is independent for each $\\ell$ whenever a path is found.\n As a result, when the algorithm terminates, $S^\\prime := S \\oplus \\Pi$ is indeed a common independent set as guaranteed by \\cref{lemma:aug-set}.\n \n It remains to show that $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$ by arguing that for each $a_{\\ell}$ not in $\\Pi$ but removed from $A_{\\ell}$ at time $t$, there is no 
shortest augmenting path in $G(S^\\prime)$ that passes through $a_{\\ell}$.\n This is a direct consequence of \\cref{lemma:key} since $\\Pi^{(t)}$, the augmenting set obtained at time $t$, is contained in $\\Pi$.\n The fact that $\\cT_{\\ell + 1}^{(t)}.\\textsc{Find}(a_{\\ell})$ returns nothing (equivalently, \\cref{eq:condition} is not satisfied) shows that $a_{\\ell}$ is not on any shortest augmenting path in $G(S^\\prime)$ since the set $X$ maintained in $\\cT_{\\ell + 1}^{(t)}$ (see \\cref{thm:bst}) is $A_{\\ell + 1}$ at all time.\n We remark that $x$ might have an out-edge (with respect to $S \\oplus D_{\\ell}^{(t)} \\oplus D_{\\ell}^{(t + 1)}$) to a removed element with distance $\\ell + 1$ from $s$ (not in $A_{\\ell + 1}$), but such an element, by induction, is not on any augmenting path either.\n\\end{proof}\n\n\\section{Matroid Union} \\label{sec:matroid-union}\n\nIn this section, we present our improved algorithm for matroid union.\nOur main focus of this algorithm is on optimizing the $O(n\\sqrt{r})$ term to $O(r\\sqrt{r})$.\nThus, for simplicity of presentation, we will treat $k$ as a constant (the dependence on $k$ will be a small polynomial) and express our bounds using the $O_k(\\cdot)$ and $\\tO_k(\\cdot)$ notation.\n\n\\begin{theorem}\n In the dynamic-rank-oracle model, given $k$ matroids $\\cM_i = (U_i, \\cI_i)$ for $1 \\leq i \\leq k$, it takes $\\tO_k(n + r\\sqrt{r})$ time to find a basis $S \\subseteq U_1 \\cup \\cdots \\cup U_k$ of $\\cM = \\cM_1 \\vee \\cdots \\vee \\cM_k$ together with a partition $S_1, \\ldots, S_k$ of $S$ in which $S_i \\in \\cI_i$ for each $1 \\leq i \\leq k$.\n \\label{thm:dynamic-matroid-union}\n\\end{theorem}\n\nIn \\cref{appendix:matroid-union-fold}, we present an optimized (for the parameter $k$) version of the above algorithm which solves the important special case when all the $k$ matroids are the same---i.e.\\ \\emph{$k$-fold matroid union}---with applications in matroid packing problems. 
For example, the problem of finding $k$ disjoint spanning trees in a graph falls under this special case. In particular, in \\cref{appendix:matroid-union-fold}, we obtain the following \\cref{thm:dynamic-matroid-union-fold}, and we discuss some immediate consequences for the \\emph{matroid packing, matroid covering}, and \\emph{$k$-disjoint spanning trees problems} in \\cref{sec:packing,sec:spanning}.\n\n\\begin{restatable}{theorem}{matroidunionfold}\n In the dynamic-rank-oracle model, given a matroid $\\cM = (U, \\cI)$ and an integer $k$, it takes $\\tO(n + kr\\sqrt{\\min(n, kr)} + k\\min(n, kr))$ time to find the largest $S \\subseteq U$ and a partition $S_1, \\ldots, S_k$ of $S$ in which $S_i \\in \\cI$ for each $1 \\leq i \\leq k$.\n \\label{thm:dynamic-matroid-union-fold}\n\\end{restatable}\n\n\n\n\nThe rest of this section will focus on the proof of \n\\cref{thm:dynamic-matroid-union} (again, where the number of matroids $k$ is treated as a constant).\nOur algorithm is based on the matroid intersection algorithm in \\cref{sec:matroid-intersection}, in which we identify and optimize several components that lead to the improved time bound.\n\n\\subsection{Reduction to Matroid Intersection} \\label{sec:reduction}\n\nFor completeness, we provide a standard reduction from matroid union to matroid intersection.\nFor an in-depth discussion, see \\cite[Chapter 42]{schrijver2003}.\nLet $\\cM_i = (U_i, \\cI_i)$ be the given $k$ matroids and $U = U_1 \\cup \\cdots \\cup U_k$ be the ground set of the matroid union $\\cM = \\cM_1 \\vee \\cdots \\vee \\cM_k$.\nWe first relabel each element in the matroids with an identifier of its matroid, resulting in $\\hat{\\cM}_i = (\\hat{U}_i, \\cI_i)$, where $\\hat{U}_i = \\{(u, i) \\mid u \\in U_i\\}$.\nLet $\\hat{\\cM} = (\\hat{U}, \\hat{\\cI}) = \\hat{\\cM}_1 \\vee \\cdots \\vee \\hat{\\cM}_k$ be over the ground set $\\hat{U} = \\hat{U}_1 \\sqcup \\cdots \\sqcup \\hat{U}_k$.\n\nIn other words, in $\\hat{\\cM}$, we duplicate each 
element that is shared among multiple matroids into copies that are considered different, effectively making the ground sets of the $k$ matroids disjoint.\nAfter this modification, an independent set in $\\hat{\\cM}$ is now simply the union of $k$ independent sets, one from each matroid.\nHowever, that might not be what we want since these independent sets may overlap, i.e., contain copies that correspond to the same element. \nWe therefore intersect $\\hat{\\cM}$ with a partition matroid $\\cM_{\\text{part}} = (\\hat{U}, \\cI_{\\text{part}})$ given by\n\\[ \\cI_{\\text{part}} = \\{S \\subseteq \\hat{U} \\mid \\left|S \\cap \\{(u, i) \\mid 1 \\leq i \\leq k\\;\\text{and}\\;u \\in U_i\\}\\right| \\leq 1\\;\\text{holds for each $u \\in U$} \\} \\]\nto ensure that at most one copy of each element is chosen.\nThe matroid union problem is thus reducible to the matroid intersection problem in the sense that the common independent sets of $\\hat{\\cM}$ and $\\cM_{\\text{part}}$ correspond exactly to the independent sets of the matroid union $\\cM$.\n\nNotation-wise, given the above mapping between the two worlds, whenever we write $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$, a subset of $\\hat{U}$, we will equivalently regard $S$ as a subset of $U$ with an implicit partition $S_1, \\ldots, S_k$ where $S_i \\in \\cI_i$.\n\n\n\\subsection{Specialized Matroid Intersection Algorithm}\n\nGiven the reduction, to prove \\cref{thm:dynamic-matroid-union}, it suffices to compute the intersection of $\\hat{\\cM}$ and $\\cM_{\\text{part}}$ in the claimed time bound.\nIn the following, we will set $\\cM_1$ to be $\\cM_{\\text{part}}$ and $\\cM_2$ to be $\\hat{\\cM}$ when talking about exchange graphs and other data structures.\nOur main goal is to optimize the $O(n\\sqrt{r})$ term to $O(r\\sqrt{r})$, so it might be more intuitive to think of $r \\ll n$.\nWe first show that for an $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$, the exchange graph $G(S)$ is quite unbalanced in 
the sense that most elements appear in the first distance layer.\nIn fact, the first distance layer of $G(S)$ contains all duplicates of elements $u$ in $U$ that do not appear in $S$.\nThis is by definition of $G(S)$ and the fact that $\\cM_1 = \\cM_{\\text{part}}$ is the partition matroid.\nIn the following, when the context is clear, we let $d_t$ denote the $(s, t)$-distance of $G(S)$ and $L_1, \\ldots, L_{d_t - 1}$ denote the distance layers.\n\n\\begin{fact}\n It holds that $L_1 = \\{(u, i) \\mid (u, i) \\in \\hat{U}\\;\\text{and}\\;(u, j) \\not\\in S\\;\\text{for any $1 \\leq j \\leq k$}\\}$.\n \\label{fact:large-first-layer}\n\\end{fact}\n\nSimilarly, the odd layers of $G(S)$ (which correspond to $\\bar{S}$) are well-structured in the sense that they consist of elements one of whose duplicates appears in $S$.\nBy definition of $G(S)$, we also know that elements in odd layers have only a single in-edge, which is from their corresponding duplicate in $S$.\nAll copies of the same element thus have the same distance from $s$.\n\n\\begin{fact}\n It holds that $L_3 \\cup L_5 \\cup \\cdots \\cup L_{d_t - 1} = \\{(u, i) \\mid (u, i) \\in \\hat{U}\\;\\text{and}\\;(u, j) \\in S\\;\\text{for some}\\;i \\neq j\\}$,\n and for each $(u, i) \\in L_3 \\cup \\cdots \\cup L_{d_t - 1}$, we have $d_{G(S)}(s, (u, i)) = d_{G(S)}(s, (u, j)) + 1$ where $(u, j) \\in S$.\n \\label{fact:structured-odd-layers}\n\\end{fact}\n\n\n\\paragraph{Union Exchange Graph.} Given the above facts, we introduce another notion of exchange graphs, which is commonly used for matroid union (see, e.g., \\cite{edmonds1968matroid,cunningham1986improved}).\nFor the given $k$ matroids $\\cM_i = (U_i, \\cI_i)$ and a subset $S \\subseteq U$ that can be partitioned into $k$ independent sets $S_1, \\ldots, S_k$ with $S_i \\in \\cI_i$, the \\emph{union exchange graph} is a directed graph $H(S) = (U \\cup \\{s, t\\}, E)$ with two distinguished vertices $s, t \\not\\in U$ and edge set $E = E_s \\cup E_t \\cup 
E_{\\text{ex}}$, where\n\n\\begin{align*}\n E_s &= \\{(s, u) \\mid u \\not\\in S\\}, \\\\\n E_t &= \\{(u, t) \\mid S_i + u \\in \\cI_i\\;\\text{for some $1 \\leq i \\leq k$}\\},\\;\\text{and} \\\\\n E_{\\text{ex}} &= \\{(u, v) \\mid S_i - v + u \\in \\cI_i\\;\\text{where $v \\in S_i$}\\}.\n\\end{align*}\n\nWe can see that the exchange graph $G(S)$ with respect to $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ (as a subset of $\\hat{U}$) and the union exchange graph $H(S)$ with respect to $S \\subseteq U$ are essentially the same in the sense that $H(S)$ can be obtained from $G(S)$ by contracting all copies of the same element in the first layer and skipping all other odd layers.\nIn particular, for each $(u, i) \\in S$, in $G(S)$, there might be an edge from $(u, i)$ to $(u, j)$ and an edge from $(u, j)$ to $(v, j)$, where $(v, j) \\in S_j$ and $S_j - v + u \\in \\cI_j$.\nCorrespondingly, in $H(S)$, we skip the intermediate vertex $(u, j)$ and meld the above two edges into a single edge from $u \\in S_i$ to $v \\in S_j$.\nWe also merge all edges from $s$ to some $(u, i)$ of the same $u$ in the first layer into a single edge from $s$ to $u$ (\\cref{fact:large-first-layer}).\nThis simplification does not impact the distance layers of $H(S)$ since all such $(u, j)$ have the same distance from $s$ (\\cref{fact:structured-odd-layers}).\n\n\nFrom now on, for simplicity, our algorithms will run on the union exchange graphs $H(S)$, i.e., we will perform blocking-flow computation and augment $S$ along paths in $H(S)$.\nOn the other hand, to avoid repeating and specializing all the lemmas to the case of union exchange graphs, proofs and correctness will be argued implicitly from the perspective of the exchange graph $G(S)$ for matroid intersection.\nFor instance, for $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ a shortest $(s, t)$-path in $H(S)$, ``augmenting $S$ along $P$'' means moving $a_i$ to the independent set that originally contains $a_{i + 1}$ for each $i \\geq 1$, and 
thus effectively enlarge the size of $S$ by one via putting $a_1$ in it.\\footnote{One can show that the matroid union $\\cM$ is a matroid~\\cite[Chapter 42]{schrijver2003}. As such, a basis can be obtained by trying to include each element into $S$. From the union exchange graph perspective, the independence test of $S + x$ corresponds to asking whether ``there is a path in $H(S)$ from $x$ to $t$''.}\nOne can verify that this is indeed what happens if we map $P$ back to a path $P^\\prime$ in $G(S)$, and then perform the augmentation of $S$ (as a subset of $\\hat{U}$) along $P^\\prime$.\n\nOur main idea to speed up the matroid union algorithm to $\\tO_k(r\\sqrt r)$ (instead of $\\tO_k(n \\sqrt{r})$) is to ``sparsify'' the first layer of $H(S)$ by only considering a subset of elements contained in some basis. We formalize this in the following \\cref{lemma:bfs-union,lemma:blocking-flow-union} together with \\cref{alg:bfs-union,alg:blocking-flow-union}.\n\n\n\\begin{lemma}\n Given $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$ and $k$ bases $\\{B_i\\}_{i = 1}^{k}$ of $U_i \\setminus S$, it takes $\\tO_k(r)$ time to construct the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$.\n \\label{lemma:bfs-union}\n\\end{lemma}\n\nNote that we know exactly what elements are in the first distance layer, so computing $L_2, \\ldots, L_{d_t - 1}$ suffices.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{BFS in a union exchange graph}\n \\label{alg:bfs-union}\n \\KwData{$S \\subseteq U$ which partitions into $S_1, \\ldots, S_k$ of independent sets and $k$ bases $\\{B_i\\}_{i = 1}^{k}$ of $U_i \\setminus S$}\n \\KwResult{The $(s, u)$-distance $d(u)$ in $H(S)$ for each $u \\in S \\cup \\{t\\}$}\n\n $\\mathsf{queue} \\gets B_1 \\cup \\cdots \\cup B_k$\\;\n $d(u) \\gets \\infty$ for each $u \\in S \\cup \\{t\\}$, and $d(u) \\gets 1$ for each $u \\in B_1 \\cup \\cdots \\cup B_k$\\;\n $\\cT_i \\gets 
\\textsc{Initialize}(\\cM_i, S_i, S_i)$ (\\cref{thm:bst} with $\\beta = 1$)\\;\n \\While{$\\mathsf{queue} \\neq \\emptyset$} {\n $u \\gets \\mathsf{queue}.\\textsc{Pop}()$\\;\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$ where $u \\in U_i$ and $u \\not\\in S_i$} {\n \\While{$v := \\cT_i.\\textsc{Find}(u) \\neq \\bot$} {\n $d(v) \\gets d(u) + 1$ and $\\mathsf{queue}.\\textsc{Push}(v)$\\;\n $\\cT_i.\\textsc{Delete}(v)$\\;\n }\n \\lIf{$S_i + u \\in \\cI_i$ and $d(t) = \\infty$} {\n $d(t) \\gets d(u) + 1$\n }\n }\n }\n \\textbf{return} $d(u)$ for each $u \\in S \\cup \\{t\\}$\\;\n\\end{algorithm}\n\n\\begin{proof}\n The algorithm is presented as \\cref{alg:bfs-union}, and it is essentially a breadth-first-search (BFS) starting from $B_1 \\cup \\cdots \\cup B_k$ instead of $s$.\n Out-edges in $H(S)$ are explored via $k$ binary search trees $\\cT_1, \\cT_2, \\ldots, \\cT_k$ of \\cref{thm:bst}, one for each matroid $\\cM_i$ and independent set $S_i$.\n Let us analyze the running time first.\n Building $\\cT_i$ takes a total of $\\tO(|S|) = \\tO_k(r)$ time.\n Exploring the graph takes $\\tO(|S \\cup B_1 \\cup \\cdots \\cup B_k| \\cdot k) = \\tO_k(r)$ time in total since each element in $S$ is found at most once by $\\cT_i.\\textsc{Find}(\\cdot)$ because the $S_i$'s are disjoint, and we also spend $O(k)$ time for each element in $S \\cup B_1 \\cup \\cdots \\cup B_k$ iterating over the $\\cT_i$'s.\n \n It remains to show that starting from $B_1 \\cup \\cdots \\cup B_k$ instead of $U \\setminus S$ does not affect the correctness of the BFS.\n For this, it suffices to show that we successfully compute $d(u)$ for all $u \\in S$ with distance $2$ from $s$.\n By definition, $u \\in S_i$ is of distance $2$ from $s$ if and only if there exists an $x \\in U_i \\setminus S$ such that $S_i - u + x \\in \\cI_i$.\n This is equivalent to $\\mathsf{rank}_i(S_i - u + (U_i \\setminus S)) > \\mathsf{rank}_i(S_i)$ by \\cref{obs:exchange}.\n But then by \\cref{lemma:basis-rank}, we have $\\mathsf{rank}_i(S_i - u 
+ (U_i \\setminus S)) = \\mathsf{rank}_i(S_i - u + B_i)$, and so such an $x$ exists in $B_i$ as well.\n This concludes the proof of \\cref{lemma:bfs-union}.\n\\end{proof}\n\n\n\\begin{lemma}\n Given an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S)}(s, t) = d_t$ together with data structures $\\cD_i$ of \\cref{thm:decremental-basis} that maintain a basis of $U_i \\setminus S$ for each $1 \\leq i \\leq k$, it takes $\\tO_k(r + \\frac{r\\sqrt{r}}{d_t} + (|S^\\prime| - |S|) \\cdot d_t\\sqrt{r})$ time to obtain an $S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_t$, with an additional guarantee that $\\cD_i$ now maintains a basis of $U_i \\setminus S^\\prime$ for each $1 \\leq i \\leq k$.\n \\label{lemma:blocking-flow-union}\n\\end{lemma}\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \\SetKwInput{KwGuarantee}{Guarantee}\n \n \\caption{Blocking flow in a union exchange graph}\n \\label{alg:blocking-flow-union}\n \\KwData{$S \\subseteq U$ which partitions into $S_1, \\ldots, S_k$ of independent sets and a dynamic-basis data structure $\\cD_i$ of $U_i \\setminus S$ for each $1 \\leq i \\leq k$}\n \\KwResult{$S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$}\n \\KwGuarantee{$\\cD_i$ maintains a basis of $U_i \\setminus S^\\prime$ at the end of the algorithm for each $1 \\leq i \\leq k$}\n \n Build the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$ with \\cref{lemma:bfs-union}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $B_i \\gets$ the basis maintained by $\\cD_i$ and $L_1 \\gets B_1 \\cup \\cdots \\cup B_k$\\;\n $A_\\ell \\gets L_\\ell$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell}^{(i)} \\gets \\textsc{Initialize}(\\cM_i, S_i, Q_{S_i}, A_{\\ell} \\cap S_i)$ for each $2 \\leq \\ell < d_t$ and $1 \\leq i \\leq k$ (\\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d_t$)\\;\n 
$D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n $\\ell \\gets 0$ and $a_0 \\gets s$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n \\lIf{$\\ell = 0$} {\n $a_{\\ell + 1} \\gets$ an arbitrary element in $A_1$\n }\n \\lElse {\n Find an $a_{\\ell + 1} := \\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(a_\\ell) \\neq \\bot$ for some $1 \\leq i \\leq k$\n }\n \\If{such an $a_{\\ell + 1}$ does not exist} {\n \\lIf{$\\ell \\geq 2$} {\n $\\cT_{\\ell}^{(j)}.\\textsc{Delete}(a_{\\ell})$ where $a_{\\ell} \\in S_j$\n }\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\n }\n \\Else {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots, a_\\ell$}\n $D_1 \\gets D_1 + a_1$ and $A_1 \\gets A_1 - a_1$\\;\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$ where $a_1 \\in U_i$} {\n $B_i \\gets B_i - a_1$\\;\n \\If{$\\cD_i.\\textsc{Delete}(a_1)$ returns a replacement $x$} {\\label{line:delete}\n $B_i \\gets B_i + x$ and $A_1 \\gets A_1 \\cup \\{x\\}$\\;\n }\n }\n \\For{$i \\in \\{2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}^{(j)}.\\textsc{Delete}(a_i)$ and $\\cT_{i}^{(j)}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$ where $a_i \\in S_j$\\;\n }\n Augment $S$ along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$\\;\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\n\\begin{proof}[Proof of \\Cref{lemma:blocking-flow-union}]\n Our blocking-flow algorithm for matroid union is presented as \\cref{alg:blocking-flow-union}.\n As it is equivalent to \\cref{alg:blocking-flow} running on $G(S)$ except that the first layer $L_1 := B_1 \\cup \\cdots \\cup B_k$ is now only a subset (which is updated after each augmentation) of $U \\setminus S$, we skip most parts of the proof and focus on discussing this difference.\n That is, we need to show that if $A_1$ becomes empty, then there is no augmenting 
path of length $d_t$ in $H(S^\\prime)$ anymore.\n Given how $A_1$ and the $B_i$'s are maintained and \\cref{lemma:key} (note that the set $X$ maintained in $\\cT_{\\ell}^{(i)}$ is always $A_\\ell \\cap S_i$ with respect to the current $S_i$ and thus it lets us explore out-edges to $A_{\\ell} \\cap S_i$ satisfying \\cref{eq:condition}), $A_1$ is always the subset of $B_1 \\cup \\cdots \\cup B_k$ consisting of elements that still potentially admit an augmenting path of length $d_t$ in $H(S^\\prime)$ through them.\n That means if $A_1 = \\emptyset$, then there is no augmenting path of length $d_t$ in $G(S^\\prime)$ that starts from some $b \\in B_1 \\cup \\cdots \\cup B_k$.\n This would imply that there is no such path even if we start from $x \\in (U_i \\setminus S) \\setminus D_1$ as $B_i$ is a basis of $U_i \\setminus S$: if $S_i + x - y \\in \\cI_i$ for some $x \\in (U_i \\setminus S) \\setminus D_1$ and $y \\in S_i$, then there is a $b \\in B_i$ with $S_i + b - y \\in \\cI_i$, and thus a path starting from $x$ can be converted into a path starting from $b$.\n On the other hand, no element of $D_1$ is on such a path by \\cref{lemma:monotone} either.\n This shows that indeed $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$.\n \n The guarantee that $\\cD_i$ now operates on $U_i \\setminus S^\\prime$ is clear: Augmenting along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ corresponds to adding $a_1$ into $S$, and since we call $\\cD_i.\\textsc{Delete}(a_1)$ in Line~\\ref{line:delete} after each such augmentation, $\\cD_i$ indeed stays up-to-date. 
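For intuition, the effect of a single such augmentation on the partition $S_1, \ldots, S_k$ can be sketched in a few lines. This is an illustrative sketch with plain Python sets, not the data structures above; the argument `target_sets` is a hypothetical name introduced only here to record which independent set each path element moves into.

```python
def augment_along_path(path, target_sets, sets):
    """Sketch of ``augmenting S along P'' in the union exchange graph H(S).

    path        -- interior vertices (a_1, ..., a_{d_t - 1}) of the (s, t)-path;
                   a_1 lies outside S, the remaining elements lie in S.
    target_sets -- target_sets[i] is the index j of the independent set S_j
                   that path[i] moves into (for the last element, a set j with
                   S_j + a_{d_t - 1} independent, witnessed by its edge to t).
                   Hypothetical argument, used only for illustration.
    sets        -- the disjoint independent sets S_1, ..., S_k as Python sets.

    Each a_i moves into the set that originally contained a_{i + 1}; the net
    effect is that a_1 enters S, so |S| grows by exactly one.
    """
    # Remove every path element currently in S from its set ...
    for a in path[1:]:
        for s_i in sets:
            s_i.discard(a)
    # ... then re-insert each a_i into its target set.
    for a, j in zip(path, target_sets):
        sets[j].add(a)
```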
\n \n It remains to analyze the running time of \\cref{alg:blocking-flow-union}.\n Computing distance layers with \\cref{lemma:bfs-union} takes $\\tO_k(r)$ time.\n The number of elements that have ever been in some $A_i$ is $O_k(r + |S^\\prime| - |S|)$ since\n (i) $L_2 \\cup \\cdots \\cup L_{d_t - 1}$ has size $O_k(r)$,\n (ii) the initial basis $B_i$ of $U_i \\setminus S$ for each $1 \\leq i \\leq k$ has total size $O_k(r)$, and\n (iii) each of the $|S^\\prime| - |S|$ augmentations adds at most $O_k(1)$ elements to $A_1$.\n Similar to \\cref{lemma:blocking-flow}, this means that there are at most $O_k(r)$ iterations, each taking $O_k(\\frac{\\sqrt{r}}{d_t})$ time in $\\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(\\cdot)$ with our choice of $\\beta$.\n The algorithm found $|S^\\prime| - |S|$ augmenting paths, taking $\\tO_k(d_t\\sqrt{r} \\cdot (|S^\\prime| - |S|))$ time in total to update the binary search trees.\n Also, for each such augmentation, we need $\\tO_k(\\sqrt{r})$ time to update the basis $B_i$ for all $1 \\leq i \\leq k$, which is subsumed by the cost of updating $\\cT_{i}^{(j)}$.\n These components sum up the total running time of\n \\[ \\tO_k\\left(r + \\frac{r\\sqrt{r}}{d_t} + \\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}\\right). 
\\]\n\\end{proof}\n\n\\cref{thm:dynamic-matroid-union} now follows easily.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-union}]\n We initialize the dynamic-basis data structure $\\cD_i$ of \\cref{thm:decremental-basis} on $U_i$ for each of the matroid $\\cM_i$.\n We then run \\cref{lemma:blocking-flow-union} for at most $\\sqrt{r}$ iterations with $\\{\\cD_i\\}_{i = 1}^{k}$ until $d_{H(S)}(s, t) \\geq \\sqrt{r}$ and get an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $\\cD_i$ now operating on $U_i \\setminus S$ for each $1 \\leq i \\leq k$.\n This takes\n \\[ \\tO_k\\left(r\\sqrt{r} + \\sum_{d = 1}^{\\sqrt{r}}\\frac{r\\sqrt{r}}{d} + \\sum_{d = 1}^{\\sqrt{r}}d \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) = \\tO_k(r\\sqrt{r})\\]\n time.\n By \\cref{lemma:approx}, $S$ is $O_k(\\sqrt{r})$ steps away from being optimal, and thus we find the remaining augmenting paths one at a time using \\cref{lemma:bfs-union} in $\\tO_k(r\\sqrt{r})$ time in total.\n Note that since a single augmentation corresponds to adding an element to $S$ (hence removing it from $U \\setminus S$), we can maintain the basis of $U_i \\setminus S$ that \\cref{lemma:bfs-union} needs in $\\tO_k(\\sqrt{r} \\cdot \\sqrt{r})$ total update time, which is subsumed by other parts of the algorithm.\n\\end{proof}\n\n\\subsection{Matroid Packing and Covering}\n\\label{sec:packing}\n\n\nA direct consequence of our matroid union algorithm (\\cref{thm:dynamic-matroid-union-fold} in particular) is that we can solve the following packing and covering problem efficiently.\nAs a reminder, the exact dependence on $k$ of our algorithm is $\\tO(n + kr\\sqrt{\\min(n, kr)} + k \\min(n, kr))$ by \\cref{thm:dynamic-matroid-union-fold}.\n\n\\begin{restatable}[Packing]{corollary}{packingalgo}\n For a matroid $\\cM = (U, \\cI)$, it takes $\\tO(n\\sqrt{n} + \\frac{n^2}{r})$ time to find the largest integer $k$ and a collection of disjoint subsets $\\cS = \\{S_1, S_2, \\ldots, S_k\\}$ of $U$ such that $S_i$ is 
a basis for each $1 \\leq i \\leq k$ under the dynamic-rank-query model.\n \\label{cor:packing}\n\\end{restatable}\n\n\\begin{proof}\n It is clear that $k \\leq \\frac{n}{r}$ holds.\n We binary search for $k$ in the range $[0, \\frac{n}{r}]$, and for each $k$, we can determine the largest subset $S$ of $U$ which can be partitioned into $k$ disjoint independent sets by \\cref{thm:dynamic-matroid-union-fold}.\n If $|S| = kr$, then there are at least $k$ disjoint bases.\n Otherwise, there are fewer than $k$ disjoint bases.\n The running time is $\\tO(n\\sqrt{n} + \\frac{n^2}{r})$.\n\\end{proof}\n\n\n\\begin{restatable}[Covering]{corollary}{coveringalgo}\n For a matroid $\\cM = (U, \\cI)$, it takes $\\tO(\\alpha r\\sqrt{n} + \\alpha n)$ time to find the smallest integer $\\alpha$ and a partition $\\cS = \\{S_1, S_2, \\ldots, S_\\alpha\\}$ of $U$ such that $S_i \\in \\cI$ holds for each $1 \\leq i \\leq \\alpha$ under the dynamic-rank-query model.\n \\label{cor:covering}\n\\end{restatable}\n\n\n\\begin{proof}\n We first obtain a $2$-approximation $\\alpha^{\\prime}$ of $\\alpha$ (i.e., $\\alpha \\leq \\alpha^\\prime \\leq 2\\alpha$) by enumerating powers of $2$, running \\cref{thm:dynamic-matroid-union-fold} with $k = 2^i$, and checking if the returned $S$ has size $n$: If $|S| = n$, then we know $2^i$ independent sets suffice to cover $U$.\n Note that the enumeration stops as soon as we find a suitable value of $\\alpha^\\prime$.\n The exact value of $\\alpha$ can then be found by a binary search in $[\\frac{\\alpha^\\prime}{2}, \\alpha^\\prime]$.\n This takes $\\tO(\\alpha r\\sqrt{n} + \\alpha n)$ time (note that $\\alpha r \\geq n$ must hold).\n\\end{proof}\n\n\\subsection{Application: Spanning Tree Packing}\n\\label{sec:spanning}\n\n\nWe demonstrate the applicability of our techniques by deriving an $\\tO(|E| + (k|V|)^{3\/2})$ algorithm for the $k$-disjoint spanning trees problem in a black-box manner.\nThis improves Gabow's specialized 
$\\tO(k^{3\/2}|V|\\sqrt{|E|})$ algorithm~\\cite{gabow1988forests}.\nSince all applications of our algorithms follow the same reduction, we only go through it once here.\nRefer to \\cref{appendix:applications} for other applications of both our matroid union and matroid intersection algorithms.\n\n\\begin{theorem}\n Given an undirected graph $G = (V, E)$, it takes $\\tO(|E| + (k|V|)^{3\/2})$ time to find $k$ edge-disjoint spanning trees in $G$ or determine that no such spanning trees exist, with high probability\\footnote{We use \\emph{with high probability} to mean with probability at least $1 - |V|^{-c}$ for an arbitrarily large constant $c$.}.\n \\label{thm:k-dst}\n\\end{theorem}\n\n\\begin{proof}\nBy \\cref{thm:dynamic-matroid-union-fold}, it suffices to provide a data structure that supports the three dynamic-oracle operations (\\cref{def:dyn-oracle}) in $\\mathrm{polylog}(|V|)$ time.\nOur black-box reduction makes use of the worst-case connectivity data structure of \\cite{KapronKM13,GibbKKT15}, which can be adapted to maintain the rank of a set of edges with $O(\\mathrm{polylog}(|V|))$ update time (see \\cref{appendix:applications-graphic} for a discussion on how this can be done).\n\n\n Let $\\cM_G$ be the graphic matroid with respect to $G = (V, E)$.\n $G$ admits $k$ edge-disjoint spanning trees if and only if $\\cM_G$ admits $k$ disjoint bases.\n The theorem now follows from \\cref{thm:dynamic-matroid-union-fold} with $n = |E|$ and $r = |V| - 1$ since \\cref{thm:dynamic-matroid-union-fold} returns a union of $k$ disjoint bases if they exist (we note that $k \\le |E|\/(|V|-1) \\le O(|V|)$, and hence the $O(k^2 r)$ term is dominated by the $O((kr)^{3\/2})$ term).\n\\end{proof}\n\n\\section{Open Problems} \n\\label{sec:openproblems}\n\nOur dynamic-oracle model opens up a new path to fast algorithms for many problems at once, where the ultimate goal is to achieve near-linear time and dynamic-rank-query complexities. 
This would imply near-linear time algorithms for many fundamental problems. We envision reaching this goal via a research program where the studies of algorithms and lower bounds in our and the traditional models \ncomplement each other. \nIn particular, a major open problem is to improve our algorithms further, which would imply improved algorithms for many problems simultaneously.\nA major step towards this goal is improved algorithms in the traditional model, which would already be a breakthrough. Moreover, failed lower bound attempts might lead to new algorithmic insights and vice versa; \nwe leave improving our lower bounds as another major open problem. \nWe believe that the communication complexity of graph and matroid problems is an important component in this study since it plays a main role in our lower bound argument. Recently, the communication and some query complexities of bipartite matching and related problems were resolved in \\cite{blikstadBEMN22}. What about the communication and query complexities of dynamic-oracle matroid problems and their special cases such as colorful spanning trees?\nIt would also be fruitful to resolve some special cases, as the solutions may shed more light on how to solve matroid problems in our model. Below are some examples.\n\n\\begin{itemize}\n \\item \\textbf{Disjoint Spanning Trees.} Can we find $k$ edge-disjoint spanning trees in an undirected graph in near-linear time for constant $k$, or even do so for the case of $k = 2$ (which already has an application in the Shannon Switching Game)? Our new $\\tO(|E|+|V|\\sqrt{|V|})$-time algorithm shows that it is possible for sufficiently dense graphs. For the closely related problem of finding $k$ edge-disjoint arborescences (rooted \\emph{directed} spanning trees) in a directed graph, the case of $k = 2$ has long been settled by Tarjan's linear-time algorithm~\\cite{Tarjan76}, and the case of constant $k$ has also been resolved by~\\cite{BhalgatHKP08}. 
It is a very interesting question whether the directed case is actually computationally easier than the undirected case or not.\n\n \\item \\textbf{Colorful Spanning Tree.} This problem generalizes the maximum bipartite matching problem, among others. Given the recent advances in max-flow algorithms which are heavily based on continuous optimization techniques, bipartite matching can now be solved in almost-linear time~\\cite{ChenKLPGS22} in general and nearly-linear time for dense input graphs \\cite{BrandLNPSSSW20}. \n It is very unclear if continuous optimization can be used for colorful spanning tree since its linear program has exponentially many constraints. \n This reflects the general challenge of using continuous optimization to solve matroid problems and many of their special cases. \n Thus, improving Hopcroft-Karp's $O(|E|\\sqrt{|V|})$ runtime \\cite{HopcroftK73} (which is matched by our dynamic-oracle matroid algorithm) may shed some light on either how to use continuous optimization for these problems or how combinatorial algorithms can break this runtime barrier for colorful spanning tree, bipartite matching, and matroid problems. \n\n\\end{itemize}\n\n\\paragraph{Other Problems with Dynamic Oracles.} It also makes sense to define dynamic oracles for problems like submodular function minimization (SFM), which asks to find the minimizer of a submodular function given an evaluation oracle. In this regime, similar to matroid intersection, we want to limit the symmetric difference from the current evaluation query to the previous ones. \nWe believe that the recent algorithms for submodular function minimization based on convex optimization and cutting-plane methods, particularly the work of \\cite{LeeSW15,JiangLSW20}, can be adapted to the dynamic-oracle setting. \nHowever, \nwe are not aware of any applications of these dynamic-oracle algorithms. \nThe first step is thus improving the best bounds in the traditional oracle model. 
\nThe special case of the cut-query setting \\cite{RubinsteinSW18,MukhopadhyayN20,LeeSZ21,LeeLSZ21,ApersEGLMN22} is also very interesting; we leave getting algorithms for min $(s,t)$-cut \\cite{ChenKLPGS22} and directed global mincut \\cite{Cen0NPSQ21} with near-linear time and dynamic-query complexity as major open problems.\\footnote{Adapting the cut-query algorithm of \\cite{MukhopadhyayN20} to work with dynamic cut oracles and, even better, with a parallel algorithm \\cite{AndersonB21,Lopez-MartinezM21}, is also open, though we suspect that these are not hard.}\nAnother interesting direction is the quantum setting. For example, can one define a notion of dynamic quantum cut query so that the quantum cut-query algorithm of \\cite{ApersEGLMN22} implies a non-trivial quantum global mincut algorithm?\n\n\n\\paragraph{Improved Lower Bounds.} \nObtaining improved lower bounds for matroid intersection is also an important open problem. Getting an $\\Omega(n\\log n)$ lower bound for traditional rank-query matroid intersection algorithms is particularly interesting since it would subsume our $\\Omega(n\\log n)$ lower bounds (a traditional rank-query lower bound implies independence-query and dynamic-rank-query lower bounds) and the $\\Omega(n \\log n)$ SFM lower bound of \\cite{ChakrabartyGJS22}. \nFor the latter, \\cite{ChakrabartyGJS22} showed an $\\Omega(n\\log{n})$ lower bound for SFM against \\emph{strongly}-polynomial time algorithms. Since SFM generalizes matroid intersection in the traditional rank-oracle model (i.e., a rank query of a matroid corresponds to an evaluation of the submodular function), getting the same lower bound for traditional rank-query matroid intersection algorithms would further strengthen the result of \\cite{ChakrabartyGJS22} to hold against \\emph{weakly}-polynomial time algorithms. \n\nAdditionally, achieving a {\\em truly} super-linear lower bound (i.e. 
an $n^{1+\\Omega(1)}$ bound) for any of the above problems is extremely interesting. \n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Technical Overview of Algorithms}\n\\label{sec:overview}\n\n\n\\subsection{The Blocking-Flow Framework}\nIn this section, we give a high-level overview of our algorithms.\nWe will focus on the dynamic-rank-oracle model (\\cref{def:dyn-oracle}), and sketch how to efficiently implement the ``blocking flow''\\footnote{Similar to the Hopcroft-Karp's \\cite{HopcroftK73} bipartite matching and Dinic's \\cite{dinic1970algorithm} maximum flow algorithms.} matroid intersection algorithms of \\cite{GabowS85,cunningham1986improved,chakrabarty2019faster,nguyen2019note} in this model.\nAs such, we briefly recap how the $\\tO(n\\sqrt{r})$ rank-query (in the traditional oracle model) algorithm of \\cite{chakrabarty2019faster} works first, and then explain how to implement their framework in the new dynamic oracle model with the same cost.\n\nTheir algorithm, like most of the matroid intersection algorithms, is based on repeatedly finding \\emph{augmenting paths} in \\emph{exchange graphs} (see \\cref{sec:prelim} for a definition). Say we have already found some common independent set $S\\in \\cI_1\\cap \\cI_2$ (we start with $S = \\emptyset$). 
Then the \\emph{exchange graph} $G(S)$ is a directed bipartite graph in which finding an $(s,t)$-path exactly corresponds to increasing the size of $S$ by one.\nAccording to Cunningham's blocking-flow argument \\cite{cunningham1986improved}, if we always augment along the \\emph{shortest} augmenting path, the lengths of such augmenting paths are non-decreasing.\nMoreover, if the length of the shortest augmenting path in $G(S)$ is at least $1\/\\epsilon$, then the size of the current common independent set $S$ must be at least $\\left(1 - O(\\epsilon)\\right) r$ (i.e.\\ it is only $O(\\epsilon r)$ away from optimal).\nThus, the ``blocking flow''-style algorithms consist of two stages:\n\\begin{enumerate}\n\\item\nIn the first stage, they obtain a $(1 - \\epsilon)$-approximate solution by finding augmenting paths until their lengths become more than $O(1\/\\epsilon)$. This is done by running in phases, where in phase~$i$ they eliminate all augmenting paths of length $2i$ by finding a so-called ``blocking flow''---a maximal (not necessarily maximum) collection of compatible augmenting paths. Each such phase can be implemented using only $\\tO(n)$ rank queries, as shown in \\cite{chakrabarty2019faster}. This means that the first stage needs a total of $\\tO(n\/\\epsilon)$ rank queries (in the classic non-dynamic model).\n\n\\item\nIn the second stage, they find the remaining $O(\\epsilon r)$ augmenting paths one at a time. Each such augmentation can be found in $\\tO(n)$ rank queries, for a total of $\\tO(\\epsilon nr)$ queries for this stage.\n\\end{enumerate}\nUsing $\\epsilon = 1\/\\sqrt{r}$, \\cite{chakrabarty2019faster} obtains their $\\tO(n \\sqrt{r})$ rank-query exact matroid intersection algorithm.\nThe crux of how to implement the stages efficiently is a binary search trick to explore useful edges of the exchange graph quickly (e.g., to implement a breadth-first search on the graph). 
The exchange graph can have up to $\\Theta(nr)$ edges in total, but it is not necessary to find all of them. We will argue that this binary search trick (which issues queries far away from each other) can still be implemented in the \\emph{dynamic-oracle model}, with the use of some data structures.\n\n\n\\subsection{Matroid Intersection}\n\n\\paragraph{Binary Search Tree.}\nThe crux of why a breadth-first-search (BFS) and augmenting path searching can be implemented efficiently (in terms of the number of queries) in \\cite{chakrabarty2019faster} is that they show how to, for $S \\in \\cI$, $u \\in S$, and $X \\subseteq \\bar{S}$, discover an element $x \\in X$ with $(S \\setminus \\{u\\}) \\cup \\{x\\} \\in \\cI$ in $O(\\log{n})$ rank queries using binary search (such a pair $(u,x)$ is called an \\emph{exchange pair}, and corresponds to an edge in the exchange graph).\nThe idea is that such an $x$ exists in $X$ if and only if $\\mathsf{rank}((S \\setminus \\{u\\}) \\cup X) \\geq |S|$.\nThus, we can do a binary search over $X$: we split $X$ into two equally-sized subsets $X_1$ and $X_2$, and check if such an $x$ exists in $X_1$ via the above equation. If it does, then we recurse on $X_1$ to find $x$. Otherwise, such an $x$ must exist in $X_2$ (as it does in $X$), and so we recurse on $X_2$.\nTo make this process efficient in our new model, we pre-build a binary search tree over the elements of $X$, where the internal nodes contain all the query-sets we need. 
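As a concrete illustration, the binary search just described can be sketched in a few lines of Python (a hedged sketch, not the paper's implementation; `find_exchange` and the toy partition-matroid rank oracle are our own illustrative constructions):

```python
from collections import Counter

def partition_rank(colors, caps):
    """Toy rank oracle of a partition matroid: a set is independent
    iff it contains at most caps[c] elements of each color c."""
    def rank(X):
        counts = Counter(colors[x] for x in X)
        return sum(min(k, caps[c]) for c, k in counts.items())
    return rank

def find_exchange(rank, S, u, X):
    """Find some x in X with (S - {u}) + {x} independent, using
    O(log |X|) rank queries, or return None if no such x exists.
    Uses the fact that such an x exists iff rank((S - {u}) | X) >= |S|."""
    base = S - {u}
    if rank(base | X) < len(S):
        return None
    X = sorted(X)
    while len(X) > 1:
        X1, X2 = set(X[:len(X) // 2]), set(X[len(X) // 2:])
        # recurse into a half that still contains an exchange element
        X = sorted(X1) if rank(base | X1) >= len(S) else sorted(X2)
    return X[0]
```

For example, with elements $\{0,\dots,5\}$ colored a, a, b, b, c, c and capacity one per color, `find_exchange(rank, {0, 2}, 0, {1, 3, 4})` locates an element whose insertion restores independence after removing element $0$.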
That is, in the root node we have the query-set for $S\\cup X$, and in its two children the query-sets for $S\\cup X_1$ and $S\\cup X_2$, respectively.\n\nUsing this binary tree, one can simulate the binary search process as described above.\nSince what we need to do in a BFS is to (i) find a replacement element $x$ and (ii) mark $x$ as visited (thus effectively ``deactivating'' $x$ in $X$), each time we see $x$, we just need to remove $x$ from the $O(\\log{n})$ nodes on a root-to-leaf path, and thus the whole BFS algorithm runs in near-linear time as well.\n\n\\paragraph{Batching, Periodic Rebuilding, and Augmenting Sets.}\nThe above binary search tree is efficient when the common independent set $S$ is static. However, once we find an augmenting path, we need to update $S$. This means that every node in the binary search tree needs to be updated.\nIf done naively, this would need at least $\\Omega(nr)$ time, as there are up to $r$ augmentations, and rebuilding the tree takes $O(n)$ time.\nTherefore, we employ a batching approach here.\nThat is, we do not walk through every node and update it immediately when we see an update to $S$.\nInstead, we batch $k$ updates (for $k$ to be decided later) and pay an additional $O(k)$-factor every time we want to do a query in our tree. In other words, at some point, we might want to search for exchanges for a common independent set $S'$ (by doing queries like $(S' \\setminus \\{u\\}) \\cup X$ to find edges incident to $u$). Our binary tree might only have an outdated version $S$ (i.e.\\ store sets like $S\\cup X$). 
Then the cost of converting $S\\cup X$ to $(S'\\setminus \\{u\\}) \\cup X$ is $|S\\oplus S'| + 1$, which we assert is less than $k$.\nWhen this number exceeds $k$, we rebuild the binary search tree completely using the up-to-date $S'$ instead, in $\\tO(n)$ time.\n\nOver the whole run of the algorithm, there are only $O(r \\log r)$ updates to our common independent set $S$ (see, e.g., \\cite{cunningham1986improved,nguyen2019note}).\nHence, the total running time becomes\n\\[ \\underbrace{\\tO(nk\\epsilon^{-1})}_{\\text{Blocking-flow for $\\tO(\\epsilon^{-1})$ iterations}} + \\underbrace{\\tO(\\epsilon rn)}_{\\text{Remaining $\\tO(\\epsilon r)$ augmenting paths}} + \\underbrace{\\tO(nrk^{-1})}_{\\text{Rebuilding binary search trees}}, \\]\nwhich is $\\tO(nr^{2\/3})$ for $k = r^{1\/3}$ and $\\epsilon =r^{-1\/3}$.\n\nTo achieve the $\\tO(n\\sqrt{r})$ bound in our dynamic-rank-oracle model, there is one additional observation we need.\nBy the ``Augmenting Sets'' argument~\\cite{chakrabarty2019faster}, for each element $v$ that we want to query our tree, it suffices to consider changes to $S$ that are in the same distance layer as $v$ is (in a single blocking-flow phase).\nSince changes to $S$ are uniformly distributed among layers, when the $(s, t)$-distance in $G(S)$ is $d$, we only need to spend an additional $O(\\frac{k}{d})$-factor (instead of an $O(k)$-factor) when querying the binary search tree.\nThis brings our complexity down to\n\\[ \\tO\\left(n\\epsilon^{-1} + \\sum_{d = 1}^{1\/\\epsilon}\\frac{nk}{d}\\right) + \\tO(\\epsilon rn) + \\tO(nrk^{-1}), \\]\nwhere the first part is a harmonic sum which makes for $\\tO(n\\epsilon^{-1} + nk)$, and the total running time is $\\tO(n\\sqrt{r})$ for $k = r^{1\/2}$ and $\\epsilon = r^{-1\/2}$.\n\n\\subsection{Matroid Union}\n\nFor simplicity of the presentation in this overview, let's assume we are solving the $k$-fold matroid union problem and that $k$---the number of bases\\footnote{As an example, consider the problem 
of finding $k$ disjoint spanning trees of a graph.} we want to find---is constant.\nA standard black-box reduction from matroid intersection, combined with our algorithm outlined above, immediately gives us an $\\tO(n\\sqrt{r})$ bound in the dynamic-rank-oracle model.\nNevertheless, we show how to exploit certain properties of matroid union to speed this up to $\\tO(n + r\\sqrt{r})$, i.e.\\ \\emph{near-linear time} for sufficiently ``dense''\\footnote{We call matroids with $n \\gg r$ ``dense'' by analogy to the graphic matroids where $n$ denotes the number of edges and $r$ the number of vertices.} matroids.\n\nSuppose $\\cM = (U,\\cI)$ is the matroid we want to find $k$ disjoint bases for. The standard reduction to matroid intersection is that we create $k$ copies of all elements $u\\in U$. Then we define two matroids as follows:\n\\begin{itemize}\n\\item The first matroid $\\cM_1$ says that we only want to use one version of each element $u\\in U$. We set $\\cM_1 = (U\\times \\{1,\\ldots,k\\}, \\cI_\\text{part})$ to be the partition matroid defined as $S\\in \\cI_{\\text{part}}$ if and only if $|\\{(u,i)\\in S : i = x\\}| \\le 1$ for all $x$.\n\\item The second matroid $\\cM_2$ says that for each copy of the ground set $U$ we must pick an independent set according to $\\cM$. 
That is, we set $\\cM_2 = (U\\times \\{1,\\ldots,k\\},\\hat{\\cI})$ to be the \\emph{disjoint} $k$-fold union of $\\cM$, i.e.\\ $S\\in \\hat{\\cI}$ if and only if $\\{u : (u,i)\\in S\\}$ is independent in $\\cM$ for all $i$.\n\\end{itemize}\nFor a set $S$ which can be partitioned into $k$ disjoint independent sets, notice that in the exchange graph, the number of elements \\emph{not} in the first layer is bounded by $O(r)$.\nThis is because every $u\\in U$ that is not represented in $S$ will be in the first layer $L_1$ of the BFS tree.\nAs such, we can build the BFS layers starting from the second layer if we can identify all the elements in this second layer.\nThis can be done by checking for each $y$ \\emph{not} in the first layer whether $L_1$ contains an exchange element of $y$ (via computing the rank of $(S \\setminus \\{y\\}) \\cup L_1$; no need to do a binary search).\nAlthough binary search is not needed when identifying elements in the second layer, when going backward among layers to find an augmenting path $P$, we still have to find the exact element in the first layer which can be the first element of $P$, since it will decide which augmenting paths remain ``compatible'' later.\nThis inspires us to maintain two separate binary search trees: one, of size $O(r)$, for finding edges from the second layer and onward, and the other, of size $O(n)$, for finding the first elements of the augmenting paths.\nStill, doing a binary search for each element in the first layer results in a total of $O(r\\epsilon^{-1})$ queries to the binary search tree, which is too much.\nTo reduce the number of queries down to $\\tO(r)$, we note that only binary searches which correspond to the actual augmenting paths will succeed, i.e., reach the leaf nodes of the binary search tree.\nSince there are at most $O(r\/d)$ augmenting paths when the $(s, t)$-distance in $G(S)$ is $d$, we only need to do $O(r\/d)$ queries to the binary search tree; other queries can be blocked by first 
checking if their corresponding exchange elements exist in the first layer.\nThis results in a running time of $\\tO(n + r\\sqrt{n})$ (note: $\\sqrt{n}$ and not $\\sqrt{r}$), which already matches Gabow's algorithm for $k$-disjoint spanning tree~\\cite{gabow1988forests}. \n\n\\paragraph{Toward $\\tO(n + r\\sqrt{r})$ for Matroid Union.}\nThe bottleneck of the above algorithm is that we need to do binary searches over (and hence rebuild periodically) the tree data structure for the first layer (of size $\\Omega(n)$).\nIf we can reduce the size of this tree down to $O(r)$, then the running time would be $\\tO(n + r\\sqrt{r})$.\nThis suggests that we might want to somehow ``sparsify'' the first layer.\nIndeed, for a single augmenting path, we only need a basis of the first layer.\nAs a concrete example, consider the case of a graphic matroid: Given a forest $S$, an edge $e \\in S$, and the set of non-tree edges $E \\setminus S$, we want to find a ``replacement'' edge $e^\\prime$ in $E \\setminus S$ for $e$ which ``restores'' the connectivity of $S - e + e^\\prime$.\nIn this case, it suffices to only consider a spanning forest (i.e.\\ ``basis'') $B$ of $E \\setminus S$, in the sense that such a replacement edge exists in $E \\setminus S$ if and only if it exists in this spanning forest $B \\subseteq E\\setminus S$.\n\nMoreover, note that after each augmentation a single element will be removed from the first layer.\nThus, if we can maintain a decremental basis of the first layer, we can build our binary search tree data structure dynamically on top of this basis and get the desired time bound.\n\n\\paragraph{Maintaining a Basis in a Matroid.}\nOur data structure for maintaining a basis is inspired by the dynamic minimum spanning tree algorithm of \\cite{Frederickson85}, in combination with the sparsification trick of \\cite{EppsteinGIN97}. 
It uses $\\tO(n)$ time to initialize, and then $\\tO(\\sqrt{r})$ dynamic rank queries\\footnote{In the application of our matroid union algorithm, there will only be $\\tO(r)$ updates, so this is efficient enough for our final $\\tO(n+r\\sqrt{r})$ algorithm.} per deletion.\nIt also supports maintaining a min-weight basis.\n\n\nLet $L \\subseteq U$ with $|L| = O(n)$ be the first layer, for which we want to maintain a dynamic basis.\nIn the preprocessing stage, we split $L$ into $\\sqrt{n}$ blocks $L_1, L_2, \\ldots, L_{\\sqrt{n}}$ of size roughly $\\sqrt{n}$ and compute the basis of $L$ from left to right.\nWe also build the ``prefix sums'' of these $\\sqrt{n}$ blocks so that we can quickly access\/query sets of the form $L_1 \\cup L_2 \\cup \\cdots \\cup L_k$ for all values of $k$.\nWhen we remove an element $x$ from $L_i$, we first update the prefix sums in $O(\\sqrt{n})$ time.\nIf $x$ is not in the basis we currently maintain, then nothing additional needs to be done.\nOtherwise, we have to find the ``first'' replacement element, which is guaranteed to be located in blocks $L_i, \\ldots, L_{\\sqrt{n}}$.\nThe block $L_j$ in which the replacement element lies can be identified simply by inspecting the ranks of the prefix sums, and after that, we go through the elements in that block to find the exact element.\nNote that blocks after $L_j$ need not be updated, as for them it does not matter what basis we picked among blocks $L_1$ to $L_j$. \nThis gives us an $O(\\sqrt{n})$-update time algorithm for maintaining a basis of a matroid.\n\nTo get a complexity of $O(\\sqrt{r}\\log n)$, we show that a sparsification structure similar to that of \\cite{EppsteinGIN97} for dynamic graph algorithms also works for arbitrary matroids. The sparsification is a balanced binary tree over the $n$ elements, where in each node we have an instance of our (un-sparsified) underlying data structure to maintain a basis consisting of elements in the subtree rooted at the node. 
Only elements part of the basis of a node are propagated upwards to the parent node. This means that in each instance of our underlying data structure we work over a ground set of size at most $2r$.\nThus, each update corresponds to at most two updates (a single insertion and deletion) to at most $O(\\log n)$ (which is the height of the tree) nodes of the tree, each costing $O(\\sqrt{r})$ dynamic rank queries in order to maintain the basis at this node.\nThis results in the desired time bound.\n\n\n\n\\section{Preliminaries} \\label{sec:prelim}\n\n\n\\paragraph{Notation.}\nWe use standard set notation.\nIn addition to that, for two sets $X$ and $Y$, we use $X + Y$ to denote $X \\cup Y$ (when $X \\cap Y = \\emptyset$) and $X - Y$ to denote $X \\setminus Y$ (when $Y \\subseteq X$).\nFor an element $v$, $X + v$ and $X - v$ refer to $X + \\{v\\}$ and $X - \\{v\\}$, respectively.\nLet $X \\oplus Y$ denote the symmetric difference of $X$ and $Y$.\n\n\n\\paragraph{Matroid.}\nIn this paper, we use the standard notion of matroids which is defined as follows.\n\n\\begin{definition}\nA \\emph{matroid} $\\cM = (U, \\cI)$ is defined by a tuple consisting of a finite \\emph{ground set} $U$ and a non-empty family of \\emph{independent sets} $\\cI \\subseteq 2^U$ such that the following properties hold.\n\\begin{itemize}\n \\item \\textbf{Downward closure:} If $S \\in \\cI$, then any subset $S^\\prime \\subseteq S$ is also in $\\cI$.\n \\item \\textbf{Exchange property:} For any two sets $S_1, S_2 \\in \\cI$ with $|S_1| < |S_2|$, there exists an $x \\in S_2 \\setminus S_1$ such that $S_1 + x \\in \\cI$.\n\\end{itemize}\n\\label{def:matroid}\n\\end{definition}\n\nLet $U$ be the ground set of a matroid $\\cM$.\nFor $S \\subseteq U$, let $\\bar{S}$ denote $U \\setminus S$.\nFor $X \\subseteq U$, the \\emph{rank} of $X$, denoted by $\\mathsf{rank}(X)$, is the size of the largest independent set contained in $X$, i.e., $\\mathsf{rank}(X) = \\max_{S \\in \\cI}|X \\cap S|$.\nThe 
\\emph{rank} of a matroid $\\cM = (U, \\cI)$ is the rank of $U$.\nWe let $r \\leq n$ denote the rank of the input matroids.\nWhen the input consists of more than one matroid (e.g., in the matroid union problem), let $\\mathsf{rank}_i$ denote the rank function of the $i^{\\scriptsize \\mbox{{\\rm th}}}$ matroid.\nA \\emph{basis} of $X$ is an independent set $S \\subseteq X$ with $|S| = \\mathsf{rank}(X)$.\nA basis of $\\cM$ is a basis of $U$.\nA \\emph{circuit} is a minimal dependent set.\nThe \\emph{span} of $X$ contains the elements whose addition to $X$ does not increase its rank, i.e., $\\mathsf{span}(X) = \\{u \\in U \\mid \\mathsf{rank}(X \\cup \\{u\\}) = \\mathsf{rank}(X)\\}$.\n\n\\begin{fact}\n The rank function is submodular.\n That is, $\\mathsf{rank}(X) + \\mathsf{rank}(Y) \\geq \\mathsf{rank}(X \\cap Y) + \\mathsf{rank}(X \\cup Y)$ holds for each $X, Y \\subseteq U$.\n\\end{fact}\n\n\\begin{fact}[see, e.g., {\\cite[Lemma 1.3.6]{price2015}}]\n $\\mathsf{rank}(A) = \\mathsf{rank}(\\mathsf{span}(A))$ holds for every $A \\subseteq U$.\n \\label{fact:span-does-not-increase-rank}\n\\end{fact}\n\n\\begin{lemma}\n For two sets $X, Y$ and their bases $S_X, S_Y$, it holds that $\\mathsf{rank}(S_X + S_Y) = \\mathsf{rank}(X + Y)$.\n \\label{lemma:basis-rank}\n\\end{lemma}\n\\begin{proof}\n Since $X, Y \\subseteq \\mathsf{span}(S_X + S_Y)$, we have $S_X + S_Y \\subseteq X + Y \\subseteq \\mathsf{span}(S_X + S_Y)$.\n The lemma then follows from $\\mathsf{rank}(S_X + S_Y) = \\mathsf{rank}(\\mathsf{span}(S_X + S_Y))$ using \\cref{fact:span-does-not-increase-rank}.\n\\end{proof}\n\n\\paragraph{Exchange Graph.}\nOur algorithms for matroid intersection and union will be heavily based on finding augmenting paths in exchange graphs.\n\n\\begin{definition}[Exchange Graph]\nFor two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$ over the same ground set and an $S \\in \\cI_1 \\cap \\cI_2$, the \\emph{exchange graph} with respect to $S$ is a directed bipartite 
graph $G(S) = (U \\cup \\{s, t\\}, E_S)$ with $s, t \\not\\in U$ being two distinguished vertices and $E_S = E_1 \\cup E_2 \\cup E_s \\cup E_t$, where\n\\begin{align*}\n E_1 &= \\{(x, y) \\mid x \\in S, y \\not\\in S,\\;\\text{and}\\;S - x + y \\in \\cI_1\\}, \\\\\n E_2 &= \\{(y, x) \\mid x \\in S, y \\not\\in S,\\;\\text{and}\\;S - x + y \\in \\cI_2\\}, \\\\\n E_s &= \\{(s, x) \\mid S + x \\in \\cI_1 \\},\\;\\text{and} \\\\\n E_t &= \\{(x, t) \\mid S + x \\in \\cI_2 \\}.\n\\end{align*}\n\\label{def:exchange-graph}\n\\end{definition}\n\nThe \\emph{distance layers} of $G(S)$ are the sets $L_1, \\ldots, L_{d_{G(S)}(s, t) - 1}$, where $L_\\ell$ consists of the elements in $U$ that are at distance $\\ell$ from $s$ in $G(S)$.\nMost matroid intersection algorithms, including ours, are based on augmenting a common independent set with an augmenting path in $G(S)$ until such a path does not exist.\nThe following lemma certifies the correctness of this approach.\n\n\\begin{lemma}[Augmenting Path]\n Let $P$ be a shortest $(s, t)$-path\\footnote{In fact, $P$ only needs to be ``chordless''~\\cite{quadratic2021}, i.e., without shortcuts. 
Nonetheless, a shortest $(s, t)$-path suffices for our rank-query algorithms.} of $G(S)$.\n Then, the set $S^\\prime := S \\oplus (V(P) \\setminus \\{s, t\\})$ is a common independent set with $|S^\\prime| = |S| + 1$.\n On the other hand, if $t$ is unreachable from $s$ in $G(S)$, then $S$ is a largest common independent set.\n \\label{lemma:augmenting-path}\n\\end{lemma}\n\nWe write $S \\oplus P$, where $P$ is an augmenting path in $G(S)$, for the common independent set $S^\\prime := S \\oplus (V(P) \\setminus \\{s, t\\})$ obtained by augmenting $S$ along $P$.\nLet $d_{G(S)}(u, v)$ denote the $(u, v)$-distance in $G(S)$.\nWhen $S$ is clear from context, let $d_t$ denote $d_{G(S)}(s, t)$.\nThe following lemma states that if $d_{G(S)}(s, t)$ is large, then $S$ is close to being optimal.\n\n\\begin{lemma}[\\cite{cunningham1986improved}]\n If $S \\in \\cI_1 \\cap \\cI_2$ satisfies $d_{G(S)}(s, t) \\geq d$, then $|S|$ is at least $\\left(1 - O(\\frac{1}{d})\\right)r$.\n \\label{lemma:approx}\n\\end{lemma}\n\nThe following bound on the total length of shortest augmenting paths will be useful for our analysis.\n\n\\begin{lemma}[\\cite{cunningham1986improved}]\n If we solve matroid intersection by repeatedly finding the shortest augmenting paths, then the sum of the lengths of these augmenting paths is $O(r\\log r)$.\n \\label{lemma:augmenting-path-lengths}\n\\end{lemma}\n\n\\begin{lemma}[\\cite{cunningham1986improved,price2015,chakrabarty2019faster}]\n If we augment along a shortest $(s, t)$-path in $G(S)$ to obtain $S^\\prime$, then for each $u \\in U$, the following hold (let $d := d_{G(S)}$ and $d^\\prime := d_{G(S^\\prime)}$).\n \\begin{enumerate}\n \\item If $d(s, u) < d(s, t)$, then $d^\\prime(s, u) \\geq d(s, u)$. Similarly, if $d(u, t) < d(s, t)$, then $d^\\prime(u, t) \\geq d(u, t)$.\n \\item If $d(s, u) \\geq d(s, t)$, then $d^\\prime(s, u) \\geq d^\\prime(s, t)$. 
Similarly, if $d(u, t) \\geq d(s, t)$, then $d^\\prime(u, t) \\geq d^\\prime(s, t)$.\n \\end{enumerate}\n \\label{lemma:monotone}\n\\end{lemma}\n\n\\paragraph{Augmenting Sets.}\nThe following notion of \\emph{augmenting sets}, introduced by \\cite{chakrabarty2019faster}, models a collection of ``mutually compatible'' augmenting paths, i.e., paths that can be augmented sequentially without interfering with each other.\n\n\\begin{definition}[Augmenting Set~{\\cite[Definition 24]{chakrabarty2019faster}}]\n Let $S \\in \\cI_1 \\cap \\cI_2$ satisfy $d_{G(S)}(s, t) = d_t$ and let $L_1, L_2, \\ldots, L_{d_t - 1}$ be the distance layers of $G(S)$.\n A collection $\\Pi := (D_1, D_2, \\cdots, D_{d_t - 1})$ is an \\emph{augmenting set} in $G(S)$ if\n \\begin{enumerate}[noitemsep, label=(\\roman*)]\n \\item $D_\\ell \\subseteq L_\\ell$ holds for each $1 \\leq \\ell < d_t$,\n \\item $|D_1| = |D_2| = \\cdots = |D_{d_t - 1}|$,\n \\item $S + D_1 \\in \\cI_1$,\n \\item $S + D_{d_t - 1} \\in \\cI_2$,\n \\item $S - D_\\ell + D_{\\ell + 1} \\in \\cI_1$ holds for each \\emph{even} $1 \\leq \\ell < d_t - 1$, and\n \\item $S - D_{\\ell + 1} + D_{\\ell} \\in \\cI_2$ holds for each \\emph{odd} $1 \\leq \\ell < d_t - 1$.\n \\end{enumerate}\n \\label{def:augmenting-sets}\n\\end{definition}\n\nOne can think of the concept of augmenting sets as a generalization of augmenting paths.\nIndeed, an augmenting path is an augmenting set where $|D_1| = \\cdots = |D_{d_t - 1}| = 1$.\nThe term ``mutually compatible'' augmenting paths is formalized as follows.\n\n\\begin{definition}[Consecutive Shortest Paths~{\\cite[Definition 28]{chakrabarty2019faster}}]\n A collection of vertex-disjoint shortest $(s, t)$-paths $\\cP = (P_1, \\ldots, P_k)$ in $G(S)$ is a collection of \\emph{consecutive shortest paths} if $P_i$ is a shortest augmenting path in $G(S \\oplus P_1 \\oplus \\cdots \\oplus P_{i - 1})$ for each $1 \\leq i \\leq k$.\n\\end{definition}\n\nThe following structural lemmas of 
\\cite{chakrabarty2019faster} will be useful for us, particularly in deriving \\cref{lemma:key} in \\cref{sec:matroid-intersection}.\n\n\\begin{lemma}[{\\cite[Theorem 25]{chakrabarty2019faster}}]\n Let $\\Pi := (D_1, \\ldots, D_{d_t - 1})$ be an augmenting set in $G(S)$.\n Then, $S^\\prime := S \\oplus \\Pi := S \\oplus D_1 \\oplus \\cdots \\oplus D_{d_t - 1}$ is a common independent set.\n \\label{lemma:aug-set}\n\\end{lemma}\n\n\nFor two augmenting sets $\\Pi = (D_1, \\ldots, D_{d_t - 1})$ and $\\Pi^\\prime = (D_1^\\prime, \\ldots, D_{d_t - 1}^\\prime)$, we use $\\Pi \\subseteq \\Pi^\\prime$ to denote that $D_{\\ell} \\subseteq D_{\\ell}^\\prime$ holds for each $1 \\leq \\ell < d_t$.\nIn this case, let $\\Pi^\\prime \\setminus \\Pi := (D_1^\\prime \\setminus D_1, \\ldots, D_{d_t - 1}^\\prime \\setminus D_{d_t - 1})$.\nWe will hereafter abuse notation and let $\\Pi$ also denote the set of elements $D_1 \\cup \\cdots \\cup D_{d_t - 1}$ in it.\nIn particular, $U \\setminus \\Pi$ denotes $U \\setminus \\left(D_1 \\cup \\cdots \\cup D_{d_t - 1}\\right)$.\n\n\\begin{lemma}[{\\cite[Theorem 33]{chakrabarty2019faster}}]\n For two augmenting sets $\\Pi \\subseteq \\Pi^\\prime$ in $G(S)$, $\\Pi^\\prime \\setminus \\Pi$ is an augmenting set in $G(S \\oplus \\Pi)$.\n \\label{lemma:difference-is-aug-set}\n\\end{lemma}\n\n\\begin{lemma}[{\\cite[Theorem 29]{chakrabarty2019faster}}]\n Given a collection of consecutive shortest paths $P_1, \\ldots, P_k$ in $G(S)$, where $P_i = (s, a_{i,1}, \\ldots, a_{i, d_t - 1}, t)$, the collection $\\Pi = (D_1, \\ldots, D_{d_t - 1})$, where $D_i = \\{a_{1,i}, \\ldots, a_{k, i}\\}$, is an augmenting set in $G(S)$.\n \\label{lemma:aug-set-from-aug-paths}\n\\end{lemma}\n\nThe converse of \\cref{lemma:aug-set-from-aug-paths} also holds.\n\n\\begin{lemma}[{\\cite[Theorem 34]{chakrabarty2019faster}}]\n Given an augmenting set $\\Pi$ in $G(S)$, there is a collection of consecutive shortest paths $P_1, \\ldots, P_k$ in $G(S)$ where $P_i = (s, a_{i,1}, 
\\ldots, a_{i,d_t - 1}, t)$ such that $D_i = \\{a_{1,i}, \\ldots, a_{k,i}\\}$.\n \\label{lemma:aug-paths-from-aug-set}\n\\end{lemma}\n\n\\begin{remark}\n Note that \\cref{lemma:aug-set-from-aug-paths,lemma:aug-paths-from-aug-set} are not equivalent to the exact statements of {\\cite[Theorems 29 and 34]{chakrabarty2019faster}} (in particular, they did not specify how $\\Pi$ and $P_i$ are constructed), but our versions are clear from their proof.\n\\end{remark}\n\n\n\n\n\\begin{claim}\n Let $S \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S)}(s, t) = d_t$ and $\\Pi \\subseteq \\Pi^\\prime$ be two augmenting sets in $G(S)$.\n Let $u \\not\\in \\Pi^\\prime$ be an element.\n If $u$ is not on any augmenting path of length $d_t$ in $G(S \\oplus \\Pi)$, then $u$ is not on any augmenting path of length $d_t$ in $G(S \\oplus \\Pi^\\prime)$ either.\n \\label{claim:no-path}\n\\end{claim}\n\n\\begin{proof}\n Let $S^\\prime := S \\oplus \\Pi$ and $S^{\\prime\\prime} := S \\oplus \\Pi^\\prime$.\n Since $S^{\\prime\\prime}$ can be obtained by augmenting $S^\\prime$ along a series of shortest augmenting paths (by \\cref{lemma:difference-is-aug-set,lemma:aug-paths-from-aug-set}), the claim follows from the fact that the $(s, u)$-distance and $(u, t)$-distance are monotonic (\\cref{lemma:monotone}).\n\\end{proof}\n\n\\begin{claim}\n Let $\\Pi = (D_1, \\ldots, D_{d_t - 1})$ be an augmenting set in $G(S)$ and $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ be an augmenting path in $G(S \\oplus \\Pi)$.\n Then, $\\Pi^\\prime = (D_1 + a_1, \\ldots, D_{d_t - 1} + a_{d_t - 1})$ is an augmenting set in $G(S)$.\n \\label{claim:put-aug-path-in-aug-set}\n\\end{claim}\n\n\\begin{proof}\n This directly follows from \\cref{lemma:aug-paths-from-aug-set,lemma:aug-set-from-aug-paths}.\n\\end{proof}\n\n\n\n\n\\paragraph{Using Dynamic Oracle.}\nIn the following sections except for \\cref{appendix:dynamic-matroid-intersection-ind} where we discuss independence-query algorithms, all algorithms and data structures will 
run in the \\emph{dynamic-rank-oracle} model (see \\cref{def:dyn-oracle}).\nIn other words, we will simply write ``in $t$ time'' for ``in $t$ time and dynamic rank queries''.\nWe will use the term \\emph{query-sets} to refer to the sets $S_i$ in \\cref{def:dyn-oracle}.\nIn particular, constructing a query-set means building the corresponding set from $S_0 = \\emptyset$ with the $\\textsc{Insert}(\\cdot)$ operation.\nInsertion\/Deletion of an element into\/from a query-set is done via the $\\textsc{Insert}$\/$\\textsc{Delete}$ operations.\nUsing the $\\textsc{Query}$ operation, we assume that we know the ranks of all the query-sets we construct in our algorithms.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sl:intro}\nDuring the past few decades, science has made great progress in understanding the biology of cancer \\cite{p53review_yu2014small,cancer_advances_2014}. The latest technological tools allow assaying tens of thousands of genes simultaneously, providing large volumes of data to search for cancer biomarkers \\cite{microarray_techn_2013,venter2001sequence}. Ideally, scientists would like to extract some qualitative signal from this data in the hope of better understanding the underlying biological processes. At the same time, it is desirable that the extracted signal is robust in the presence of noise and errors while effectively describing the dataset \\cite{valarmathi2014noise,pozhitkov2014revised}.\n\nOne suite of methods which has enjoyed an increased level of success in recent years is based on concepts from the mathematical field of algebraic topology, in particular, {\\em persistent homology} \\cite{EdLeZo2002,Gh2008barcodes,CaZoCoGu2004}. The key benefits of using topological features to describe data include coordinate-free description of shape, robustness in the presence of noise, and invariance under many transformations, as well as highly compressed representations of structures \\cite{Lumetal2013}. 
Analysis of the homology of data allows detection of high-dimensional features based on {\\em connectivity}, such as loops and voids, which could not be detected using traditional methods such as clustering \\cite{Ca2009,Gh2008barcodes}. Further, identifying the most {\\em persistent} of such features promises to pick up the significant shapes while ignoring noise \\cite{Gh2008barcodes}. Such analysis of the stable topological features of the data could provide helpful insights as demonstrated by several studies, including some recent ones on cancer data \\cite{dewoshkin2010,seeman2012}.\n\n\\subsection{Our contributions} \\label{contrib}\nWe present a new method of topological analysis of various cancer gene expression data sets. Our method belongs to the category of exploratory data analysis. In order to more efficiently handle the huge number of genes whose expressions are recorded in such data sets (typically in the order of tens of thousands), we transpose the data and analyze it in its {\\em dual space}, i.e., with each gene represented in the much lower dimensional (in the order of a few hundred) sample space. We then sample critical genes as guided by the topological analysis. In particular, we choose a small subset (typically $120$--$200$) of genes as {\\em landmarks} \\cite{desilva2004}, and construct a family of nested simplicial complexes, indexed by a proximity parameter. We observe topological features (loops) in the first homology group ($\\homo_1$) that remain significant over a large range of values of the proximity parameter (we consider small loops as topological noise). By repeating the procedure for different numbers of landmarks, we select stable features that persist over large ranges of both the number of landmarks and the proximity parameter. We then further analyze these loops with respect to their membership, working under the hypothesis that their topological connectivity could reveal functional connectivity. 
Through a search of the scientific literature, we establish that many loop members have been implicated in cancer biogenesis. We applied our methodology to five different data sets from a variety of cancers (brain, breast, ovarian, and acute myeloid leukemia (AML)), and observed that in each of the five cases, many members of the significant loops in $\\homo_1$ have been identified in the literature as having connections to cancer.\n\nOur method is capable of identifying geometric properties of the data that cannot be found by traditional algorithms such as clustering \\cite{cluster_genes_review2012,cluster_review2010}. By employing tools from algebraic topology, our method goes beyond clustering and detects connected components around holes (loops) in the data space. Our methodology also differs from techniques such as graph mining \\cite{cluster_massive_graphs_siam2013,graph_mining_book_cook2006} or manifold learning \\cite{Isomap_original2000,LLE_original2000,HessianLLE_original2003,laplacian_embedding_original2001,diffusion_maps2006}. Graph algorithms, while identifying connectivity, miss a wealth of information beyond clustering. Manifold learning algorithms assume that the data comes from an intrinsically low-dimensional space, and their goal is to find a low-dimensional embedding. We do not make such assumptions about the data.\n\n\n\n\\subsection{Related work}\\label{prev_work}\nSeveral applications of tools from algebraic topology to analyze complex biological data from the domain of cancer research have been reported recently. DeWoshkin et al.~\\cite{dewoshkin2010} used computational homology for analysis of comparative genomic hybridization (CGH) arrays of breast cancer patients. They analyzed DNA copy numbers by looking at the characteristics of the \\(\\homo_0\\) group. 
Using $Betti_0$ (\\(\\beta_0\\)) numbers, which are the ranks of the zeroth homology groups ($\\homo_0$), their method was able to distinguish between recurrent and non-recurrent patient groups.\n\nLikewise, Seeman et al.~\\cite{seeman2012} applied persistent homology tools to analyze cancer data. Their algorithm starts with a set of genes that are preselected using the {\\it nondimensionalized standard deviation} metric \\cite{seeman2012}. Then, by applying persistent homology analysis to the \\(\\homo_0\\) group, the patient set is recursively subdivided to yield three subgroups with distinct cancer types. By inspecting the cluster membership, a core subset of genes is selected which allows sharper differentiation between the cancer subtypes.\n\nAnother example of topological data analysis is the work of Nicolau et al.~\\cite{nicolau2011}. Their method termed {\\em progression analysis of disease} (PAD) is applied to differentiate three subgroups of breast cancer patients. PAD is a combination of two algorithms -- disease-specific genome analysis (DSGA) \\cite{dsga2007} and the topology-based algorithm termed {\\it Mapper} \\cite{mapper2007}. First, DSGA transforms the data by decomposing it into two components -- the disease component and the healthy state component, where the disease component is a vector of residuals from a {\\it Healthy State Model} (HSM) linear fit. A small subset of genes that show a significant deviation from the healthy state are retained and passed on to Mapper, which applies a specified filter function to reveal the topology of the data. Mapper identified three clusters corresponding to ER$+$, ER$-$, and normal-like subgroups of breast cancer. This work is somewhat different from the previous two papers mentioned above because it does not explicitly analyze features of any of the homology groups.\n\nAll studies mentioned above utilized $\\beta_0$ numbers, thus performing analyses that are topologically equivalent to clustering. 
In contrast, our method relies on $\\beta_1$ numbers (ranks of $\\homo_1$ groups). One can think of $\\beta_1$ numbers as characterizing the loops constructed from connected components (genes) around ``holes'' in the data. The underlying idea is that connections around holes may imply connections between the participating genes and biological functions. Also, most other methods use some data preprocessing to limit the initial pool of candidate genes. Our method selects the optimal number of genes as part of the analysis itself.\n\n\\section{Mathematical background} \\label{sl:theory}\n\nWe review some basic definitions from algebraic topology used in our work. For details, refer to one of the standard textbooks \\cite{Munkres1984,EdHa2009}. Illustrations of simplices, persistent homology, and identification of topological features from landmarks are available in the literature \\cite{Gh2008barcodes, desilva2004, javaplex2011}.\n\n\\subsection{Simplices and simplicial complexes} \\label{sl:simplicial_complexes}\nTopology represents the shape of point sets using combinatorial objects called simplicial complexes. Consider a finite set of points in \\(\\R^n\\). More generally, the space need not be Euclidean; we just need a distance to be defined between every pair of points. The building blocks of the combinatorial objects are {\\em simplices}, which are made of collections of these points.\n\nFormally, the convex hull of $k+1$ affinely independent points $\\{v_{0}, v_{1}, \\dots, v_{k}\\}$ is a $k$-simplex. The dimension of the simplex is $k$, and the $v_{j}$s are its vertices. Thus, a vertex is a $0$-simplex, a line segment connecting two vertices is a $1$-simplex, a triangle is a $2$-simplex, and so on. Observe that each $p$-simplex $\\sigma$ is made of lower dimensional simplices, i.e., $k$-simplices $\\tau$ with \\(k \\leq p\\). Here, $\\tau$ is called a {\\em face} of $\\sigma$, denoted $\\tau \\subset \\sigma$. 
A collection of simplices satisfying two regularity properties forms a {\\em simplicial complex}. The first property is that each face of every simplex in a simplicial complex $K$ is also in $K$. Second, each pair of simplices in $K$ intersects in a face of both, or not at all. Due to these properties, algorithms to study shape and topology run much more efficiently on the simplicial complex than on the original point set.\n\nTo construct a simplicial complex on a given point set, one typically considers balls of a given diameter $\\epsilon$ (called $\\epsilon$-balls) centered at each point. The two widely studied complexes of this form are the {\\em \\v{C}ech} and the {\\em Vietoris-Rips} complexes. A $k$-simplex is included in the \\v{C}ech complex if there exists an $\\epsilon$-ball containing all its $k+1$ vertices. Such a simplex is included in the Vietoris-Rips complex \\(R_{\\epsilon} \\) if each pair of its vertices is within a distance $\\epsilon$. As such, Vietoris-Rips complexes are somewhat easier to construct, since we only need to inspect pairwise, and not higher order, distances.\n\nHowever, both the \\v{C}ech and the Vietoris-Rips complexes have as vertex set all of the points in the data. Such complexes are computationally intensive for datasets of tens of thousands of points. A feasible option is to work with an approximation of the topological space of interest \\cite{desilva2004}. The key idea is to select only a small subset of points (landmarks), while the rest of the points serve as {\\it witnesses} to the existence of simplices. Termed {\\em witness complexes}, such complexes have a number of advantages. They are easily computed, adaptable to arbitrary metrics, and do not suffer from the curse of dimensionality. They also provide a less noisy picture of the topological space. 
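As a concrete illustration of the pairwise inclusion rule, the following minimal sketch builds a Vietoris-Rips complex by brute-force enumeration (our own toy code, not the JavaPlex implementation used in this paper; it is only practical for very small point sets):

```python
from itertools import combinations
import math

def rips_complex(points, eps, max_dim=2):
    """Vietoris-Rips complex R_eps: a k-simplex is included iff every pair
    of its k+1 vertices lies within distance eps -- only pairwise distance
    checks are needed, unlike the Cech complex."""
    n = len(points)
    def close(i, j):
        return math.dist(points[i], points[j]) <= eps
    simplices = [(i,) for i in range(n)]  # 0-simplices (vertices)
    for k in range(1, max_dim + 1):       # edges, triangles, ...
        simplices += [c for c in combinations(range(n), k + 1)
                      if all(close(i, j) for i, j in combinations(c, 2))]
    return simplices

# Corners of a unit square: with eps = 1 only the four sides appear
# (4 vertices + 4 edges); with eps = 1.5 the diagonals (length ~1.414)
# enter, creating all four triangles.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```

Growing $\epsilon$ from 1 to 1.5 in this toy example already previews the filtration idea: the loop formed by the four sides is filled in once the diagonals appear.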
We use the {\\it lazy Witness complex}, in which conditions for inclusion are checked only for pairs and not for higher order groups of points \\cite{desilva2004}, analogous to the distinction between the constructions of Vietoris-Rips and \\v{C}ech complexes.\n\nWe employ the heuristic landmark selection procedure called {\\it sequential maxmin} to select a representative set of landmark points \\cite{desilva2004,adams_carlsson_2009,carlsson_etal_2008}. The first landmark is selected randomly from the point set $S$. Then the algorithm proceeds inductively. If \\(L_{i-1}\\) is the set of the first \\(i-1\\) landmarks, then the $i$-th landmark is the point of $S$ which maximizes the function \\(d(x, L_{i-1})\\), the distance between the point \\(x\\) and the set \\(L_{i-1}\\). We vary the total number of landmarks, exploring each of the resulting lazy witness complexes. The final number of landmarks is chosen so that the resulting witness complex maximally exposes topological features.\n\n\\subsection{Persistent homology}\\label{persist_homo}\n\nHomology is the concept from algebraic topology which captures how space is {\\em connected}. Thus, homology can be used to characterize interesting features of a simplicial complex such as connected clusters, holes, enclosed voids, etc., which could reveal underlying relationships and behavior of the data set. Homology of a space can be described by its {\\em Betti numbers}. The $k$-th Betti number \\(\\beta_k\\) of a simplicial complex is the rank of its $k$-th homology group. For $k=0,1,2$, the $\\beta_k$ have intuitive interpretation. \\(\\beta_{0}\\) represents a number of connected components, \\(\\beta_{1}\\) the number of holes, and \\(\\beta_{2}\\) the number of enclosed voids. For example, a sphere has $\\beta_0=1, \\beta_1=0, \\beta_2=1$, as it has one component, no holes, and one enclosed void.\n\nConsider the formation of a simplicial complex using balls of diameter $\\epsilon$ centered on points in a set. 
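The sequential maxmin selection described above admits a compact sketch (again our own illustration, not the JavaPlex routine; Euclidean distance and points stored as tuples are assumptions):

```python
import math
import random

def sequential_maxmin(points, n_landmarks, seed=0):
    """Sequential maxmin: the first landmark is chosen at random; each
    subsequent landmark is the point maximizing d(x, L), its distance to
    the set L of landmarks chosen so far."""
    rng = random.Random(seed)
    landmarks = [rng.randrange(len(points))]
    # dist_to_L[i] = distance from point i to the current landmark set
    dist_to_L = [math.dist(p, points[landmarks[0]]) for p in points]
    while len(landmarks) < n_landmarks:
        nxt = max(range(len(points)), key=lambda i: dist_to_L[i])
        landmarks.append(nxt)
        dist_to_L = [min(dist_to_L[i], math.dist(points[i], points[nxt]))
                     for i in range(len(points))]
    return landmarks

# Points along a line: wherever the random first landmark falls, the second
# is always an extreme point, illustrating the spread-out behavior (and the
# known tendency of the procedure to pick outliers).
line = [(float(i),) for i in range(11)]
```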
For small \\(\\epsilon\\), the simplicial complex is just a set of disjoint vertices. For sufficiently large \\(\\epsilon\\), the simplicial complex becomes one big cluster. What value of \\(\\epsilon\\) reveals the ``correct'' structure? Persistent homology \\cite{EdLeZo2002,Gh2008barcodes} gives a rigorous response to this question. By increasing \\(\\epsilon\\), a sequence of nested simplicial complexes called a filtration is created, which is examined for attributes of connectivity and their robustness. Topological features appear and disappear as \\(\\epsilon\\) increases. The features which exist over a longer range of the parameter $\\epsilon$ are considered as signal, and short-lived features as noise \\cite{EdLeZo2002,zomo_carlsson_2005}. This formulation allows a visualization as a collection of barcodes (one in each dimension), with each feature represented by a bar. The longer the life span of a feature, the longer its bar. In the example barcodes in Figs.~\\ref{breast_evolve}--\\ref{aml170_dim25}, the x-axis represents the \\(\\epsilon\\) parameter, and the bars of persistent loops of interest are circled.\n\n\\section*{Research questions}\n\nOur approach could address several critical questions in the context of cancer data analysis. First, could we select a small subset of relevant genes while {\\em simultaneously} identifying robust nontrivial structure, i.e., topology, of the data? Most previous approaches require the selection of a subset of genes {\\em before} exploring the resulting structure, and hence limiting the generality. Second, could we elucidate higher order interactions (than clusters) between genes that could have potential implications for cancer biogenesis? Higher order structures such as loops could reveal critical subsets of genes with relevant nontrivial interactions, which together have implications to the cancer. 
Third, could this method work even when data is available from only a {\\em subset} of patients?\n\n\\section{Data}\\label{sl:data}\nWe analyzed five publicly available microarray datasets of gene expression from four different types of cancer -- breast, ovarian, brain, and acute myeloid leukemia (AML). Four of the datasets have the same protocol, GPL570 (HG\\_U133\\_Plus\\_2, Affymetrix Human Genome U133 Plus 2.0 Array). The fifth dataset has a different protocol, HG\\_U95Av2, which has a fewer number of genes (see Table~\\ref{sl:tbl_data}). By including data sets from different protocols, we could verify that the topological features identified are not just artifacts of a particular protocol. \n\n\\begin{wraptable}{r}{3.8in}\n\\tbl{Datasets used in the study.}\n{\n\\begin{tabular}{@{}lllcc@{}}\n\\toprule\nDataset & Series & Protocol & \\# Genes & \\# Samples \\\\ \\colrule\nBrain & GSE36245 & GPL570 & 46201 & 46\\\\\nBreast & GSE3744 & GPL570 & 54613 & 47\\\\ \nOvarian & GSE51373 & GPL570 & 54613 & 28\\\\\nAML188 & GSE10358 & GPL570 & 54613 & 188\\\\\nAML170 & willm-00119 & HG\\_U95Av2 & 12558 & 170\\\\ \\botrule\n\\end{tabular}\n} \\label{sl:tbl_data}\n\\end{wraptable}\nThe number of genes represents the number of unique gene id tags defined by a protocol excluding controls. While the brain dataset is of the same protocol as the breast and ovarian datasets, the former one has fewer genes -- 46201 vs.~54613. This variability, however, did not affect our procedure to find topological features.\n\nAll datasets, except for AML170, were obtained from NCBI Gene Expression Omnibus \\url{http:\/\/www.ncbi.nlm.nih.gov\/geo\/} in October 2013. AML170 was retrieved at the same time from the National Cancer Institute caArray Database at \\url{https:\/\/array.nci.nih.gov\/caarray\/project\/willm-00119}. \n\n\\section{Methods}\\label{sl:methods}\nWe work with the raw gene expression values. 
In particular, we do not log-transform them.\n\\subsection{Dual space of data}\nTraditionally, gene expression data is viewed in its gene space, i.e., the expression profile of patient $i$ with $m$ genes is a point in $\\R^m$, $\\vx_i= [ x_{i_1}~x_{i_2}~\\cdots~x_{i_m} ]$. Each \\(x_{i_j}\\) is the expression of gene $j$ in patient $i$. For example, each patient in the Brain dataset is a point in $\\R^{46201}$.\n\nWe analyze expression data in its dual space, i.e., in its {\\it sample space}. Hence a gene $j$ is represented as a point in $\\R^n$, $\\vx_j = [ x_{j_1}~x_{j_2}~\\cdots~x_{j_n} ]$, where $n$ is the number of samples or patients. Each \\(x_{j_i}\\) is the expression of gene $j$ in patient $i$. For example, in the same Brain dataset, the points now sit in $\\R^{46}$ space. Hence we study gene expression across the span of all patients.\n\nThe key motivation for this approach is to handle the high dimensionality in a meaningful way for analyzing the {\\em shape} of data. Given the small number of patients, one could efficiently construct a Vietoris-Rips complex using the set of pairwise distances between patients in the gene space (no need to choose landmarks). But such distances become less discriminatory when the number of genes is large \\cite{meaningiful_nneighbor_beyer1999,concentration_ledoux2005}. Working in the dual space, we let our topological method select a manageable number of genes as landmarks to construct the witness complexes, which potentially capture interesting topology of the data. Hence we do not preselect a small number of genes before the topological analysis, as is done by some previous studies \\cite{nicolau2011,seeman2012}.\n\n\\subsection{Choosing the number of landmarks}\nFor construction of the witness complexes, the number of landmarks has to be defined {\\it a priori} (Sec.~\\ref{sl:simplicial_complexes}). Hence the question becomes: how many landmarks do we select? 
We let the data itself guide the selection of genes used as landmarks. If there is a significant loop \n\\begin{wraptable}{r}{2in}\n\\vspace*{-0.1in}\n\\tbl{Numbers of landmarks selected in each dataset.}\n{\\begin{tabular}{@{}lc@{}}\\toprule\nDataset & \\# landmarks \\\\ \\colrule\nBrain & 120\\\\\nBreast & 110\\\\ \nOvarian & 200\\\\\nAML188 & 150\\\\\nAML170 & 130\\\\ \\botrule\n\\end{tabular}}\\label{sl:tbl_num_landmarks}\n\\end{wraptable}\nfeature in the data, it would persist through a range of landmarks in $\\homo_1$ of the complexes. We reconstruct the topological space incrementing the number of landmarks\nwhile observing appearance and disappearance of the topological features. Initially, there would be very few small, noisy features because of insufficient number of points. As the number of landmarks increases, some features stabilize, i.e., do not change much either in size or membership. Then they reach their maximal size, and start to diminish once some critical number of landmarks is exceeded (when the ``holes'' are all filled in). The ``optimal'' number of landmarks is chosen when the length of the bar representing the topological feature is maximal.\n\nA typical example of such behavior is seen in the Breast dataset (see Fig.~\\ref{breast_evolve}). A small loop appears when the number of landmarks is \\(L=50\\). It stabilizes around \\(L=90\\), reaches its maximum span at \\(L=110\\), and then decreases as \\(L\\) grows.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=\\textwidth]{Breast_evolution2_no_title}\n\\caption{Evolution of the loop of interest (circled) for varying number of landmarks from L=50 to L=130 in the breast dataset. 
Here and in Figs.~\\ref{brain_loop}--\\ref{aml170_dim25}, the x-axis represents the \\(\\epsilon\\) parameter.}\\label{breast_evolve}\n\\end{figure}\n\n\\subsection{Composition of loops}\n\nOne of the goals of our method is to determine the genes which participate in $\\homo_1$ features, which could indicate potential implications for cancer biogenesis. Since the first landmark is chosen randomly in the sequential maxmin procedure, the composition of the loops identified may differ based on this first choice. To circumvent this effect, we do $20$ different runs in each case to collect possible variations in loop formation. Members of the loops are then pooled together for further analysis. Due to the almost deterministic nature of sequential maxmin selection (apart from the first landmark being selected randomly), we observed very little variation over the $20$ runs in most cases. The recovered members of loops are then queried in scientific literature for cancer-related reports.\n\n\\medskip\n\\noindent We implemented our computations using the package JavaPlex \\cite{javaplex2011}. We explored the barcodes for $\\homo_0, \\homo_1,$ and $\\homo_2$, but interesting persistent features with members related to cancer biogenesis were detected only for $\\homo_1$.\n\n\\section{Results}\\label{sl:results}\n\nPersistent topological features in the homology group \\(\\homo_1\\) were observed in every cancer dataset we analyzed. Representative examples are shown in Figs.~\\ref{brain_loop}--\\ref{aml170_2loops}. The AML datasets both had two persistent loops, while the other datasets have one loop each. The Ovarian dataset had a few medium length bars in the $\\homo_1$ barcode, but we investigated only the longest loop. Once the persistent loops were identified, they were inspected with respect to their composition and relation to cancer through search of scientific literature. 
Below is a brief description of results for each of the datasets (the full list of all loop members is also available \\cite{suppl}).\n\n\\begin{table}[h]\n\\tbl{Selected representatives of loops in different datasets.}\n{\\begin{tabular}{@{}lllc@{}}\\toprule\nGene & Dataset & Description & References \\\\ \\colrule\nCAV1 & Brain & tumor suppressor gene & \\cite{gene_cav1}\\\\\nRPL36 & Brain & prognostic marker in hepatocellular carcinoma & \\cite{rpl36_2011}\\\\ \nRPS11 & Breast & downregulation in breast carcinoma cells & \\cite{rps11_2001}\\\\\nFTL & Breast & prognostic biomarkers in breast cancer & \\cite{ftl_2006}\\\\\nLDHA & Ovarian & overexpressed in tumors, important for cell growth & \\cite{ldha_2013}\\\\\nGNAS & Ovarian & biomarker for survival in ovarian cancer & \\cite{gnas_2010}\\\\\nLAMP1 & AML170 & regulation of melanoma metastasis & \\cite{lamp1_2014}\\\\\nPABPC1 & AML170 & correlation with tumor progression & \\cite{pabpc1_2006}\\\\\nHLF & AML188 & promotes resistance to cell death & \\cite{hlf_2013}\\\\\nDTNA & AML188 & induces apoptosis in leukemia cells & \\cite{dtna_2012}\\\\ \\botrule\n\\end{tabular}}\\label{sl:tb_some_members}\n\\end{table}\n\n\\vspace*{-0.2in}\n\\subsection{Brain dataset}\\label{sl:brain}\nThe lifespan of the longest loop in this dataset was about 820 (bar between $\\approx[480,1300]$). The loop was consistent over different choices of the first landmark. We identified \n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[height=1.1in]{Brain_one_loop_no_title1}\n\\caption{Representative loop in brain dataset.}\\label{brain_loop}\n\\end{wrapfigure}\n13 loop members, out of which 9 were found in cancer literature. 
Some cancer-related members include EGR1 and CAV1, which have been characterized as tumor suppressor genes\\cite{gene_cav1, gene_egr1}, A2M, which has been identified as a predictor for bone metastases\\cite{a2m_2001}, and RPL36, which has been found to be a prognostic marker in hepatocellular carcinoma\\cite{rpl36_2011}.\n\n\n\\subsection{Breast dataset}\\label{sl:breast}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{Breast_one_loop_no_title}\n\\caption{Representative loop in breast dataset.}\\label{breast_loop}\n\\end{wrapfigure}\nThe lifespan of the longest loop in this dataset is in the range $[10080.0, 16684.2]$. As with the brain dataset, this loop is very consistent. However, there were only 10 members of this loop, of which 8 were found in the cancer literature. An interesting feature of this loop is that it had five ribosomal proteins, which are known to play a critical role in tightly coordinating p53 signaling with ribosomal biogenesis \\cite{ribo_prots_2011}.\n\n\\subsection{Ovarian dataset}\\label{sl:ovarian}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{Ovarian_one_loop_no_title}\n\\caption{Representative loop in ovarian dataset.}\\label{ovarian_loop}\n\\end{wrapfigure}\nThe Ovarian dataset had the most variable features in \\(\\homo_1\\). However, we investigated the loop corresponding to the most consistent and longest bar, which ranged from about 4000 to over 7000. This loop consisted of 17 members, of which 9 were mentioned in the cancer-related literature. Among the cancer-related members were GNAS, which was identified as ``an independent, qualitative, and reproducible biomarker to predict progression-free survival in epithelial ovarian cancer'' \\cite{gnas_2010}, and HNRNPA1, which has been identified as a potential biomarker for colorectal cancer \\cite{hnrpra1_2009}. 
\n\n\\subsection{AML188 dataset}\\label{sl:aml188}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\vspace*{-0.1in}\n\\includegraphics[width=3in]{AML188-representative-loops_no_title}\n\\caption{Representative loops in AML188 dataset.}\\label{aml188_2loops}\n\\end{wrapfigure}\nAcute myeloid leukemia 188 (AML188) had two significant loops (as did AML170). The first one occurred at $[25200.0, 102200.0]$ and the second one at $[78400.0, 146219.24]$. The first loop has 27 members while the second one only 6. Altogether, only 14 of these 33 genes were mentioned in cancer literature. Some cancer-related representatives were hepatic leukemia factor (HLF) which promotes resistance to cell death \\cite{hlf_2013}, RPL35A known for inhibition of cell death \\cite{rpl35a_2002}, and GRK5 which regulates tumor growth \\cite{grk5_2012}. A group of zinc finger proteins were present in the first loop, some of which have been reported as novel biomarkers for detection of squamous cell carcinoma \\cite{zink_finger_2011,zink_finger_1991}.\n\n\\subsection{AML170 dataset}\\label{sl:aml170}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\vspace*{-0.1in}\n\\includegraphics[width=3in]{AML170-representative-loops_no_title}\n\\caption{Representative loops in AML170 dataset.}\\label{aml170_2loops}\n\\end{wrapfigure}\nAML170 comes from caArray database and its protocol HG\\_U95Av2 has only 12558 genes. Even though this protocol had a smaller (about $1\/4$) number of genes compared to the other data sets, we still detected two loops in this dataset. They were relatively shorter than the loops in AML188, and occurred at $[5800.0, 11400.0]$ and $[11000.0, 17600.0]$. The two loops comprised of 19 members, of which 10 were found in cancer-related literature. 
These relevant members included ubiquitin C (UBC), which was recently identified as a novel cancer-related protein \\cite{ubc_2014}, PABPC1 whose positive expression is correlated with tumor progression in esophageal cancer \\cite{pabpc1_2006}, and LAMP1 which facilitates lung metastasis \\cite{lamp1_2014}.\n\n\\section{Discussion}\\label{sl:discuss}\n\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{AML170_Dim25_L130_no_tilte}\n\\caption{Two persistent loops in the AML170 dataset detected using only $25$ dimensions at the number of landmarks L=130.} \\label{aml170_dim25}\n\\end{wrapfigure}\nThe Breast, Brain, and Ovarian datasets had only one persistent loop, while AML170 and AML188 had two. Also, the AML datasets had a higher number of patients (samples) than the other three sets (see Table~\\ref{sl:tbl_data}). Is this fact just a coincidence or, indeed, does the number of \\(\\homo_1\\) features (loops) correlate with the number of dimensions? To address this question, we chose samples of random and progressively larger ($25$--$175$) subsets of patients from AML170 and AML188 while also increasing the number of landmarks, and studied the evolution of \\(\\homo_1\\) features. In other words, we repeated our method on smaller subsets of patients from these datasets. Both the AML datasets contained two loops even with only 25 dimensions (see Fig.~\\ref{aml170_dim25} for AML170), and continued to do so for the progressively larger subsets. Thus, the number of significant \\(\\homo_1\\) features appears to depend on intrinsic qualities of the data rather than the number of dimensions, demonstrating the robustness of our method to the number of patients in the dataset.\n\nAn important property of a loop is its lifespan \\cite{carlsson2006algebraic}. One may note that the life span of loops for different datasets vary significantly. 
For example, the lifespan of a significant \\(\\homo_1\\) feature in the brain dataset is only $820$, while for AML188 the lifespan of the first loop is $77,000$. This difference is not only due to the increase in the actual size of a loop as indicated by the number of points comprising the loop ($13$ vs.~$27$ in this case), but also in part because of the different absolute values in microarray expression data. The maximum value for the brain dataset is \\(24 \\times 10^3\\), while for AML188 is \\(3 \\times 10^6\\). Therefore, the absolute length of an \\(\\homo_1\\) feature is not as important as its length relative to other \\(\\homo_1\\) features.\n\nThe crucial step of our method is the choice of landmarks. The goal here is the efficient inference of the topology of data, while selecting a small subset of potentially relevant genes. Landmarks chosen using sequential maxmin tend to cover the dataset better and are spread apart from each other, but the procedure is also known to pick outliers \\cite{desilva2004}. In our datasets, outliers are typically identified by extreme expression values \\cite{outlier_survey2004}. We examined the expression values of the chosen landmarks, and found that very few of them had extreme values (Figs.~\\ref{aml170_loop_hist} and \\ref{breast_loop_hist}). Similarly, the expressions of the genes implicated in cancer biogenesis (among the loop members) did not have any extreme values, and in fact appear to follow normal distributions. We infer that this observation results because sequential maxmin indeed picks points on the outskirts of the topological features. Further, the distribution of expressions of the loop members suggests that the group as a whole could have potential implications for the disease. More interestingly, the ``hole'' structure means such groups could not potentially be identified by traditional coexpression or even differential expression analyses \\cite{ChKe2009}. 
\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.33]{hist_aml170_tripple}\n\\caption{Histograms for AML170 dataset, x-axis represents gene expression level of scale $10^4$. (a) distribution of gene expressions for the whole set; (b) distribution of gene expressions for only the loop members; and (c) distribution of gene expressions for cancer-related loop members.} \\label{aml170_loop_hist} \n\\end{figure}\n\\vspace*{-0.2in}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.33]{hist_breast_tripple}\n\\caption{\\label{breast_loop_hist} Histograms for Breast dataset, x-axis represents gene expression level of scale $10^5$. (a) distribution of gene expressions for the whole set, (b) distribution of gene expressions for the loop members only, (c) distribution of gene expressions for cancer-related loop members.}\n\\end{figure}\n\nThe computational complexity of our method is based on the current implementation of JavaPlex \\cite{javaplex2011}, where the two main steps are building and filtering simplicial complexes, and computing homology. Up to dimension $2$ (\\(\\beta_2\\)), homology could be computed in $O(n^3)$ time \\cite{EdHa2009}. However, building simplicial complexes relies on clique enumeration, which is NP-complete, and has a complexity of $O(3^{n\/3}) $\\cite{jholes2014,cliques65}. Further, JavaPlex requires explicit enumeration of simplicial facets appearing at each filtration step, implying the need for large memory resources\\cite{javaplex2011}.\n\nThe cancer-related loop members identified from the two AML datasets were distinct apart from the prominent group of ribosomal proteins. This observation could be explained by two main reasons. First, AML188 has four times the number of genes as AML170 (see Table~\\ref{sl:tbl_data}). Second, JavaPlex identifies only {\\em one} representative of each homology class. 
That is, if a significant topological feature (hole) exists, it will be identified, but only one loop will be found around that hole. There could be other relevant points in proximity to the hole, but are not guaranteed to be included in the loop. If we have prior knowledge of some genes being relevant, we could try to identify loops around the holes that include these genes as members. In this case, the other members of the identified loops could also have potential implications for the cancer biogenesis. Methods to find a member of a homology class that includes specific points could be of independent interest in the context of optimal homology problems \\cite{DeHiKr2011,DeSuWa2010}.\n\n\\section{Conclusion}\nWe have presented a method to look at cancer data from a different angle. Unlike previous methods, we look at characteristics of the first homology group ($\\homo_1$). We identify the persistent $\\homo_1$ features (which are loops, rather than connected components) and inspect their membership. Importantly, our approach finds potentially interesting connections among genes which cannot be found otherwise using traditional methods. This geometric connectedness may imply functional connectedness, however, this is yet to be investigated by oncologists. If such connections are indeed implied, then the genes in the loops could together form a characteristic ``signature'' for the cancer in question.\n\n\\paragraph{Acknowledgment:} \nKrishnamoorthy acknowledges support from the National Science Foundation through grant \\#1064600.\n\n\\vspace*{-0.1in}\n\\bibliographystyle{ws-procs11x85}\n\\input{BibLockwood_TopoFtrsCncrData}\n\n\n\\end{document}\n\n\n\n\\section{Using Other Packages}\\label{aba:sec1}\nThe class file loads the packages {\\tt amsfonts, amsmath, amssymb,\nchapterbib, cite, dcolumn, epsfig, rotating} and {\\tt url} at\nstartup. Please try to limit your use of additional packages as they\noften introduce incompatibilities. 
This problem is not specific to\nthe WSPC styles; it is a general \\LaTeX{} problem. Check this\narticle to see whether the required functionality is already\nprovided by the WSPC class file. If you do need additional packages,\nsend them along with the paper. In general, you should use standard\n\\LaTeX{} commands as much as possible.\n\n\\section{Layout}\nIn order to facilitate our processing of your article, please give\neasily identifiable structure to the various parts of the text by\nmaking use of the usual \\LaTeX{} commands or by using your own commands\ndefined in the preamble, rather than by using explicit layout\ncommands, such as \\verb|\\hspace, \\vspace, \\large, \\centering|,\netc.~Also, do not redefine the page-layout parameters.\n\n\\section{User Defined Macros}\nUser defined macros should be placed in the preamble of the article,\nand not at any other place in the document. Such private\ndefinitions, i.e. definitions made using the commands\n\\verb|\\newcommand,| \\verb|\\renewcommand,| \\verb|\\newenvironment| or\n\\verb|\\renewenvironment|, should be used with great care. Sensible,\nrestricted usage of private definitions is encouraged. Large macro\npackages and definitions that are not used in this example article\nshould be avoided. Please do not change the existing environments,\ncommands and other standard parts of \\LaTeX.\n\n\\section{Using WS-procs11x85}\nYou can obtain these files from the following website:\n\\url{http:\/\/www.wspc.com.sg\/style\/proceedings_style.shtml}.\n\n\\subsection{Input used to produce this paper}\n\\begin{verbatim}\n\\documentclass{ws-procs11x85}\n\\begin{document}\n\\title{FOR PROCEEDINGS ...}\n\\author{A. B. AUTHOR$^*$ ...}\n\\address{University ...}\n\\author{A. N. 
AUTHOR}\n\\address{Group, Laboratory, ...}\n\\begin{abstract}\nThis article...\n\\end{abstract}\n\\keywords{Style file; ...}\n\\bodymatter\n\\section{Using Other Packages}\nThe class file has...\n...\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and background}\n\\setcounter{equation}{0}\n\\setcounter{theo}{0}\nFor a fixed $ 0<\\alpha \\leq 2$ {\\it symmetric $\\alpha$-stable L\\'{e}vy motion} $\\{L_\\alpha (t), t\\geq 0\\}$ is a stochastic process characterized by having stationary independent increments with $L(0) = 0$ almost surely, and \n$L_\\alpha (t) - L_\\alpha (s)\\ (s>t)$ having the distribution of $S_\\alpha((t-s)^{1\/\\alpha},0,0)$,\nwhere $S_\\alpha(c, \\beta,\\mu)$ denotes a stable random variable with stability-index $\\alpha$, with scale parameter $c$, skewness parameter $\\beta$ and shift $\\mu$. A detailed account of such processes may be found in \\cite{Bk_Sam} but we summarize here the features we need.\nStable motion $L_\\alpha $ is $1\/\\alpha$-self-similar in the sense that $L_\\alpha (c t)$ and $c^{1\/\\alpha}L_\\alpha (t)$ are equal in distribution so in particular have the same finite-dimensional distributions. There is a version of $L_\\alpha $ such that its sample paths are c\\`{a}dl\\`{a}g, that is right continuous with left limits. 
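The stationary independent increments described above give a direct way to simulate $L_\alpha$ on a grid: each step contributes an independent $S_\alpha((\Delta t)^{1/\alpha},0,0)$ increment. The sketch below is not part of the paper; it uses the standard Chambers-Mallows-Stuck sampler for symmetric stable variates, and the grid size and seed are arbitrary choices.

```python
import math
import random

def symmetric_stable(alpha, rng):
    # One draw from S_alpha(1, 0, 0) via the Chambers-Mallows-Stuck method
    # (symmetric case, beta = 0; valid for 0 < alpha <= 2).
    v = (rng.random() - 0.5) * math.pi   # Uniform(-pi/2, pi/2)
    w = rng.expovariate(1.0)             # Exp(1)
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

def stable_motion_path(alpha, n_steps, horizon, seed=0):
    # Sample L_alpha on a grid: L(t + dt) - L(t) ~ S_alpha(dt**(1/alpha), 0, 0),
    # independently over steps, with L(0) = 0.
    rng = random.Random(seed)
    dt = horizon / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + dt ** (1.0 / alpha) * symmetric_stable(alpha, rng))
    return path

path = stable_motion_path(alpha=1.5, n_steps=1000, horizon=1.0, seed=42)
```

By $1/\alpha$-self-similarity, rescaling the grid by $c$ and the values by $c^{1/\alpha}$ leaves the distribution of such a path unchanged.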
\n\nOne way of representing symmetric $\\alpha$-stable L\\'{e}vy motion $L_\\alpha$ is as a sum over a plane point process.\n Throughout the paper we write\n$$\nr^{\\langle s\\rangle} = {\\rm sign}(r)|r|^s \\mbox{ for }r\\in \\mathbb{R}, s\\in \\mathbb{R}.\n$$\nThen\n\\begin{equation}\\label{sum}\nL_\\alpha (t) = C_\\alpha \n\\sum_{(\\X,\\Y) \\in \\Pi} 1_{(0,t]}(\\X) \\Y^{\\langle-1\/\\alpha\\rangle},\n\\end{equation}\nwhere $C_\\alpha$ is a normalising constant given by\n$$\nC_\\alpha = \\Big(\\int_0^\\infty u^{-\\alpha} \\sin u\\, du \\Big)^{-1\/\\alpha}\n$$\nand where $\\Pi$ is a Poisson point process on $\\mathbb{R}^+ \\times\\mathbb{R}$ with plane Lebesgue measure $\\mathcal{L}^2$ as mean measure, so that for a Borel set $A \\subset \\mathbb{R}^+ \\times\\mathbb{R}$ the number of points of $\\Pi$ in $A$ has a Poisson distribution with parameter $\\mathcal{L}^2(A)$, independently for disjoint $A$. \nThe sum \\eqref{sum} is almost surely absolutely uniformly convergent if $0<\\alpha<1$, but if $\\alpha\\geq 1$ then \\eqref{sum} must be taken as the limit as $n\\to \\infty$ of symmetric partial sums\n$$\nL_{\\alpha ,n} (t) = C_\\alpha \n\\sum_{(\\X,\\Y) \\in \\Pi : |\\Y|\\leq n} 1_{(0,t]}(\\X) \\Y^{\\langle-1\/\\alpha\\rangle},\n$$\nin the sense that $\\|L_{\\alpha ,n}-L_\\alpha\\|_{\\infty}\\to 0$ almost surely.\n\nSeveral variants of $\\alpha$-stable motion have been considered. \nFor example, for {\\it multistable L\\'{e}vy motion} $\\{M_{\\alpha} (t), t\\geq 0\\}$ the stability index $\\alpha$ in \\eqref{sum} can depend on $\\X$ so that the local behaviour changes with $t$, see \\cite{FLL,FL,KFLL,XFL,LLA,LLL,LL}. Thus given a continuous $\\alpha: \\mathbb{R}^+ \\to (0,2)$, \n$$\nM_{\\alpha}(t) = \n\\sum_{(\\X,\\Y) \\in \\Pi} 1_{(0,t]}(\\X) C_{\\alpha(\\X)} \\Y^{\\langle-1\/\\alpha(\\X)\\rangle}.\n$$\nThen $M_{\\alpha}$ is a Markov process. 
Under certain conditions it is {\\it localisable} with {\\it local form} $L_{\\alpha(t)}$, in the sense that near $t$ the process `looks like' an $\\alpha(t)$-stable process, that is for each $t>0$ and $u \\in \\mathbb{R}$,\n$$\\frac{M_{\\alpha}(t+ru) -M_{\\alpha}(t)}{r^{1\/\\alpha(t)}} \\tod L_{\\alpha(t)}(u)$$\nas $r\\searrow 0$, where convergence is in distribution with respect to the Skorohod metric and consequently is convergent in finite dimensional distributions, see \\cite{FLL,FL}. \n\nThe local stability parameter of multistable L\\'{e}vy motion depends on the time $t$ but in some contexts, for example in financial modelling, it may be appropriate for the local stability parameter to depend instead (or even as well) on the {\\it value} of the process at time $t$. Such a process might be called `self-stabilizing'. Thus, for suitable \n$\\alpha: \\mathbb{R} \\to (0,2)$, we seek a process $\\{Z (t), t\\geq 0\\}$ that is localisable with local form $L^0_{\\alpha(Z (t))}$, in the sense that for each $t$ and $u>0$,\n\\begin{equation}\\label{locdef1}\n\\frac{Z(t+ru) -Z(t)}{r^{1\/\\alpha(Z (t))}}\\bigg|\\, \\mathcal{F}_t \\ \\tod \\ L^0_{\\alpha(Z (t))}(u)\\end{equation}\nas $r\\searrow 0$,\nwhere convergence is in distribution and finite dimensional distributions and where $\\mathcal{F}_t$ indicates conditioning on the process up to time $t$. (For notational simplicity it is easier to construct $Z_{\\alpha}$ with the non-normalised $\\alpha$-stable processes $L^0_{\\alpha} = C_\\alpha^{-1} L_{\\alpha}$ as its local form.)\n\nThroughout the paper we write $D[t_0,t_1)$ for the c\\`{a}dl\\`{a}g functions on the interval $[t_0,t_1)$, that is functions that are right continuous with left limits; this is the natural space for functions defined by sums over point sets. 
\n\nIn an earlier paper \\cite{FL2} we constructed self-stabilizing processes for $\\alpha: \\mathbb{R}^+ \\to (0,1)$ by first showing that there exists a deterministic function $f\\in D[t_0,t_1)$ satisfying the relation\n$$f(t) = a_0 +\\sum_{(x,y) \\in \\Pi} 1_{(t_0,t]}(x) y^{\\langle-1\/\\alpha(f(x_-))\\rangle}$$\nfor a {\\it fixed} point set $\\Pi$, and then randomising to get a random function $Z$ such that\n\\begin{equation*}\nZ (t) = a_0 + \\sum_{(\\X,\\Y) \\in \\Pi} 1_{(t_0,t]}(\\X)\\, \\Y^{\\langle-1\/\\alpha(Z (\\X_-))\\rangle} \\qquad (t_0\\leq t x$ and $y'0,\\ u,v \\in \\mathbb{R} ),$$\nwhere $\\xi \\in (u,v)$. In particular this gives the estimate we will use frequently:\n\\be \n\\big| y^{-1\/\\alpha(v)} - y^{-1\/\\alpha(u)}\\big |\n\\ \\leq \\ M\\ |v- u|\\, y^{-1\/(a,b)}\\qquad (y>0, \\ u,v \\in \\mathbb{R}), \\label{ydif4}\n \\ee\nwhere\n$$\nM\\ =\\ \\sup_{\\xi\\in\\mathbb{R}} \\frac{|\\alpha'(\\xi)|}{\\alpha(\\xi)^2},\n$$\nand for convenience we write\n$$\ny^{-1\/(a,b)} \\ = \\ \\max\\big\\{ y^{-1\/a}\\big(1+|\\log y| \\big), y^{-1\/b}\\big(1+|\\log y| \\big)\\big\\}\n\\qquad (y>0)\n$$\nand \n$$ y^{-2\/(a,b)} \\ =\\ (y^{-1\/(a,b)})^2.$$ \n\nFor $t_0< t_1$ and a suitable probability space $\\Omega$ (to be specified later), we will work with functions \n$F: \\Omega\\times [t_0, t_1) \\to \\mathbb{R}\\cup \\{\\infty \\}$ which we assume to be measurable (taking Lebesgue measure on $[t_0, t_1)$). Writing $F_\\omega(t)$ for the value of $F$ at $\\omega \\in \\Omega$ and $t\\in [t_0, t_1)$, we think of $F_\\omega$ as a random function on $[t_0, t_1)$ in the natural way (most of the time we will write $F$ instead of $F_\\omega$ when the underlying randomness is clear). 
In particular we will work in the space \n$${\\mathcal D} \\ = \\ \\big\\{F: F_\\omega \\in D[t_0, t_1) \\mbox{ for almost all }\n \\omega \\in \\Omega \\mbox{ with } \\mathbb{E}\\big(\\|F\\|_\\infty^2\\big)<\\infty\\big\\},$$\n where $\\mathbb{E}$ is expectation, $D[t_0,t_1)$ denotes the c\\`{a}dl\\`{a}g functions, and $\\|\\cdot\\|_\\infty$ is the usual supremum norm. By identifying $F$ and $F'$ if $F_\\omega = F'_\\omega$ for almost all $\\omega \\in \\Omega$, this becomes a normed space under the norm\n\\be\\label{norm}\n\\Big(\\mathbb{E}\\big(\\|F\\|_\\infty^2\\big)\\Big)^{1\/2}.\n\\ee\nA routine check shows that \\eqref{norm} defines a complete norm on ${\\mathcal D}$. \n\n\n\n\\section{Point sums with random signs}\\label{sec:signs}\n\\setcounter{equation}{0}\n\\setcounter{theo}{0}\n\nIn this section we fix a discrete point set $\\Pi^+ \\subset (t_0,t_1)\\times \\mathbb{R^+}$ \nand form sums over values at the points of $\\Pi^+$ with an independent random assignment of sign $+$ or $-$ at each point of $\\Pi^+$.\n\nWe will assume that the point set $\\Pi^+$ satisfies \n$$\n \\sum_{(x,y) \\in \\Pi^+}y^{-2\/(a,b)}< \\infty;\n$$\nthis will certainly be the case if $ \\sum_{(x,y) \\in \\Pi^+}y^{-2\/b'}< \\infty$ for some $b'$ with $bn$. We list the points\n$$\\{(x,y) \\in \\Pi^+ : y\\leq m\\} = \\{(x_1,y_1), \\ldots, (x_N,y_N)\\},$$\nwith $t_0n\\geq 1$. 
Thus taking the unconditional expectation and using \\eqref{ind1} for $i-1$ gives \\eqref{ind1} for $i$.\n\n(b) If $i= i_k $ for some $1\\leq k\\leq K$, then from \\eqref{mart},\n \\begin{align*}\n\\mathbb{E}\\big((Z_m & (x_i)- Z_n(x_i))^2\\big| \\mathcal{F}_{i-1} \\big)\\\\\n&=\\ \\mathbb{E}\\Big(\\Big(Z_m(x_{i-1}) - Z_n(x_{i-1}) + S(x_{i},y_{i})\\big(y_{i}^{-1\/\\alpha (Z_m(x_{i-1}))} - y_{i}^{-1\/\\alpha (Z_n(x_{i-1}))}\\big)\\Big)^2\\Big| \\mathcal{F}_{i-1}\\Big)\\\\\n&=\\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2 + \\big(y_{i}^{-1\/\\alpha (Z_m(x_{i-1}))} - y_{i}^{-1\/\\alpha (Z_n(x_{i-1}))}\\big)^2\\\\\n&\\leq \\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2\\big(1 + M^2 y_{i_k}^{-2\/(a,b)}\\big)\\\\\n&\\leq \\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2\\big(1 + c_k\\big),\n\\end{align*}\n using \\eqref{ydif4} and \\eqref{cki}.\nAgain, taking the unconditional expectation and using \\eqref{ind2} for $i-1$ gives \\eqref{ind1} for $i$ with a vacuous sum of terms $y_j^{-2\/b}$, completing the induction.\n\nIt follows from \\eqref{ind2} that for all $0\\leq i \\leq N$,\n\\begin{eqnarray}\n\\mathbb{E}\\big((Z_m(x_i) - Z_n(x_i))^2\\big)& \\leq & \\prod_{k=1}^K (1+c_k) \\sum_{k= 1}^{K+1} \\epsilon_k\\nonumber\\\\ \n & \\leq & \\prod_{(x,y)\\in \\Pi ^+ : y\\leq n} (1+M^2y^{-2\/(a,b)}) \\sum_{(x,y)\\in \\Pi ^+ : n n} y^{-2\/b}\\ \\to 0. \n\\ee\nMoreover, there exists a sequence $n_j \\nearrow \\infty$ such that almost surely $\\|Z_{n_j} -Z\\|_\\infty \\to 0$ i.e. $Z_{n_j} \\to Z$ uniformly. If $0n_j} y^{-2\/b}\\ < \\ 2^{-j}\n\\ee\nfor all sufficiently large $j$,\nthen by \\eqref{cauchy0} $\\mathbb{E}\\big(\\|Z_{n_{j+1}}-Z_{n_j}\\|_\\infty^2\\big)< c\\ 2^{-j}$ so almost surely\n $Z= Z_{n_1} + \\sum_{j=1}^\\infty (Z_{n_{j+1}}-Z_{n_j})$ is convergent in $\\|\\cdot\\|_\\infty$. 
If $00$,\n\\be\\label{Obound}\n\\sum_{(x,y)\\in \\Pi^+\\, :\\, t< x\\leq t+h} y^{-2\/\\alpha(Z(t))} \\ = \\ O(h^\\beta) \\qquad (0n} |\\Y|^{-2\/b}\\ <\\ C\\, n^{-\\eta}\\qquad (n \\in \\mathbb{N}).\n\\ee\n\\end{lem}\n\\begin{proof}\nBy Theorem \\ref{camp},\n$$\n\\mathbb{E}\\Big(\\sum_{(\\X,\\Y) \\in \\Pi: |\\Y|>n} |\\Y|^{-2\/b}\\Big)\\ = \\ \\int_n^\\infty y^{-2\/b}\n\\ \\leq \\ \\frac{b}{2-b}n^{1-2\/b}.$$\nA Borel-Cantelli argument summing over $n=2^{-k}$ completes the proof.\n\\end{proof}\n\nWe can now obtain develop Theorem \\ref{finlim2} to sums over a Poisson point process. \n\n\n\\begin{theo}\\label{thmrand2}\nLet $\\Pi \\subset (t_0,t_1)\\times \\mathbb{R}$ \nbe a Poisson point process with mean measure ${\\mathcal L}^2$, let $\\alpha :\\mathbb{R} \\to [a,b]$ where $00$ such that $|\\Y|\\geq y_0$ if $(\\X,\\Y) \\in \\Pi$, which ensures that the right hand side of \\eqref{diffbound1} below converges. In practice this is a realistic assumption in that it excludes the possibility of $Z$ having unboundedly large jumps.\n\n\n\\begin{theo}\\label{thmrand3}\nLet $y_0>0$ and let $\\Pi $ be a Poisson point process on $ (t_0,t_1)\\times (-\\infty, -y_0] \\cup [y_0,\\infty)$ \n with mean measure ${\\mathcal L}^2$ restricted to this domain. Let $a_0\\in\\mathbb{R}$, let $0 n} |\\Y|^{-2\/b}\\bigg)\\\\\n&=\\ \n4\\,\\mathbb{E}\\bigg( \\exp\\Big( \\sum_{(\\X,\\Y) \\in \\Pi ^+_2: y_0\\leq \\Y\\leq n } \\log(1+M^2\\Y^{-2\/(a,b)})\\Big)\\bigg)\n\\mathbb{E}\\bigg( \\sum_{(\\X,\\Y)\\in \\Pi ^+_2 : \\Y> n} |\\Y|^{-2\/b}\\bigg)\\\\\n&=\\ \n4\\,\\exp\\bigg(\\int_{y_0}^n 2(t_1-t_0)\\ M^2 y^{-2\/(a,b)} dy\\bigg) \n(t_1-t_0) \\int_n^\\infty 2y^{-2\/b} dy.\n \\end{align*}\nLetting $n\\to \\infty$ in the first integral and evaluating the second integral gives \\eqref{diffbound1}.\n\\end{proof}\n\nWe remark that Theorem \\ref{thmrand3} allows us to quantify the rate of convergence in probability of $\\|Z_n -Z\\|_\\infty\\to 0$ in Theorem \\ref{thmrand2}. 
By the Poisson distribution $\\P\\{\\Pi \\cap ((t_0,t_1)\\times [0,y_0] )= \\emptyset\\} = \\exp(-y_0(t_1-t_0))$. Given $\\epsilon >0$ we can choose $y_0$ to make this probability at most $\\epsilon\/2$, then using \\eqref{diffbound1} and Markov's inequality it follows that if $n$ is sufficiently large then $\\P\\{ \\|Z_n -Z\\|_\\infty>\\epsilon\\} < \\epsilon$. In practice, this leads to an enormous value of $n$. \n\n\\subsection{Local properties and self-stabilising processes}\n\nWe next obtain local properties of the random functions defined by a Poisson point process as in Theorem \\ref{thmrand2}. Not only are the sample paths right-continuous, but they satisfy a local H\\\"{o}lder continuity estimate and are self-stabilizing, that is locally they look like $\\alpha$-stable processes. \n\nWe will use a bound provided by the $\\alpha$-{\\it stable subordinator} which may be defined for each (constant) $0<\\alpha<1$ by\n$$\nS_\\alpha (t) := \\sum_{(\\X,\\Y) \\in \\Pi} 1_{(t_0 ,t]}(\\X)\\, |\\Y|^{-1\/\\alpha}\\qquad (t_0 \\leq t }\n\\end{equation}\nwhere $\\Pi$ is a Poisson point process on $(t_1,t_2) \\times \\mathbb{R}$ with mean measure ${\\mathcal L}^2$.\nThis sum is almost surely absolutely convergent if $0<\\alpha<1$ but for general $0<\\alpha<2$\nit is the limit as $n \\to \\infty$ of\n$$L^0_{\\alpha,n}(t) =\\sum_{(\\X,\\Y) \\in \\Pi:|\\Y| \\leq n} 1_{(0,t]}(\\X) \\Y^{<-1\/\\alpha>}.$$ \nWhilst $\\mathbb{E}\\big(\\|L^0_{\\alpha,n} -L^0_\\alpha\\|_\\infty^2\\big)\\to 0$ as a special case of Theorem \\ref{thmrand2}, the constant value of $\\alpha$ means that $L^0_{\\alpha,n}$ and $L^0_{\\alpha,m}-L^0_{\\alpha,n}$ are independent for $m>n$ and also that $\\{L^0_{\\alpha,n}\\}_n$ is a martingale, which ensures that $\\|L^0_{\\alpha,n} -L^0_\\alpha\\|_\\infty\\to 0$ almost surely. 
\n\nIn the same way to $Z_n$ we can think of $L^0_{\\alpha,n}(t)$ in terms of $\\Pi^+_2$ and random signs, so that \n$$L^0_{\\alpha,n}(t) =\\sum_{(\\X,\\Y) \\in \\Pi^+_2:\\Y \\leq n} 1_{(0,t]}(\\X) S(\\X,\\Y)\\Y^{<-1\/\\alpha>}.$$ \n\n\nThe following proposition is the analogue of Proposition \\ref{cty1} in this context.\n\n\\begin{prop}\\label{cty3}\nLet $Z$ be the random function given by Theorem \\ref{thmrand2} and let $t\\in [t_0,t_1)$. Then, conditional on ${\\mathcal F}_{t}$, given $0< \\epsilon< 1\/b$ there exist almost surely random numbers $C_1, C_2 <\\infty$ such that for all $0\\leq h 0$. Let $\\Pi^+_2$ be a Poisson point process on $\\mathbb{R}$ with mean measure $2{\\mathcal L}$.\nFrom \\eqref{sub2} \n$$\n\\sum_{(X,Y)\\in \\Pi^+_2\\, :\\, t< x\\leq t+h} \\Y^{-2\/\\alpha(Z(t))}\n\\ =\\ 2S_{\\alpha(Z(t))\/2}(h)\\ \\leq\\ C h^{2\/\\alpha(Z(t)) -\\epsilon}\n$$\nwhere $C<\\infty$ for almost all realisations of $\\Pi^+_2$.\nFor such $\\Pi^+_2$, Proposition \\ref{cty1} gives on randomising the signs,\n$$\\big|Z(t+h) -Z(t)|\\Pi^+_2\\big|\\ \\leq\\ C_1 h^{1\/\\alpha(Z(t))-\\epsilon\/2-\\epsilon\/2}$$\nfor some random $C_1$, \nand hence \\eqref{holas3} holds almost surely.\n\nIn the same way,\n$$\\sum_{(X,Y)\\in \\Pi^+_2\\, :\\, t< x\\leq t+h} \\Y^{-2\/(a,b)}\\ \\leq\\ C' (h^{-1})^{-2\/(a,b) -\\epsilon\/2} \\ \\leq\\ C^{\\prime\\prime}h^{2\/b -\\epsilon} \n$$\nwhere $C', C^{\\prime\\prime}< \\infty$ for almost all realisations of $\\Pi^+_2$.\nThen in Proposition \\ref{cty1}, $L(t+h) = L^0_{\\alpha(t)}(t+h)-L^0_{\\alpha(t)}(t)$, so for such $\\Pi^+_2$, on randomising the signs,\n$$\\Big|\\big(Z(t+h)-Z(t)\\big) -\\big(L^0_{\\alpha(t)}(t+h)-L^0_{\\alpha(t)}(t)\\big)\\big|\\Pi^+_2\\Big| \\ \\leq\\ C_2 h^{1\/\\alpha(Z(t))+ 1\/b -\\,\\epsilon}$$\nfor random $C_2<\\infty$, so \\eqref{locas3} holds almost surely.\n\\end{proof}\n\n\nWe finally show that almost surely at each $t\\in [t_0,t_1)$ the random function $Z$ of Theorem \\ref{thmrand2} is right-localisable with 
local form an $\\alpha(Z(t))$-stable process, so that $Z$ may indeed be thought of as self-stablizing. \n\n\\begin{theo}\\label{rtloc}\nLet $Z$ be the random function given by Theorem \\ref{thmrand2} and let $t\\in [t_0,t_1)$. Then, conditional on ${\\mathcal F}_{t}$, almost surely $Z$ is strongly right-localisable at $t$, in the sense that\n$$\\frac{Z(t+ru) -Z(t)}{r^{1\/\\alpha(Z(t))}}\\bigg|\\, \\mathcal{F}_t \\ \\tod \\ L^0_{\\alpha(Z(t))}(u) \\qquad (0\\leq u \\leq 1)\n$$\nas $r\\searrow 0$, where convergence is in distribution with respect to $(D[0,1], \\rho_S)$, with $ \\rho_S$ is the Skorohod metric.\n\\end{theo}\n\n\n\\begin{proof}\nLet $0<\\epsilon< 1\/b$. For $u\\in [0,1]$ and $0 W \\log_{2}\\left(1+\\gamma \\beta\\right)\\}\n\\end{equation}\n\\noindent where $W$ is the bandwidth of the channel, $\\gamma$ is the received SNR when the channel gain is equal to unity, and $\\beta$ is the channel gain, which is exponentially distributed in the case of Rayleigh fading.\n\nThe outage probability can be written as\n\\begin{equation}\nP_{{\\rm out},is}={\\rm Pr}\\Big\\{\\beta<\\frac{2^{\\frac{r_i}{W}}-1}{\\gamma}\\Big\\}\n\\end{equation}\n\\noindent Assuming that the mean value of $\\beta$ is $\\overline{\\beta}$,\n\\begin{equation}\nP_{{\\rm out},is}=1-\\exp\\bigg(-\\frac{2^{\\frac{r_i}{W}}-1}{\\gamma\\overline{\\beta}}\\bigg)\n\\end{equation}\n\\noindent Let $\\overline{P}_{{\\rm out},is}=1-P_{{\\rm out},is}$ be the probability of correct reception. 
It is therefore given by\n\\begin{equation}\\label{correctreception}\n\\overline{P}_{{\\rm out},is}=\\exp\\bigg(-\\frac{2^{\\frac{b}{TW\\left(1-i\\frac{\\tau}{T}\\right)}}-1}{\\gamma\\overline{\\beta}}\\bigg)\n\\end{equation}\n\\noindent The probability $\\overline{P}_{{\\rm out},p}$ for the PU is given by a similar expression with $i=0$ as the PU transmits starting from the beginning of the time slot and the relevant primary link parameters.\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{1}\\\\\n \\caption{Maximum secondary service rate for the parameters: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slot.}\\label{r1}\n\\end{figure}\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{5}\\\\\n \\caption{Optimal values of the sensing and access probabilities for the feedback-based system depicted in Fig.\\ \\ref{r1}.}\\label{r5}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{9}\\\\\n \\caption{Maximum secondary service rate for the parameters: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=200$ time slot.}\\label{r9}\n\\end{figure}\n\n\n \n \n \n\n\n\\begin{figure}\n \\includegraphics[width=1\\columnwidth]{delay_fig}\\\\\n \\caption{Maximum secondary service rate for different values of the primary packet delay constraint. 
Parameters used to generate the figure are: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, and $P_{\\rm MD}=0.08$}\\label{delay_fig}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{beta_curves2}\\\\\n \\caption{Impact of the energy arrival rate $\\lambda_e$ on the maximum secondary service rate for the parameters: $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slot.}\\label{rx}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1.09\\columnwidth]{MPR_versus_stability}\\\\\n \\caption{Impact of the MPR capability on the maximum secondary service rate for the parameters: $\\lambda_e=0.5$, $\\overline{P}_{{\\rm out},p}=0.7$, $\\overline{P}_{{\\rm out},0s}=0.6065$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slot.}\\label{ry}\n\\end{figure}\n\n\n\\section*{Appendix B}\nWe derive here a generic expression for the outage probability at the receiver of link $j$ when there is concurrent transmission from the transmitter of link $v$. 
Outage occurs when the transmission rate $r_i$, given by (\\ref{r_i}), exceeds the channel capacity\n\\begin{equation}\nP_{{\\rm out},ij}^{\\left({\\rm c}\\right)}={\\rm Pr}\\biggr\\{r_i > W \\log_{2}\\left(1+\\frac{\\gamma_{j} \\beta_{j}}{\\gamma_{v} \\beta_{v}+1}\\right)\\biggr\\}\n\\end{equation}\n\\noindent where the superscript c denotes concurrent transmission, $\\gamma_j$ is the received SNR at the receiver of link $j$ without interference when the channel gain $\\beta_j$ is equal to unity, and $\\gamma_v$ is the received SNR at the receiver of link $j$ when it only receives a signal from the interfering transmitter of link $v$ given that the gain of the interference channel, $\\beta_v$, is equal to unity.\nThe outage probability can be written as\n\\begin{equation}\\label {1900}\nP_{{\\rm out},ij}^{\\left({\\rm c}\\right)}={\\rm Pr}\\Big\\{\\frac{\\gamma_{j} \\beta_{j}}{\\gamma_{v} \\beta_{v}+1}<{2^{\\frac{r_i}{W}}-1}\\Big\\}\n\\end{equation}\n\\noindent Assuming that $\\beta_j$ and $\\beta_v$ are exponentially distributed with means $\\overline{\\beta_{j}}$ and $\\overline{\\beta_{v}}$, respectively, we can use the probability density functions of these two random variables to obtain the outage as\n \\begin{eqnarray*}\\label{193}\n P_{{\\rm out},ij}^{\\left({\\rm c}\\right)}=1-\\frac{1}{1+\\Big( {2^{\\frac{r_i}{W}}-1} \\Big)\\frac{\\gamma_v\\overline{\\beta_v}}{ \\gamma_j\\overline{\\beta_j}}} {e^{-\\frac{{2^{\\frac{r_i}{W}}-1}}{\\gamma_j \\overline{\\beta_j}}}}\n\\end{eqnarray*}\n\\noindent The probability of correct reception $ \\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},ij}=1-P^{\\left({\\rm c}\\right)}_{{\\rm out},ij}$ is thus given by\n\n \\begin{eqnarray}\\label{conctra}\n \\overline{P}_{{\\rm out},ij}^{\\left({\\rm c}\\right)}=\\frac{\\overline{P}_{{\\rm out},ij}}{1+\\Big({2^{\\frac{b}{TW\\left(1-\\frac{i\\tau}{T}\\right)}}-1} \\Big)\\frac{\\gamma_v\\overline{ \\beta_v}}{\\gamma_j \\overline{\\beta_j}}}\n\\end{eqnarray}\n\\noindent As is 
obvious, the probability of correct reception is lowered in the case of interference.\n\\section*{Appendix C}\nWe prove here that $\\overline{P}_{{\\rm out},0j} > \\overline{P}_{{\\rm out},1j}$. As a function of $\\tau$, $\\overline{P}_{{\\rm out},1j}$ is given by (\\ref{correctreception}) with $i=1$. Let $x=1-\\frac{\\tau}{T}$ where\n$x \\in [0,1]$. Assuming that the energy unit used per slot is $e$, the transmit power is $\\frac{e}{T-\\tau}=\\frac{e}{Tx}$. This means that the received SNR $\\gamma$ is inversely proportional to $x$. The exponent in (\\ref{correctreception}) with $i=1$ is thus proportional to $\\boldsymbol{g(x)=x (\\exp(\\frac{a}{x})-1)}$ where $a =\\frac{b \\ln 2} {WT} > 0$. Differentiating $g(x)$ with respect to $x$, the derivative is\n\\begin{equation}\n\\begin{split}\n-1+\\Big[1-\\frac{a}{x}\\Big] \\exp(\\frac{a}{x})&=-1+\\Big[1-\\frac{a}{x}\\Big]\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}\\\\&=\n-1+\\!\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}\\!-\\!\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k+1}\\\\&=\n\\sum_{k=1}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}-\\sum_{k=1}^{\\infty}\\frac{1}{(k-1)!}\\Big(\\frac{a}{x}\\Big)^{k}\\\\&=\n\\sum_{k=1}^{\\infty}\\bigg[\\frac{1}{k!}-\\frac{1}{(k-1)!}\\bigg]\\Big(\\frac{a}{x}\\Big)^{k}<0\n\\end{split}\n\\end{equation}\nTherefore, the derivative is always negative. Since $x=1-\\frac{\\tau}{T}$, the function $g(x)$ increases with $\\tau$. This means that $\\overline{P}_{{\\rm out},1j}$ decreases with $\\tau$ and its maximum value occurs when the transmission starts at the beginning of the time slot, i.e., $\\tau=0$. This proves that $\\overline{P}_{{\\rm out},0j} > \\overline{P}_{{\\rm out},1j}$ for $\\tau > 0$. For the case of concurrent transmission given in (\\ref{conctra}), the numerator is $\\overline{P}_{{\\rm out},1j}$, which decreases with $\\tau$. 
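The monotonicity of $g(x)=x(\exp(a/x)-1)$ established in Appendix C can be spot-checked numerically. The sketch below is not from the paper; the value $a=0.7$ (standing in for $a=b\ln 2/(WT)>0$) and the grid over $(0,1]$ are arbitrary choices.

```python
import math

def g(x, a):
    # Exponent factor from Appendix C: g(x) = x * (exp(a / x) - 1), with a > 0.
    return x * (math.exp(a / x) - 1.0)

a = 0.7                               # stands in for a = b ln(2) / (W T) > 0
xs = [0.1 * k for k in range(1, 11)]  # grid over (0, 1], where x = 1 - tau / T
values = [g(x, a) for x in xs]

# g is strictly decreasing in x, hence increasing in tau = T (1 - x):
assert all(v1 > v2 for v1, v2 in zip(values, values[1:]))
```

The same check with other positive values of `a` behaves identically, as the series argument holds for every $a/x>0$.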
The denominator is proportional to $g(x)$ which has been shown to decrease with $x$ and, hence, increase with $\\tau$. This means that $\\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},1j}$ decreases with $\\tau$, i.e., $\\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},0j} > \\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},1j}$.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbboz b/data_all_eng_slimpj/shuffled/split2/finalzzbboz new file mode 100644 index 0000000000000000000000000000000000000000..7ba01e4dff23e387ba16c8c18b9b4e26059e9e55 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbboz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nA set of pieces on a chessboard is said to be independent if no piece may attack another. Independence problems on chessboards have long been studied; both in terms of maximum arrangements as well as the number of such arrangements. For all traditional chess pieces, kings, queens, bishops, rooks, knights, and pawns, the maximum size of an independent set is known. When enumerating maximum arrangements, some of the problems, for example in the case of rooks or bishops, have elementary solutions. (See Dudeney~\\cite{Dudeney} for an early discussion of independence problems.) For other pieces, such as in the case of queens, the number of maximum independent arrangements is unknown, or in the case of kings an asymptotic approximation is given by Larson~\\cite{Larson}, but an exact value is unknown. Here we wish to enumerate the number of maximum arrangements of nonattacking pawns. 
Arrangements of nonattacking pawns have been studied by Kitaev and Mansour~\\cite{Kitaev_Mansour} who provide upper and lower bounds on the number of arrangements of pawns on $m\\times n$ rectangles using Fibonacci numbers as well as an algorithm to generate an explicit formula.\n\nAs there are only two distinct arrangements for odd length chessboards, we focus on boards with even length. Because we can divide a $2m\\times 2m$ chessboard into $m^2$ $2\\times 2$ squares each with at most two pawns, the maximum number of independent pawns is at most $2m^2$. This value is easily achieved, and examples are illustrated in Figure~\\ref{pawns}.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics{EvenLengthArrangements}\n\\caption{Arrangements of nonattacking pawns for even length chessboards}\n\\label{pawns}\n\\end{figure}\n\nWe will provide a bijection between the set of maximum nonattacking arrangements of pawns on a $2m\\times 2m$ chessboard and the set of subsets of $m$ rows and $m$ columns of a $2m\\times 2m $ matrix.\n\n\\section{Bijection}\nInstead of considering full arrangements of nonattacking pawns on a $2m\\times 2m$ chessboard, we first consider arrangements on a $2\\times 2$ chessboard. There are four possible arrangements labeled with A, B, C, and D, as illustrated in Figure~\\ref{2times2}. We define a function $f$ on this set, where $f(A) = D$ and $f(B)=f(C)=f(D)=C$. 
We use this function to define an $(m+1)\\times (m+1)$ matrix $M_{2m} = (m_{i,j})_{1\\leq i, j \\leq m+1}$ whose entries correspond to arrangements of $2m$ independent pawns on a $2 \\times 2m$ rectangular chessboard.\n \n\\begin{figure}[ht]\n\\centering\n\\includegraphics{2x2Types}\n\\caption{The four maximum arrangements of 2 pawns on a $2\\times 2$ chessboard}\n\\label{2times2}\n\\end{figure}\n\n\\begin{definition}\\label{defM2m}\nLet $M_{2m}=(m_{i,j})_{1\\leq i,j \\leq m+1}$ be the matrix whose entries consist of arrangements of $2m$ nonattacking pawns on a $2\\times 2m$ rectangular chessboard. We can think of each rectangle as a string of $m$ $2\\times 2$ squares, each with exactly two pawns. The entries of $M_{2m}$ are defined as follows:\n\\begin{enumerate}[i.]\n\\item For $1\\leq j \\leq m+1$, let $m_{1,j}$ be the arrangement where the leftmost $(m+1-j)$ $2\\times 2$ squares of the rectangular chessboard are of Type A and the remaining rightmost $(j-1)$ squares are of Type B. \n\\item For $1\\leq i \\leq m+1$, use $m_{1,j}$ to generate the arrangements $m_{i,j}$ by replacing the leftmost $(i-1)$ $2\\times 2$ squares of $m_{1,j}$ with their image under the function $f$, and leaving the rightmost $(m+1-i)$ $2\\times 2$ squares fixed. \n\\end{enumerate}\n\\end{definition}\n\n\nSee Figure~\\ref{entries} for examples of entries in the first and fifth rows of $M_{14}$, and see Figure~\\ref{M6} for the entire matrix $M_6$. 
We claim this matrix contains all possible nonattacking arrangements of pawns on a $2\\times 2m$ rectangular chessboard.\n \n\\begin{figure}[ht]\n\\centering\n\\includegraphics{ExampleEntries}\n\\caption{Entries from the matrix $M_{14}$}\n\\label{entries}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\scalebox{0.9}{\\includegraphics{M6}}\n\\caption{Entries in the matrix $M_6 = (m_{i,j})_{1\\leq i,j \\leq 4}$}\n\\label{M6}\n\\end{figure}\n\n\\begin{proposition}\\label{uniqueness}\nEvery nonattacking arrangement of $2m$ pawns on a $2\\times 2m$ rectangle appears exactly once in the matrix $M_{2m}$.\n\\end{proposition}\n\n\\begin{proof}\nTo begin, we show the number of distinct arrangements of pawns on a $2\\times 2m$ rectangle is $(m+1)^2$. For $m=1$, a $2\\times 2$ square has the four distinct arrangements shown in Figure~\\ref{2times2}, so we induct on $m$. The leftmost $2\\times 2$ square of a $2\\times 2m$ rectangle may have Type A, B, C, or D. First, assume this leftmost square has Type D. Any maximum independent arrangement of a $2\\times 2(m-1)$ rectangle may be appended to the Type D square creating $m^2$ distinct maximum nonattacking arrangements. Next, if the leftmost square has Type A or C, it must be followed by a square of same type or of Type B. But in any $2\\times 2m$ rectangle, when reading from left to right, as soon as a Type B square is introduced in the strip, all remaining squares to the right must also be of Type B. Thus any $2\\times 2m$ strip beginning with a Type A or Type C square consists of $k$ squares of Type A or C followed by $m-k$ squares of Type B for $1\\leq k \\leq m$. Finally there is one possible arrangement beginning with a Type B square. Thus we have\n\\begin{equation*}\nm^2 +2m+1 = (m+1)^2\n\\end{equation*}\ndistinct arrangements as desired.\n\nFurther, no arrangement appears more than once in the matrix $M_{2m}$. We continue to think of the entries of the matrix $M_{2m}$ as a string of $m$ $2\\times 2$ squares. 
We observe, by construction, as one reads from top to bottom down a column of the matrix, the only actions on these $2\\times 2$ squares are:\n\\begin{enumerate}[i.]\n\\item Type A squares may be changed to Type D squares.\n\\item Type B squares may be changed to Type C squares.\n\\item Any type square may remain fixed.\n\\end{enumerate}\nSimilarly, as you read from left to right across a row of the matrix, the only actions are:\n\\begin{enumerate}[i.]\n\\item Type A squares may be changed to Type B squares.\n\\item Type D squares may be changed to Type C squares.\n\\item Any type square may remain fixed.\n\\end{enumerate}\nGiven any two arrangements in distinct positions in the matrix $M_{2m}$, at least one square has changed from the lower-indexed entry to the higher-indexed entry. If that square was of Type B or D, respectively, it was changed into a Type C square and no action may change it back to a Type B or D square, respectively. If the square was of Type A, then it was changed to a Type B, C, or D square, but in any case, may not return to Type A. Because Type C squares cannot be changed, we have a matrix with unique elements whose size is equal to the size of the set, so therefore each independent maximum arrangement of pawns occurs exactly once in $M_{2m}$.\n\\end{proof}\n\n\nNow, we define a map from the set of subsets of $m$ rows and $m$ columns of a $2m\\times 2m$ matrix into the set of nonattacking arrangements of $2m^2$ pawns.\n\\begin{definition}\nSuppose the rows and columns of a $2m\\times 2m$ matrix are indexed by $[2m]$. Set \n\\begin{equation*}\nA =\\{ C\\cup R : C,R\\subset [2m] \\hbox{ and } |C|=|R|=m\\},\n\\end{equation*}\nthat is, $A$ is the set of all subsets\nconsisting of $m$ rows $R=\\{r_1, r_2, \\ldots, r_{m}\\}\\subset [2m]$ and $m$ columns $C=\\{c_1,c_2, \\ldots, c_{m}\\}\\subset [2m]$. Let $B$ be the set of all nonattacking arrangements of $2m^2$ pawns on a $2m\\times 2m$ chessboard. 
Define the map $\\Phi: A \\longrightarrow B$ as follows:\n\nGiven a subset $C\\cup R$, assume without loss of generality that $r_1 < r_2< \\cdots a_{i+1}$ for some $i$. This implies the arrangement $m_{a_i, b_i}$ is in a lower row in $M_{2m}$ than arrangement $m_{a_{i+1},b_{i+1}}$, but appears directly above $m_{a_{i+1},b_{i+1}}$ in the $2m\\times 2m$ arrangement. We apply a similar argument to that used in Proposition~\\ref{independent}. At least one square, say $A_k$ in $m_{a_i,b_i}$ is different from the square in the same position, $B_k$, in $m_{a_{i+1},b_{i+1}}$. If $A_k$ is of type D, then $B_k$ is of type A or C, hence the pawn in the lower left corner of $A_k$ may attack the pawn in the upper right corner of $B_k$. If $A_k$ is of Type C, then $B_k$ is of Type A or B, and the pawn in the lower left corner of $A_k$ may attack the pawn in the upper right corner of $B_k$. Thus $a_i \\not> a_{i+1}$. Similarly, if $b_i > b_{i+1}$, the arrangement $m_{a_i, b_i}$ is in column further to the right in $M_{2m}$ than arrangement $m_{a_{i+1},b_{i+1}}$, but appears directly above $m_{a_{i+1},b_{i+1}}$ in the $2m\\times 2m$ arrangement. Again at least one square, say $A_k$ in $m_{a_i,b_i}$ is different from the square in the same position, $B_k$, in $m_{a_{i+1},b_{i+1}}$. If $A_k$ of Type B, then $B_k$ is of Type A or D and the pawn in the lower right corner of $A_k$ may attack the pawn in the upper left corner of $B_k$. Further if $A_k$ is of Type C, then $B_k$ is of Type A or D and the pawn in the lower right corner of $A_k$ may attack the pawn in the upper left corner of $B_k$. 
Thus $b_i \\not> b_{i+1}$, and we have arrived at a contradiction.\n\\end{proof}\n\n Therefore we have the following corollary.\n\n\\begin{corollary}\nThe function $\\Phi: A \\longrightarrow B$ is a bijection.\n\\end{corollary}\n\nHence, because we may choose an $m$-subset of $[2m]$ in ${2m\\choose m}$ ways, we have our main result.\n\n\\begin{theorem}\nThe number of maximum nonattacking arrangements of pawns on a $2m\\times 2m$ chessboard is ${2m\\choose m}^2$.\n\\end{theorem}\n\n\nWe may generalize this result to maximum independent arrangements of pawns on $2n\\times 2m$ rectangles.\n\n\\begin{theorem}\nThe number of maximum nonattacking arrangements of pawns on a $2n\\times 2m$ chessboard is ${m+n\\choose n}^2$.\n\\end{theorem}\n\\begin{proof}\nAssume without loss of generality that $n\\leq m$. We may utilize the bijection $\\Phi$ from above. Given a nonattacking arrangement of $2mn$ pawns on a $2n\\times 2m$ chessboard, we may divide the arrangement into $n$ rectangles of size $2\\times 2m$. These correspond to $n$ (not necessarily distinct) entries in the matrix $M_{2m}$. Thus we have a set of indices from the matrix entries\n\\begin{equation*}\nS=\\{(a_i, b_i) | 1 \\leq a_1 \\leq a_2 \\leq \\cdots \\leq a_n\\leq m+1 \\hbox{ and }1 \\leq b_1 \\leq b_2 \\leq \\cdots \\leq b_n \\leq m+1 \\}.\n\\end{equation*}\nTo create distinct column and row entries we set\n\\begin{equation*}\nC \\cup R = \\{a_1, a_2+1, a_3+2, \\ldots, a_n+n-1\\} \\cup \\{b_1, b_2+1, b_3+2, \\ldots, b_n+n-1\\}.\n\\end{equation*}\nWe note the maximum value of elements in $C$ or $R$ is $m+n$, thus $C, R \\subset [m+n]$.
Hence we are choosing an $n$-subset of rows from $[m+n]$ and an $n$-subset of columns from $[m+n]$, and the result follows.\n\\end{proof}\n\n\n\n\n\n\\newcommand{\\journal}[6]{{\\sc #1,} #2, {\\it #3}, {\\bf #4}, #6 (#5)}\n\\newcommand{\\book}[4]{{\\sc #1,} #2, #3 (#4)}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $(x_n)\\in[0,1)$ be a sequence, $s>0$ be a real number and $N$ be a natural number. The pair correlation statistic of $(x_n)$ is defined as follows:\n\\[R_2(s,(x_n),N) :=\\frac{1}{N}\\#\\Big\\{1\\leq n\\neq m\\leq N: \\|x_n-x_m\\|\\leq \\frac{s}{N}\\Big\\},\\]\nwhere for any real $x$, $\\displaystyle\\|x\\|:=\\inf_{m\\in\\mathbb{Z}}|x+m|$ denotes the distance to the nearest integer. The sequence $(x_n)$ is said to have Poissonian pair correlation (PPC) if for all $s>0$,\\\\\n\\[\\lim_{N\\to\\infty}R_2(s,(x_n),N)= 2s .\\]\nThis concept originated from theoretical physics and it plays a crucial role in the Berry\u2013Tabor conjecture.\nRudnick and Sarnak~\\cite{rudnick1998pair} first studied this notion from a mathematical point of view. Since then the topic has achieved wide attention~\\cite{RSZ2001, RZ1999}, and several generalizations are known; see, e.g.,~\\cite{hauke2021weak,hinrichs2019multi,marklof2020pair,steinerberger2020poissonian}.\n\nThe theory of uniform distribution (or equidistribution) of a sequence has a long history. An interesting connection between the pair correlation and the uniform distribution of a sequence is known. These two properties of a sequence are different in nature, but they are not independent.
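As a quick illustration of the statistic $R_2$ just defined (our sketch, not part of the paper; the function names are ours), it can be computed directly from the definition:

```python
def pair_correlation(xs, s):
    """R_2(s, (x_n), N) = (1/N) * #{n != m : ||x_n - x_m|| <= s/N},
    where ||.|| is the distance to the nearest integer."""
    N = len(xs)

    def dist(a, b):
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)  # nearest-integer (torus) distance

    count = sum(1 for i in range(N) for j in range(N)
                if i != j and dist(xs[i], xs[j]) <= s / N)
    return count / N
```

For equally spaced points $x_n = n/N$ each point has exactly $2\lfloor s\rfloor$ neighbours within $s/N$, so $R_2(s) = 2\lfloor s\rfloor \neq 2s$: a perfectly uniformly distributed sequence need not have PPC.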
In fact, it is known that a sequence with Poissonian pair correlation is necessarily uniformly distributed (see~\\cite{aistleitner2018pair,steinerberger2020poissonian,Grepstad2017Larcher}), but the converse is not true.\n\nOnly recently, a concept of pair correlation for higher dimensional sequences has been introduced in \\cite{hinrichs2019multi} with respect to the sup-norm and in \\cite{steinerberger2020poissonian} with respect to the $2$-norm. Throughout the article we assume $d \\geq 2$. For $\\boldsymbol{x}=(x^{(1)},\\ldots,x^{(d)}) \\in \\mathbb{R}^d$, we denote $\\|\\boldsymbol{x}\\|_\\infty=\\max{\\{\\|x^{(1)}\\|,\\ldots,\\|x^{(d)}\\|\\}}$ and $\\|\\boldsymbol{x}\\|_2=(\\|x^{(1)}\\|^2+\\cdots +\\|x^{(d)}\\|^2)^{1\/2}$.\n\tLet $ (\\boldsymbol{x}_n)_{n\\geq 1}$ be a sequence in $[0,1)^d$. For $s>0$ we write\n\t\\[R_{2,\\infty}^{(d)}(s,(\\boldsymbol{x}_n),N):=\\displaystyle\\frac{1}{N}\\#\\Big\\{1\\leq m\\neq n\\leq N: \\|\\boldsymbol{x}_n-\\boldsymbol{x}_m\\|_\\infty\\leq \\frac{s}{N^{1\/d}}\\Big\\}\\] \n\tand \\[R_{2,2}^{(d)}(s,(\\boldsymbol{x}_n),N):=\\displaystyle\\frac{1}{N}\\#\\Big\\{1\\leq n\\neq m\\leq N: \\|\\boldsymbol{x}_n-\\boldsymbol{x}_m\\|_2\\leq \\frac{s}{N^{1\/d}}\\Big\\}.\\]\n\n\t\\begin{definition}\n\tA sequence $(\\boldsymbol{x}_n)_{n\\geq 1}$ in $\\mathbb{R}^d$ is said to have $\\infty$-PPC if for all $s>0$, \n\t\\[R_{2,\\infty}^{(d)}(s,(\\boldsymbol{x}_n),N) \\to (2s)^d \\text{ as }N\\to\\infty,\\] \n\tand $2$-PPC if for all $s>0$,\n\t\\[R_{2,2}^{(d)}(s,(\\boldsymbol{x}_n),N) \\to w_ds^d \\text{ as }N\\to\\infty,\\]\n\twhere $w_d$ is the volume of the unit ball of $\\mathbb{R}^d$ in the $2$-norm.\n\\end{definition}\nSimilar to the one-dimensional case, it has been proved in \\cite{hinrichs2019multi} that $\\infty$-PPC implies uniform distribution and in \\cite{steinerberger2020poissonian} that $2$-PPC implies uniform distribution for higher dimensional sequences.\n\nThe purpose of this article is to show $\\infty$-PPC and $2$-PPC for some higher
dimensional sequences of the form $(\\{a_n^{(1)}\\alpha_1\\},\\ldots,\\{a_n^{(d)}\\alpha_d\\})$, where $(a_n^{(i)})$, for $i=1,2,\\ldots,d$ are sequences of natural numbers and $\\boldsymbol{\\alpha}=(\\alpha_1,\\ldots,\\alpha_d)\\in\\mathbb{R}^d$. For simplicity, in this case we denote the respective pair correlation statistic by $R_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N)$ and $R_{2,2}^{(d)}(s,\\boldsymbol{\\alpha},N)$. \nWe say that $(a_n^{(1)},\\ldots,a_n^{(d)})$ has metric Poissonian pair correlation with respect to sup-norm ($\\infty$-MPPC) if \\[R_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N) \\rightarrow (2s)^d \\mbox{ as } N\\rightarrow \\infty,\\] for almost all $\\boldsymbol{\\alpha}\\in\\mathbb{R}^d$. Moreover, $2$-MPPC is defined analogously.\n\n\nOur results depend on the notion of additive energy of integers sequences. For a finite subset $A$ of integers, the additive energy $E(A)$ of $A$ is defined by\n\\[E(A):=\\#\\{a,b,c,d\\in A: a+b=c+d\\}.\\]\n\nRecently, Aistleitner et al.~\\cite{aistleitner2017additive} proved that for a strictly increasing sequence $(a_n)$ of natural numbers the sequence $(\\{\\alpha a_n\\})$ has Poissonian pair correlation for almost all $\\alpha,$ provided $E(A_N)\\ll N^{3-\\epsilon}$ for some $\\epsilon>0$, where $A_N$ denotes the set of first $N$ elements of $(a_n)$. In \\cite{Bloom2019GCDSA}, Bloom and Walker improved their result by relaxing the condition on the upper bound of additive energy.\nAnalogous result for some special higher dimensional sequences has been established in \\cite{hinrichs2019multi} with respect to sup-norm. 
Specifically, they proved the following theorem.\n\\begin{thm}\n\tLet $(a_n)$ be a strictly increasing sequence of natural numbers, $A_N$ denote the first $N$ elements of $(a_n)$ and suppose that\n\t\\[E(A_N)=\\operatorname{O}\\Big(\\frac{N^3}{(\\log N)^{1+\\epsilon}}\\Big), \\text{ for some }\\epsilon>0,\\]\n\tthen for almost all $\\boldsymbol{\\alpha}=(\\alpha_1,\\ldots,\\alpha_d)\\in\\mathbb{R}^d$, $(\\{a_n\\boldsymbol{\\alpha}\\})=(\\{a_n\\alpha_1\\},\\ldots,\\{a_n\\alpha_d\\})$ has $\\infty$-PPC.\n\\end{thm}\n\nWe consider more general sequences, namely, $(\\{a_n^{(1)}\\alpha_1\\},\\ldots,\\{a_n^{(d)}\\alpha_d\\})$ and want to show that they have $\\infty$-PPC and $2$-PPC. To state our results we introduce joint notion of additive energy for several increasing sequences of natural numbers. \n\\begin{definition}[Joint additive energy]\n\tLet $(a_n^{(1)}),(a_n^{(2)}),\\ldots,(a_n^{(d)})$ be strictly increasing sequences of natural numbers and $A_N^{(i)}$ denote the first $N$ elements of $(a_n^{(i)})$, for $1\\leq i\\leq d.$ \nThe joint additive energy $E(A_N^{(1)};\\ldots;A_N^{(d)})$ is given by\n\t\\[E\\Big(A_N^{(1)};\\ldots;A_N^{(d)}\\Big)=\\#\\Big\\{1\\leq n,m,k,l\\leq N: a_n^{(i)}+a_m^{(i)}=a_k^{(i)}+a_l^{(i)}, i=1,\\ldots,d\\Big\\}.\n\t\\]\n\\end{definition}\n\\noindent Note that joint additive energy is additive energy in higher dimension.\nIn section~\\ref{s2} we study joint additive energy in details and obtain an upper bound of it.\n \nNow, we state our results.\n\\begin{theorem}\\label{infinity-PPC thm}\n\tLet $(a_n^{(1)}),(a_n^{(2)}),\\ldots,(a_n^{(d)})$ be strictly increasing sequences of natural numbers and $A_N^{(i)}$ denote the first $N$ elements of $(a_n^{(i)})$, for $1\\leq i\\leq d.$ Suppose that for some $\\delta>0$,\n\t\\[E\\Big(A_N^{(1)};\\ldots;A_N^{(d)}\\Big) = \\operatorname{O}\\left( N^{3-\\delta}\\right).\\]\n\tThen $(a_n^{(1)},\\ldots,a_n^{(d)})$ has $\\infty$-MPPC.\n\\end{theorem}\n\\begin{corollary}\\label{infinity-PPC thm 
cor}\n\tSuppose that for some $\\delta>0$,\n\t\\[\\displaystyle \\min_{1\\leq i\\leq d}E(A_N^{(i)}) = \\operatorname{O}\\left( N^{3-\\delta}\\right).\\]\n\tThen $(a_n^{(1)},\\ldots,a_n^{(d)})$ has $\\infty$-MPPC.\n\\end{corollary}\n\n\n \n In \\cite{steinerberger2020poissonian}, Steinerberger introduced the notion of $2$-PPC but did not mention any sequences which satisfy it. In the following theorems we study the $2$-PPC of certain sequences.\n\\begin{theorem}\\label{2-PPC thm1}\n\tLet $(a_n)$ be a strictly increasing sequence of natural numbers and $A_N$ denote the first $N$ elements of $(a_n).$\n\tAssume that \n\t\\[\\displaystyle E(A_N)=\\operatorname{O}\\Big(\\frac{N^3}{(\\log N)^{1+\\epsilon}}\\Big),\\text{ for some $\\epsilon>0$.}\\]\n\tThen for almost all $\\boldsymbol{\\alpha}=(\\alpha_1,\\ldots,\\alpha_d)\\in\\mathbb{R}^d$, the sequence $(\\{a_n\\boldsymbol{\\alpha}\\})=(\\{a_n\\alpha_1\\},\\ldots,\\{a_n\\alpha_d\\})$ has $2$-PPC.\n\\end{theorem}\n\\begin{theorem}\\label{2-PPC thm2}\n\tLet $(a_n),(b_n)$ be strictly increasing sequences of natural numbers and let $A_N$, $B_N$ denote the first $N$ elements of $(a_n)$ and $(b_n)$, respectively. Suppose that\n\\begin{align*}\nE\\Big(A_N;B_N\\Big)=\\operatorname{O}\\Big(N^{3}\\exp{\\big(-(\\log{N})^{\\frac{1}{2}+\\delta}\\big)}\\Big),\\,\\mbox{ for some } \\delta>0.\n\\end{align*}\n\tThen $(a_n,b_n)$ has $2$-MPPC.\n\\end{theorem}\n\n\\begin{remark}\nFor an arbitrary strictly increasing sequence $(a_n)$ of natural numbers and for any integer $l\\geq 2$,\n$(a_n,n^l)$ has $\\infty$-MPPC and $2$-MPPC.
\nIn particular, $(\\{n\\alpha\\},\\{n^2\\beta\\})$ has $\\infty$-PPC and $2$-PPC for almost all $(\\alpha,\\,\\beta)\\in\\mathbb{R}^2.$ It is known that $(\\{n\\alpha\\})_{n\\in \\mathbb{N}}$ does not have PPC for any $\\alpha\\in \\mathbb{R}$ (for instance, see~\\cite{LS}).\nThus, $\\infty$-PPC and $2$-PPC in higher dimensions do not imply that each component has PPC.\n\\end{remark}\n\n\n\nNext, we obtain another such example in two dimensions. \n\\begin{theorem}\\label{thm6}\n\tLet $A \\in [1,\\,2]$ be any real number. Then $(n,[n(\\log n)^A])$ has $\\infty$-MPPC and $2$-MPPC.\n\\end{theorem}\nCurrently we do not know any result showing that $(\\{[n(\\log n)^A]\\alpha_2\\})$ has PPC. But, from \\cite[Corollary 1]{Garaev2004} we obtain the associated additive energy $\\ll N^3(\\log N)^{1-A}$.\n\nWe fix some notation here for use in later sections.\\\\\n\\textbf{Notations:}\nLet $h\\geq 2$ be an integer, $a$ be a real number and $\\boldsymbol{x}=(x_1,\\ldots,x_h),\\boldsymbol{y}=(y_1,\\ldots,y_h)\\in\\mathbb{R}^h.$ \n\\begin{itemize}\n\t\\item Coordinate-wise product $\\boldsymbol{x}\\boldsymbol{y}=(x_1y_1,\\ldots,x_hy_h)$\n\t\\item Inner product $\\boldsymbol{x}.\\boldsymbol{y}=\\sum_{1\\leq i\\leq h}x_iy_i$\n\t\\item Fractional part $\\{\\boldsymbol{x}\\}=(\\{x_1\\},\\ldots,\\{x_h\\}).$\n\t\\item Scalar product $a\\boldsymbol{x}=(ax_1,\\ldots,ax_h)$\n\t\\item $\\Sigma_{\\boldsymbol{x}\\in\\mathbb{Z}^h}^{'}$ means all components of $\\boldsymbol{x}$ are non-zero.\n\\end{itemize}\nFurther, for $a, b \\in \\mathbb{Z}$ their greatest common divisor (GCD) is $(a, b).$ Define $e(x):=e^{2\\pi ix}$ for $x \\in \\mathbb{R}$. \n\nLet $h\\geq 1$.
If $a_n^{(i)}=a_n$ for all $i$ then we denote $A_N^{(i)}$'s by $A_N$ and $E(A_N^{(1)};\\ldots;A_N^{(h)})$ by $E(A_N).$\nFor $\\boldsymbol{v}=(v_1,\\cdots,v_h), v_i\\neq 0\\in\\mathbb{Z},$ we define representation function $\\mathcal{R}_N(\\boldsymbol{v})$ by,\n\\begin{align}\\label{R_N}\n\\mathcal{R}_N(\\boldsymbol{v}):=\\#\\{1\\leq m\\neq n\\leq N: a_n^{(i)}-a_m^{(i)}=v_i,\\:1\\leq i\\leq h\\}.\n\\end{align}\nWe use $\\mathcal{R}_N(\\boldsymbol{v})$ and $\\mathcal{R}_N(v_1,\\ldots,v_h)$ interchangeably.\nOne can see that, \n\\[\\displaystyle\\sum_{\\boldsymbol{v}\\in\\mathbb{Z}^h}\\mathcal{R}_N(\\boldsymbol{v})^2\\leq E(A_N^{(1)};\\ldots;A_N^{(d)}).\\]\n\n\n\\section{Joint additive energy}\\label{jadditive}\\label{s2}\nLet $A_N^{(i)}$ denote the first $N$ elements of $(a_n^{(i)})$, for $1\\leq i\\leq d$. It is easy to see that the joint additive energy satisfy the following trivial estimate \n\\[N^2\\leq E(A_N^{(1)};\\ldots;A_N^{(d)})\\leq\\min_{1\\leq i\\leq d}{E(A_N^{(i)})}\\leq N^3.\n\\]\nFrom this observation we conclude that for any strictly increasing sequence $A:=(a_n)_{1\\leq n\\leq N}$ of $N$ natural numbers and any fixed integer $l\\geq 2$, $B_l:=\\{n^l: 1\\leq n\\leq N\\}$ we have $E(A;B)\\ll N^{2+\\epsilon}$ for any $\\epsilon>0$ (since $E(B)\\ll N^{2+\\epsilon}$, see ~\\cite{aistleitner2017additive}).\n\n\nFor $1\\leq s\\leq d$, Vinogradov's mean value $J_{s,d}(N)$ is given by the number of solutions in $\\mathbb{N}$ of the system\n\\[x_1^{i}+\\cdots+x_s^i = y_1^{i}+\\cdots+y_s^i, \\quad 1\\leq i\\leq d.\\] For $B_l:=\\{n^l: 1\\leq n\\leq N\\}$, we see that $E(B_1;B_2;\\ldots;B_d)$ is equal to the Vinogradov's mean value $J_{2,d}(N)$. \nThen from the work of Ford and Wooley~\\cite{FW} we get\n\\begin{equation*}\nE(B_1;B_2;\\ldots;B_d)= J_{2,d}(N) \\ll N^{2+\\epsilon},\n\\end{equation*}\nfor any $\\epsilon>0$. In this section, we concentrate on obtaining the joint additive energy of certain sequences. 
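Since the joint additive energy is used repeatedly below, a direct computation may help fix ideas. The following sketch (our illustration; the function name is ours) groups ordered pairs by their vector of pair sums, which turns the $\operatorname{O}(N^4)$ definition into an $\operatorname{O}(N^2)$ computation:

```python
from collections import Counter

def joint_additive_energy(*seqs):
    """E(A_N^(1); ...; A_N^(d)): number of quadruples (n, m, k, l) with
    a_n^(i) + a_m^(i) = a_k^(i) + a_l^(i) simultaneously for every i."""
    N = len(seqs[0])
    # r(v) = number of ordered pairs (n, m) with the given vector of sums;
    # the energy is the sum of r(v)^2 over all attained vectors v
    pair_sums = Counter(
        tuple(s[n] + s[m] for s in seqs)
        for n in range(N) for m in range(N)
    )
    return sum(c * c for c in pair_sums.values())
```

With a single sequence this recovers the classical additive energy, e.g. $E(\{1,2,3\})=19$; pairing $(n)$ with $(n^2)$ forces $\{n,m\}=\{k,l\}$ and the joint energy drops to $2N^2-N$.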
A natural question is how small the joint additive energy can be, whenever all the components have considerably large additive energy. For example, \\cite[Theorem 1]{Garaev2001} says that the sequence $([n\\log n])$ has additive energy $\\gg \\frac{N^3}{\\log N}$, but we show that the joint additive energy of $(n)$ and \n$([n\\log n])$ is much smaller.\n\nLet $(f(n))$, $(g(n))$ be two strictly increasing sequences of natural numbers and $F_N$, $G_N$ be the set of their first $N$ elements respectively. For a given positive integer $l$ with $1\\leq l< N$ assume that $J_l:=J_l(N)$ is the number of solutions of the system of equations\n\\begin{align}\\label{jointeq1}\n\\begin{cases}\n\tf(x)+f(y)=f(x+l)+f(z)\\\\\n\tg(x)+g(y)=g(x+l)+g(z),\n\t\\end{cases}\n\t\\text{where } \\,1\\leq x0$, $F^{''}(n)>0$ and $F^{'''}(n)<0$. For any real $0<\\epsilon<1$,\n\t\\[E(F_N;G_N)\\ll N^{2+\\epsilon}+\\frac{N^{1+\\epsilon}\\log N}{F^{''}(N)}.\\]\n\\end{theorem}\nThe following corollary is immediate.\n\\begin{corollary}\\label{coro2.3}\nLet $1\\leq A \\leq 2$ and $0<\\epsilon<1$. \n\tIf $F(n)=n(\\log n)^A$ then, \\[E(F_N;G_N)\\ll N^{2+\\epsilon}(\\log{N})^{2-A}.\\]\n\\end{corollary}\n\\begin{proof}[Proof of Theorem \\ref{n and F(n) thm}]\n\tHere we take $f(n)=n$ and $g(n)=[F(n)]$, so $J_l$ in \\eqref{jointeq1} reduces to the number of solutions of the equation\n\t\\begin{align*}\n\t\t[F(n)]+[F(m+l)]=[F(n+l)]+[F(m)], \\mbox{ with }1\\leq n1\/2,$} \\]\nand $\\zeta_Y(\\alpha)$ is defined similarly.\nFor fixed $\\alpha>1\/2$, these series converge a.e. by Kolmogorov three series theorem (see \\cite[Theorem ~15.51]{K}). 
\n\nNow, we recall moment estimates of $\\zeta_X(\\alpha)$ from Lemma~ 7 of \\cite{Bloom2019GCDSA} and we use it to prove Proposition~\\ref{gen gcd lemma}.\n\\begin{lemma}\\label{moment lemma} For a real number $l$,\n\t\\begin{equation*}\n\t\t\\log\\mathbb{E}\\big[|\\zeta_X(\\alpha)|^{2l}\\big]\\ll\\begin{cases}\n\t\t\tl\\log\\log l, \\quad & \\mbox{if }\\,\\alpha=1, \\\\\n\t\t\tC'(\\alpha)l^{1\/\\alpha}(\\log l)^{-1}, \\quad &\\mbox{if }\\, 1\/2<\\alpha<1,\\\\\n\t\t\tl^2\\log((\\alpha-1\/2)^{-1}), \\quad & \\mbox{if }\\, \\alpha\\rightarrow \\frac{1}{2},\n\t\t\\end{cases}\n\t\\end{equation*}\nwhere $C'(\\alpha)$ is a positive constant, $l\\geq 3$ for first two cases and $l\\geq 1$ for the final case.\n\\end{lemma}\n\n\\begin{proof}[Proof of Proposition~\\ref{gen gcd lemma}]\n\t Let us start with defining the double sum\n\t$$D(X,Y):=\\displaystyle\\sum_{a,b}f(a,b)X(a)Y(b).$$\n\tThen, we look at the expectation of $|\\zeta_X(\\alpha)\\zeta_Y(\\alpha)D(X,Y)|^2$. Squaring this product and considering the expectation to get \n\t\\begin{align*}\n\t\\mathbb{E}\\big[|\\zeta_X(\\alpha)\\zeta_Y(\\alpha)D(X,Y)|^2\\big]=& \\displaystyle\\sum_{m_1,n_1,m_2,n_2,a,b,c,d}\\frac{f(a,b)\\overline{f(c,d)}}{m_1^\\alpha n_1^\\alpha m_2^\\alpha n_2^\\alpha}\\mathbbm{1}_{n_1a=n_2c}\\mathbbm{1}_{m_1b=m_2d}.\n\t\\end{align*}\nNow, indicator functions allow us to write $n_1=\\frac{h_1c}{(a,\\, c)},\\, n_2=\\frac{h_1a}{(a,\\, c)}$ and $m_1=\\frac{h_2d}{(b,\\, d)}, \\,m_2=\\frac{h_2b}{(b,\\, d)}$ for positive integers $h_1, h_2$. 
Thus,\n\t\\begin{align}\\label{eq0}\n\t\t\\mathbb{E}\\big[|\\zeta_X(\\alpha)\\zeta_Y(\\alpha)D(X,Y)|^2\\big]\n\t\t=& \\sum_{h_1,h_2,a,b,c,d}f(a,b)\\overline{f(c,d)}\\frac{(a,c)^{2\\alpha}(b,d)^{2\\alpha}}{(acbd)^\\alpha}\\frac{1}{h_1^{2\\alpha}h_2^{2\\alpha}}\\\\\n\t\\nonumber\t= & \\displaystyle\\zeta(2\\alpha)^2S_f(2;\\alpha).\n\t\\end{align}\n\tAlso, we notice that $\\mathbb{E}\\big[|D(X,Y)|^2\\big]=\\|f\\|_2^2$.\n\tLet $V$ and $l$ be positive real parameters to be chosen later. Consider the event $\\mathcal{A}=(|\\zeta_X(\\alpha)|0$ be fixed and $N\\geq (2s)^d$ be an integer. For $\\boldsymbol{\\alpha}\\in \\mathbb{R}^d $ and $\\boldsymbol{a}_n\\in \\mathbb{N}^d$ consider the sequence $(\\boldsymbol{x_n})=(\\{\\boldsymbol{a}_n\\boldsymbol{\\alpha}\\})$. Then,\n\t\\begin{align*}\n\t\tR_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N)=\\frac{1}{N}\\displaystyle\\sum_{1\\leq m\\neq n\\leq N}\\chi_{s,N}(\\boldsymbol{\\alpha}(\\boldsymbol{a}_m-\\boldsymbol{a}_n)),\n\t\\end{align*}\nwhere $\\chi_{s,N}$ is a characteristic function defined for $\\boldsymbol{x}\\in \\mathbb{R}^d$ as follows:\n\\begin{align*}\n\\chi_{s,N}(\\boldsymbol{x})=\\begin{cases}\n\t1\\quad& \\text{if } \\|\\boldsymbol{x}\\|_\\infty\\leq s\/N^{1\/d},\\\\\n\t0 \\quad& \\text{otherwise.}\n\\end{cases}\n\\end{align*}\nThe Fourier series expansion of $\\chi_{s,N}$ is given by\n\\begin{align}\\label{char}\n\t\\chi_{s,N}(\\boldsymbol{\\alpha})\\sim\\displaystyle\\sum_{\\substack{\\boldsymbol{r}\\in\\mathbb{Z}^d\\\\ \\boldsymbol{r}=(r_1,\\cdots,r_d)}}c_{\\boldsymbol{r}}e(\\boldsymbol{r}.\\boldsymbol{\\alpha}),\n\\end{align}\nwhere\n\\begin{align*}\n\tc_{\\boldsymbol{r}}=&\\displaystyle\\int_{-s\/N^{1\/d}}^{s\/N^{1\/d}}\\cdots\\int_{-s\/N^{1\/d}}^{s\/N^{1\/d}}e{\\Big(-\\sum_{i\\leq d}r_i\\alpha_i\\Big)}d\\alpha_1\\cdots d\\alpha_d\\nonumber\\\\\n\t=& c_{r_1}\\cdots c_{r_d},\n\\end{align*}\nand \n\\[c_{r_j}=\\displaystyle\\int_{-s\/N^{1\/d}}^{s\/N^{1\/d}}e{(-r_j\\alpha_j)}d\\alpha_j, \\quad 
j=1,2,\\ldots,d.\n\\]\n\n Further, one gets the following upper bound:\n\\begin{align*}\n\t|c_{r_j}|\\leq\\min{\\Big(2s N^{-\\frac{1}{d}},\\, |r_j|^{-1}\\Big)}.\n\\end{align*}\nA straightforward calculation gives the expectation:\n \\begin{align*}\n \\mathbb{E}\\big[R_{2,\\infty}^{(d)}(s,.,N)\\big]=\\displaystyle\\int_{[0,1)^d}R_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N)d\\boldsymbol{\\alpha}=(2s)^d\\frac{N-1}{N}.\n \\end{align*}\n Now, the variance of $R_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N) $ is defined as \n \\begin{align*}\n \\operatorname{Var}(R_{2,\\infty}^{(d)}(s,.,N)) :=\\int_{[0,1)^d}\\bigg(R_{2,\\infty}^{(d)}(s,\\boldsymbol{\\alpha},N)-\\frac{(2s)^d(N-1)}{N}\\bigg)^2d\\boldsymbol{\\alpha}.\n \\end{align*}\n By using Fourier series expansion of $\\chi_{s,N}$ from \\eqref{char}, we write\n \\begin{align*}\n \\operatorname{Var}(R_{2,\\infty}^{(d)}(s,.,N)) = \\frac{1}{N^2}\\displaystyle\\int_{[0,1)^d}\\Bigg(\\sum_{1\\leq m\\neq n\\leq N}\\sum_{\\substack{\\boldsymbol{r}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}}}c_{\\boldsymbol{r}}e(\\boldsymbol{r}.(\\boldsymbol{\\alpha}(\\boldsymbol{a}_m-\\boldsymbol{a}_n)))\\Bigg)^2d\\boldsymbol{\\alpha}.\n \\end{align*}\n After squaring the integrand, we want to interchange the summations and integrations. 
Such a rearrangement can be justified by using the fact that the partial sums of a Fourier series of an indicator function are uniformly bounded, and hence the dominated convergence theorem is applicable (see, for example, \\cite[Chapter 3, Exercise 18]{SS}).\n Thus, $\\operatorname{Var}(R_{2,\\infty}^{(d)}(s,.,N))$ equals\n \\begin{align*}\n&\\frac{1}{N^2}\\displaystyle\\sum_{\\substack{1\\leq m\\neq n\\leq N\\\\1\\leq k\\neq l\\leq N}}\\sum_{\\substack{\\boldsymbol{r},\\boldsymbol{t}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}}}c_{\\boldsymbol{r}}c_{\\boldsymbol{t}}\\int_{[0,1)^d}e(\\boldsymbol{r}.(\\boldsymbol{\\alpha}(\\boldsymbol{a}_m-\\boldsymbol{a}_n))-\\boldsymbol{t}.(\\boldsymbol{\\alpha}(\\boldsymbol{a}_k-\\boldsymbol{a}_l)))d\\boldsymbol{\\alpha}\\\\\n\t=& \\frac{1}{N^2}\\displaystyle\\sum_{\\substack{1\\leq m\\neq n\\leq N\\\\1\\leq k\\neq l\\leq N}}\\sum_{\\substack{\\boldsymbol{r},\\boldsymbol{t}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}}}c_{\\boldsymbol{r}}c_{\\boldsymbol{t}}\\prod_{i\\leq d}\\int_{[0,1)}e\\Big(\\big(r_i(a_m^{(i)}-a_n^{(i)})-t_i(a_k^{(i)}-a_l^{(i)})\\big)\\alpha_i\\Big)d\\alpha_i.\n\\end{align*}\nThe innermost integral equals $1$ whenever $r_i=t_i=0$ or $r_i(a_m^{(i)}-a_n^{(i)})=t_i(a_k^{(i)}-a_l^{(i)})$; otherwise the integral is zero.\nNote that when all components of $\\boldsymbol{r}, \\boldsymbol{t}$ are nonzero, the associated sums produce the main contribution. Otherwise, when some of them are zero, we save powers of $N$.
\nPrecisely, the variance equals to\n\\begin{align}\\label{eq6}\n\t\\frac{1}{N^2}\\displaystyle\\sum_{\\substack{j_1<\\cdots\\frac{N^{1\/d}\\gcd(v_i,w_i)}{s\\min(|v_i|,|w_i|)}.\\nonumber\n\\end{align}\nFor any fixed $j_10$,\n\\begin{align*}\n\\operatorname{Var}(R_2^{(d)}(s,.,N)) &\\ll \\frac{s^d(\\log N)^d}{N^3}E\\left(A_N^{(1)};\\ldots;A_N^{(d)}\\right)\\frac{\\exp(C\\sqrt{\\log N\\log\\log N})}{(\\log N+\\operatorname{O}(1))^d}+\\frac{1}{N^{1\/d-\\gamma}}\\\\\n&\\ll \\frac{s^d}{N^3}E\\left(A_N^{(1)};\\ldots;A_N^{(d)}\\right)\\exp\\Big(C\\sqrt{\\log N\\log\\log N}\\Big).\n\\end{align*}\nThus, under the hypothesis of the theorem we get the variance is $\\operatorname{O}(N^{-\\delta\/2})$. This is the main part of the proof. The rest of the arguments follow a standard method of applying Chebyshev's inequality and Borel-cantelli lemma(see the proof of Theorem 1 in \\cite{aistleitner2017additive} and also \\cite{hinrichs2019multi, RZ1999}.)\n\\end{proof}\n\n\n\\section{Proof of theorem \\ref{2-PPC thm1} and \\ref{2-PPC thm2}.}\n\\subsection{Properties of Bessel functions}\nHere we state some properties of the Bessel function which are important tools to prove Theorem \\ref{2-PPC thm1} and Theorem~\\ref{2-PPC thm2}. For complex number $\\nu$ with $\\Re(\\nu)>-1\/2$ and $t\\geq 0$ the Bessel function $J_\\nu$ of order $\\nu$ is defined by \n\\begin{align}\\label{bessel1}\nJ_\\nu(t) = \\frac{(t\/2)^\\nu}{\\Gamma(\\nu+1\/2)\\Gamma(1\/2)}\\int_{-1}^1e^{itx}(1-x^2)^{\\nu}\\frac{dx}{\\sqrt{1-x^2}}.\n\\end{align}\nThis definition is also valid for $t\\in\\mathbb{C}.$ The function $J_\\nu$ has many interesting properties. 
For our application we mention a few of them below; these are essentially given in \\cite[Appendix B]{loukasgrafakos}.\nFor $\\Re(\\nu)>-1\/2$ and $t\\geq 1$ we have a nice approximation which gives an asymptotic formula as $t\\rightarrow \\infty,$\n\\begin{align}\\label{p1}\nJ_\\nu(t) = \\sqrt{\\frac{2}{\\pi t}} \\cos{\\left(t-\\frac{\\pi \\nu}{2}-\\frac{\\pi}{4}\\right)} +\\operatorname{O}_\\nu(t^{-3\/2}).\n\\end{align}\nFor $0< t\\leq 1$ and $\\Re(\\nu)>-1\/2$, we have the following upper bound\n\\begin{align}\\label{p2}\nJ_\\nu(t) \\ll_\\nu \\exp\\big({\\max\\{(\\Re(\\nu)+1\/2)^{-2}, (\\Re(\\nu)+1\/2)^{-1} \\}|\\Im(\\nu)|^2}\\big) t^{\\Re(\\nu)}.\n\\end{align}\nWhenever $t>0$ and $\\nu \\in \\mathbb{N}$, the Bessel function reduces to a simple form:\n\\begin{align}\\label{bessel2}\nJ_\\nu(t) = \\frac{1}{2\\pi}\\int_0^{2\\pi}\\cos(t\\sin{\\theta}- \\nu \\theta) d\\theta.\n\\end{align}\n\n\\begin{lemma}\\label{bessel fn lemma}\n\tLet $N$, $\\nu \\geq 2$ be two integers and $s,r>0$ be real numbers.
Then\n\t\\begin{itemize}\n\t\\item[(1)] $J_{\\nu\/2}(r)\\ll_\\nu 1$ if $r\\leq 1,$\\\\\n\t\\item[(2)] $J_{\\nu\/2}(r)\\ll_\\nu \\frac{1}{\\sqrt{r}}$ if $r > 1,$\\\\\n\t\\item[(3)] $\\frac{J_{\\nu\/2}( 2\\pi s r N^{-{1}\/{\\nu}})}{r^{\\nu\/2}}\\ll_\\nu \\frac{ s^{\\nu\/2}}{\\sqrt{N}}$, for any $r>0$\\\\\n\t\\item[(4)] $|J_\\mu(r)|\\leq 1$, for any $r>0$, and $\\mu\\in\\mathbb{N}.$\n\t\\end{itemize}\t\n\\end{lemma}\nProperties $1,\\,2,\\,3$ and $4$ follow from \\eqref{p2},\\,\\eqref{p1},\\, \\eqref{bessel1} and \\eqref{bessel2}, respectively.\n\\subsection{Preparation for the proofs of Theorem \\ref{2-PPC thm1} and Theorem \\ref{2-PPC thm2}}\nFor the sequence $(\\boldsymbol{x}_n)= (\\{\\boldsymbol{a}_n\\boldsymbol{\\alpha}\\})$, the $d$-dimensional pair correlation statistic in $2$-norm is\n\\begin{align}\n\tR_{2,2}^{(d)}(s,\\boldsymbol{\\alpha},N)=\\frac{1}{N}\\displaystyle\\sum_{1\\leq m\\neq n\\leq N}I_{s,N}(\\boldsymbol{\\alpha}(\\boldsymbol{a}_m-\\boldsymbol{a}_n)),\\nonumber\n\\end{align}\nwhere $I_{s,N}$ is the indicator function of the set of all $\\boldsymbol{x}\\in\\mathbb{R}^d$ which satisfy $\\|\\boldsymbol{x}\\|_2\\leq s\/N^{1\/d}$.\nThen the Fourier series expansion of $I_{s,N}$ is given by\n\\begin{align}\n\tI_{s,N}(\\boldsymbol{\\alpha})\\sim\\displaystyle\\sum_{\\substack{\\boldsymbol{r}\\in\\mathbb{Z}^d\\\\ \\boldsymbol{r}=(r_1,\\cdots,r_d)}}c_{\\boldsymbol{r}}e(\\boldsymbol{r}.\\boldsymbol{\\alpha}),\\nonumber\n\\end{align}\nwhere\n\\begin{align}\\label{c_r}\nc_{\\boldsymbol{r}}= \\begin{cases}\n\\omega_d\\frac{s^d}{N}, & \\mbox{ if } \\boldsymbol{r}=\\boldsymbol{0},\\\\\n\\frac{s^{d\/2}}{\\sqrt{N}\\|\\boldsymbol{r}\\|_2^{d\/2}}J_{d\/2}\\big(\\frac{2\\pi s}{N^{1\/d}}\\|\\boldsymbol{r}\\|_2\\big), & \\mbox{ if } \\boldsymbol{r}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}.\n\\end{cases}\n\\end{align} \nFor the details of such formulas one can see~\\cite[Appendix B]{loukasgrafakos}.
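The integral representation \eqref{bessel2} also gives a simple way to evaluate $J_\nu$ numerically for integer order; the stdlib-only sketch below is our illustration (the function name is ours), using the rectangle rule, which converges very rapidly for smooth periodic integrands:

```python
import math

def bessel_j(nu, t, steps=100_000):
    """Integer-order Bessel function via
    J_nu(t) = (1/(2*pi)) * int_0^{2*pi} cos(t*sin(theta) - nu*theta) dtheta,
    approximated by an equally spaced rectangle rule."""
    h = 2.0 * math.pi / steps
    total = sum(math.cos(t * math.sin(k * h) - nu * k * h)
                for k in range(steps))
    # (total * h) / (2*pi) simplifies to total / steps
    return total / steps
```

Property (4) of Lemma~\ref{bessel fn lemma}, $|J_\mu(r)|\leq 1$, is immediate from this representation, since the integrand is bounded by $1$.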
\n\nIt is not hard to verify that \n\\begin{align*}\n\\mathbb{E}(R_{2,2}^{(d)}(s, . ,N))=\\omega_ds^d\\frac{N-1}{N}.\n\\end{align*}\nWe now proceed to calculate the variance of $R_{2,2}^{(d)}(s, . ,N)$, that is,\n\\begin{align*}\n\\operatorname{Var}(R_{2,2}^{(d)}(s,.,N)) &= \\displaystyle\\int_{[0,1)^d}\\bigg(R_{2,2}^{(d)}(s,\\boldsymbol{\\alpha},N)-\\frac{\\omega_ds^d(N-1)}{N}\\bigg)^2d\\boldsymbol{\\alpha}\\\\\n& = \\frac{1}{N^2}\\displaystyle\\int_{[0,1)^d}\\Bigg(\\sum_{1\\leq m\\neq n\\leq N}\\sum_{\\substack{\\boldsymbol{r}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}}}c_{\\boldsymbol{r}}e(\\boldsymbol{r}.(\\boldsymbol{\\alpha}(\\boldsymbol{a}_m-\\boldsymbol{a}_n)))\\Bigg)^2d\\boldsymbol{\\alpha}.\n\\end{align*}\nBy interchanging summation and integration we obtain $\\operatorname{Var}(R_{2,2}^{(d)}(s,.,N))$ is bounded above by\n\\begin{align}\\label{eqppc2}\n \\frac{1}{N^2}\\displaystyle\\sum_{\\substack{1\\leq m\\neq n\\leq N\\\\1\\leq k\\neq l\\leq N}}\\sum_{\\substack{\\boldsymbol{r},\\boldsymbol{t}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\} }}c_{\\boldsymbol{r}}\\overline{c_{\\boldsymbol{t}}}\\prod_{i\\leq d}\\int_{[0,1)}e\\Big(\\big(r_i(a_m^{(i)}-a_n^{(i)})-t_i(a_k^{(i)}-a_l^{(i)})\\big)\\alpha_i\\Big)d\\alpha_i.\n\\end{align}\nSuch rearrangement can be justified as follows. If we split the Fourier series of the indicator function at some parameter $M$ and consider the partial sum upto $M$ and the tail part. Apply Cauchy-Schwarz inequality on the tail part to interchange sums and the integration, which leaves us with the square-integral of the tail. But the $2$-norm of tail goes to zero as $M\\rightarrow \\infty$. 
Thus, for arbitrarily large $M$ the partial sum upto $M$ remains for which the rearrangement is obvious.\n\n As in Theorem~\\ref{infinity-PPC thm}, it is enough to show that the variance is arbitrarily small.\n\\begin{proof}[Proof of Theorem~\\ref{2-PPC thm1}]\n\tIn this theorem we consider $a_n^{(i)}=a_n$ for all $1\\leq i\\leq d,$ then \\eqref{eqppc2} and the orthogonality of exponential function allow us to write,\n\\begin{align}\t\n\t\\operatorname{Var}(R_{2,2}^{(d)}(s,.,N))\\leq&\\frac{1}{N^2}\\displaystyle\\sum_{v,w\\in\\mathbb{Z}\\setminus \\{{0}\\}}\\mathcal{R}_N(v)\\mathcal{R}_N(w)\\sum_{\\substack{\\boldsymbol{r},\\boldsymbol{t}\\in\\mathbb{Z}^d\\setminus \\{\\boldsymbol{0}\\}\\\\ vr_i=wt_i}}|c_{\\boldsymbol{r}}c_{\\boldsymbol{t}}|. \\label{eq1ppc2}\n\\end{align}\nLet us fix $j_1 \\min \\boldsymbol{h}}\\frac{1}{\\|\\boldsymbol{h}\\|_2^{d+1}}\\\\\n\t&\\ll \\frac{(v,w)^{d+1}N^{1\/d}}{s(|vw|)^{d\/2+1\/2}}\\frac{1}{(\\min \\boldsymbol{h})^{d-q+1}}\\\\\n\t&\\ll \\frac{s^{d-q}(v,w)^q}{N^{1-q\/d}\\sqrt{|vw|^q}}.\n\\end{align*}\nCombining all three bounds with \\eqref{2nd eq} we obtain the claim \\eqref{eq3ppc2}. Then by using \\eqref{eq3ppc2} in \\eqref{eq1ppc2} we get,\n\\begin{align*}\n\t\\operatorname{Var}(R_{2,2}^{(d)}(s,.,N))\\ll &\\displaystyle\\sum_{1\\leq q\\leq d}\\frac{s^{2d-q}}{N^{4-q\/d}}\\sum_{v,w\\in\\mathbb{Z}\\setminus \\{0\\}}\\mathcal{R}_N(v)\\mathcal{R}_N(w)\\frac{(v,w)^q}{\\sqrt{|vw|^q}}\\\\\n\\end{align*}\nNow, using the result on GCD sums by G\\'al~\\cite{gel} we obtain\n\\begin{align*}\n\\operatorname{Var}(R_{2,2}^{(d)}(s,.,N)) \\ll \\begin{cases}\n\t\\frac{s^2}{N^3}E(A_N)(\\log\\log N)^2+ \\frac{s^2}{N^{1\/2 -\\gamma }},& \\mbox{ if } d=2,\\\\\n\t\\frac{s^d}{N^3}E(A_N) + \\frac{s^d}{N^{{1}\/{d}- \\gamma}}, & \\mbox{ if } d\\geq 3,\n\t\\end{cases}\n\\end{align*}\nfor some sufficiently small $\\gamma>0.$ Thus, under the hypothesis of this theorem the variance $\\operatorname{O}((\\log N)^{-1-\\epsilon\/2})$. 
The rest of the arguments follow from the proof of ~\\cite[Theorem 5]{Bloom2019GCDSA}.\n\\end{proof}\n\\begin{proof}[Proof of Theorem \\ref{2-PPC thm2}]\n\tFor $d=2$, (\\ref{eqppc2}) gives us,\n\\begin{align}\\label{eq4ppc2}\n\t\\operatorname{Var}(R_{2,2}^{(2)}(s,.,N))&\\leq \\frac{1}{N^2}\\sum_{\\substack{v_1,v_2,w_1,w_2\\in\\mathbb{Z}\\setminus \\{0\\}}}\\mathcal{R}_N(v_1,v_2)\\mathcal{R}_N(w_1,w_2)\\sideset{}{'}\\sum_{\\substack{\\boldsymbol{r}, \\boldsymbol{t}\\in\\mathbb{Z}^2\\\\ r_iv_i=t_iw_i}}|c_{\\boldsymbol{r}}c_{\\boldsymbol{t}}| \\\\ \\nonumber\n& + \\frac{1}{N^2}\\sum_{\\substack{v_1, w_1\\in\\mathbb{Z}\\setminus \\{0\\}}}\\mathcal{R}_N(v_1)\\mathcal{R}_N(w_1) \\sum_{\\substack{r_1,t_1 \\in\\mathbb{Z}\\setminus \\{0\\}\\\\ r_1v_1=t_1w_1}}|c_{(r_1,0)}c_{(t_1,0)}|\\\\ \\nonumber\n& + \\frac{1}{N^2}\\sum_{\\substack{v_2, w_2\\in\\mathbb{Z}\\setminus \\{0\\}}}\\mathcal{R}_N(v_2)\\mathcal{R}_N(w_2) \\sum_{\\substack{r_2,t_2\\in\\mathbb{Z}\\setminus \\{0\\}\\\\ r_2v_2=t_2w_2}}|c_{(0,r_2)}c_{(0,t_2)}|.\n\\end{align} \nNote that the second and third terms on the right hand side above are $\\operatorname{O}(N^{-1\/2+\\gamma})$ for some $\\gamma>0$, as it follows from \\eqref{eq3ppc2} with $d=2$ and $q=1$. \n\nLet us call the inner sum in first term as $I$.\nFrom the relation $r_iv_i=t_iw_i$ we have \n$r_i=\\frac{w_ih_i}{(v_i,w_i)}$ and $t_i=\\frac{v_ih_i}{(v_i,w_i)}$ for $i=1,2$ where $h_i$'s are nonzero integers. 
For simplicity we write,\n\\[A=\\frac{w_1}{(v_1,w_1)},\\, B=\\frac{w_2}{(v_2,w_2)},\\, C=\\frac{v_1}{(v_1,w_1)},\\, D=\\frac{v_2}{(v_2,w_2)}.\\]\nBy using \\eqref{c_r} we obtain\n\\begin{align*}\n\tI&\\ll\\frac{s^2}{N}\\sum_{h_1,h_2\\neq0}\\frac{\\Big|J_1\\Big(\\frac{2\\pi s}{\\sqrt{N}}(A^2h_1^2+B^2h_2^2)^{\\frac{1}{2}}\\Big)\\Big|}{(A^2h_1^2+B^2h_2^2)^{\\frac{1}{2}}}\\frac{\\Big|J_1\\Big(\\frac{2\\pi s}{\\sqrt{N}}(C^2h_1^2+D^2h_2^2)^{\\frac{1}{2}}\\Big)\\Big|}{(C^2h_1^2+D^2h_2^2)^{\\frac{1}{2}}}\\nonumber\\\\\n\t= &\\frac{s^2}{N|AC|}\\sum_{h_1\\neq 0}\\frac{1}{h_1^2}\\sum_{h_2\\neq 0}\\frac{\\Big|J_1\\Big(\\frac{2\\pi s|Ah_1|}{\\sqrt{N}}(1+\\frac{B^2}{A^2h_1^2}h_2^2)^{\\frac{1}{2}}\\Big) J_1\\Big(\\frac{2\\pi s|Ch_1|}{\\sqrt{N}}(1+\\frac{D^2}{C^2h_1^2}h_2^2)^{\\frac{1}{2}}\\Big)\\Big|}{\\Big(1+\\frac{B^2}{A^2h_1^2}h_2^2\\Big)^{\\frac{1}{2}} \\Big(1+\\frac{D^2}{C^2h_1^2}h_2^2\\Big)^{\\frac{1}{2}}}.\n\\end{align*}\nNow we divide the sum over $h_1$ into two parts,\n\\[|h_1|\\leq\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}\\text{ and, }|h_1|>\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}.\\]\nApplying (4) and (2) of Lemma \\ref{bessel fn lemma} respectively for small and large argument of Bessel function in the above inequality,\n\\begin{align*}\nI\t&\\ll \\frac{s^2}{N|AC|}\\Bigg(\\sum_{|h_1|\\leq\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}}\\frac{1}{h_1^2}\\sum_{h_2\\neq 0}\\Big(1+\\frac{B^2}{A^2h_1^2}h_2^2\\Big)^{-\\frac{1}{2}}\\Big(1+\\frac{D^2}{C^2h_1^2}h_2^2\\Big)^{-\\frac{1}{2}}\\\\\n\t&+\\sum_{|h_1|>\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}}\\frac{\\sqrt{N}}{s|AC|^{1\/2}|h_1|^3}\\sum_{h_2\\neq 0}\\Big(1+\\frac{B^2}{A^2h_1^2}h_2^2\\Big)^{-\\frac{1}{2}-\\frac{1}{4}}\\Big(1+\\frac{D^2}{C^2h_1^2}h_2^2\\Big)^{-\\frac{1}{2}-\\frac{1}{4}}\\Bigg).\n\\end{align*}\nNow observe that for $a_1,a_2\\in\\mathbb{R}$, $(1+a_1^2)(1+a_2^2)\\geq(1+a_1a_2)^2$. 
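The elementary inequality used here is a one-line verification: expanding both sides gives\n\\begin{align*}\n(1+a_1^2)(1+a_2^2)-(1+a_1a_2)^2=a_1^2+a_2^2-2a_1a_2=(a_1-a_2)^2\\geq0.\n\\end{align*}\n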
Then, the sums over $h_2$ in the above two terms can be bounded by\n\\[\\int_0^{\\infty}\\Big(1+\\Big|\\frac{BD}{AC}\\Big|\\frac{x^2}{h_1^2}\\Big)^{-1}dx\\,\n\\text{ and }\\,\\int_0^{\\infty}\\Big(1+\\Big|\\frac{BD}{AC}\\Big|\\frac{x^2}{h_1^2}\\Big)^{-3\/2}dx,\\]\nrespectively, and both are bounded by\n\\[|h_1|\\Big|\\frac{AC}{BD}\\Big|^{1\/2}.\\]\nThus, we have \n\\begin{align*}\nI & \\ll \\frac{s^2}{N|AC|}\\Bigg(\\Big|\\frac{AC}{BD}\\Big|^{1\/2}\\sum_{|h_1|\\leq\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}}\\frac{1}{|h_1|}\n\t+ \\frac{\\sqrt{N}}{s \\sqrt{|BD|}}\\sum_{|h_1|>\\frac{\\sqrt{N}}{2\\pi s\\min(|A|,|C|)}}\\frac{1}{|h_1|^2}\\Bigg)\\\\\n\t& \\ll\\frac{s^2}{N|AC|}\\Bigg(\\Big|\\frac{AC}{BD}\\Big|^{1\/2}\\log N +\\frac{\\min(|A|,|C|)}{|BD|^{1\/2}}\\Bigg)\\ll \\frac{s^2}{N}\\frac{\\log{N}}{|ABCD|^{1\/2}}.\n\\end{align*}\nHence, we deduce the variance estimate \n\\begin{align}\\label{estimate1}\n\t\\operatorname{Var}(R_{2,2}^{(2)}(s,.,N))\\ll \\frac{s^2\\log N}{N^3}\\displaystyle\\sum_{\\substack{v_1,v_2,\\\\w_1,w_2\\in\\mathbb{Z}\\setminus \\{0\\}}}\\mathcal{R}_N(v_1,v_2)\\mathcal{R}_N(w_1,w_2)\\frac{(v_1,w_1)(v_2,w_2)}{\\sqrt{|v_1w_1v_2w_2|}}.\n\\end{align}\nNow, applying Proposition~\\ref{gen gcd lemma} in \\eqref{estimate1}, we conclude that \n\\[\\operatorname{Var}(R_{2,2}^{(2)}(s,.,N))\\ll \\frac{s^2}{N^3\\log{N}}E(A_N^{(1)};A_N^{(2)})\\exp(C\\sqrt{\\log N\\log\\log N}).\\]\nThis completes the proof.\n\\end{proof}\n\\section{Proof of Theorem~\\ref{thm6}}\nThis follows easily by combining Theorem \\ref{infinity-PPC thm} with Corollary~\\ref{coro2.3} and Theorem \\ref{2-PPC thm2} with Corollary~\\ref{coro2.3}.\n \\section{Acknowledgement} The authors would like to thank Prof. 
Christoph Aistleitner for helpful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:level1}First-level heading:\\protect\\\\ The line\n\n\\section{\\label{sec1}Introduction}\nIn atomic, molecular, or cluster physics, the interaction of a target system with a projectile typically leads to its excitation, ionization, or in case of bound systems to their fragmentation with corresponding emissions of a variety of particles such as electrons, ions, neutral species and photons.\nFor processes that produce excited states of the target, there is in general not a singular de-excitation path, but branching into competing channels.\nThe weaker processes are often difficult to identify and even harder to quantify, as their occurrence may be masked by other, more intense signals or they may barely emerge above the noise level at all.\\\\\nThe simultaneous detection of several or all reaction products belonging to a particular projectile-target interaction allows to disentangle individual decay pathways and to increase the signal to noise contrast for these weak processes significantly.\nThis powerful technique is called coincidence measurement and is widely used in experimental particle, nuclear, atomic, molecular, and cluster physics (Ref.~\\onlinecite{Arion2015} and references therein).\nThe most essential experimental parameters in a coincidence experiment are the detection probabilities $P_{i} (\\in [0,1])$ of the respective particles $i$, which may be separated into products of the accepted relative solid angles $\\Omega_{\\text{rel},i} (\\in [0,1])$ and the detector efficiencies $\\varepsilon_{i} (\\in [0,1])$.\nThe total probability of recording a coincident event $P_\\text{coinc}$ is the product of the individual detection probabilities of all involved particles:\n\\begin{eqnarray}\nP_\\text{coinc}=\\prod\\limits_{\\text{all particles}}{P_i}=\\prod\\limits_\\text{all 
particles}{\\Omega_{\\text{rel},i}\\varepsilon_i}\n\\label{pcoinc_eq}.\n\\end{eqnarray}\nThe detector efficiency $\\varepsilon_i$ is the probability that a particle reaching the detector is recorded as an actual event.\nIt is an intrinsic characteristic of the detector and typically depends on the particle type and its properties such as energy and mass.\nThe achievable solid angle, however, is given by the experimental geometry.\nIn general, it can be described by the active detector surface area $A_\\text{D}$ and its distance to the interaction volume $D$:\n\\begin{eqnarray}\n\\Omega_\\text{rel}\\approx\\frac{A_\\text{D}}{4\\pi D^2}\n\\label{photodet_eq}.\n\\end{eqnarray}\nIn the case of charged reaction products (electrons or ions), $\\Omega_\\text{rel}$ often can be increased to values close to unity by guiding these particles towards the active detector area via electric and\/or magnetic fields.\nA prominent example for the latter case are \"reaction microscope\" experiments in which charged fragments of atoms, molecules, or clusters are measured in multiple coincidences \\cite{Doerner2000,Ullrich2003}.\nHowever, since guiding fields are not applicable to neutral fragments and photons, the solid angle for the detection of these particles is typically small and results in a low coincidence rate.\nBecause of the above mentioned challenge, reports of successful coincidence measurements with detection of these kind of particles are rare compared to ones restricted to charged particles.\nIn the following we will restrict the discussion to experiments in which a charged particle and a photon from the same ionization event are recorded.\\\\\nOne class of these experiments uses excitation by monochromatized emission of noble gas lamps to investigate fluorescing ionic fragments of various molecular systems ranging from diatomic N$_2$ or CO to fluorobenzene molecules \\cite{Bloch1975,Eland1976,Dujardin1981,Field1992}.\nAnother approach was the measurement of scattering 
angles in electron energy loss spectroscopy in coincidence with an emitted photon to investigate doubly-excited states in H$_2$ and H$_2$O \\cite{Ishikawa2011,Tsuchida2011} as well as interferences in the He(3l,3l') excitation \\cite{Dogan1998}.\\\\\nHowever, these measurements do not use the opportunities offered by modern synchrotron radiation facilities, in particular the narrow bandwidth of the exciting radiation in combination with the high tunability of the exciting-photon energies.\nThis type of excitation was used for photon-threshold electron measurements which allow to determine the lifetime of radiative molecular states\\cite{Schlag1977}.\nPhotoion-photon coincidence techniques were also applied at synchrotron radiation facilities to investigate for example the dissociative photoionization of N$_2$ \\cite{Kitajima1995,Kitajima1996,Soderstroma2004} and the core-hole decay of argon atoms and clusters \\cite{Gejo2013}.\\\\\nA series of experiments showed that the coincident measurement of photons and electrons, combined with polarization analysis of the fluorescence, for atomic photoionization can be used to perform so-called complete experiments, in which all amplitudes and phases in a partial wave description of the process are determined. 
\\cite{Beyer1995,Beyer1996,West1998,Arp1996}.\nYet, all these studies suffer from low coincidence rates, compensated to some extent by using long acquisition times \\cite{Bloch1975}.\nHowever, if the experiment needs to make use of synchrotron radiation, for which allocated beamtime is limited, reasonable statistics might not be achievable or losses in data quality have to be accepted.\\\\\nThese challenges can be overcome, according to the relation given in Eq.~\\ref{pcoinc_eq}, if collecting optics are used to increase the solid angle of photon detection and thereby the coincidence count rate, as described in Ref.~\\onlinecite{Reiss2015} for photon-photon and in Ref.~\\onlinecite{Meyer1995} for ion-photon coincidence experiments.\\\\\nIn this paper, we show how to improve the usability of optical assemblies dedicated to the coincident detection of a photon and a charged particle.\nThese assemblies dramatically enhance the solid angle of photon detection and enable efficient coincidence experiments with at least one participating photon.\nWe present two configurations:\nA) A mirror assembly surrounding the interaction volume.\nThis design is optimized for maximum solid angle.\nB) A combination of flexible optical elements adaptable to a variety of experimental constraints, suited for experiments in which a direct view of the interaction volume is impossible. 
\nThe applicability of both designs is demonstrated by performing electron-photon coincidence measurements on fundamental processes after excitation of supersonic noble gas jets by synchrotron radiation.\n\n\\section{\\label{sec2}Coincident detection of electrons and photons}\nIn this section, we explain the timing scheme of our experiment, followed by other details of the set-up.\nIn many coincidence experiments, including the ones described in this work, pulsed target delivery or pulsed excitation sources (pulsed lasers, ion bunches in ion storage facilities, or synchrotron radiation pulses), together with an appropriate reference clock, are used to ensure that all measured particles originated in the same physical event.\nThe reference clock pulse is used as a start signal for a time-to-digital converter (TDC).\nIn the case of electron-photon coincidence experiments, a coincidence event occurs if at least one electron and one photon are detected after a common start signal.\nIn general, coincidence events can be divided into true and accidental (or false\\cite{Wehlitz1995}) coincidences.\nFor true coincidences, both detected particles originate in the same interaction process.\nAccidental coincidences occur if the electron and the photon originate from two different physical processes or from two independent interactions at different sites of the sample.\nFor simplicity and without loss of generality, it is assumed in this work that for a true coincidence event, both the photon and the electron reach the respective detector within the time interval between two consecutive excitation pulses.\\\\\nOften, true and accidental coincidences bear no experimental signature that allows their separation on an event-by-event basis.\nA method to eliminate accidental coincidences uses data acquisition over several consecutive excitation pulses.\nIt is assumed that the rate of accidental coincidences is the same when both particles are detected 
coincidentally between one reference clock pulse and the successive one as compared to the accidental coincidence signal when the two particles are recorded individually with respect to different successive reference clock pulses.\nTherefore, the true coincidence spectrum can be obtained by the subtraction of the pure accidental coincidence spectrum from the total coincidence spectrum.\nThis is a purely statistical method and does not allow identification of individual true coincidences.\\\\ \nIn a typical time of flight spectrometer, the electron spectrum is obtained by collecting the arrival times of all electron events in a histogram and the time axis encodes the kinetic energy, while the respective histogram of the photon events yields the information about the lifetime of the radiative state.\nThe signal acquisition of an electron-photon-coincidence event is shown in Figure \\ref{fig_detscheme}.\n\\begin{figure}\n\\includegraphics{Figure1}\n\\caption{\\label{fig_detscheme}Detection scheme of electron-photon coincidences.\nThe TDC is triggered by the reference clock and records the arrival times of the electrons $t_\\text{el}$ and photons $t_\\text{ph}$ relative to the reference clock pulse $t_\\text{ref}$.\nAt synchrotron radiation facilities, the time of excitation $t_\\text{exc}$ typically has a constant offset to the provided reference clock. 
For the showcase experiments, the time of $800\,\text{ns}$ between two excitations corresponds to the circulation time of an electron bunch in BESSY II in single bunch mode.}\n\\end{figure}\nThe true coincident electron spectrum is explicitly obtained by separation of the total coincident electron spectrum into four cases:\n\\begin{itemize}\n\\item Both electron and photon were detected between the first and the second reference clock pulse ($t_\\text{el}, t_\\text{ph}<t^2_\\text{ref}$), named $\\text{sp}_{11}$.\n\\item The electron was detected between the first and the second reference clock pulse and the photon after the second one ($t_\\text{el}<t^2_\\text{ref}$ and $t_\\text{ph}>t^2_\\text{ref}$), named $\\text{sp}_{12}$.\n\\item The electron was detected after the second reference clock pulse and the photon was detected between the first and the second reference clock signal ($t_\\text{el}>t^2_\\text{ref}$ and $t_\\text{ph}<t^2_\\text{ref}$), named $\\text{sp}_{21}$.\n\\item Both electron and photon were detected after the second reference clock pulse ($t_\\text{el}, t_\\text{ph}>t^2_\\text{ref}$), named $\\text{sp}_{22}$.\n\\end{itemize}\nThe spectrum of true coincidences is then obtained by:\n\\begin{eqnarray}\n\\text{True coincidences}=\\text{sp}_{11}+\\text{sp}_{22}-\\text{sp}_{12}-\\text{sp}_{21}\n\\label{coinc_eq}\n\\end{eqnarray}\nThis method of data acquisition and processing also allows the detection of multiple electrons with a photon in coincidence.\\\\\nProof-of-principle experiments were performed at the BESSY~II storage ring of the Helmholtz-Zentrum Berlin.\nIn all examples, the synchrotron was operated in single bunch mode with a circulation time of $800\,\text{ns}$, i.e. $800\,\text{ns}$ temporal spacing between two subsequent excitations.\nIn the presented examples, magnetic bottle type electron spectrometers were used for electron detection.\nDetails for one of the instruments can be found in Ref.~\\onlinecite{Mucke2012}.\nBriefly, the spectrometer uses the inhomogeneous field of a magnetic tip to collect and redirect the electrons from the interaction volume towards a drift tube.\nA magnetic field parallel to the tube axis prevents the loss of electrons due to lateral velocity components. 
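Looking back at Eq.~(\\ref{coinc_eq}), the statistical subtraction is simple enough to sketch in a few lines of Python (a hypothetical illustration with invented names, not code from the actual data-acquisition system):

```python
import numpy as np

# Sketch of the accidental-coincidence subtraction
# true = sp11 + sp22 - sp12 - sp21.
# Each event is (t_el, t_ph), arrival times relative to the reference
# clock; t_ref2 is the time of the second reference pulse (800 ns at
# BESSY II in single bunch mode).
def true_coincidence_spectrum(events, t_ref2, bins):
    def hist(selected):
        # Fold electron arrival times into one excitation period.
        times = np.asarray([t_el for t_el, _ in selected], dtype=float) % t_ref2
        return np.histogram(times, bins=bins)[0]
    sp11 = [e for e in events if e[0] < t_ref2 and e[1] < t_ref2]
    sp12 = [e for e in events if e[0] < t_ref2 and e[1] >= t_ref2]
    sp21 = [e for e in events if e[0] >= t_ref2 and e[1] < t_ref2]
    sp22 = [e for e in events if e[0] >= t_ref2 and e[1] >= t_ref2]
    return hist(sp11) + hist(sp22) - hist(sp12) - hist(sp21)
```

In toy data with an accidental contribution that is balanced across the four cases, the accidentals cancel bin by bin and only the true pairs survive, which is exactly the statistical cancellation described above.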
At the end of the drift tube, electrons are amplified by a chevron stack of microchannel plates (MCPs) \\cite{Wiza1979}.\nDetection is carried out by an anode, where the corresponding voltage drop is retrieved from the high voltage potential using capacitive coupling \\cite{Mucke2012}.\nThe signal is processed by a constant fraction discriminator and its arrival time relative to the reference clock is recorded by a TDC.\\\\\nFor photon detection, a single-photon detector as described in Ref.~\\onlinecite{Hans2018} is used. The photons pass through an MgF$_2$ window coated with a CsTe layer acting as a photocathode, which converts photons into photoelectrons and allows the detection of photons with wavelengths in the range of about $120\,\text{nm}$ to $300\,\text{nm}$ ($4.1\,\text{eV}$ to $10.3\,\text{eV}$).\nThe electrons are amplified by an MCP chevron stack and the resulting electron cloud hits a delay line type position-sensitive anode \\cite{Jagutzki2002,Hans2018}.\nThe drop of the high voltage at the front MCP is measured using capacitive coupling and used as the time signal.\nWhile the capability of position-sensitive detection is not used for the exemplary measurements presented in this work, future experiments can incorporate this additional information.\nWhile the design of the magnetic bottle provides a relative solid angle $\\Omega_{\\text{rel},\\text{el}}$ close to unity, the relative solid angle $\\Omega_{\\text{rel},\\text{ph}}$ of the photon detection can be estimated as $\\Omega_{\\text{rel},\\text{ph}}\\approx\\frac{1.26\\cdot10^{-3}\\,\\text{m}^2}{4\\pi D^2}$ using Eq. 
\\ref{photodet_eq} with an active area diameter of $40\\,\\text{mm}$ of the used detector.\nIn the following, we illustrate two specific optics configurations in detail, which increase $\\Omega_{\\text{rel},\\text{ph}}$ in order to enable electron-photon coincidence experiments within a reasonable data acquisition time.\n\n\\section{\\label{sec3}Optics design and applications}\n\\subsection{\\label{sec_maxsolid}Configuration optimized for efficiency}\nFor this approach, the photon detector is attached to a chamber designed for electron coincidence spectroscopy of gaseous and cluster jets similar as in Ref.~\\onlinecite{Hans2016}.\nThe distance between the active detector surface and the interaction volume is $365\\,\\text{mm}$, resulting in a relative solid angle (without optics) of $\\Omega_{\\text{rel},\\text{ph}}=0.075\\,\\%$.\nA mirror system for photon detection was designed as illustrated in Figure \\ref{fig_hollowmirror} to maximize the solid angle.\nIt surrounds the complete interaction volume with apertures specifically designed for the used magnetic bottle spectrometer, target jet, and exciting-photon beam.\\\\\nThe mirror surfaces are made of polished aluminum to ensure a high reflectance for photons within the sensitivity range of the detector.\nThe system consists of a combination of a parabolic and two spherical mirrors which guide photons from the interaction volume towards the detector as shown by exemplary ray trajectories in Figure \\ref{fig_hollowmirror}:\nFirst, the paraboloid facing the detector parallelizes all photons emitted towards this hemisphere (ray path 1, blue).\nSecond, the inner spherical mirror possesses a radius such that all photons hitting this mirror from the interaction volume are reflected and also parallelized towards the detector (ray path 2, red). 
The size of the inner spherical mirror's area is equal to the entrance width of the parabolic part.\nThird, the outer spherical mirror opposite to the detector possesses a radius equal to the distance to the interaction volume, resulting in a reflection back into the interaction volume (ray path 3, violet).\nThose rays are then parallelized by the parabolic mirror.\n\n\\begin{figure}\n\\includegraphics{Figure2\n\\caption{\\label{fig_hollowmirror}Sketch of the basic rotational symmetric mirror system. The rotation axis is indicated by a grey dashed line.\nIn the direction of the detector, a parabolic mirror guides the photons onto its active area (path 1, blue).\nOn the opposite side of the detector, two spherical mirrors reflect photons towards it.\nThe radius of the inner spherical mirror $R_i$ is twice its distance to the interaction volume $A$ and photons are reflected in a collimated beam to the detector.\nThe radius of the outer mirror $R_o$ is equal to the distance to the interaction volume $A$ and therefore reflects the photons back into the interaction volume (path 3, violet). From here on, the path coincides with photons of path 1.}\n\\end{figure}\nThis configuration was tested in an experiment at the \\mbox{U49-2 PGM1} beamline (BESSY II, HZB) \\cite{Kachel2016}.\nThe exciting-photon energy was set to $90\\,\\text{eV}$.\nUsing a $50\\,\\mu\\text{m}$ exit slit of the beamline monochromator, the resulting photon beam with a bandwidth of $9\\,\\text{meV}$ was crossed with a He gas jet.\\\\\nFor these experimental conditions, the dominant process is the photoionization of a single 1s electron, \\mbox{i.e. 
$\\text{He}(1\\text{s}^2)+h\\nu\\rightarrow\\text{He}^+(1\\text{s}^1)+\\text{e}^-$}.\n\\begin{figure}[t!]\n\\includegraphics{Figure3}\n\\caption{\\label{fig_He}Electron spectrum of a He gas jet after irradiation with a photon energy of 90 eV.\na) Total electron spectrum (red solid) and its 10-fold magnification (red dashed).\nb) All recorded one-electron--one-photon coincidences (true + accidental).\nc) True coincident electron signal after subtraction of accidental coincidences by the method described in section \\ref{sec2}.}\n\\end{figure}\nWith a comparably low cross section, the second electron can additionally be promoted into an excited state during the photoionization process, \\mbox{$\\text{He}(1\\text{s}^2)+h\\nu\\rightarrow\\text{He}^+(1\\text{s}^0n\\text{p})+\\text{e}^-$}, which results in the appearance of so-called satellite lines in the photoelectron spectrum \\cite{Heimann1986}.\nAll satellite states subsequently decay by photon emission, but only the $3\\text{p}\\rightarrow2\\text{s}$ transition with a transition energy of about $7.6\,\text{eV}$ \\cite{Kramida2018} is within the sensitivity range of the employed detector.\nThe cross section of the $n=3$ satellite is $1.5(2)\\,\\%$, compared to the single 1s ionization \\cite{Lindle1987}.\nOf course, the 3p electron can also decay to the 1s level; the branching ratio is $\\frac{3\\text{p}\\rightarrow2\\text{s}}{3\\text{p}\\rightarrow1\\text{s}}= 0.112$.\nIf the cross sections for all other processes are neglected, about $0.17\\,\\%$ of the detected electrons should be in coincidence with a photon in the sensitivity range of the detector.\n\nThe $4\\text{p}\\rightarrow2\\text{s}$ transition from the $n=4$ satellite at $10.2\,\text{eV}$ above the ground state lies at the edge of the detector sensitivity and, combined with the reduced cross section of this satellite, should lead to a negligible intensity compared to the $n=3$ case.\\\\ \nThe total non-coincident photoelectron spectrum of He is shown 
in Figure \\ref{fig_He}a. The 1s photoelectron line is the most prominent feature followed by the $n=2$ satellite and the suggested appearance of the $n=3$ satellite. In a magnified presentation, the satellite lines up to $n=5$ can be identified (not shown). The energy axis was calibrated using the corresponding energies from Ref.~\\onlinecite{Kikas1996}.\nIf only the coincident electrons are taken into account, the $n=3$ satellite increases in intensity relative to the other features as shown in Figure \\ref{fig_He}b.\nThe elimination of accidental coincidences as described in Section \\ref{sec2} yields the (true) photon-coincident electron spectrum shown in Figure \\ref{fig_He}c.\nHere, only the $n=3$ satellite remains as the radiative $n=3 \\rightarrow n=2$ transition is within the sensitivity range of the used detector.\nThe count rate of true coincidences in this measurement was approximately $0.5\\,\\text{Hz}$ with a total electron count rate of approximately $58\\,\\text{kHz}$.\n\\begin{figure}[t!]\n\\includegraphics{Figure4}\n\\caption{\\label{fig_Ar}Ar electron spectrum after irradiation with a photon energy of $449\\,\\text{eV}$.\na) Total electron spectrum, showing the two $2\\text{p}$ photoelectron lines and the four Ar Auger lines with the highest kinetic energies (Assignment according to Ref. \\onlinecite{McGuire1975}):\n(I) \\mbox{$\\text{Ar}^{+}\\,\\text{L}_3\\text{M}_{2,3}\\text{M}_{2,3}(^1\\text{D}_{2})$}.\n(II) \\mbox{$\\text{Ar}^{+}\\,\\text{L}_{3}\\text{M}_{2,3}\\text{M}_{2,3}(^3\\text{P}_{0,1,2})$} and \\mbox{$\\text{Ar}^{+}\\,\\text{L}_{2}\\text{M}_{2,3}\\text{M}_{2,3}(^1\\text{D}_{2})$} (not resolved).\n(III) \\mbox{$\\text{Ar}^{+}\\,\\text{L}_2\\text{M}_{2,3}\\text{M}_{2,3}(^3\\text{P}_{0,1,2})$}. \nb) True coincident electron spectrum. 
Details are discussed in the text.}\n\\end{figure}\nFor an estimate of the effective solid angle of photon detection achieved with this configuration, the ratio of the number of coincidence events to the total intensity of the $n=3$ satellite in the total electron spectrum, which is about $0.0015$, may be used.\nThis ratio has to be normalized by the $\\frac{3\\text{p}\\rightarrow2\\text{s}}{3\\text{p}\\rightarrow1\\text{s}}$ branching ratio and the quantum efficiency of the photon detector.\nSince the exact quantum efficiency in the spectral range of the $3\\text{p}\\rightarrow 2\\text{s}$ transition in He II ($165\\,\\text{nm}$) is not known, for a conservative estimation of the lower limit we use the peak quantum efficiency of $0.255$ at $254\\,\\text{nm}$\\cite{Photek2010}, resulting in an effective solid angle in the order of $\\geq5\\,\\%$.\nThe geometrical solid angle of the mirror assembly is about $41\\,\\%$.\nWe assign the deviation to imperfect reflection of the mirror and variations in the quantum efficiency of the detector. 
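The arithmetic behind this estimate can be reproduced in a few lines (a rough back-of-envelope sketch; the fraction of $3\\text{p}$ decays via $2\\text{s}$ and the peak quantum efficiency are the stated assumptions):

```python
import math

# Back-of-envelope check of the effective solid angle (He example).
coinc_ratio = 0.0015            # coincidences / n=3 satellite electrons
branching = 0.112               # (3p -> 2s) / (3p -> 1s)
frac_2s = branching / (1 + branching)  # fraction of 3p decays via 2s (165 nm)
qe = 0.255                      # assumed peak quantum efficiency (at 254 nm)
omega_eff = coinc_ratio / (frac_2s * qe)   # roughly 0.06, i.e. >= 5 %

# Bare geometry of Eq. (2): 40 mm active diameter at 365 mm distance.
omega_bare = (math.pi * 0.020 ** 2) / (4 * math.pi * 0.365 ** 2)
gain = omega_eff / omega_bare   # enhancement roughly a factor of 70-80
```

The numbers come out consistent with the conservative estimate quoted in the text.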
Nevertheless, this conservative estimate results in an increase of the solid angle by a factor of about $70$ compared to the case without optics.\\\\\nAs a second example, atomic Ar was photoionized with an exciting-photon energy of $449\,\text{eV}$.\nAt this photon energy, the 2p photoelectrons and the Ar$^+$ LMM Auger electrons have similar kinetic energies and can be resolved simultaneously by applying a retardation voltage to the drift tube of the magnetic bottle electron spectrometer.\nWhile Auger final states of the form $\\text{Ar}^{2+}(3p^{-2})$ cannot decay further, some of the Ar$^+$ LMM Auger channels will end in radiative satellite states of the configuration $\\text{Ar}^{2+}(3p^{-3}nl)$.\nIn Figure \\ref{fig_Ar}a, the electron spectrum composed of the 2p$_{1\/2}$ and 2p$_{3\/2}$ fine structure components and the four Auger channels of highest kinetic energies, corresponding to the $\\text{Ar}^{+}\\,\\text{L}_2\\text{M}_{2,3}\\text{M}_{2,3}$ and $\\text{Ar}^{+}\\,\\text{L}_3\\text{M}_{2,3}\\text{M}_{2,3} (^3\\text{P}_{0,1,2}\\;\\text{and}\\; ^1\\text{D}_2)$ final states, is shown \\cite{McGuire1975}.\nWhile the latter Auger electrons should not be accompanied by photon emission, the photoelectrons are, because some 2p vacancies lead to radiative Auger final states (of which the corresponding Auger electrons are not within the detected range).\nThis is indeed what is observed in Figure \\ref{fig_Ar}b, which shows the true photon-coincident electron spectrum.\nSurprisingly, one Auger channel is also present at about $203.2\,\text{eV}$ kinetic energy.\nWe suggest that this weak channel corresponds to radiative decay of the $\\text{Ar}^{+}\\,\\text{L}_2\\text{M}_{2,3}\\text{M}_{2,3}(^1\\text{S}_0)$ Auger final state\\cite{McGuire1975} to the $\\text{Ar}^{+}\\,\\text{L}_2\\text{M}_{2,3}\\text{M}_{2,3}(^3\\text{P}_{0,1,2})$ state via magnetic dipole or electric quadrupole transitions, which are within the sensitivity range of the 
detector.\n\\subsection{\\label{sec_adapt}Configuration optimized for adaptability}\nIn certain cases, the application of configuration A might be hindered by experimental constraints.\nFor example, the layout of the vacuum chamber, target source, or electron spectrometer can interfere spatially with the mirror or the direct view towards the interaction volume might be blocked.\nThen, the solid angle can still be increased significantly using a combination of flexible optical elements.\\\\\nIn the presented experiment, a magnetic bottle electron spectrometer similar to configuration A was used with a different interaction chamber.\nHowever, no port with a direct view of the interaction volume was available for the photon detector as illustrated in Figure \\ref{fig_adaptset}.\nIn addition, a valve at the entrance of the electron spectrometer would intersect with parts in the close vicinity of the interaction volume thus preventing a mirror design as described above.\nWith these constraints, an assembly of three different optical elements is used for photon guidance.\nA combination of an UV enhanced Al coated spherical mirror and a fused silica spherical plano convex lens, positioned at opposite sides of the interaction volume, may achieve a relative solid angle of up to $7.5\\,\\%$.\nSimultaneously, the lens collimates the emitted photons onto a planar mirror, which is used to redirect the photons onto the detector.\n\\begin{figure}\n\\includegraphics{Figure5}\n\\caption{\\label{fig_adaptset}Isometric view of the adaptable set-up.\nA spherical mirror and a plano convex lens are used to increase the solid angle for the photon detection.\nThe plano convex lens distance and focal length are chosen such that the transmitted photons are parallelized while the plane mirror deflects the photons onto the detector axis.\nTherefore, an observation of the emitted photons without direct view is possible.}\n\\end{figure}\nThe functionality of this configuration was validated in 
an experiment conducted at the \\mbox{UE56\/2 PGM1} beamline at BESSY II in Berlin.\nNeon atoms were injected effusively into the interaction chamber through a $25\\,\\mu\\text{m}$ nozzle.\nThe exciting-photon energy was set to $867.1\\,\\text{eV}$ corresponding to the resonant $1\\text{s}^22\\text{s}^22\\text{p}^6 \\rightarrow1\\text{s}^12\\text{s}^22\\text{p}^63\\text{p}$ excitation of atomic Ne.\nHere, a variety of different de-excitation pathways are possible \\cite{Hayaishi1995}.\n\\begin{figure}\n\\includegraphics{Figure6}\n\\caption{\\label{fig_Ne}Electron spectra of Ne after irradiation with a photon energy of $867.1\\,\\text{eV}$.\na) Total electron spectrum.\nb) True coincident electron spectrum.\n}\n\\end{figure}\nThe total electron spectrum in Figure 6a consists of an intense peak at short times of flight, comprised of unresolved Auger and valence electrons.\nThe slower electrons are the result of further autoionizing Auger final states. However, the only relaxation channels observable by an electron-photon coincidence are spectator Auger final states of the form $1\\text{s}^22\\text{s}^22\\text{p}^4n\\text{p}$ with fast Auger electrons included in the fast electron peak.\nConsequently, the autoionizing Auger final states vanish in the true coincidence spectrum in Figure 6b.\nHere, the true coincidence rate was about $0.01\\,\\text{Hz}$, compared to a total electron count rate of about $60\\,\\text{kHz}$.\nDespite the low count rate, the experiment yields an interpretable result.\nThis illustrates that the coincidence measurement involves an extremely efficient noise reduction, allowing to detect signals orders of magnitude weaker than the non-coincident-noise.\n\\section{\\label{sec4}Conclusion}\nWe have developed two optical assemblies which can be combined with magnetic bottle electron spectrometers to result in a highly efficient set-up for electron-photon coincidence experiments.\nWhile configuration A focuses on the optimization of the solid angle 
of the photon detection, configuration B is adaptable to different experimental constraints.\\\\\nBoth configurations were tested successfully for exemplary physical processes that feature electron-photon coincidences after excitation of atomic noble gas targets with synchrotron radiation.\nWe demonstrated that this method is capable of identifying obscured physical processes and can circumvent the signal to noise ratio problem of very low count rate experiments.\nWe envision this method to be capable of unraveling energy and charge transfer processes in dense media.\nHere, the additional insight of photon detection allows further characterization of ultra-fast phenomena taking place only in such dense media.\nExamples where photon emission plays a decisive role are resonant Interatomic Coulombic Decay\\cite{Knie2014}, Radiative Charge Transfer\\cite{Hans2018a}, or possibly ultra-fast proton transfer in liquid samples\\cite{Hans2017,Thuermer2013}.\n\\begin{acknowledgments}\nWe thank Claudia Kolbeck, Sebastian Malerz, Marvin Pohl, Anne Stephansen (FHI Berlin, Germany) and Clara Saak (Uppsala University, Sweden) for their support during the synchrotron radiation beamtime. \nWe gratefully acknowledge the HZB for the allocation of synchrotron radiation beamtime and the BESSY II staff for their support. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) \u2013 Projektnummer 328961117 \u2013 SFB 1319 ELCH and Research Unit FOR 1789.\n\\end{acknowledgments}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe nonlinear variational wave equation $\\psi_{tt}-c(\\psi)(c(\\psi)\\psi_x)_x=0$ was derived by Saxton \\cite{S} as a model of the director field of a nematic liquid crystal.
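As a quick symbolic sanity check (our own illustration, not taken from the paper), the nonlinear variational wave equation arises as the Euler--Lagrange equation of the standard Lagrangian density $\frac12\psi_t^2-\frac12 c(\psi)^2\psi_x^2$ with $c(\psi)^2=K_1\sin^2\psi+K_3\cos^2\psi$; a minimal sympy sketch:

```python
import sympy as sp

t, x, K1, K3 = sp.symbols('t x K_1 K_3', positive=True)
psi = sp.Function('psi')(t, x)
c = sp.sqrt(K1*sp.sin(psi)**2 + K3*sp.cos(psi)**2)  # wave speed c(psi)

# Lagrangian density L = (1/2) psi_t^2 - (1/2) c(psi)^2 psi_x^2
L = sp.Rational(1, 2)*psi.diff(t)**2 - sp.Rational(1, 2)*c**2*psi.diff(x)**2

# Euler-Lagrange expression: dL/dpsi - d/dt dL/dpsi_t - d/dx dL/dpsi_x
EL = (L.diff(psi)
      - L.diff(psi.diff(t)).diff(t)
      - L.diff(psi.diff(x)).diff(x))

# Target form: psi_tt - c(psi) (c(psi) psi_x)_x
target = psi.diff(t, 2) - c*sp.diff(c*psi.diff(x), x)

# EL = -(target), so the variational equation is exactly the NVW equation
assert sp.simplify(EL + target) == 0
```

The check confirms that the wave equation above is precisely the stationarity condition of this Lagrangian.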
The nonlinear variational wave equation has received wide attention \\cite{BCZ,BZ,GHZ,HR} due to mathematical challenges in the form of wavebreaking in finite time, and distinct ways to extend the solution to a global weak solution. In this context wavebreaking means that either the time or the space derivative becomes unbounded at certain points, while the solution remains H\\\" older continuous.\n\nFurthermore, Hunter and Saxton derived the equation $u_{tx}+(uu_x)_x=\\frac12u_x^2$ from the nonlinear variational wave equation \\cite{HS} as an asymptotic equation for small perturbations in the long time regime in a moving frame. The Hunter--Saxton equation shares many of the features of the nonlinear variational wave equation such as wavebreaking, conservative and dissipative weak solutions, and H\\\" older continuity \\cite{BC,BHR,BZZ,CGH,D,HZ2,HZ3,ZZ,ZZ2}. In addition, it exhibits novel features such as complete integrability and interpretation as a geodesic flow \\cite{HZ,KM}. It also proved easier to work with due to there being only one family of characteristics and the existence of explicit solutions.\n\nA two-component generalization of the Hunter--Saxton equation was derived from the two-component Camassa--Holm equation \\cite{DP}, and independently from the Gurevich--Zybin system \\cite{P}. The two-component generalization is similar to the Hunter--Saxton equation since wavebreaking is also possible, there are both conservative and dissipative weak solutions, and $u$ is H\\\" older continuous \\cite{GN,N,W}. However, when the second variable is nonzero almost everywhere initially, there will be no wavebreaking \\cite{N}, and in that sense the introduction of a second variable regularizes the equation.
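The explicit solutions alluded to can be made concrete: for instance, $u(t,x)=2x/(t+c)$ solves the Hunter--Saxton equation $u_{tx}+(uu_x)_x=\frac12u_x^2$ (this particular solution is our illustrative choice, not taken from the references); a quick sympy check:

```python
import sympy as sp

t, x, c = sp.symbols('t x c', positive=True)

# An explicit Hunter-Saxton solution of the form u = a(t)*x with a' = -a^2/2
u = 2*x/(t + c)

# Residual of u_tx + (u u_x)_x - (1/2) u_x^2
residual = (sp.diff(u, t, x)
            + sp.diff(u*sp.diff(u, x), x)
            - sp.Rational(1, 2)*sp.diff(u, x)**2)

assert sp.simplify(residual) == 0
```

More generally, the ansatz $u=a(t)x$ reduces the equation to the ODE $a'=-a^2/2$, whose solutions blow up backward in time, illustrating how derivative blow-up enters even for smooth data.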
The two-component Hunter--Saxton system has, however, not been shown to be related to the theory of nematic liquid crystals, and one of the aims of this paper is to establish that the two-component Hunter--Saxton system can indeed be derived from the theory of nematic liquid crystals, and that the new variable is related to the order parameter.\n\nIn the Ericksen--Leslie theory of nematic liquid crystals the configuration is described by a director field $\\bn$ which gives the local orientation of the rods, and an order parameter field $s$ which gives the local degree of orientation \\cite{E,L}. When the nonlinear variational wave equation was first derived the local degree of orientation was assumed constant, so that a scalar equation was obtained \\cite{S}. Here we will follow a similar route, but account for a variable degree of orientation. Thus, in Section \\ref{section:derivation}, we will arrive at a novel two-component system of nonlinear wave equations. Moreover, we show that an asymptotic expansion similar to the one by Hunter and Saxton \\cite{HS} yields the two-component Hunter--Saxton equation. In Section \\ref{section:analysis} existence of global solutions is shown in the case of constant wave speed, and local solutions are shown to exist in general. The proofs rely on fixed point iterations and standard semigroup theory for evolution equations \\cite{K}. \n\n\\section{Derivation of the equations} \\label{section:derivation}\nIn Ericksen--Leslie theory a nematic liquid crystal in a domain $\\Omega$ is described by a director field $\\bn: [0,T]\\times\\Omega\\rightarrow \\R\\mathbb{P}^2$ and an order parameter $s:[0,T]\\times\\Omega\\rightarrow (-\\frac12,1)$ \\cite{E,L}.
The Lagrangian of the system is then \\cite{E} given by\n\\begin{equation}\n\\mc L = -(s\\bn)_t^2+ W_2(s,\\nabla s,\\bn,\\nabla\\bn) + W_0(s),\n\\end{equation}\nwith the potential energy term $W_2$ given by\n\\begin{align}\nW_2(s,\\nabla s,\\bn,\\nabla\\bn) &= \\left(K_1+L_1s\\right)s^2\\left(\\nabla\\cdot\\bn\\right)^2 + \\left(K_2+L_2s\\right)s^2\\left(\\bn\\cdot\\nabla\\times\\bn\\right)^2\\non\n\t&\\quad+ \\left(K_3+L_3s\\right)s^2\\left|\\bn\\times\\nabla\\times\\bn\\right|^2 \\non \n\t&\\quad+ \\left((K_2+K_4)+(L_2+L_4)s\\right)s^2\\left[\\mathrm{tr}\\:\\nabla\\bn^2-(\\nabla\\cdot\\bn)^2\\right]\\non\n\t&\\quad+ \\left(\\kappa_1+\\lambda_1s\\right)\\left|\\nabla s\\right|^2 + \\left(\\kappa_2+\\lambda_2s\\right)\\left(\\nabla s\\cdot\\bn\\right)^2 \\non\n\t&\\quad+ \\left(\\kappa_3+\\lambda_3s\\right)\\left(\\nabla\\cdot\\bn\\right)\\left(\\nabla s\\cdot\\bn\\right) + \\left(\\kappa_4+\\lambda_4s\\right)\\nabla s\\cdot\\left((\\bn\\cdot\\nabla)\\bn\\right),\n\\end{align}\nand some function $W_0$ satisfying \n\\begin{equation}\n\\label{eq:W_0 cond}\n\t\\lim_{s\\downarrow-\\frac12}W_0(s)=+\\infty,\\quad \\lim_{s\\uparrow 1}W_0(s)=+\\infty, \\text{ and } \\lim_{s\\rightarrow 0}\\frac{W_0(s)}{s^2} \\in\\R.\n\\end{equation}\nWe are interested in the case where $\\bn=(\\cos\\psi,\\sin\\psi,0)$, and $s$ and $\\psi$ depend on $t$ and $x$ only.\nThen the expression for $W_2$ can be written\n\\begin{align}\nW_2 &= s^2\\psi_x^2\\left((K_1+L_1s)\\sin^2\\psi+(K_3+L_3s)\\cos^2\\psi\\right)\\non\n\t&\\quad + s_x^2\\left((\\kappa_1+\\lambda_1s)\\sin^2\\psi+(\\tilde\\kappa_2+\\tilde\\lambda_2s)\\cos^2\\psi\\right)\\non\n\t&\\quad -s_x\\psi_x\\sin 2\\psi\\left(\\frac{\\kappa_3+\\kappa_4}2+\\frac{\\lambda_3+\\lambda_4}2s\\right),\n\\end{align}\nwhere $\\tilde\\kappa_2=\\kappa_1+\\kappa_2$ and $\\tilde\\lambda_2 = \\lambda_1+\\lambda_2$.
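The reduction from the general $W_2$ to this planar expression rests on a handful of vector-calculus identities for $\bn=(\cos\psi,\sin\psi,0)$ with $x$-dependence only; they can be verified symbolically (a sympy sketch with our own variable names):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
psi = sp.Function('psi')(x)
s = sp.Function('s')(x)

n = sp.Matrix([sp.cos(psi), sp.sin(psi), 0])
px, sx = psi.diff(x), s.diff(x)

div_n = sum(sp.diff(n[i], coords[i]) for i in range(3))
curl_n = sp.Matrix([
    sp.diff(n[2], y) - sp.diff(n[1], z),
    sp.diff(n[0], z) - sp.diff(n[2], x),
    sp.diff(n[1], x) - sp.diff(n[0], y),
])
grad_s = sp.Matrix([sp.diff(s, v) for v in coords])
grad_n = sp.Matrix(3, 3, lambda i, j: sp.diff(n[i], coords[j]))

# splay: (div n)^2 = sin^2(psi) psi_x^2
assert sp.simplify(div_n**2 - sp.sin(psi)**2*px**2) == 0
# twist vanishes: n . curl n = 0
assert sp.simplify(n.dot(curl_n)) == 0
# bend: |n x curl n|^2 = cos^2(psi) psi_x^2
b = n.cross(curl_n)
assert sp.simplify(b.dot(b) - sp.cos(psi)**2*px**2) == 0
# saddle-splay combination vanishes: tr((grad n)^2) - (div n)^2 = 0
assert sp.simplify((grad_n*grad_n).trace() - div_n**2) == 0
# mixed terms both reduce to -(1/2) sin(2 psi) s_x psi_x
assert sp.simplify(grad_s.dot(n)*div_n + sp.sin(2*psi)*sx*px/2) == 0
assert sp.simplify(grad_s.dot(grad_n*n) + sp.sin(2*psi)*sx*px/2) == 0
```

In particular the twist and saddle-splay contributions vanish identically in the planar geometry, which is why $K_2$, $L_2$, $K_4$ and $L_4$ drop out of the reduced $W_2$.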
We simplify the problem by considering the case where\n\\begin{subequations}\n\\begin{align}\n\\kappa_3 &= \\kappa_4 = \\lambda_3 = \\lambda_4 = 0,\\\\\nL_1 &= L_3 = \\lambda_1 = \\tilde\\lambda_2 = 0,\\\\\nK_1&= \\kappa_1,\\\\\nK_3 &= \\tilde\\kappa_2.\n\\end{align}\n\\end{subequations}\nThen with the definition $c(\\psi)^2= K_1\\sin^2\\psi + K_3\\cos^2\\psi$, we get the Lagrangian density\n\\begin{align}\n\\label{eq:2NVW lagr}\n\\mc L^{2NVW} &= -\\frac 12s_t ^2 -\\frac 12s^2\\psi_t^2+\\frac12s^2c(\\psi)^2\\psi_x^2+\\frac 12c(\\psi)^2s_x^2 + W_0(s)\\non\n\t&= -\\frac 12(s\\bn)_t^2+\\frac12 c(\\bn)^2(s\\bn)_x^2 + W_0(s).\n\\end{align}\n\\begin{definition}\nWe define the two-component nonlinear variational wave system to be\n\\begin{subequations}\n\\label{eq:2NVW}\n\\begin{align}\n\ts^2\\left(\\psi_{tt}-c(\\psi)\\left(c(\\psi)\\psi_x\\right)_x\\right)+2s\\left(\\psi_ts_t-c(\\psi)^2\\psi_xs_x\\right)+c(\\psi)c'(\\psi) s_x^2 &= 0,\\\\\n\ts_{tt} - c(\\psi)\\left(c(\\psi) s_x\\right)_x -c(\\psi)c'(\\psi)\\psi_xs_x -s\\left(\\psi_t^2-c(\\psi)^2\\psi_x^2\\right) + W_0'(s) &=0,\\\\\n\tc(\\psi)^2 - K_1\\sin^2\\psi-K_3\\cos^2\\psi &= 0,\\\\\n\tW_0 &\\in C^4\\left(\\left(-\\frac12,1\\right)\\right)\n\n\\end{align}\n\\end{subequations}\n\\end{definition}\n\\begin{remark}\nThe system \\eqref{eq:2NVW} consists of the Euler--Lagrange equations for \\eqref{eq:2NVW lagr}.\n\\end{remark}\nOne can define a conserved energy for classical solutions.\n\\begin{proposition}\nDefine the energy density \n\\begin{equation}\n\t\\label{def:E}\n\t\\mc E = \\frac12\\left(s^2(\\psi_t^2+c(\\psi)^2\\psi_x^2)+(s_t^2+c(\\psi)^2s_x^2)\\right)+W_0(s),\n\\end{equation}\nand the energy density flux\n\\begin{equation}\n\t\\label{def:F}\n\t\\mc F = \\left(s^2\\psi_t\\psi_x+s_ts_x\\right).\n\\end{equation}\nThen for classical solutions of \\eqref{eq:2NVW} the energy satisfies the equations\n\\begin{subequations}\n\\label{eq:cons E}\n\\begin{align}\n\t\\mc E_t-\\left(c(\\psi)^2\\mc F\\right)_x &= 0,\\\\\n\t\\mc F_t - 
\\left(\\mc E-2W_0(s)\\right)_x &= 0.\n\\end{align}\n\\end{subequations}\n\\end{proposition}\nWe will now derive the two-component Hunter--Saxton system from \\eqref{eq:2NVW lagr}. To follow the work of Hunter and Saxton \\cite{HS} we introduce $\\psi(t,x) = \\psi_0 + \\epsilon u(\\epsilon t,x-c_0t)$ and $s = s_0+\\epsilon r(\\epsilon t,x-c_0t)$ with $c_0^2 = K_1\\sin^2\\psi_0 + K_3\\cos^2\\psi_0$. Then expansion of \\eqref{eq:2NVW lagr} in powers of $\\epsilon$ gives\n\\begin{align}\n\\mc L^\\epsilon &= W_0(s_0) + \\epsilon W_0'(s_0)r + \\epsilon^2\\frac12W_0''(s_0)r^2\\non\n&\\quad + \\epsilon^3\\bigg(c_0r_tr_x+s_0^2c_0u_tu_x + s_0^2\\left(cc'\\right)_0 uu_x^2 + \\left(cc'\\right)_0 ur_x^2 + \\frac16W_0'''(s_0)r^3\\bigg)+ O(\\epsilon^4).\n\\end{align}\nIf we select $s_0$ such that $W_0'(s_0) = W_0''(s_0) = W_0'''(s_0) = 0$, the third-order terms in the above Lagrangian give (possibly after rescaling)\n\\begin{equation}\n\\label{eq:2HS lagr}\n\\mc L^{2HS} = u_tu_x +uu_x^2 + r_tr_x + ur_x^2.\n\\end{equation}\n\\begin{theorem}\n\tThe two-component Hunter--Saxton system\n\t\\begin{subequations}\n\t\\begin{align}\n\t(u_t+uu_x)_x &= \\frac 12u_x^2+\\frac 12\\rho^2,\\\\\n\t\\rho_t + (u\\rho)_x &= 0,\n\t\\end{align}\n\t\\end{subequations}\n\tconstitutes the Euler--Lagrange equations for the Lagrangian density \\eqref{eq:2HS lagr} with $\\rho=r_x$.\n\\end{theorem}\n\n\\section{The two-component nonlinear variational wave system} \\label{section:analysis}\nWe will first consider the case $K_1=K_3$, that is, the wave speed $c$ is independent of $\\psi$. We will assume $s \\geq 0$ so that we can introduce the complex variable $\\zeta = se^{i\\psi}$, and \\eqref{eq:2NVW} reduces to\n\\begin{equation}\n\\label{eq:2NVWlinear}\n\\zeta_{tt}-c^2\\zeta_{xx} +\\frac{W_0'\\left(|\\zeta|\\right)}{|\\zeta|}\\zeta = 0,\n\\end{equation}\nwhich is a defocusing nonlinear wave equation.
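The reduction to \eqref{eq:2NVWlinear} amounts to multiplying by $e^{-i\psi}$ and collecting real and imaginary parts: for constant $c$, the imaginary part returns the first equation of \eqref{eq:2NVW} divided by $s$, and the real part the second. This algebraic identity can be checked symbolically (a sympy sketch; the symbol $W_p$ abbreviating $W_0'(s)$ is our notation):

```python
import sympy as sp

t, x = sp.symbols('t x')
c, Wp = sp.symbols('c W_p', positive=True)  # constant wave speed; Wp stands for W_0'(s)
s = sp.Function('s')(t, x)
psi = sp.Function('psi')(t, x)

zeta = s*sp.exp(sp.I*psi)

# Real part: second equation of the system (with c' = 0)
A = (s.diff(t, 2) - c**2*s.diff(x, 2)
     - s*(psi.diff(t)**2 - c**2*psi.diff(x)**2) + Wp)
# Imaginary part: first equation of the system divided by s (with c' = 0)
B = (s*(psi.diff(t, 2) - c**2*psi.diff(x, 2))
     + 2*(psi.diff(t)*s.diff(t) - c**2*psi.diff(x)*s.diff(x)))

lhs = zeta.diff(t, 2) - c**2*zeta.diff(x, 2) + Wp/s*zeta
assert sp.simplify(sp.expand(lhs - (A + sp.I*B)*sp.exp(sp.I*psi))) == 0
```

Thus the complex equation vanishes exactly when both real equations of the system do.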
Recall that the physical interpretation of $\\psi$ is an angle, and thus any solutions $\\psi$ and $\\psi + n\\cdot 2\\pi$ should be considered equal from an application point of view. We have conservation laws for energy $\\mc E = \\frac12|\\zeta_t|^2+\\frac12c^2|\\zeta_x|^2+W_0(|\\zeta|)$ and energy flux $\\mc F = \\frac12\\left(\\bar\\zeta_t\\zeta_x+\\zeta_t\\bar\\zeta_x\\right)$ as follows\n\\begin{subequations}\n\\begin{align}\n\t\\frac{\\partial}{\\partial t}\\mc E - c^2\\frac{\\partial}{\\partial x}\\mc F &= 0,\\\\\n\t\\frac{\\partial}{\\partial t}\\mc F - \\frac{\\partial}{\\partial x}\\left(\\mc E-2W_0(|\\zeta|)\\right) &= 0.\n\\end{align}\n\\end{subequations}\nHence both $\\int_\\R\\mc E\\:\\d x$ and $\\int_\\R\\mc F\\:\\d x$ are conserved for any classical solution with bounded energy. To analyze solutions we will need some conditions on the function $W_0$. We will assume that $W_0(s)\\rightarrow\\infty$ rapidly enough as $s\\rightarrow1$ to be able to bound $\\|W_0(|\\zeta|)\\|_\\infty$ in terms of the total energy $E$. In addition, we will assume that $W_0$ is non-negative and that $W_0$ is well behaved close to $s=0$, and also close to zeros of $W_0$. We will also relax \\eqref{eq:W_0 cond} slightly to allow for $s=0$ to be a local maximum or minimum instead of a global minimum, in accordance with \\cite{E}.\n\\begin{definition}\n\\label{def:W_0}\nWe will assume that $W_0\\in C^4([0,1))$ is non-negative, that there exist finitely many $s^*$ such that $W_0(s^*)=0$, and that for all zeros $s^*$,\n\\begin{align}\n\t\\lim_{s\\rightarrow 0}\\frac{W_0(s)-W_0(0)}{s^2} &= \\frac 12W_0''(0)\\in\\R,\\\\\n\t\\lim_{s\\rightarrow s^*}\\frac{W_0(s)}{(s-s^*)^2} &= \\frac12W_0''(s^*)\\in [0,\\infty).\n\\end{align}\nMoreover, we let $W_0$ satisfy\n\\begin{equation}\n\t\\int_s^1W_0(u)(1-u)\\:\\d u = \\infty,\n\\end{equation}\nand we require that there exists $\\tilde s\\in[0,1)$ such that $W_0(s)>0$ and $W_0'(s)>0$ for all $s\\in (\\tilde s,1)$.
\n\\end{definition}\nWe define the function spaces for the solutions in the next definition.\n\\begin{definition}\n\\label{def:spaces}\n\tLet $\\zeta^*\\in\\C$, $|\\zeta^*|<1$, and\n\t\\begin{equation}\t\n\t\tX_{\\zeta^*} = \\{(\\zeta,\\sigma)\\mid \\zeta \\in W^{1,\\infty}(\\R),\\zeta-\\zeta^*\\in H^1(\\R), \\sigma \\in L^\\infty(\\R)\\cap L^2(\\R)\\},\n\t\\end{equation}\t\n\tand define\n\t\\begin{align}\n\t\t\\mc E(\\zeta,\\sigma) &= \\frac12|\\sigma|^2+\\frac12|\\zeta_x|^2+W_0(|\\zeta|),\\\\\n\t\t\\mc F(\\zeta,\\sigma) &= \\frac12\\sigma\\bar\\zeta_x+\\frac12\\bar\\sigma\\zeta_x,\n\t\\end{align}\n\tand note that both $\\mc E$ and $\\mc F$ are real valued. Let now\n\t\\begin{equation}\n\t\tX_E = \\{(\\zeta,\\sigma)\\in X_{\\zeta^*}\\mid \\|\\mc E\\|_{L^1} \\leq E\\},\n\t\\end{equation}\n\tand denote\n\t\\begin{align}\n\t\t\\D_{T,E} &= \\left\\{\\zeta\\in C([0,T],W^{1,\\infty}(\\R,\\C))\\cap C^1([0,T],L^\\infty(\\R,\\C)) \\mid (\\zeta(t),\\zeta_t(t))\\in X_E\\right\\}.\n\t\\end{align}\n\tIn the case $T<\\infty$ we equip $\\D_{T,E}$ with the metric $d_{\\D_{T,E}}$ induced from \n\t\\begin{align}\n\t\t\\|\\zeta\\| &= \\sup_{t\\in[0,T]}\\big(\\|\\zeta(t)\\|_{W^{1,\\infty}(\\R)} +\\| \\zeta(t)-\\zeta^*\\|_{H^1(\\R)}\\non &\\quad+ \\|\\zeta_t(t)\\|_{L^\\infty(\\R)} + \\| \\zeta_t(t)\\|_{L^2(\\R)} + \\|W_0(|\\zeta(t)|)\\|_{L^1(\\R)}\\big).\n\t\\end{align}\n\\end{definition}\nFor bounded energy we can now put a priori constraints on possible solutions.\n\\begin{proposition}\n\\label{prop:W_0}\nLet $W_0$ satisfy the conditions of Definition \\ref{def:W_0}.
Then there exist positive constants $c_E, C_E$, depending on $W_0$ and $E$ only, such that for any $(\\zeta,\\sigma)\\in X_E$ we have\n\\begin{align}\n\t\\|W_0(|\\zeta|)\\|_{L^\\infty(\\R)} &\\leq C_E,\\non\n\t\\|\\zeta\\|_{L^\\infty(\\R)} &\\leq c_E < 1.\\nonumber\n\\end{align}\nMoreover, for $s\\in [0,c_E]$ there is a positive constant $k_E$ such that\n\\begin{equation}\n\tW_0'(s)^2 \\leq k_E W_0(s).\n\\end{equation}\nFurthermore, we can define\n\\begin{align}\n\tL_E &= \\sup_{s\\in[0,c_E]} |W_0'(s)|,\\\\\n\tL_E' &= \\sup_{s\\in[0,c_E]} \\left|\\frac{W_0'(s)}{s}\\right|,\\\\\n\tL_E'' &= \\sup_{s\\in[0,c_E]} |W_0''(s)|.\n\\end{align}\n\\end{proposition}\n\\begin{proof}\nSince $\\frac12\\|\\sigma\\|_2^2+\\frac12 c^2 \\|\\zeta_x\\|_2^2 + \\|W_0(|\\zeta|)\\|_1 \\leq E$ we have that $|\\zeta(x_1)-\\zeta(x_2)|\\leq\\frac{\\sqrt{2E_1}}{c}\\sqrt{|x_1-x_2|}$ where $E_1 = E-\\frac12\\|\\sigma\\|_2^2-\\|W_0(|\\zeta|)\\|_1 \\geq 0$. Since $\\zeta$ is continuous there exists $\\hat x$ such that $|\\zeta(\\hat x)| = \\|\\zeta\\|_{L^\\infty(\\R)}\\leq 1$. Then we have that\n\\begin{equation}\n\t|\\zeta(x)| \\geq \n\t\t\\begin{cases}\n\t\t\t\\|\\zeta\\|_\\infty-\\frac{\\sqrt{2E_1}}c\\sqrt{|x-\\hat x|},&\\hat x-\\frac{\\|\\zeta\\|_\\infty^2c^2}{2E_1}0,\n\\\\\n u(x,0)=u_0(x)\\geq 0,\\quad & x \\in \\R^n.\n \\end{array}\n\\right.\n\\end{align}\nHere $v(x,t)$ expresses the chemical substance concentration and it is given by the fundamental solution\n\\begin{align}\nv(x,t)=K \\ast u(x,t) =c_n \\int_{\\R^n} \\frac{u(y,t)}{|x-y|^{n-2}}dy\n\\end{align}\nwhere\n\\begin{align}\nc_n=\\frac{1}{n(n-2)b_n},~~b_n=\\frac{\\pi^{n\/2}}{\\Gamma(n\/2+1)},\n\\end{align}\nand $b_n$ is the volume of the $n$-dimensional unit ball. This system without the reaction has been proposed as a model for chemotaxis-driven cell movement \\cite{KS70}.
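As a consistency check on these constants (our own verification, not part of the paper): for $n=3$, $c_n$ reduces to the familiar Newtonian-potential constant $1/(4\pi)$, and the kernel $K(x)=c_n|x|^{2-n}$ is harmonic away from the origin; in sympy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
n = 3

# b_n: volume of the n-dimensional unit ball; c_n = 1/(n(n-2) b_n)
b_n = sp.pi**sp.Rational(n, 2)/sp.gamma(sp.Rational(n, 2) + 1)
c_n = 1/(n*(n - 2)*b_n)
assert sp.simplify(c_n - 1/(4*sp.pi)) == 0  # Newtonian constant in 3D

r = sp.sqrt(x1**2 + x2**2 + x3**2)
K = c_n*r**(2 - n)
laplacian_K = sum(sp.diff(K, v, 2) for v in (x1, x2, x3))
assert sp.simplify(laplacian_K) == 0  # K is harmonic away from the origin
```

This reflects the fact that $v=K\ast u$ solves $-\Delta v=u$, the relation used repeatedly below.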
Here the exponent $\\sigma \\ge 1$ in the chemotaxis term models nonlinear aggregation, and $u^\\alpha \\left( 1-\\int_{\\R^n} u^\\beta dx \\right)$ with $\\alpha>1, \\beta>1$ is the reaction term representing nonlinear growth under nonlocal resource consumption of the bacteria \\cite{F37,KPP37}.\n\nThe initial data will be assumed to satisfy\n\\begin{align}\\label{initial}\nu_0 \\ge 0,~~ u_0 \\in W^{2,n-1}(\\R^n) \\cap L^1(\\R^n).\n\\end{align}\nUnder these assumptions on the initial data, we consider the case $\\sigma \\ge 1$ and prove that in the following cases,\n\\begin{itemize}\n \\item[(1)] $\\sigma+1 \\le \\alpha$ and $\\alpha<1+2\\beta\/n$,\n \\item[(2)] $\\sigma+1>\\alpha$ and $(\\sigma+1)(n+2)<2 \\beta+2 \\alpha+n$,\n\\end{itemize}\nthe solution of \\er{star00} is unique and global without any restriction on the initial data.\n\nIn what follows, we define the exponent $p$ arising from the Sobolev inequality \\cite{Lieb202} and the notation $Q_T$\n\\begin{align}\np&:=\\frac{2n}{n-2},~~\\label{p}\\\\\nQ_T&:=\\R^n \\times (0,T) \\mbox{~~for~all~} T>0. \\nonumber\n\\end{align}\nThroughout this paper, we deal with a strong solution of \\er{star00} which is defined as follows:\n\\begin{definition}\\label{def1}\nLet $\\sigma \\ge 1, \\alpha>1, \\beta>1$, and let $u_0$ satisfy \\er{initial}. $u(x,t)$ is called a strong solution on $[0,T)$ if\n\\begin{itemize}\n \\item[(1)] $u \\in W^{2,n-1}(Q_T),~u_t\\in L^{n-1}(Q_T)$,\n \\item[(2)] $u \\in L^\\infty(0,T;L^1\\cap L^\\infty(\\R^n))$.\n \n\\end{itemize}\n\\end{definition}\nOur main result concerning the solution can be summarized as follows:\n\\begin{thm}\\label{thm1}\nLet $\\alpha>1, \\beta>1, \\sigma \\ge 1$.
If either\n$$\\sigma+1 \\le \\alpha <1+\\frac{2 \\beta}{n}$$ or $$\\alpha<\\sigma+1<\\frac{2(\\beta+\\alpha)}{n+2}+\\frac{n}{n+2},$$ then for any initial data satisfying \\er{initial}, problem \\er{star00} possesses a unique global strong solution defined by Definition \\ref{def1} which is uniformly bounded, i.e., for any $t>0$,\n\\begin{align}\n\\|u(\\cdot,t)\\|_{L^q(\\R^n)} \\le C_1(\\|u_0\\|_{L^q(\\R^n)}),~~q \\in [\\beta+\\alpha-1,\\infty], \\\\\n\\|u(\\cdot,t)\\|_{L^q(\\R^n)} \\le C_2(\\|u_0\\|_{L^q(\\R^n)},t),~~q \\in [1,\\beta+\\alpha-1).\n\\end{align}\nHere $C_1$ is a positive constant depending only on $\\|u_0\\|_{L^q(\\R^n)}$ but not on $t$.\n\\end{thm}\n\nActually, problem \\er{star00} contains three terms, the diffusion term $\\Delta u$, the nonlocal aggregation term $\\nabla\\cdot(u^\\sigma \\nabla v)$ and the nonlinear growth term $u^\\alpha (1-\\int_{\\R^n} u^\\beta dx)$ (where $-u^\\alpha \\int_{\\R^n} u^\\beta dx$ can be viewed as a death term contributing to global existence), so that a competition arises between diffusion and death on the one hand and aggregation and growth on the other. Indeed, \\er{star00} can be recast as\n\\begin{align}\\label{starstar00}\nu_t=\\Delta u-\\sigma u^{\\sigma-1}\\nabla u \\cdot \\nabla v+u^{\\sigma+1}+u^\\alpha\\left(1-\\int_{\\R^n} u^\\beta dx\\right),\n\\end{align}\nif $\\sigma+1=\\alpha$, then the particular nonlinear reaction exponent\n$$\\alpha=2\\beta\/n+1$$\nbalances the diffusion against the aggregation and the reaction.
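This balance can be checked at the level of scaling exponents (our own verification): under $u_\lambda(x,t)=\lambda^{n/\beta}u(\lambda x,\lambda^2 t)$ the diffusion term picks up a factor $\lambda^{n/\beta+2}$, the aggregation term $\lambda^{(\sigma+1)n/\beta}$, and the reaction term $\lambda^{n\alpha/\beta}$; a small sympy sketch:

```python
import sympy as sp

n, beta, alpha, sigma = sp.symbols('n beta alpha sigma', positive=True)

# Scaling exponents of each term under u_lambda = lambda**(n/beta) u(lambda x, lambda**2 t)
diffusion = n/beta + 2                 # Delta u
aggregation = (sigma + 1)*n/beta       # div(u^sigma grad(K * u))
reaction = alpha*n/beta                # u^alpha (1 - int u^beta dx)

# Diffusion and reaction exponents agree iff alpha = 1 + 2 beta / n
balanced = sp.solve(sp.Eq(diffusion, reaction), alpha)[0]
assert sp.simplify(balanced - (1 + 2*beta/n)) == 0

# With sigma + 1 = alpha the aggregation exponent coincides with the reaction exponent
assert sp.simplify(aggregation.subs(sigma, alpha - 1) - reaction) == 0
```

So with $\sigma+1=\alpha$ all three terms scale identically exactly at $\alpha=1+2\beta/n$, which is the critical exponent separating the regimes discussed next.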
In fact, plugging $u_\\lambda(x,t)=\\lambda^{\\frac{n}{\\beta}}u(\\lambda x, \\lambda^2 t)$ into \\er{star00}, one easily verifies that the scaling preserves the $L^\\beta$ norm in space, and that $u_\\lambda(x,t)$ is again a solution of \\er{star00}, i.e., the diffusion term $\\lambda^{n\/\\beta+2}\\Delta u(\\lambda x, \\lambda^2 t)$ has the same scaling as the aggregation term $\\lambda^{(\\sigma+1)n\/\\beta}\\nabla \\cdot (u \\nabla (K\\ast u))(\\lambda x, \\lambda^2 t)$ and the reaction term $\\lambda^{n\\alpha\/\\beta} u^\\alpha (1-\\int_{\\R^n}u^\\beta dx)(\\lambda x, \\lambda^2 t)$, if and only if $\\alpha=2\\beta\/n+1$. From observing the rescaled equation we can see that when $n(\\alpha-1)\/\\beta<2,$ for low density (small $\\lambda$) the aggregation dominates the diffusion and thus prevents spreading, while for high density (large $\\lambda$) the diffusion dominates the aggregation and thus blow-up is precluded. Hence, in this case, the solution will exist globally (Theorem \\ref{thm1}). On the other hand, if $n(\\alpha-1)\/\\beta>2,$ then the diffusion dominates for low density and the density has infinite-time spreading, while the aggregation dominates for high density and the density has finite time blow-up. Therefore, our conjecture is that there exists finite time blow-up for $\\alpha-1>2\\beta\/n.$ Moreover, for $\\alpha-1=2\\beta\/n$, similar to \\cite{Jose2009}, we conjecture that there is a critical value for the initial data sharply separating global existence and finite time blow-up.\n\nMoreover, notice that \\er{starstar00} includes the terms $u_t, \\Delta u, u^{\\sigma+1}, u^\\alpha, \\nabla \\cdot (u^\\sigma \\nabla v)$.
Therefore, the following equations are closely related to \\er{starstar00}:\n\\begin{align}\\label{DR}\n\\left\\{\n \\begin{array}{ll}\n u_t=\\Delta u+u^\\alpha, & x \\in \\R^n, t>0, \\\\\n u(x,0)=u_0(x), & x \\in \\R^n.\n \\end{array}\n\\right.\n\\end{align}\nand\n\\begin{align}\\label{DA}\n\\left\\{\n \\begin{array}{ll}\n u_t=\\Delta u-\\nabla \\cdot (u^\\sigma \\nabla v), & x \\in \\R^n, t>0, \\\\\n u(x,0)=u_0(x), & x \\in \\R^n.\n \\end{array}\n\\right.\n\\end{align}\nIn order to compare \\er{DR} and \\er{DA} with \\er{starstar00}, we take $\\sigma+1=\\alpha$ (for simplicity) to find the effects of the nonlocal reaction $u^\\alpha \\left(1-\\int_{\\R^n} u^\\beta dx \\right)$. Concerning \\er{starstar00} and \\er{DR}, in this paper, we prove that for $\\alpha<1+2\\beta\/n,$ \\er{star00} admits a unique and global solution, while for the Fujita type equation \\er{DR} it is known that for $\\alpha<1+2\/n,$ there is no global solution \\cite{Fu66} (for comparison, $\\lambda^{\\frac{n}{\\beta}}u(\\lambda x, \\lambda^2 t)$ in \\er{star00} is just the mass invariant scaling). As to \\er{DA} and \\er{starstar00}, the most remarkable difference is that mass conservation holds for \\er{DA} but not for \\er{starstar00}; using this property it has been proved that the solution of \\er{DA} exists globally with small initial data \\cite{BL13,BDP06,JL92,P07,SK06}, while \\er{starstar00} has a unique and global solution without any restriction on the initial data.
Thus we conclude that the reaction term can prevent blow-up.\n\nIn addition, the Keller--Segel model with a local reaction term in a bounded domain has been widely studied by virtue of the comparison principle and estimates that are valid in bounded domains \\cite{GST16,HZ16,IS16,NT13,TW07,WL17,WMZ14,ZhLi15,Zh15}, that is,\n\\begin{align} \\label{star11}\n\\left\\{\n \\begin{array}{ll}\n u_t=\\Delta u-\\chi \\nabla \\cdot (u^{\\sigma} \\nabla c)+f(u), & x \\in \\Omega, t>0, \\\\\n -\\Delta c+ c=u, & x \\in \\Omega, t>0, \\\\\n \\nabla u \\cdot \\vec{n}=\\nabla c \\cdot \\vec{n}=0, & x \\in \\partial \\Omega.\n \\end{array}\\right.\n\\end{align}\nFor $\\sigma=1,$ \\cite{TW07} showed that model \\er{star11} with $f(u) \\le a-b u^2, u \\ge 0, a,b>0$ possesses a global bounded solution under the assumption $b>\\frac{n-2}{n}.$ The case $\\sigma>1$ was considered in \\cite{GST16} with $f(u)=\\mu u(1-u^\\alpha)$: if $\\alpha>\\sigma$, or $\\alpha=\\sigma$ and $\\mu>\\frac{n\\alpha -2}{n\\alpha+2(\\sigma-1)}$, then \\er{star11} admits a unique global solution. For other similar cases, one can refer to \\cite{HT17,NT13,WMZ14}.\n\nIn brief, compared with the above models, the absence of mass conservation (model \\er{DA}) and of the comparison principle (models \\er{DR}, \\er{star11}) are two obstacles in our model \\er{star00} \\cite{Fu66,P07,SK06,WMZ14}. Besides, the nonlocal reaction makes the key energy estimates more difficult, and many tools available in bounded domains cannot be applied in the whole space \\cite{GST16,NT13,TW07,WMZ14}. In our results, we will use analytical methods in the energy estimates and derive the conditions on $\\alpha,\\beta,\\sigma$ for global existence (Theorem \\ref{thm1}).\n\nThe main work is devoted to the global unique solution of model \\er{star00} for $\\alpha>1, \\beta>1, \\sigma \\ge 1$; with that aim, Section \\ref{sec3} considers the local existence and uniqueness of the solution.
In Section \\ref{sec4}, the a priori estimates are performed and show that $-u^\\alpha \\int_{\\R^n} u^\\beta dx$ plays a crucial role in the global existence, thus completing the proof of Theorem \\ref{thm1}. Here we split the arguments into several parts strongly depending on the exponents $\\alpha,\\beta$ and $\\sigma,$ and consequently the uniform boundedness is obtained by virtue of Moser iteration. Section \\ref{sec5} discusses some open questions of Eq. \\er{star00}.\n\n\\section{Preliminaries} \\label{sec2}\n\\def\\theequation{2.\\arabic{equation}}\\makeatother\n\\setcounter{equation}{0}\n\\def\\thetheorem{2.\\arabic{thm}}\\makeatother\n\nWe first state some lemmas which will be used in the proof of local existence and Theorem 2.\n\nConsider the Cauchy problem\n\\begin{align}\\label{Cauchy}\n\\left\\{\n \\begin{array}{ll}\n z_t=\\Delta z-\\nabla \\cdot (z H)+F,~~x \\in \\R^n, t>0, \\\\\n z(x,0)=z_0(x).\n \\end{array}\n\\right.\n\\end{align}\nThen the solution $z(x,t)$ can be expressed from semigroup theory \\cite{Pa83} as follows:\n\\begin{lemma}\\label{lem21}\nLet $X$ be a Banach space, $z_0 \\in X, H \\in L^\\infty(0,T;X)$ and $F\\in L^\\infty(0,T;X)$. $z(x,t)$ is given by \\cite{BDP06,SK06}\n\\begin{align}\nz(x,t)=G(\\cdot,t)\\ast z_0+\\int_0^t \\nabla G(\\cdot,t-s) \\ast [z(\\cdot,s)H(\\cdot,s)] ds+\\int_0^t G(\\cdot,t-s)\\ast F(\\cdot,s)ds,~~0 \\le t \\le T\n\\end{align}\nis the mild solution of \\er{Cauchy} on $[0,T].$ Here $G(x,t)=\\frac{1}{(4 \\pi t)^{\\frac{n}{2}}}e^{-\\frac{|x|^2}{4t}}$ is the Green function associated with the heat equation.\n\\end{lemma}\n\nThe following lemma is an immediate consequence of the Sobolev inequality \\cite{Lieb202}, which will play an important role in the proof of global existence of solutions for equation \\er{star00}.\n\\begin{lemma}[\\cite{BL16}]\\label{interpolation}\nLet $p$ be given by \\er{p}, $1 \\le r1, \\beta>n\/2, \\sigma \\ge 1$.
Assume that the initial data satisfy $u_0\\in W^{2,n-1}(\\R^n) \\cap L^1(\\R^n)$. Then there exists a maximal existence time $T_{\\max} \\in (0,\\infty]$ such that $u(x,t) \\in W^{2,n-1}(Q_T) \\cap L^\\infty(0,T;L^1(\\R^n))$ is the unique non-negative strong solution of problem \\er{star00}. Furthermore, if $T_{\\max}<+\\infty,$ then\n\\begin{align}\\label{blowup}\n\\displaystyle \\lim_{t \\to T_{\\max}} \\|u(\\cdot,t)\\|_{L^\\infty(\\R^n)}=\\infty.\n\\end{align}\n\\end{proposition}\n\\begin{remark}\nHere $\\beta$ is chosen to be $\\beta>n\/2$ in order to prove the local existence and the a priori estimates of Proposition \\ref{pro25}. By the Sobolev embedding theorem, the embedding of $W^{2,n-1}(\\R^n)$ into $L^\\infty(\\R^n)$ directly yields $u_0 \\in L^\\infty(\\R^n)$.\n\\end{remark}\n\\noindent\\textbf{Proof of Proposition \\ref{pro23}.} The proof can be divided into two steps. Step 1 investigates a semilinear parabolic equation and shows the local existence of the strong solution of Eqn. \\er{star00}. Step 2 gives the uniqueness of the strong solution. \\\\\n\n\\noindent\\textbf{Step 1 (Local existence).} In this step, we show the local existence of the strong solution; the proof is in the spirit of \\cite{BDP06,SK06}. Here, we denote $X_T$ by\n\\begin{align}\nX_T:=\\{f \\in L^\\infty(0,T;W^{2,n-1}(\\R^n)), f_t \\in L^{n-1}(Q_T)\\big|~f \\ge 0, \\nonumber \\\\\n\\|f\\|_{L^\\infty(0,T;L^1 \\cap L^\\infty(\\R^n))} \\le C_1 \\|u_0\\|_{ L^\\infty(0,T;L^1 \\cap L^\\infty(\\R^n))}+C_2\\}\n\\end{align}\nwhere $C_1,C_2$ are constants depending only on $n,\\alpha,\\beta,\\sigma$, and $T>0$ is to be determined later in Remark \\ref{Tguji}.
We also define\n\\begin{align}\nW_u=\\{u \\in L^{n-1}(0,T;W^{2,n-1}(\\R^n)) \\mbox{~and~} u_t \\in L^{n-1}(0,T;L^{n-1}(\\R^n))\\}.\n\\end{align}\nFirstly, we consider\n\\begin{align}\nV=K \\ast f(x,t),\\qquad x \\in \\R^n,01)$ and using Young's inequality we have\n\\begin{align}\n&\\frac{1}{k}\\frac{d}{dt}\\int_{\\R^n} |u|^k dx+ (k-1) \\int_{\\R^n} |u|^{k-2} |\\nabla u|^2dx \\nonumber \\\\\n=& -\\int_{\\R^n} \\sigma f^{\\sigma-1} \\nabla V \\cdot \\nabla u |u|^{k-2}u dx-\\int_{\\R^n} f^{\\sigma-1} \\Delta V |u|^kdx+\\int_{\\R^n} u^{\\alpha+1} |u|^{k-2} dx\\left( 1-\\int_{\\R^n} f^\\beta dx\\right)\\nonumber \\\\\n\\le &\\frac{k-1}{4}\\int_{\\R^n} |u|^{k-2} |\\nabla u|^2dx+\\frac{\\|\\sigma f^{\\sigma-1}\\nabla V\\|_{L^\\infty(Q_T)}^2}{k-1} \\int_{\\R^n} |u|^k dx \\nonumber \\\\\n&+\\|f^{\\sigma-1}\\Delta V\\|_{L^\\infty(Q_T)} \\int_{\\R^n} |u|^k dx +\\left( 1+\\|f\\|_{L^\\infty(0,T;L^1 \\cap L^\\infty(\\R^n))}^\\beta \\right)\\int_{\\R^n} |u|^{\\alpha+k-1} dx, \\label{es1}\n\\end{align}\nletting $F_0=1+\\|f\\|_{L^\\infty(0,T;L^1 \\cap L^\\infty(\\R^n))}^\\beta$, we apply\n\\begin{align}\nv=u^{k\/2}, q=\\frac{2(k+\\alpha-1)}{k}, r=2, C_0=\\frac{2(k-1)}{k^2}\n\\end{align}\nin Lemma \\ref{interpolation} for $k>\\frac{n(\\alpha-1)}{2}$ such that\n\\begin{align}\n&F_0 \\int_{\\R^n} |u|^{\\alpha+k-1} dx \\nonumber\\\\\n\\le & \\frac{2(k-1)}{k^2} \\int_{\\R^n} |\\nabla u^{k\/2}|^2dx+C\\left(n,F_0 \\right) \\left( \\frac{k^2}{2(k-1)} \\right)^{\\frac{\\lambda q}{2-\\lambda q}} \\left( \\int_{\\R^n} |u|^k dx \\right)^{\\delta },\n\\end{align}\nwhere\n$$\n\\lambda=\\frac{\\frac{1}{2}-\\frac{1}{q}}{\\frac{1}{2}-\\frac{1}{p}} \\in (0,1),~\\delta=\\frac{1-\\frac{q}{p}}{2-\\frac{q}{2}-\\frac{2}{p}}.\n$$\nPlugging it into \\er{es1} one has\n\\begin{align}\n&\\frac{d}{dt} \\|u\\|_{L^k(\\R^n)} \\nonumber \\\\\n\\le &\\left( \\frac{\\|\\sigma f^{\\sigma-1} \\nabla V\\|_{L^\\infty(Q_T)}^2}{k-1} + \\|f^{\\sigma-1} \\Delta V \\|_{L^\\infty(Q_T)} \\right) 
\\|u\\|_{L^k(\\R^n)}\n+C\\left(n,F_0 \\right) \\left( \\frac{k^2}{2(k-1)} \\right)^{C\\left(\\frac{1}{k}\\right)} \\|u\\|_{L^k(\\R^n)}^{k \\delta-k+1},\n\\end{align}\ntaking $k \\to \\infty$ we derive\n\\begin{align}\n\\frac{d}{dt} \\|u\\|_{L^\\infty(\\R^n)} \\le C\\left(n,F_0,\\|f^{\\sigma-1}\\Delta V\\|_{L^\\infty(Q_T)}\\right) \\|u\\|_{L^\\infty(\\R^n)},\n\\end{align}\nusing Gronwall's inequality it's obtained that\n\\begin{align}\\label{Linflocal}\n\\displaystyle \\sup_{0n\/2$\n\\begin{align}\n&\\frac{1}{n-1} \\frac{d}{dt} \\int_{\\R^n} |w|^{n-1}dx +(n-2)\\int_{\\R^n} |\\nabla w|^2 |w|^{n-3} dx \\nonumber \\\\\n&~\\le \\frac{n-2}{4} \\int_{\\R^n} |w|^{n-3}|\\nabla w|^2 dx+\\sigma \\|f\\|_{L^\\infty(Q_T)}^{\\sigma-1}\\| u_1\\|_{W^{2,n-1}(Q_T)}\\int_{\\R^n} |f_1-f_2|\\cdot |w|^{n-2} dx \\nonumber \\\\\n&~~+\\sigma \\|f_2\\|_{L^\\infty(Q_T)}^\\sigma \\int_{\\R^n} |\\nabla w|\\cdot |w|^{n-2}dx +\\alpha \\|u\\|_{L^\\infty(Q_T)}^{\\alpha-1}F_0 \\int_{\\R^n} |w|^{n-1} dx+\\int_{\\R^n} \\beta |f|^{\\beta-1}|f_1-f_2| dx \\int_{\\R^n} u_1^\\alpha |w|^{n-2} dx \\nonumber \\\\\n&~\\le \\frac{n-2}{4} \\int_{\\R^n} |w|^{n-3}|\\nabla w|^2 dx+C\\left(\\|f\\|_{L^\\infty(Q_T)},\\|u_1\\|_{W^{2,n-1}(Q_T)}\\right) \\|f_1-f_2\\|_{L^{n-1}(\\R^n)} \\| w^{n-2}\\|_{L^{\\frac{n-1}{n-2}}(\\R^n)} \\nonumber \\\\\n&~~+\\frac{n-2}{4} \\int_{\\R^n} |w|^{n-3}|\\nabla w|^2 dx +C\\left( \\|f_2\\|_{L^\\infty(Q_T)},\\|u\\|_{L^\\infty(Q_T)},F_0 \\right) \\int_{\\R^n} |w|^{n-1} dx\\nonumber \\\\\n&~~+\\beta \\|f_1-f_2\\|_{L^{n-1}(\\R^n)} \\|f^{\\beta-1}\\|_{L^{\\frac{n-1}{n-2}}(\\R^n)} \\|w^{n-2}\\|_{L^{\\frac{n-1}{n-2}}(\\R^n)} \\|u_1^\\alpha\\|_{L^{n-1}(\\R^n)}. 
\\label{wcontraction}\n\\end{align}\nHere $u$ and $f$ are intermediate values satisfying $\\alpha u^{\\alpha-1}(u_1-u_2)=u_1^\\alpha-u_2^\\alpha$ and $\\beta f^{\\beta-1}(f_1-f_2)=f_1^\\beta-f_2^\\beta$ by the mean value theorem, and \\er{wcontraction} implies that\n\\begin{align}\n&\\frac{d}{dt} \\|w\\|_{L^{n-1}(\\R^n)} \\nonumber \\\\\n\\le & C_1\\left( \\|f\\|_{L^\\infty(Q_T)},\\|u\\|_{L^\\infty(Q_T)}\\right)\\|w\\|_{L^{n-1}(\\R^n)}+C_2\\left(\n\\|f\\|_{L^\\infty(Q_T)}, \\|u_1\\|_{W^{2,n-1}(Q_T)} \\right)\\|f_1-f_2\\|_{L^{n-1}(\\R^n)},\n\\end{align}\napplying Gronwall's inequality yields\n\\begin{align}\n\\displaystyle \\sup_{01)$ obtains that\n\\begin{align}\n\\frac{d}{dt}\\int_{\\R^n} f^r dx &=-\\frac{4(r-1)}{r} \\int_{\\R^n}|\\nabla f^{\\frac{r}{2}}|^2 dx +\\frac{r(r-1)}{\\sigma+r-1}\\int_{\\R^n} f^{\\sigma+r} dx+r (1-\\int_{\\R^n} f^\\beta dx) \\int_{\\R^n} f^{\\alpha+r-1} dx \\nonumber \\\\\n&\\le \\frac{r(r-1)}{\\sigma+r-1} \\|f\\|_{L^\\infty(\\R^n)}^\\sigma \\|f\\|_{L^r(\\R^n)}^r+r \\|f\\|_{L^\\infty(\\R^n)}^{\\alpha-1}\\|f\\|_{L^r(\\R^n)}^r,\n\\end{align}\nthus\n\\begin{align}\n\\frac{d}{dt} \\|f\\|_{L^r(\\R^n)} \\le \\frac{r-1}{\\sigma+r-1}\\|f\\|_{L^\\infty(\\R^n)}^\\sigma \\|f\\|_{L^r(\\R^n)}+ \\|f\\|_{L^\\infty(\\R^n)}^{\\alpha-1}\\|f\\|_{L^r(\\R^n)},\n\\end{align}\nletting $r \\to \\infty$ one has\n\\begin{align}\n\\|f\\|_{L^\\infty(\\R^n)} \\le \\|u_0\\|_{L^\\infty(\\R^n)}+\\int_0^t \\|f\\|_{L^\\infty(\\R^n)}^{\\sigma+1} ds+\\int_0^t \\|f\\|_{L^\\infty(\\R^n)}^\\alpha ds,\n\\end{align}\nhence from the ODE inequality we have that there is a maximal existence time $T=T(\\|u_0\\|_{L^\\infty(\\R^n)})$ such that $f$ is bounded from above in $[0,T)$.\n\\end{remark}\n\n\\section{Proof of Theorem \\ref{thm1}}\\label{sec4}\n\\def\\theequation{4.\\arabic{equation}}\\makeatother\n\\setcounter{equation}{0}\n\\def\\thetheorem{4.\\arabic{thm}}\\makeatother\n\nIn this section, we derive the a priori estimates of the strong solution and complete the proof of Theorem \\ref{thm1}.\n\\begin{proposition}\\label{pro25}\nLet 
$\\sigma \\ge 1, \\beta>1, \\alpha>1,$ $u_0$ satisfies \\er{initial}. If\n\\begin{align}\n\\sigma+1\\le \\alpha \\mbox{ and } \\alpha<1+2\\beta\/n\n\\end{align}\nor\n\\begin{align}\n\\sigma+1>\\alpha \\mbox{ and } (\\sigma+1-\\alpha)(n+2)<2\\beta-n(\\alpha-1),\n\\end{align}\nthen for any $0\\frac{2(\\alpha-1)}{p-2}$ (which is $q\\frac{2(\\eta-1)}{p-2}$ (which is $q\\max \\left( \\frac{2(\\eta-1)}{p-2},\\frac{2(\\alpha-1)}{p-2},1 \\right)\n\\end{align}\nand\n\\begin{align}\\label{kprime}\n\\max \\left( \\frac{p(\\eta-1)}{p-2},\\frac{p(\\alpha-1)}{p-2},1,\\frac{k}{2} \\right)\\max \\left( \\frac{2(\\eta-1)}{p-2},~\\frac{2(\\alpha-1)}{p-2}, ~\\frac{2p(\\eta-1)}{p-2}-\\beta-(\\alpha-1),~ \\frac{2p(\\alpha-1)}{p-2}-\\beta-(\\alpha-1)\\right)=:K_0.\n\\end{align}\nNext recalling $b_{\\alpha}$ and $\\theta$, \\er{star} is equivalent to\n\\begin{align}\n(1-\\lambda_{\\alpha})\\left( \\frac{1}{\\beta}-\\frac{1}{k'} \\right)< \\left( 1-\\frac{\\lambda_\\alpha (k+\\alpha-1)}{k} \\right)\\left( \\frac{1}{\\beta}-\\frac{1}{k+\\alpha-1}\\right),\n\\end{align}\nit also reads\n\\begin{align}\n\\lambda_\\alpha-1<\\frac{\\lambda_\\alpha}{k} k'-\\frac{k'}{k+\\alpha-1}-\\frac{\\lambda_\\alpha(\\alpha-1)}{k \\beta} k',\n\\end{align}\nsubstituting $\\lambda_\\alpha$ into the above formula one has that for\n\\begin{align}\\label{tiaojian1}\n1 \\le \\alpha<1+\\left( 1-\\frac{2}{p} \\right) \\beta,\n\\end{align}\n\\er{star} holds true for any $k$ satisfying \\er{kxin}.\nFor \\er{starstar}, under the condition \\er{tiaojian1} similar arguments arrive at\n\\begin{align}\n\\left( \\frac{k}{2}-\\frac{k+\\eta-1}{p} \\right) \\left( \\frac{k'}{\\beta}-1 \\right)<\\left( \\left(\\frac{1}{2}-\\frac{1}{p} \\right)k'-\\frac{\\eta-1}{2} \\right) \\left( \\frac{\\alpha-1}{\\beta}-1+\\frac{k}{\\beta} \\right),\n\\end{align}\nit can be written as\n\\begin{align}\\label{starstarstar}\n\\frac{\\eta-\\alpha}{\\beta} ~\\left( \\frac{k}{2}-\\frac{k'}{p} \\right)<(\\eta+k-1-k') ~\\left( 
\\frac{1}{2}-\\frac{1}{p}-\\frac{\\alpha-1}{2 \\beta} \\right).\n\\end{align}\n\nDenote\n$$\nA_0=\\frac{1}{2}-\\frac{1}{p}-\\frac{\\alpha-1}{2 \\beta}>0,~A_1=\\frac{\\eta-\\alpha}{\\beta},\n$$\nrecalling \\er{k} and \\er{kprime}, if $\\eta \\le \\alpha,$ then $A_0>0$ is enough to guarantee that \\er{starstar} holds true. Otherwise if $\\eta > \\alpha,$ plugging\n$\nk'=\\frac{k+\\alpha-1+\\beta}{2}\n$\ninto \\er{starstarstar} yields\n$$\nA_0(\\eta-1)+\\frac{A_1}{2p}(\\alpha+\\beta-1)+\\Big[ \\frac{A_0}{2}-A_1 \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) \\Big] k~>~A_0 \\frac{\\alpha+\\beta-1}{2}.\n$$\nHence if\n$$\n\\frac{A_0}{2}-A_1 \\left( \\frac{1}{2}-\\frac{1}{2p} \\right)>0,\n$$\nor equivalently\n\\begin{align}\\label{tiaojian2}\n\\frac{\\eta-\\alpha}{\\beta} \\left(1-\\frac{1}{p} \\right)< \\frac{1}{2}-\\frac{1}{p}-\\frac{\\alpha-1}{2 \\beta},\n\\end{align}\nthen \\er{starstar} holds true for\n\\begin{align}\\label{DB}\nk>\\frac{A_0 \\left( \\frac{\\alpha+\\beta-1}{2}-(\\eta-1) \\right) -\\frac{A_1}{2p}(\\alpha+\\beta-1)}{\\frac{A_0}{2}-A_1\\left( \\frac{1}{2}-\\frac{1}{2p} \\right)}=:\\frac{D}{B}.\n\\end{align}\nIn the following, we prove $D>0$. 
In fact,\n\\begin{align}\nD&=A_0 \\left( \\frac{\\alpha+\\beta-1}{2}-(\\eta-1) \\right) -\\frac{A_1}{2p}(\\alpha+\\beta-1) \\nonumber\\\\\n& > A_0 \\left(\\frac{\\alpha+\\beta-1}{2}-\\frac{p-2}{p-1} \\frac{\\alpha+\\beta-1}{2} \\right)-\\frac{A_1}{2p}(\\alpha+\\beta-1) \\nonumber\\\\\n&= \\frac{A_0}{p-1}\\frac{\\alpha+\\beta-1}{2} -\\frac{A_1}{2p}(\\alpha+\\beta-1) \\nonumber \\\\\n&>0,\n\\end{align}\nwhere we have used \\er{tiaojian2} and its reformulation\n\\begin{align}\\label{tiaojian2bianxing}\n\\eta-1<\\frac{p-2}{2(p-1)} (\\alpha-1+\\beta).\n\\end{align}\nFurthermore, after some calculations we have\n\\begin{align}\n\\frac{D}{B} & = \\frac{\\left(-A_0-\\frac{\\alpha+\\beta-1}{2p\\beta} \\right) \\eta + \\frac{\\alpha}{p \\beta}\\frac{\\alpha+\\beta-1}{2}+A_0\\frac{\\alpha+\\beta+1}{2} }{ -\\frac{1}{\\beta} \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) \\eta+\\frac{A_0}{2}+\\frac{\\alpha}{\\beta} \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) } \\nonumber \\\\[2mm]\n& = \\frac{-\\frac{1}{\\beta}(\\beta-(\\alpha-1)) \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) \\eta+ \\big(\\beta-(\\alpha-1)\\big) \\left(\\frac{A_0}{2}+\\frac{\\alpha}{\\beta} \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) \\right)}{ -\\frac{1}{\\beta} \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) \\eta+\\frac{A_0}{2}+\\frac{\\alpha}{\\beta} \\left( \\frac{1}{2}-\\frac{1}{2p} \\right) } \\nonumber\\\\\n&=\\beta-(\\alpha-1).\n\\end{align}\n\nNow coming back to \\er{guji2} and \\er{guji3}, we infer from \\er{star} and \\er{starstar} by using Young's inequality that\n\\begin{align}\nC(k,\\alpha)\\|u\\|_{L^{k'}}^{b_\\alpha} & \\le \\left( \\|u\\|_{L^{k+\\alpha-1}}^{k+\\alpha-1} \\|u\\|_{L^\\beta}^\\beta \\right)^{\\frac{b_\\alpha \\theta}{k+\\alpha-1}} \\nonumber\\\\\n& \\le \\frac{k}{4} \\|u\\|_{L^{k+\\alpha-1}(\\R^n)}^{k+\\alpha-1} \\|u\\|_{L^\\beta(\\R^n)}^\\beta+C_1(k)\n\\end{align}\nand\n\\begin{align}\nC(k,\\eta) \\|u\\|_{L^{k'}}^{b_\\eta} & \\le \\left( \\|u\\|_{L^{k+\\alpha-1}}^{k+\\alpha-1} 
\\|u\\|_{L^\\beta}^\\beta \\right)^{\\frac{b_\\eta \\theta}{k+\\alpha-1}} \\nonumber \\\\\n& \\le \\frac{k}{4} \\|u\\|_{L^{k+\\alpha-1}(\\R^n)}^{k+\\alpha-1} \\|u\\|_{L^\\beta(\\R^n)}^\\beta+C_2(k).\n\\end{align}\n\nTherefore, substituting the above arguments into \\er{guji1} we thus end up with for all $0\\beta-(\\alpha-1)$ from \\er{kxin}, \\er{DB} and \\er{tiaojian1}, \\er{tiaojian2}. Precisely, for $\\eta>\\alpha$, \\er{tiaojian2} is equivalent to\n\\begin{align}\n\\frac{2(\\eta-1)}{p-2}(p-1)<\\beta+\\alpha-1,\n\\end{align}\nand then $k>\\max\\{K_0,\\beta-(\\alpha-1)\\}=\\beta-(\\alpha-1).$ Otherwise for $\\eta \\le \\alpha,$ by virtue of \\er{tiaojian1} it leads to\n$k>\\max\\{ K_0, \\beta-(\\alpha-1) \\}=\\beta-(\\alpha-1)$. \\\\\n\n\n\\noindent\\textbf{Step 2 ($L^k(\\R^n)$ estimates for $\\beta+\\alpha-1 \\le k \\le \\infty$).} Firstly we note that for $\\eta \\le \\alpha$, then $\\beta+\\alpha-1>\\frac{2(\\alpha-1)}{p-2}$ with the help of \\er{tiaojian1}, therefore we can take $k=\\beta+\\alpha-1$ in \\er{guji} and by H\\\"{o}lder inequality and Young's inequality on has\n\\begin{align}\n\\|u\\|_{L^{\\beta+\\alpha-1}(\\R^n)}^{\\beta+\\alpha-1} & \\le \\left( \\|u\\|_{L^{\\beta+2(\\alpha-1)}(\\R^n)}^{\\beta+2(\\alpha-1)} \\|u\\|_{L^\\beta(\\R^n)}^\\beta \\right)^{1\/2} \\nonumber \\\\\n& \\le \\frac{\\beta+\\alpha-1}{2} \\|u\\|_{L^{\\beta+2(\\alpha-1)}(\\R^n)}^{\\beta+2(\\alpha-1)} \\|u\\|_{L^\\beta(\\R^n)}^\\beta +\\frac{1}{2(\\beta+\\alpha-1)},\n\\end{align}\nplugging the above formula into \\er{guji} we have\n\\begin{align}\n\\frac{d}{dt} \\int_{\\R^n} u^{\\beta+\\alpha-1} dx+ \\int_{\\R^n} u^{\\beta+\\alpha-1} \\le C(\\beta+\\alpha-1)\n\\end{align}\nwhich follows the uniformly boundedness in time\n\\begin{align}\n\\int_{\\R^n} u^{\\beta+\\alpha-1} dx \\le \\|u_0\\|_{L^{\\beta+\\alpha-1}(\\R^n)}^{\\beta+\\alpha-1} +C(\\alpha,\\beta).\n\\end{align}\nOn the other hand, letting\n\\begin{align}\nv=u^{k\/2},~q=2,~1\\le 
r<2,~C_0=\\frac{2(k-1)}{k}\n\\end{align}\nin Lemma \\ref{interpolation} one has that for $n \\ge 1$\n\\begin{align}\\label{336}\n\\int_{\\R^n} u^k dx \\le \\frac{2(k-1)}{k} \\| \\nabla u^{\\frac{k}{2}} \\|_{L^2(\\R^n)}^2 +C(n,k) \\|u\\|_{L^{k_1}(\\R^n)}^k\n\\end{align}\nwhere $k_1=\\frac{kr}{2} \\beta+\\alpha-1\n\\end{align}\nso that\n\\begin{align}\\label{339}\n\\|u\\|_{L^{\\frac{\\beta+\\alpha-1+k}{2}}(\\R^n)}^k \\le \\left( \\|u\\|_{L^{k+\\alpha-1}(\\R^n)}^{k+\\alpha-1} \\|u\\|_{L^\\beta(\\R^n)}^\\beta \\right)^{\\frac{k}{\\beta+\\alpha-1+k}}.\n\\end{align}\nCombining \\er{336} and \\er{339} together yields\n\\begin{align}\\label{340}\n\\int_{\\R^n} u^k dx &\\le \\frac{2(k-1)}{k} \\| \\nabla u^{\\frac{k}{2}} \\|_{L^2(\\R^n)}^2+C(n,k) \\left( \\|u\\|_{L^{k+\\alpha-1}(\\R^n)}^{k+\\alpha-1} \\|u\\|_{L^\\beta(\\R^n)}^\\beta \\right)^{\\frac{k}{\\beta+\\alpha-1+k}} \\nonumber \\\\\n& \\le \\frac{2(k-1)}{k} \\| \\nabla u^{\\frac{k}{2}} \\|_{L^2(\\R^n)}^2+\\frac{k}{2} \\|u\\|_{L^{k+\\alpha-1}(\\R^n)}^{k+\\alpha-1} \\|u\\|_{L^\\beta(\\R^n)}^\\beta +C(n,k).\n\\end{align}\nSubstituting \\er{340} into \\er{guji} one has\n\\begin{align}\n\\frac{d}{dt} \\int_{\\R^n} u^k dx + \\int_{\\R^n} u^k dx \\le C(n,k).\n\\end{align}\nIt can obtained that for any $\\beta+\\alpha-12,$ hence our conjecture is that there exists finite time blow-up for $\\alpha-1>2\\beta\/n.$ As to the case $\\alpha-1=2\\beta\/n$, similar to \\cite{Jose2009}, whether there is a critical value for the initial data sharply separating global existence and finite time blow-up is also unknown. Our result is to be considered as the first step to a more general theory of chemotaxis system with nonlocal nonlinear reaction. 
This will be a fertile area to explore.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbstc b/data_all_eng_slimpj/shuffled/split2/finalzzbstc new file mode 100644 index 0000000000000000000000000000000000000000..0ec3a298ea6ce194bb1bf8cb95df5156ef47d1ee --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbstc @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\t\n\t\\noindent In this paper, we address the universality of the determinant of a class of random Hermitian matrices.\n\tBefore discussing results specific to this symmetry assumption, we give a brief history of results in the non-Hermitian setting. In both settings, a priori bounds preceded estimates on moments of determinants, and the distribution of\n\tdeterminants for integrable models of random matrices. The universality of such determinants has then been the \n\tsubject of recent active research.\t\n\t\n\t\n\t\\subsection{Non-Hermitian matrices.}\\ \n\tEarly papers on this topic treat non-Hermitian matrices with independent and identically distributed entries.\n\t\tMore specifically, Szekeres and Tur\\'an first studied an extremal problem on the determinant of $\\pm 1$ matrices \\cite{SzeTur1937}. \n\tIn the 1950s, a series of papers \\cite{For1951,ForTuk1952, NyqRicRio1954, Tur1955, Pre1967} calculated \n\tmoments of the determinant of random matrices of fixed size (see also \\cite{Gir1980}). \n\tIn general, explicit formulae are unavailable for high order moments of the determinant except\n\twhen the entries of the matrix have particular distribution (see, for example, \\cite{Dem1989} and the references therein).\n\tEstimates for the moments and the Chebyshev inequality give upper bounds\n\ton the magnitude of the determinant. 
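To illustrate how a second-moment estimate yields such an upper bound (a standard computation added here for the reader, not taken from any one of the cited papers): for an $N\times N$ matrix $A$ with independent, mean-zero, variance-one entries, expanding over permutations gives $\mathbb{E}[(\det A)^2] = N!$, and Chebyshev's inequality then bounds the determinant with high probability.

```latex
% Second moment of the determinant and the resulting tail bound;
% omega(N) is any function tending to infinity with N.
\mathbb{E}\left[ (\det A)^2 \right]
  = \sum_{\sigma, \pi \in S_N} \mathrm{sgn}(\sigma)\,\mathrm{sgn}(\pi)
    \prod_{i=1}^N \mathbb{E}\left[ a_{i\sigma(i)}\, a_{i\pi(i)} \right]
  = \sum_{\sigma \in S_N} 1
  = N!,
\qquad
\mathbb{P}\left( \left|\det A\right| \geq \omega(N)\sqrt{N!} \right)
  \leq \frac{\mathbb{E}\left[(\det A)^2\right]}{\omega(N)^2\, N!}
  = \frac{1}{\omega(N)^2}.
```

Only the diagonal terms $\sigma = \pi$ survive because distinct entries are independent with mean zero, which is exactly the upper bound appearing in the display below.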
\\\\\n\t\n\t\\noindent Along a different line of research,\n\tfor an $N\\times N$ non-Hermitian random matrix $A$,\n\tErd\\H{o}s asked whether $\\det A$ is non-zero with probability\n\ttending to one as $N$ tends to infinity. In \\cite{Kom1967, Kom1968}, Koml\\'{o}s proved\n\tthat for random matrices with Bernoulli entries,\n\tindeed $ \\det A \\neq 0$ with probability tending to one as $N \\to \\infty$. \n\tIn fact, this method works for more general models, and following \\cite{Kom1967},\n\t\\cite{KahKomSze1995, TaoVu2006, TaoVu2007, BouVuWoo2010} give improved, exponentially small bounds on the probability that $\\det A = 0$. \\\\\n\t\n\n\t\\noindent \n\tIn \\cite{TaoVu2006}, the authors took the first steps towards quantifying the typical size of $\\left| \\det A \\right|$,\n\tproving that for Bernoulli random matrices, with probability\n\ttending to 1 as $N$ tends to infinity,\n\t\t\\begin{equation}\n\t\t\\label{tao eq 1}\n\t\t\\sqrt{N!}\\exp\\left(-c \\sqrt{N\\log N}\\right) \\leq \\left| \\det A\\right| \\leq \\omega(N)\\sqrt{N!}, \n\t\t\\end{equation}\n\tfor any function $\\omega(N)$ tending to infinity with $N$. In particular, with overwhelming probability\n\t\t\\[ \\log \\left| \\det A \\right| = \\left( \\frac{1}{2} + \\oo(1) \\right) N\\log N. \\]\n\t\n\t\\noindent In \\cite{Goo1963}, Goodman considered $A$ with independent standard real Gaussian entries.\n\tIn this case, he was able to express $\\left| \\det A \\right|^2$ as the product of independent\n\tchi-square variables. This enables one to identify the asymptotic distribution\n\tof $\\log\\left| \\det A \\right|$. Indeed, one can prove that\n\t\t\\begin{equation}\n\t\t\\label{GOE result}\n\t\t \\frac{\\log \\left| \\det A \\right| - \\frac{1}{2}\\log N! + \\frac{1}{2}\\log N}{\\sqrt{\\frac{1}{2}\\log N}} \n\t\t\t\\to \\mathscr{N}(0, 1),\n\t\t\\end{equation}\n\t(see \\cite{RemWes2005}). 
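Goodman's chi-square decomposition can be checked numerically. The sketch below is our own illustration (assuming NumPy; the function names and parameter choices are ours): it compares a Monte Carlo estimate of $\mathbb{E}\log\det(A^{T}A)$ against the exact value $\sum_{k=1}^{N}\big(\psi(k/2)+\log 2\big)$, which follows from $\det(A)^2 \stackrel{d}{=} \prod_{k=1}^{N}\chi^2_k$ and $\mathbb{E}\log\chi^2_\nu = \psi(\nu/2)+\log 2$.

```python
import math

import numpy as np


def digamma(x, h=1e-5):
    """Digamma via a centered difference of log-gamma (accurate enough here)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)


def goodman_mean_logdet(N):
    """Exact E[log det(A^T A)] from Goodman's decomposition:
    det(A)^2 is distributed as a product of independent chi-square
    variables with degrees of freedom N, N-1, ..., 1."""
    return sum(digamma(k / 2.0) + math.log(2.0) for k in range(1, N + 1))


def empirical_mean_logdet(N, M, seed=0):
    """Monte Carlo estimate of E[log det(A^T A)] for iid N(0,1) entries."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(M):
        A = rng.standard_normal((N, N))
        _, logabsdet = np.linalg.slogdet(A)  # numerically stable log|det A|
        total += 2.0 * logabsdet             # log det(A^T A) = 2 log|det A|
    return total / M
```

For $N=30$ and a few thousand samples the two values agree to within the Monte Carlo error, which is how the asymptotic distribution of $\log|\det A|$ becomes accessible.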
In the case of $A$ with independent complex Gaussian entries, a similar analysis yields \n\t\t\\[ \\frac{\\log \\left| \\det A \\right| - \\frac{1}{2}\\log N! + \\frac{1}{4}\\log N}{\\sqrt{\\frac{1}{4}\\log N}} \n\t\t\t\\to \\mathscr{N}(0, 1). \\]\n\t\n\t\\noindent \n\tIn \\cite{NguVu2014}, the authors proved (\\ref{GOE result}) holds under just\n\tan exponential decay hypothesis on the entries. Their method yields an explicit rate of convergence\n\tand extends to handle the complex case. Then in \\cite{BaoPanZho2015}, the authors extended \n\t(\\ref{GOE result}) to the case where the matrix entries only require bounded fourth moment.\\\\\n\t\n\t\\noindent The analysis of determinants of non-Hermitian random matrices relies crucially on the\n\tassumption that the rows of the random matrix are independent. The fact that this independence\n\tno longer holds for Hermitian random matrices forces one to look for new methods to prove similar\n\tresults to those of the non-Hermitian case. \n\tNevertheless, the history of this problem mirrors the history of the non-Hermitian case. \n\t\n\t\\subsection{Hermitian matrices.}\\ In the 1980s, Weiss posed the Hermitian analogs of \\cite{Kom1967, Kom1968} as an open problem.\n\tThis problem was solved, many years later in \\cite{CosTaoVu2006}, and \n\tthen in \\cite[Theorem 34]{TaoVu2011} the authors proved the Hermitian analog of \n\t(\\ref{tao eq 1}). This left open the question of describing the limiting\n\tdistribution of the determinant. \\\\\t\n\t\n\t\\noindent In \\cite{DelLeC2000}, Delannay and Le Ca\\\"{e}r used\n\tthe explicit formula for the joint distribution of the eigenvalues to prove\n\tthat for $H$ an $N \\times N$ matrix drawn from the GUE,\n\t\t\\begin{equation}\n\t\t\\label{GUE CLT}\n\t\t \\frac{\\log \\left| \\det H \\right| - \\frac{1}{2}\\log N! + \\frac{1}{4}\\log N}{\\sqrt{\\frac{1}{2}\\log N}} \n\t\t\t\\to \\mathscr{N}(0, 1). 
\n\t\t\\end{equation}\n\tAnalogously, one has\n\t\t\\begin{equation}\n\t\t\\label{GOE CLT}\n\t\t\\frac{\\log \\left| \\det H \\right| - \\frac{1}{2}\\log N! + \\frac{1}{4}\\log N}{\\sqrt{\\log N}}\n\t\t\t\\to \\mathscr{N}(0, 1)\n\t\t\\end{equation}\n\twhen $H$ is drawn from the GOE.\n\tProofs of these central limit theorems also appear in \\cite{TaoVu2012, CaiLiaZho2015, BorLaC2015,\n\tEdeLaC2015}. For related results concerning other models of random matrices, see \\cite{Rou2007} and the references therein.\\\\\n\n\t\n\t\\noindent While the authors of \\cite{TaoVu2012} give their own proof of (\\ref{GUE CLT}) and (\\ref{GOE CLT}), their main interest is\n\tto establish such a result in the more general setting of Wigner matrices. \n\tIndeed, they show that in (\\ref{GOE CLT}), we may replace $H$ by $W$, a Wigner matrix whose entries' first\n\tfour moments match those of $\\mathscr{N}(0,1)$.\n\tThey also prove the analogous result in the complex case.\n\tIn this paper, we will relax this four moment matching assumption to a \n\ttwo moment matching assumption (see Theorem \\ref{main theorem}). \\\\\n\t\n\t\\noindent Finally, we mention that new interest in averages of determinants of random (Hermitian) matrices \n\thas emerged from the study of complexity of high-dimensional landscapes \\cite{FyoWil2007,AufBenCer2013}.\n\t\n\t\n\t\\subsection{Statement of results: The determinant. }\nThis subsection gives our main result and suggests extensions in connection with the general class of log-correlated random fields. Our theorems apply to Wigner matrices as defined below. 
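As a concrete illustration of the normalization used throughout (our own sketch, assuming NumPy; the entry distribution is taken Gaussian here, i.e.\ the GOE and GUE cases of the definition below), Wigner matrices can be sampled as follows.

```python
import numpy as np


def real_wigner(N, rng):
    """Real symmetric Wigner matrix: W_ij = x_ij / sqrt(N) off the diagonal,
    W_ii = sqrt(2/N) x_ii, with x iid mean 0, variance 1 (Gaussian here)."""
    U = np.triu(rng.standard_normal((N, N)), k=1) / np.sqrt(N)
    W = U + U.T
    np.fill_diagonal(W, np.sqrt(2.0 / N) * rng.standard_normal(N))
    return W


def complex_wigner(N, rng):
    """Complex Hermitian Wigner matrix: W_ij = (x_ij + i y_ij) / sqrt(2N)
    off the diagonal, W_ii = x_ii / sqrt(N)."""
    Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    U = np.triu(Z, k=1) / np.sqrt(2.0 * N)
    W = U + U.conj().T
    np.fill_diagonal(W, rng.standard_normal(N) / np.sqrt(N))
    return W
```

With this scaling the spectrum is asymptotically supported on $[-2,2]$ (the semicircle law), which is the convention behind the centering $+N/2$ in the theorem below.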
\n\t\n\n\t\n\t\\begin{definition}\n\t\\label{def wigner}\n \tA complex Wigner matrix, $W = \\left(w_{ij}\\right)$, is an $N\\times N$ \n\tHermitian matrix with entries \n\t\t\\[ W_{ii} = \\sqrt{\\frac{1}{N}}\\,x_{ii},\\, i = 1, \\dots, N, \\quad W_{ij} = \\frac{1}{\\sqrt{2N}} \\left(x_{ij} + {\\rm i} y_{ij}\\right), \\, \n\t\t1 \\leq i < j \\leq N.\\]\n\tHere \n\t$\\{x_{ii}\\}_{1\\leq i \\leq N}$, $\\{x_{ij}\\}_{1 \\leq i < j \\leq N}$, $\\{y_{ij}\\}_{1 \\leq i < j \\leq N}$ are \n\tindependent identically distributed random variables satisfying\n\t\t$\n\t\t\\mathbb{E} \\left(x_{ij}\\right) = 0, \\mathbb{E}\\left(x_{ij}^2\\right) = \\mathbb{E}\\left(y_{ij}^2\\right) = 1. \n\t\t$\n\tWe assume further \n \tthat the common distribution $\\nu$ of $\\{x_{ii}\\}_{1\\leq i \\leq N}$, $\\{x_{ij}\\}_{1 \\leq i < j \\leq N}$, $\\{y_{ij}\\}_{1 \\leq i < j \\leq N}$, has subgaussian decay, i.e.\\ there exists \n \t$\\delta_0 > 0$ such that\n \t\t\\begin{equation}\n\t\t\\label{subgaussian} \n\t\t\\int_\\mathbb{R} e^{\\delta_0 x^2}{\\rm d}\\nu(x) < \\infty. \n\t\t\\end{equation}\n\tIn particular, this means that all the moments of the entries of the matrix are bounded.\n\tIn the special case $\\nu = \\mathscr{N}(0,1)$, \n\t$W$ is said to be drawn from the Gaussian Unitary Ensemble (GUE). 
\n\t\n\tSimilarly, we define a real Wigner matrix to have entries of the form\n\t\t$ W_{ii} = \\sqrt{\\frac{2}{N}} x_{ii}$, $W_{ij} = \\sqrt{\\frac{1}{N}} x_{ij}$, \n\twhere $\\left\\{x_{ij}\\right\\}_{1\\leq i, j \\leq N}$ are independent identically distributed random variables satisfying\n\t\t$ \\mathbb{E}\\left(x_{ij}\\right) = 0, \\mathbb{E}\\left(x^2_{ij}\\right) = 1.$\n\tAs in the complex case, we assume\n\tthe common distribution $\\nu$ satisfies (\\ref{subgaussian}).\n\tIn the special case $\\nu = \\mathscr{N}(0,1)$,\n\t$W$ is said to be drawn from the Gaussian Orthogonal Ensemble (GOE).\n\t\\end{definition}\n\t\t\t\n\t\\noindent Our main result extends (\\ref{GUE CLT}) and (\\ref{GOE CLT}) to the above class of Wigner matrices. In particular,\n\tthis answers a conjecture from \\cite[Section 8]{TaoVu2014}, which asserts that the central limit theorem \n\t(\\ref{GOE CLT})\n\tholds for Bernoulli ($\\pm 1$) matrices. Note that in the following statement, our centering\n\tdiffers from (\\ref{GUE CLT}) and (\\ref{GOE CLT}) because we normalize our matrix entries\n\tto have variance of size $N^{-1}$. \n\t\t\t\n\n\t\\begin{theorem}\\label{main theorem}\n\t\tLet $W$ be a real Wigner matrix satisfying (\\ref{subgaussian}). Then\n\t\t\\begin{equation}\\label{main}\n\t\t\\frac{\\log\\left| \\det W\\right| + \\frac{N}{2} }{\\sqrt{\\log N}} \\to \\mathscr{N}(0, 1). \n\t\t\\end{equation}\n\t\tIf $W$ is a complex Wigner matrix satisfying (\\ref{subgaussian}), then\n\t\t\\begin{equation}\n\t\t\t\\frac{\\log\\left| \\det W \\right| + \\frac{N}{2}}{\\sqrt{\\frac{1}{2}\\log N}} \n\t\t\t\\to \\mathscr{N}(0, 1). \n\t\t\\end{equation}\n\t\\end{theorem}\n\t\n\t \\noindent Assumption (\\ref{subgaussian}) may probably be relaxed to a finite moment assumption, but we will not pursue this direction here. Similarly, it is likely that the matrix entries do not need to be identically distributed;\n\t only the first two moments need to match. 
However we consider the case of a \n\t unique $\\nu$ in this paper. \n\t\n\t\n\t\\begin{remark}\\label{rem:multidim}\n\tLet $H$ be drawn from the GUE normalized so that in the limit as $N\\to\\infty$, the distribution\n\tof its eigenvalues is supported on $[-1,1]$, and let\n\t\t\\[D_N(x) = -\\log\\left| \\det\\left( H - x\\right) \\right|. \\]\n\tIn \\cite{Kra2007}, Krasovsky proved that for $x_k \\in (-1,1)$, $k =1, \\hdots, m$, \n\t$x_j \\neq x_k$, uniformly in $\\Re \\left(\\alpha_k\\right) > - \\frac{1}{2}$, $\\mathbb{E}\\left( e^{-\\sum_{k=1}^m \\alpha_k D_N\\left(x_k\\right)} \\right)$ is asymptotic to\n\t\t\\begin{align}\\label{krasovsky}\n\t\t\t\\prod_{k=1}^m \\left( C\\left(\\frac{\\alpha_k}{2}\\right) \\left(1-x_k^2\\right)^{\\frac{\\alpha_k^2}{8}} \n\t\t\t N^{\\frac{\\alpha_k^2}{4}} e^{\\frac{\\alpha_k N}{2}\\left( 2x_k^2 -1 - 2\\log 2 \\right)}\\right)\n\t\t\t &\\prod_{1\\leq \\nu < \\mu \\leq m} \\left(2\\left| x_\\nu - x_\\mu \\right| \\right)^{-\\frac{\\alpha_\\nu \\alpha_\\mu}{2}}\n\t\t\t \\left(1 + {\\rm O}\\left( \\frac{\\log N}{N} \\right) \\right),\n\t\t\\end{align}\n\tas $N\\to\\infty$. Here $C(\\cdot)$ is the Barnes function.\n\tSince the above estimate holds uniformly for $\\Re \\left(\\alpha_k\\right) > -\\frac{1}{2}$, \n\t(\\ref{krasovsky}) shows that\n\tletting\n\t\t\\[\t\n\t\t\t\\widetilde D_N(x) = \\frac{D_N(x) - N\\left( x^2 - \\frac{1}{2} - \\log 2 \\right)}{\\sqrt{\\frac{1}{2} \\log N}},\n\t\t\\]\n\tthe vector $\\left( \\widetilde D_N\\left(x_1\\right), \\hdots, \\widetilde D_N\\left(x_m\\right) \\right)$ converges in distribution \n\tto a collection of $m$ independent standard Gaussians. Our proof of Theorem\n\t\\ref{main theorem} automatically extends this result to Hermitian Wigner matrices as defined above.\n\tIf one were to prove an analogous convergence for the GOE, our proof of Theorem\n\t\\ref{main theorem} would extend the result to real symmetric Wigner matrices as well. 
\n\t\\end{remark}\n\t\n\t\n\\begin{remark} \\label{rem:multidim2}\n\tWe note that (\\ref{krasovsky}) was proved for fixed, distinct\n\t$x_k$'s. If (\\ref{krasovsky}) holds for collapsing\n\t$x_k$'s, this means that fluctuations of the log-characteristic polynomial of the GUE become log-correlated for large dimension, \n\tas in the case of the Circular Unitary Ensemble \\cite{BourgadeZeta}. More specifically, let\n\t$\\widetilde D_N(\\cdot)$ be as above, and let $\\Delta$ denote the distance between two points $x, y$ in\n\t$(-1, 1)$. For $\\Delta \\geq 1\/N$, we expect the covariance between $\\widetilde D_N(x)$ and $\\widetilde D_N(y)$\n\tto behave like $\\frac{\\log\\left(1\/\\Delta\\right)}{\\log N}$, and for $\\Delta \\leq 1\/N$, we expect it to\n\tconverge to 1.\n\\end{remark}\n\n\n\\noindent \n\tOur method automatically\n\t establishes the content of Remark \\ref{rem:multidim2} for Wigner matrices,\n\t conditional on the knowledge of the GOE and GUE cases. \n\t The exact statement is as follows, and we omit the proof, which is strictly similar to that of Theorem \\ref{main theorem}. Denote\n\t \t$$\n\t \tL_N(z)=\\log|\\det(W-z)|-N\\int_{-2}^2\\log|x-z|\\,{\\rm d} \\rho_{\\rm sc}(x).\n\t \t$$\n\t \t\n\t \t\n\\begin{theorem}\\label{multldim}\n\t\tLet $W$ be a real Wigner matrix satisfying (\\ref{subgaussian}). Let \n\t\t$\\ell\\geq 1$, $\\kappa>0$ and let $(E^{(1)}_N)_{N\\geq 1},\\dots$, $(E^{(\\ell)}_N)_{N\\geq 1}$\n\t\tbe sequences of energy levels in $[-2+\\kappa,2-\\kappa]$.\n\t\tAssume that for all $i\\neq j$, for some constants $c_{ij}$ we have\n$$\n\\frac{\\log |E_N^{(i)}-E_N^{(j)}|}{-\\log N}\\to c_{ij}\\in[0,\\infty]\n$$\t\t\nas $N\\to\\infty$. 
Then\n\\begin{equation}\\label{eqn:logcor}\n\\frac{1}{\\sqrt{\\log N}}\\left(\nL_N\\left(E^{(1)}_N\\right),\\dots,\nL_N\\left(E^{(\\ell)}_N\\right)\n\\right)\n\\end{equation}\nconverges in distribution to a Gaussian vector with covariance $ (\\min(1,c_{ij}))_{1\\leq i,j\\leq \\ell}$ (with diagonal $1$ by convention), provided the same result holds for the GOE. \\\\\n\n\\noindent The same result holds for Hermitian Wigner matrices, assuming it is true in the GUE case, up to a change in the normalization from $\\sqrt{\\log N}$ to $\\sqrt{\\frac{1}{2}\\log N}$ in (\\ref{eqn:logcor}).\n\\end{theorem}\t \t\n\t \t\n\t \t\\noindent Theorem \\ref{multldim} says $L_N$ converges to a log-correlated field, provided this result holds for the Gaussian ensembles. It therefore suggests that the universal limiting behavior of extrema and convergence to Gaussian multiplicative chaos conjectured for unitary matrices in \\cite{FyoHiaKea12} extends to the class of Wigner matrices. \nTowards these conjectures, \\cite{ArgBelBou15,ChaMadNaj16,PaqZei17,FyoSim2015,LamPaq16} proved asymptotics on the maximum of characteristic polynomials of circular unitary and invariant ensembles, and \\cite{BerWebWon2017,NikSakWeb2018,Web2015} established convergence to the Gaussian multiplicative chaos, for the same models.\nWe refer to \\cite{Arg16} for a survey on log-correlated fields and their connections with random matrices, branching processes, the Gaussian free field, and analytic number theory. \n\n\n\n\n\\subsection{Statement of results: Fluctuations of Individual Eigenvalues. }\t\nWith minor modifications, the proof of Theorem \\ref{main theorem} also extends the results of \\cite{Gus2005} and \\cite{ORo2010} \nwhich describe the fluctuations of individual eigenvalues in the GUE and GOE cases, respectively. 
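The central limit theorem for the determinant can be probed numerically. The sketch below is our own illustration (assuming NumPy; the sizes $N$ and $M$ are arbitrary choices): it samples Bernoulli real Wigner matrices and forms the statistic $(\log|\det W| + N/2)/\sqrt{\log N}$, whose empirical distribution should be approximately standard Gaussian, keeping in mind that the convergence is slow, at scale $1/\sqrt{\log N}$.

```python
import numpy as np


def bernoulli_wigner(N, rng):
    """Real Wigner matrix with Bernoulli (+-1) entries, normalized as in the text."""
    x = rng.choice([-1.0, 1.0], size=(N, N))
    U = np.triu(x, k=1) / np.sqrt(N)
    W = U + U.T
    np.fill_diagonal(W, np.sqrt(2.0 / N) * rng.choice([-1.0, 1.0], size=N))
    return W


def det_clt_sample(N, M, seed=1):
    """M samples of the normalized statistic (log|det W| + N/2) / sqrt(log N)."""
    rng = np.random.default_rng(seed)
    stats = np.empty(M)
    for m in range(M):
        _, logabsdet = np.linalg.slogdet(bernoulli_wigner(N, rng))
        stats[m] = (logabsdet + N / 2.0) / np.sqrt(np.log(N))
    return stats
```

Even at moderate $N$ the sample mean and standard deviation are of the right order, though $O(1)$ corrections divided by $\sqrt{\log N}$ remain visible.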
\nBy adapting the method of \\cite{TaoVu2011}, \\cite{ORo2010} proves the following theorem under\nthe assumption that the first four moments of the matrix entries match those of a standard Gaussian.\nIn Appendix B, we show that the individual eigenvalue fluctuations of the GOE (GUE) also hold for real (complex) Wigner matrices in the sense of Definition \\ref{def wigner}.\nIn particular, the fluctuations of eigenvalues of Bernoulli matrices are Gaussian in the large dimension limit, which answers a question from \\cite{TaoVu2014}. \\\\\n\n\n\\noindent To state the following theorem, we follow the notation of Gustavsson \\cite{Gus2005} and write \n$k(N) \\sim N^\\theta$ to mean that $k(N) = h(N)N^\\theta$ where $h$ is a function such that for \nall ${\\varepsilon} > 0$, for large enough $N$,\n\t\\begin{equation}\n\t\\label{gustavsson sim}\n\tN^{-{\\varepsilon}}\\leq h(N)\\leq N^{\\varepsilon}.\n\t\\end{equation}\n\tIn the following, $\\gamma_k$ denotes the $k$\\textsuperscript{th} quantile of the semicircle law,\n\t\t\t\\begin{equation}\\label{quantiles}\n\t\t\t \\frac{1}{2\\pi}\\int_{-2}^{\\gamma_k} \\sqrt{\\left(4 - x^2 \\right)_+}{\\rm d}x = \\frac{k}{N}.\n\t\t\t\\end{equation}\n\n\\begin{theorem}\n\\label{fluctuations of individual eigenvalues}\n\tLet $W$ be a Wigner matrix satisfying (\\ref{subgaussian}) with eigenvalues $\\lambda_1 < \\lambda_2 <\n\t\\dots < \\lambda_N$. 
Consider $\\left\\{ \\lambda_{k_i} \\right\\}_{i=1}^m$ such that $0 < k_{i+1} - k_{i} \\sim N^{\\theta_i},\n\t0 < \\theta_i \\leq 1$, and $k_i \/N \\to a_i \\in (0,1)$ as $N \\to \\infty$.\n\tLet\n\t\\begin{equation}\n\t\\label{def: X_i}\n\t\tX_i = \\frac{\\lambda_{k_i} - \\gamma_{k_i}}{\\sqrt{ \\frac{4\\log N}{\\beta \\left(4 - \\gamma_{k_i}^2 \\right) N^2} }},\n\t\t\\quad i = 1, \\dots, m,\n\t\\end{equation}\n\twith $\\beta =1$ for real Wigner matrices, and $\\beta = 2$ for complex Wigner matrices.\n\tThen as $N \\to \\infty$,\n\t\\[\n\t\t\\P\\left\\{ X_1 \\leq \\xi_1, \\dots, X_m \\leq \\xi_m \\right\\} \\to \\Phi_{\\Lambda}\\left(\\xi_1, \\dots, \\xi_m \\right),\n\t\\]\n\twhere $\\Phi_\\Lambda$ is the cumulative distribution function for the $m$-dimensional normal distribution with\n\tcovariance matrix $\\Lambda_{i,j} = 1 - \\max\\left\\{ \\theta_k: i \\leq k < j < m \\right\\}$ if \n\t$i < j$, and $\\Lambda_{i,i} = 1$.\n\\end{theorem}\n\n\t\n\\noindent The above theorem has been known to follow from the homogenization result in \\cite{BouErdYauYin2016} (this technique gives a simple expression for the relative {\\it individual} positions of coupled eigenvalues from GOE and Wigner matrices) and fluctuations of mesoscopic linear statistics; see \\cite{LanSos2018} for a proof of eigenvalue fluctuations for Wigner and invariant ensembles. However, the technique from \\cite{BouErdYauYin2016} is not enough for Theorem \\ref{main theorem}, as the determinant depends on the positions of {\\it all} eigenvalues.\n\t\n\t\n\t\\subsection{Outline of the proof. }\\ In this section, we give the main steps of\n\tthe proof of Theorem \\ref{main theorem}. Our outline discusses the real case, but\n\tthe complex case follows the same scheme. \\\\\n\t\n\t\\noindent The main conceptual idea of the proof follows the three step strategy of \n\t\\cite{ErdPecRmSchYau2010,ErdSchYau2011}. 
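As an aside, the quantiles $\gamma_k$ defined above have no simple closed form, but the semicircle distribution function does, so they are easily computed by bisection. A minimal sketch (our own illustration, standard library only):

```python
import math


def semicircle_cdf(x):
    """Distribution function of the semicircle law on [-2, 2]:
    F(x) = 1/2 + (x sqrt(4 - x^2)/2 + 2 asin(x/2)) / (2 pi)."""
    x = max(-2.0, min(2.0, x))
    return 0.5 + (x * math.sqrt(4.0 - x * x) / 2.0
                  + 2.0 * math.asin(x / 2.0)) / (2.0 * math.pi)


def gamma_quantile(k, N, tol=1e-12):
    """Solve F(gamma_k) = k/N by bisection on [-2, 2]."""
    target = k / N
    lo, hi = -2.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if semicircle_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

One checks directly that $F'(x) = \frac{1}{2\pi}\sqrt{4-x^2}$ on $(-2,2)$, so this inverts exactly the defining relation for $\gamma_k$.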
With a priori localization of eigenvalues (step one, \\cite{ErdYauYin2012Rig, CacMalSch2015}), one can prove that the determinant has universal fluctuations after adding a small Gaussian noise (this second step relies on a stochastic advection equation from \\cite{BourgadeExtreme}). The third step proves by a density argument that the Gaussian noise does not change the distribution of the determinant, thanks to a perturbative moment matching argument as in \\cite{TaoVu2011,ErdYauYin2012Bulk}. We include Figure \\ref{diagram of implications} below to \n\thelp summarize\n\tthe argument.\\\\\n\t\n\t\n\n\t\\noindent {\\it First step: small regularization.}\n\tIn Section \\ref{smoothing section}, \n\twith Theorems \\ref{schlein} and \\ref{fixed energy thm}, we reduce the proof of Theorem \\ref{main theorem} to showing the convergence\n\\begin{equation}\\label{eqn:probaconv}\n \\frac{\\log\\left| \\det (W+\\mathrm{i}\\eta_0)\\right| + c_N}{\\sqrt{\\log N}} \\to \\mathscr{N}(0, 1)\n\t\t\t\t\\end{equation}\n\twith some explicit deterministic $c_N$, and the small regularization parameter\n\t\t\\begin{equation}\n\t\t\\label{def:eta0}\n\t\t\t\\eta_0 = \\frac{e^{ \\left(\\log N\\right)^{ \\frac{1}{4} } }}{N}.\n\t\t\\end{equation}\n\t\n\n\t\n\t\n\t\n\t\n\t\n\t\n\\noindent {\\it Second step: universality after coupling.} \nLet $M$ be a symmetric matrix which serves as the initial condition for the\n\tmatrix Dyson Brownian Motion (DBM) given by \n\t\t\\begin{equation}\n\t\t\t\\label{eq:DBM evolution} \n\t\t{\\rm d} M_t = \\frac{1}{\\sqrt{N}}{\\rm d}B^{(t)} - \\frac{1}{2}M_t{\\rm d}t.\n\t\t\\end{equation}\nHere $B^{(t)}$ is a symmetric $N \\times N$ matrix such that $B^{(t)}_{ij}\\left(i < j\\right)$ and $B_{ii}^{(t)}\/\\sqrt{2}$ are\nindependent standard Brownian motions.\n\tThe above matrix DBM induces \n\ta collection of independent standard Brownian motions (see \\cite{AndGuiZei2010}), $\\tilde{B}^{(k)}_t\/\\sqrt{2}$, $k = 1, \\dots, N$ 
such that the eigenvalues of $M$ satisfy the system of stochastic differential equations\n\t\\begin{align}\n\t\t\\label{eqn:DBM}\n\t\t{\\rm d}x_k(t) &= \\frac{{\\rm d}\\tilde{B}_{t}^{(k)}}{\\sqrt{N}} + \\left( \\frac{1}{N} \\sum_{l \\neq k}\n\t\t\t\\frac{1}{x_k(t) - x_l(t)} - \\frac{1}{2} x_k(t)\\right){\\rm d}t \t\n\t\\end{align}\n\twith initial condition given by the eigenvalues of $M$.\n\tIt has been known since \\cite{McK1969} that the system \n\t(\\ref{eqn:DBM}) has a unique strong solution.\n\tWith this in mind, we follow \\cite{BouErdYauYin2016} and introduce the following\n\tcoupling scheme. First, run the matrix DBM taking $\\tilde{W}_0$, a Wigner matrix, as the initial condition. Using\n\tthe induced Brownian motions, run the dynamics given by (\\ref{eqn:DBM}) using the eigenvalues \n\t$y_1 < y_2 < \\dots < y_N$\n\tof $\\tilde{W}_0$ as the initial condition. Call the solution to this system ${\\bm y}(\\tau)$.\n\tUsing the very same (induced) Brownian motions, run\n\tthe dynamics given by (\\ref{eqn:DBM}) again, this time using the eigenvalues of a GOE matrix, ${\\bm x}(0)$,\n\tas the initial condition. Call the solution to this system ${\\bm x}(\\tau)$. \\\\\n\t\n\t\n\t\\noindent Now fix $\\epsilon > 0$ and let \n\t\\begin{equation}\n\t\t\\label{def:tau}\n\t\t\t\\tau = N^{-\\epsilon}.\n\t\t\\end{equation}\n\tUsing Lemma \\ref{extreme}, we show that \n\t\t\\begin{equation}\n\t\t\\label{coupling at tau}\n\t\t \\frac{\\sum_{k = 1}^N \\log \\left|x_k(\\tau) + {\\rm i}\\eta_0\\right| - \n\t\t\t\t\\sum_{k = 1}^N \\log \\left|y_k(\\tau) + {\\rm i}\\eta_0\\right| }{\\sqrt{\\log N}}\n\t\t\\end{equation}\n\tand\n\t\t\\begin{equation} \\label{smoothing t} \\frac{\\sum_{k = 1}^N \\log \\left|x_k(0) + z_\\tau\\right| - \n\t\t\t\t\\sum_{k = 1}^N \\log \\left|y_k(0) + z_\\tau\\right| }{\\sqrt{\\log N}}\n\t\t\\end{equation}\n\tare very close. Here $z_\\tau$ is as in (\\ref{def:ztau}) with $z = {\\rm i} \\eta_0$. 
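The eigenvalue dynamics (\ref{eqn:DBM}) can be simulated with a naive explicit Euler scheme; the sketch below is our own illustration (assuming NumPy) and plays no role in the proof. Since the Brownian motions $\tilde{B}^{(k)}$ have quadratic variation $2t$, each Euler step carries noise of variance $2\,{\rm d}t/N$; explicit schemes can misbehave near eigenvalue collisions, so the singular drift is clipped and the step size kept small.

```python
import numpy as np


def dbm_step(x, dt, rng):
    """One explicit Euler step of
    dx_k = dB~_k / sqrt(N) + ((1/N) sum_{l != k} 1/(x_k - x_l) - x_k/2) dt,
    where B~_k / sqrt(2) are standard Brownian motions."""
    N = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)  # drop the l = k term (1/inf = 0)
    drift = np.sum(1.0 / diff, axis=1) / N - 0.5 * x
    drift = np.clip(drift, -50.0, 50.0)  # regularize near-collisions (ad hoc)
    noise = np.sqrt(2.0 * dt / N) * rng.standard_normal(N)
    return np.sort(x + drift * dt + noise)


def run_dbm(x0, tau, steps, seed=3):
    """Evolve the initial configuration x0 up to time tau."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(x0, dtype=float))
    dt = tau / steps
    for _ in range(steps):
        x = dbm_step(x, dt, rng)
    return x
```

Running this from the spectrum of one matrix while reusing the same noise for another initial condition mimics the coupling described above.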
\n\tThe significance of this is that since\n\t$z_\\tau \\sim \\mathrm{i}\\tau$, we can use\n\tLemma \\ref{expectation} and well-known central limit theorems which apply to nearly macroscopic scales \n\tto show that (\\ref{smoothing t}) has variance of order ${\\varepsilon}$. Consequently,\n\t(\\ref{coupling at tau}) is also small, and since ${\\bm x}(\\tau)$ is distributed as the eigenvalues of a GOE matrix,\n\twe have proved universality of the regularized determinant after coupling.\\\\\n\t\n\n\n\t\n\t\\noindent {\\it Third step: moment matching}. In Section \\ref{conclusion}, we conclude the proof \n\tof Theorem \\ref{main theorem}. First, we choose\n\t$\\tilde{W}_0$ so that $\\tilde{W}_\\tau$ and $W$ have entries whose first four moments are\n\tclose, as in \\cite{ErdYauYin2012Bulk}. With this approximate moment matching, we use a perturbative argument, as in \\cite{TaoVu2012},\n\tto prove that (\\ref{eqn:probaconv})\n\tholds for $W$ if and only if it holds for $\\tilde{W}_\\tau$. \t\n\tBut as (\\ref{coupling at tau}) is small, this means (\\ref{eqn:probaconv})\n\tholds for $W$ if and only if it holds for a GOE matrix. 
By (\\ref{GOE CLT}), this concludes the proof.\n\t\n\t\n\n\n\t\n\t\n\t\\begin{wrapfigure}[22]{l}{8cm}\n\t\\vspace{-1.2cm}\n\t\\begin{center}\n\t\\begin{tikzcd}[row sep = 3cm, column sep = 3cm] \n\t\t& W \\\\\n\t\t\\tilde{W}_0 \\arrow{r} \\arrow[rightarrow]{r} [anchor=center,yshift=2ex] \n\t\t\t{\\text{Matrix DBM } {\\rm d} B_{ij}} & \\tilde{W}_\\tau \\arrow[leftrightarrow]{u} [anchor=center,rotate=-90,yshift=2ex]{\\text{Moment Matching (3)}\n\t\t\\\\ [-85pt]\n\t\t{\\bm y(0)} \\arrow{r} [anchor=center,yshift=-2ex]{\\text{Eigenvalues DBM } {\\rm d} \\tilde{B}_k}& {\\bm y(\\tau)} \\\\ \n\t\t[-30pt]\n\t\t{{\\bm x}(0)} \\arrow[rightarrow]{r}[anchor=center,yshift=2ex]{ \\text{Eigenvalues DBM }{\\rm d} \\tilde{B}_k}& {{\\bm x}(\\tau)}\n\t\t\\arrow{u} [anchor=center,rotate=-90,yshift=2ex]{\\text{Coupling (2)}} \t\n\t\\end{tikzcd} \n\t\\caption{\n \t\tWe show (\\ref{main}) holds for $\\tilde{W}_\\tau$ if and only\n\t\tif it holds for $W$, and\n\t\twe prove that (\\ref{main}) holds for\n\t\t${\\bm x}(\\tau)$ if and only if (\\ref{main}) holds for $\\tilde{W}_\\tau$. \n\t\tSince ${\\bm x}(\\tau)$ is distributed as the eigenvalues of a GOE matrix, it satisfies\n\t\t(\\ref{GOE CLT}) and we conclude the proof.\n\t\tNote that $\\log| \\det \\tilde{W}_\\tau | = \\sum \\log \\left|y_k(\\tau) \\right|$ pathwise because\n\t\t$B$ induces $\\tilde{B}$. \n } \\label{diagram of implications}\n\t\\end{center}\n\\end{wrapfigure}\n\n\t\n\t\\subsection{Notation. }\n\tWe shall make frequent use of the notations $s_W$ and $m_{sc}$ in the remainder of this paper. We \n\tstate their definitions here for easy reference.\n\tLet $W$ be a Wigner matrix with eigenvalues $\\lambda_1 < \\lambda_2 < \\dots < \\lambda_N$.\n\tFor $\\Im (z) > 0$, define\n\t\t\\begin{equation}\n\t\t\\label{def:stransform}\n\t\ts_W(z) = \\frac{1}{N} \\sum_{k=1}^N \\frac{1}{\\lambda_k - z},\n\t\t\\end{equation}\n\tthe Stieltjes transform of $W$. 
\n\tNext, let\n\t\t\\begin{equation}\n\t\t\\label{def:msc}\n\t\t\tm_{sc}(z) = \\frac{-z + \\sqrt{z^2 -4}}{2},\n\t\t\\end{equation}\n\twhere the square root $\\sqrt{z^2 -4}$ is chosen with the branch cut in $[-2, 2]$ so that $\\sqrt{z^2-4} \\sim z$ \n\tas $z \\to \\infty$. Note that\n\t\t\\begin{equation}\n\t\t\t\\label{self consistent m}\n\t\t\tm_{sc}(z) + \\frac{1}{m_{sc}(z)} + z = 0.\n\t\t\\end{equation} \n\n\\noindent Finally, throughout this paper, unless indicated otherwise,\n\t$C$ ($c$) denotes a large (small) constant independent of all other parameters of the problem. \n\tIt may vary from line to line.\t\n\t\n\t\t\t\t\t\n\\section{Initial Regularization}\\label{smoothing section}\n\n\t\\noindent Let $y_1 < y_2 < \\dots 0$. \n\t\tFor any $\\tilde{E} > 0$, there exist constants $M_0, N_0, C, c, c_0 >0$ such that \n\t\t\t\t\t\\[ \\mathbb{P}\\left( \\left| \\Im \\left(s_W\\left(E + {\\rm i}\\eta\\right)\\right) - \n\t\t\t\t\t\\Im \\left(m_{sc}\\left(E + {\\rm i}\\eta\\right)\\right) \\right| \n\t\t\t\t\t\t\\geq \\frac{K}{N\\eta} \\right) \n\t\t\t\t\t\t\\leq \\frac{\\left(Cq\\right)^{cq^2}}{K^q} \\]\n\t\tfor all $\\eta \\leq \\tilde{\\eta}$, $|E| \\leq \\tilde{E}$, $K > 0$, $N > N_0$ such that $N\\eta > M_0$,\n\t\tand\n\t\t$q \\in \\mathbb{N}$ with $q \\leq c_0\\left(N\\eta\\right)^{\\frac{1}{8}}$.\n\t\\end{theorem}\t\n\t\\begin{remark}\n\t\tIn \\cite{ErdYauYin2012Rig}, the authors proved that for some positive constant $C_0$, and $N$ large enough,\n\t\t\t\\[ \\left| s_W\\left(E + {\\rm i}\\eta\\right) - m_{sc}\\left(E + {\\rm i}\\eta\\right) \\right| \\leq\n\t\t\t\t\\frac{e^{C_0(\\log\\log N)^2}}{N\\eta} \\] \n\t\tholds with high probability. Though this estimate is weaker than the estimate\n\t\tof Theorem \\ref{schlein}, it holds for a more general model of Wigner matrix in which\n\t\tthe entries of the matrix need not have identical variances. 
On the other hand, we require the \n\t\tstronger estimate in Theorem \ref{schlein} in our proof of Proposition\n\t\t\ref{smoothing}, and so we restrict ourselves to Wigner matrices as defined\n\t\tin Definition \ref{def wigner}.\n\t\tThe proof of Lemma \ref{expectation} also relies on Definition \ref{def wigner}. \n\t\end{remark}\n\t\begin{theorem}[Theorem 2.2 in \cite{BouErdYauYin2016}]\label{fixed energy thm}\n\t\tLet $\rho_1$ denote the first correlation function for the eigenvalues of an $N\times N$ Wigner matrix,\n\t\tand let $\rho(v) = \frac{1}{2\pi} \sqrt{\left( 4 - v^2 \right)_+}$. \n\t\tThen for any $F: \mathbb{R} \to \mathbb{R}$ continuous and compactly supported, and for any $\kappa > 0$,\n\t\twe have, \n\t\t\t\begin{equation}\label{FEU} \lim_{N\to\infty} \sup_{E \in [-2 + \kappa, 2 - \kappa]} \left |\frac{1}{\rho(E)} \n\t\t\t\int F(v)\rho_1\left( E + \frac{v}{N\rho(E)}\right){\rm d}v -\n\t\t\t\int F(v) \rho(v)\, {\rm d}v \right| = 0.\end{equation}\n\t\end{theorem}\n\t\begin{remark}\n\t\tIn fact Theorem 2.2 in \cite{BouErdYauYin2016} makes a much stronger statement, namely it states the\n\t\tanalogous convergence for all correlation functions in the case of generalized Wigner matrices.\n\t\end{remark}\n\t\n\t\begin{corollary}\label{repulsion}\n\t\tFor any small fixed $\kappa,\gamma > 0$ there exists $C,N_0>0$ such that for any $N\geq N_0$ and any interval $I \subset [-2 + \kappa, 2 - \kappa]$ we have\n\t\t\t\[ \mathbb{E}\left( \left| \left\{ y_k: y_k \in I \right\} \right|\right)\leq CN |I| + \gamma. \]\n\t\t\t\t\end{corollary}\n\t\begin{proof}\n\t\tIn Theorem \ref{fixed energy thm}, choosing $F$ to be an indicator of an interval of length\n\t\t$1$ gives an expected value ${\rm O}(1)$.\n\t\tSince the statement of Theorem \ref{fixed energy thm} holds uniformly in $E$, \n\t\twe may divide the interval $I$ into sub-intervals of length order $1\/N$\n\t\tto conclude.
\n\t\\end{proof}\n\n\t\\begin{corollary}\\label{micro fixed energy}\n\t\tLet $E\\in[-2+\\kappa,2-\\kappa]$ be fixed and $I_\\beta = (E - \\beta\/2, E + \\beta\/2)$ with $\\beta=\\oo(N^{-1})$. Then\n\t\t\t\\[ \\lim_{N\\to\\infty} \\mathbb{P}\\left( \\left| \\left\\{ y_k \\in I_\\beta \\right\\}\\right| = 0 \\right) = 1. \\]\n\t\\end{corollary}\n\t\\begin{proof}\n\t\tLet ${\\varepsilon}$ be any fixed small constant.\n\t\tLet $f$ be fixed, smooth, positive, equal to $1$ on $[-1,1]$ and $0$ on $[-2,2]^c$.\n\t\tThen\n\t\t$$\n\t\t\t\t\\mathbb{P}\\left( \\left| \\left\\{ y_k \\in I_\\beta \\right\\}\\right| \\geq 1 \\right) \\leq\n\t\t\t\t\\mathbb{E} \\left( \\left| \\left\\{ y_k \\in I_\\beta \\right\\}\\right| \\right) \\leq\n\t\t\t\t\\mathbb{E} \\left( \\sum_k f\\left(N(y_k-E)\/{\\varepsilon}\\right) \\right) \\leq 10{\\varepsilon},\n\t\t$$\n\t\twhere the last bound holds for large enough $N$ by Theorem \\ref{fixed energy thm}. \n\t\\end{proof}\n\t\\begin{proof}[Proof of Proposition \\ref{smoothing}]\n\tWe first choose $\\tilde{\\eta}<\\eta_0$ so that we can use Theorem \\ref{schlein} to estimate\n\t\t\\[\\mathbb{E}\\left( \\left| g\\left(\\eta_0\\right) - g\\left(\\tilde{\\eta}\\right) \\right| \\right),\\]\n\tand then take care of the remaining error using Corollaries \\ref{repulsion} and \\ref{micro fixed energy}.\n\tLet\n\t\t\\[ \\tilde{\\eta} = \\frac{d_N}{N}, \\ \\ {\\rm with} \\ \\ d_N = \\left( \\log N \\right)^{\\frac{1}{4}},\\]\n\tand observe that\n\t\t\\begin{equation}\\label{eqn:inter1} \\mathbb{E}\\left( \\left| g\\left(\\eta_0\\right) - g\\left(\\tilde{\\eta}\\right) \\right| \\right)= \n\t\t \\mathbb{E}\\left( \\left|\\int_{\\tilde{\\eta}}^{\\eta_0} N \\Im \\left( s_W(\\mathrm{i}} %\\newcommand{\\mi}{\\mathrm{i} t) -m_{sc}(\\mathrm{i}} %\\newcommand{\\mi}{\\mathrm{i} t)\\right)\n\t\t \t{\\rm d}t \\right| \\right)\n\t\t \\leq \n\t\t \\int_{\\tilde{\\eta}}^{\\eta_0} \\mathbb{E} (N\\left|\\Im \\left( s_{W_1}\\left({\\rm i}t\\right) - \n\t\t \tm_{sc} ({\\rm 
i}t) \\right|\\right))\\, {\\rm d}t. \\end{equation}\n\tIn estimating the right hand side above, we will use the notation\n\t\t\\[ \\Delta(t) = \\left|\\Im \\left( s_{W_1}({\\rm i}t) - \n\t\t \tm_{sc}({\\rm i}t \\right) \\right|. \\]\n\tFor $N$ sufficiently large, by Theorem \\ref{schlein} with $q =2$, we can write the right hand side of (\\ref{eqn:inter1}) as\n\t\t\\begin{multline}\n\t\t\t\\int_{\\tilde{\\eta}}^{\\eta_0} \\int_0^\\infty \\mathbb{P}\\left(N \\Delta\\left( t \\right) > u\\right){\\rm d}u{\\rm d}t \n\t\t\t= \\int_{\\tilde{\\eta}}^{\\eta_0}\\left( \n\t\t\t\t\\int_0^1 \\mathbb{P}\\left( \\Delta\\left( t \\right) > \\frac{K}{Nt}\\right)\\,\\frac{{\\rm d}K}{t} +\n\t\t\t\t\\int_{1}^\\infty \\mathbb{P}\\left( \\Delta\\left( t \\right) > \\frac{K}{Nt}\\right)\\frac{{\\rm d}K}{t} \\right){\\rm d}t \\\\\n\t\t\t\\leq \\int_{\\tilde{\\eta}}^{\\eta_0} \\left( \\frac{1}{t} + \\int_1^\\infty \\frac{C}{K^2} \\frac{dK}{t} \\right){\\rm d}t\n\t\t\t\\leq\\left(1 + C\\right)\\log\\left( \\frac{\\eta_0}{\\tilde{\\eta}}\\right) \n\t\t\t={\\rm o}\\left( \\sqrt{\\log N} \\right).\\label{est1}\n\t\t\\end{multline}\n\tNext we estimate\n\t $\\sum_{k} \\left(\\log \\left| y_k + {\\rm i}\\tilde{\\eta}\\right| - \\log \\left| y_k\\right|\\right)$, and \n\tthis will give us a bound for $\\mathbb{E}\\left(\\left| g(\\tilde{\\eta}) \\right|\\right)$. 
\n\tTaylor expansion yields\n\t\t\\[ \\sum_{|y_k| > \\tilde{\\eta}} \\left(\\log \\left| y_k + {\\rm i}\\tilde{\\eta}\\right| - \\log \\left| y_k\\right|\\right) \\leq\n\t\t\\sum_{|y_k| > \\tilde{\\eta}} \\frac{\\tilde{\\eta}^2}{y_k^2}.\n \\]\n\tDefine\n\t\t$N_1(u) = \\left| \\left\\{ y_k : \\tilde{\\eta} \\leq |y_k| \\leq u \\right\\}\\right|$.\n\tUsing integration by parts and Corollary \\ref{repulsion}, we have\n\t\t\\begin{equation} \\mathbb{E} \\left( \\sum_{|y_k| > \\tilde{\\eta}} \\frac{\\tilde{\\eta}^2}{y_k^2} \\right) =\n\t\t\t\\mathbb{E}\\left( \\int_{\\tilde{\\eta}}^\\infty \\frac{\\tilde{\\eta}^2}{y^2}\\,{\\rm d}N_1(y) \\right)\n\t\t\t= 2\\tilde{\\eta}^2 \\int_{\\tilde{\\eta}}^\\infty \\frac{\\mathbb{E}\\left(N_1(y)\\right)}{y^3}\\,{\\rm d}y = {\\rm O}\\left (d_N\\right).\\label{eqn:est2}\n\t\t\t\\end{equation}\n\tWe now estimate $\\sum_{|y_k| \\leq \\tilde{\\eta}} \\left(\\log \\left| y_k + {\\rm i}\\tilde{\\eta}\\right| - \\log \\left| y_k\\right|\\right)$.\n\tWe consider two cases.\n\tFirst, let $A_N = b_N\/N$ for some very small $b_N$, for example \n\t\t\\[ b_N = e^{-\\left(\\log N\\right)^{\\frac{1}{4}}}.\\]\n\tFor $u > 0$ we denote\n\t\t$ N_2(u) = \\left| \\left\\{ y_k \\,:\\, A_N < | y_k |\\leq u \\right\\} \\right|$. 
Then again using integration by parts and Corollary \ref{repulsion} we obtain\n\t\t\begin{multline*}\n\t\t\t\mathbb{E}\left(\n\t\t\t\sum_{A_N <| y_k| < \tilde{\eta}} \left(\log \left| y_k + {\rm i}\tilde{\eta}\right| - \log \left| y_k\right|\right)\n\t\t\t\right)\n\t\t\t= \mathbb{E}\left( \int_{A_N}^{\tilde{\eta}} \n\t\t\t\t\left( \log \left| y + {\rm i}\tilde{\eta}\right| - \log \left|y\right|\right){\rm d}N_2(y) \right)\\\n\t\t\t\t\leq \log\left(\sqrt{2}\right) \mathbb{E}\left( N_2\left( \tilde{\eta} \right) \right) + \n\t\t\t\t\int_{A_N}^{\tilde{\eta}} \frac{\mathbb{E}\left(N_2(y)\right)}{y}{\rm d}y\n\t\t\t\t= {\rm O}\left(d_N + \log\left( \frac{d_N}{b_N} \right)\right) = \oo\left( \sqrt{\log N}\right).\n\t\t\end{multline*}\nIt remains\n\tto estimate $\sum_{|y_k| < A_N}\left( \log\left| y_k + {\rm i}\tilde{\eta}\right| - \log\left|y_k\right|\right)$. \n\tBy Corollary \ref{micro fixed energy}, we have\n\t\t\begin{equation}\label{est3}\n\t\t \mathbb{P}\left(\sum_{|y_k| < A_N}\left( \log\left| y_k + {\rm i}\tilde{\eta}\right| - \log\left|y_k\right| \right) = 0\right) \n\t\t \t\geq \mathbb{P}\left( \left|\left\{ y_k \in [-A_N,A_N]\right\} \right|= 0 \right) \to 1.\n\t\t\end{equation}\n\t\noindent \n\tThe estimates (\ref{est1}) and (\ref{eqn:est2}) along with Markov's inequality, and the bound (\ref{est3}),\n\tconclude the proof.\n\t\end{proof}\n\section{Coupling of Determinants}\label{DBM}\n\t\n\t\noindent In this section, we use the coupled Dyson Brownian Motion introduced in \cite{BouErdYauYin2016} to\n\tcompare (\ref{smoothing t}) and (\ref{coupling at tau}).
\n\tDefine $\\tilde{W}_\\tau$ by running the matrix Dyson Brownian Motion\n\t(\\ref{eq:DBM evolution}) with initial condition $\\tilde{W}_0$\n\twhere $\\tilde{W}_0$ is a Wigner matrix with eigenvalues ${\\bm y}$.\n\tRecall that this induces a collection of Brownian motions\n\t$\\tilde{B}^{(k)}_t$ \n\tso that the system (\\ref{eqn:DBM}) with initial condition ${\\bm y}$\n\thas a (unique strong) solution \n\t${\\bm y}(\\cdot)$, and ${\\bm y}(\\tau)$ are the eigenvalues of $\\tilde{W}_\\tau$. Using the same (induced) Brownian motions as we used to define ${\\bm y}(\\tau)$, define\n\t${\\bm x}(\\tau)$ by running the dynamics (\\ref{eqn:DBM}) with initial condition given by the eigenvalues of a GOE matrix.\n\tUsing the result of Section \\ref{smoothing section} as an input to Lemma \\ref{extreme}, we now prove Proposition\n\t\\ref{prop:advection} which says that (\\ref{coupling at tau}) and (\\ref{smoothing t}) are asymptotically equal in law. \\\\\n\n\n\t\n\t\\noindent To study the coupled dynamics of ${\\bm x}(t)$ and ${\\bm y}(t)$, we\n\tfollow \\cite{LanSosYau2016, BourgadeExtreme}. 
For $\\nu \\in [0,1]$, let\n\t\t\\begin{equation}\n\t\t\\label{def:initialcondition}\n\t\t\t\\lambda^\\nu_k(0) = \\nu x_k + \\left(1 - \\nu\\right)y_k\n\t\t\\end{equation}\n\twhere ${\\bm x}$ is the spectrum of a GOE matrix, and ${\\bm y}$ is the spectrum of $\\tilde{W}_0$.\n\tWith this initial condition, we denote the (unique strong) solution to (\\ref{eqn:DBM}) by ${\\bm \\lambda}^{(\\nu)}(t)$.\n\tNote that\n$\n\t\t\t{\\bm \\lambda}^{(0)}(\\tau) = {\\bm y}(\\tau)$ and $\n\t\t\t{\\bm \\lambda}^{(1)}(\\tau) = {\\bm x}(\\tau).\n$ \nLet\n\t\t\\begin{equation}\n\t\t\\label{eqn:ft}\n\t\tf^{(\\nu)}_t(z) =e^{-\\frac{t}{2}} \\sum_{k=1}^N \\frac{u_k(t)}{\\lambda^{(\\nu)}_k(t)-z}, \n\t\t\\quad u_k(t) = \\frac{{\\rm d}}{{\\rm d}\\nu}\\lambda^{(\\nu)}_k(t),\n\t\t\\end{equation}\n\t(see \\cite{LanSosYau2016} for existence of this derivative) and observe that\n\t\t\\begin{equation}\n\t\t\\label{main obs}\n\t\t\\frac{{\\rm d}}{{\\rm d}\\nu} \\sum_k \\log\\left| \\lambda_k^{(\\nu)}(t) - z \\right| = e^{\\frac{t}{2}}\\Re \\left(f_t(z)\\right).\n\t\t\\end{equation}\n\tLemma \\ref{extreme} below from \\cite[Proposition 3.3]{BourgadeExtreme}, tells us that we may estimate\n\t$f_\\tau(z)$ by $f_0\\left(z_\\tau\\right)$, with $z_\\tau$ as in (\\ref{def:ztau}) and $\\tau$ as in (\\ref{def:tau}). 
\n\t\n\t\\begin{lemma}\n\t\t\\label{extreme}\n\t\tThere exists $C_0>0$ such that with $\\varphi=e^{C_0(\\log\\log N)^2}$, \n\t\tfor any $\\nu\\in[0,1]$, $\\kappa>0$ (small) and $D>0$ (large), \n\t\t there exists $N_0(\\kappa,D)$ so that for any $N\\geq N_0$ we have\n\t\t\t\\[ \\mathbb{P}\\left(\\left| f^{(\\nu)}_t(z) - f^{(\\nu)}_0\\left(z_t\\right) \\right| < \\frac{\\varphi}{N\\eta}\\ {\\rm for\\ all}\\ 00$, $\\tau = N^{-\\epsilon}$ and let $z_\\tau$ be as in (\\ref{def:ztau}) with $z = {\\rm i}\\eta_0$.\n\tThen for any $\\delta > 0$, \n\t\t\\[ \\lim_{N\\to\\infty} \\mathbb{P}\\left(\\left|\n\t\t\\sum_k \\left( \\log \\left|x_k(\\tau) + {\\rm i}\\eta_0\\right| - \\log \\left|y_k(\\tau) + {\\rm i}\\eta_0\\right| \\right) \n\t\t - \\sum_k \\left( \\log \\left|x_k(0) + z_\\tau\\right| - \\log \\left|y_k(0) + z_\\tau\\right| \\right)\\right| > \\delta \\right) = 0. \\]\n\t\\end{proposition}\n\n\\section{Conclusion of the Proof}\\label{conclusion}\n\n\t\\noindent We will conclude the proof of Theorem \\ref{main theorem} in the real symmetric case\n\tin two steps. The first step is to prove a Green's function comparison\n\ttheorem, and the second is to establish Theorem \\ref{main theorem} assuming Lemma\n\t\\ref{expectation}, proved in the Appendix. \n\n\t\\subsection{Green's Function Comparison Theorem. }\\label{moment matching section}\n\tIn this section, we first use Lemma \\ref{moment matching} to \n\tchoose a $\\tilde{W}_0$ so that $\\tilde{W}_{\\tau}$ given by\n\t(\\ref{eq:DBM evolution})\n\tand initial condition $\\tilde{W}_0$,\n\tmatches $W$ closely up to fourth moment. We will then prove Theorem \\ref{4 moment matching theorem},\n\twhich by the result of Section \\ref{smoothing section}, says that\n\t$ \\log | \\det \\tilde{W}_\\tau |$ and \n\t$ \\log \\left| \\det W \\right|$ have the same law as $N\\to\\infty$. 
\n\t\n\t\\begin{lemma}[Lemma 6.5 in \\cite{ErdYauYin2012Bulk}]\n\t\t\\label{moment matching}\n\t\tLet $m_3$ and $m_4$ be two real numbers such that\n\t\t\t\\begin{equation}\\label{moment condition}\n\t\t\t m_4 - m_3^2 - 1 \\geq 0, \\quad m_4 \\leq C_2 \n\t\t\t\\end{equation}\n\t\tfor some positive constant $C_2$. Let $\\xi^G$ be a Gaussian random variable with mean \n\t\t$0$ and variance $1$. Then for any sufficiently small $\\gamma > 0$ (depending on $C_2$),\n\t\tthere exists a real random variable $\\xi$, with subgaussian decay\n\t\tand independent of $\\xi^G$ such that the first four moments of\n\t\t\t\\[ \\xi' = \\left( 1 - \\gamma\\right)^{\\frac{1}{2}} \\xi_\\gamma + \\gamma^{\\frac{1}{2}}\\xi^G\\]\n\t\tare $m_1\\left(\\xi'\\right) = 0$, $m_2\\left(\\xi'\\right) = 1$, $m_3\\left(\\xi'\\right) = m_3$,\n\t\tand \n\t\t\t\\[ \\left|m_4\\left(\\xi'\\right) - m_4\\right| \\leq C\\gamma \\]\n\t\tfor some $C$ depending on $C_2$. \n\t\\end{lemma}\n\t\\noindent Now since $\\tilde{W}_\\tau$ is defined by independent Ornstein-Uhlenbeck processes in each entry, \n\tit has the same distribution as\n\t\t\\[ e^{-\\tau\/2} \\tilde{W}_0 + \\sqrt{1-e^{-\\tau}}W \\]\n\twhere $W$ is a GOE matrix independent of $\\tilde{W}_0$.\n\tSo choosing $\\gamma = 1-e^{-\\tau}$,\n\tLemma \\ref{moment matching} says we can\n\tfind $\\tilde{W}_0$ so that the first three moments of the entries of $\\tilde{W}_\\tau$ \n\tmatch the first three moments of the entries of $W$, and the fourth moments of the entries\n\tof each differ by ${\\rm O}(\\tau)$. \n\tOur next goal is to prove Theorem \\ref{4 moment matching theorem} which says that\n\twith $\\tilde{W}_\\tau$ constructed this way, if Theorem \\ref{main theorem} \n\tholds for $\\tilde{W}_\\tau$, then it holds for $W$. 
We first introduce stochastic domination and state\n\tTheorem \\ref{local law} which we will use in the proof.\n\t\n\t\\begin{definition}\n\t\tLet $X = \\left(X^N(u): N \\in \\mathbb{N}, u \\in U^N\\right), Y = \\left(Y^N(u): N \\in \\mathbb{N}, u \\in U^N\\right)$\n\t\tbe two families of nonnegative random variables, where $U^N$ is a possibly $N$-dependent parameter set.\n\t\tWe say that $X$ is stochastically dominated by $Y$, uniformly in $u$, if for every $\\epsilon > 0$ and $D > 0$, there exists $N_0(\\epsilon, D)$ such that\n\t\t\t\\[ \\sup_{u \\in U^N} \\mathbb{P}\\left[ X^N(u) > N^\\epsilon Y^N(u)\\right] \\leq N^{-D} \\]\n\t\tfor $N\\geq N_0$. Stochastic domination is always uniform in all parameters,\n\t\tsuch as matrix indices and spectral parameters, that are not explicitly fixed.\n\t\tWe will use the notation $X = O_\\prec(Y)$ or $X \\prec Y$ for the above property.\n\t\\end{definition}\n\t\n\t\\begin{theorem}[Theorem 2.1 in \\cite{ErdYauYin2012Rig}]\\label{local law}\n\tLet $W$ be a Wigner matrix satisfying (\\ref{subgaussian}). Fix $\\zeta > 0$ and define the domain\n\t\t\\[ S = S_N(\\zeta) := \\left\\{ E + {\\rm i}\\eta \\,:\\, |E| \\leq \\zeta^{-1}, \\,N^{-1+\\zeta} \\leq \\eta \\leq \\zeta^{-1} \\right\\}. 
\]\n\tThen uniformly for $i,j = 1,\hdots, N$ and $z \in S$, we have\n\t\begin{align*}\n\t\t s_W(z) &= m_{sc}(z) + {\rm O}_\prec\left( \frac{1}{N\eta} \right),\\\n\tG_{ij}(z)& = \left(W -z\right)^{-1}_{ij} = \n\t\tm_{sc}(z)\delta_{ij} + {\rm O}_\prec\left( \sqrt{\frac{\Im \left(m_{sc}(z)\right)}{N\eta}} + \frac{1}{N\eta} \right).\n\t\t\end{align*}\n\t\end{theorem}\n\n\t\begin{theorem}\label{4 moment matching theorem}\n\t\tLet $F: \mathbb{R} \to \mathbb{R}$ be smooth with compact support, and\n\t\tlet $W$ and $V$ be two Wigner matrices \n\t\tsatisfying (\ref{subgaussian}) such that for $1 \leq i, j \leq N$,\n\t\t\t\begin{numcases}{\mathbb{E}\left(w_{ij}^a\right) = }\n\t\t\t\t\label{moment matching assumption 3}\n\t\t\t\t\mathbb{E}\left(v_{ij}^a\right) & $a \leq 3$ \\\n\t\t\t\t\label{moment matching assumption 4}\n\t\t\t\t\mathbb{E}\left(v_{ij}^a\right)+ \OO(\tau) & $a = 4$,\n\t\t\t\end{numcases}\n\t\twhere $\tau$ is as in (\ref{def:tau}). Further, let $c_N$ be any deterministic sequence and define\n\t\t\t\[\n\t\t\t\tu_N(W) = \frac{\log | \det \left(W + {\rm i}\eta_0\right)| +c_N}{\sqrt{\log N}}\n\t\t\t\]\n\t\twhere $\eta_0$ is as in (\ref{def:eta0}). Then\n\t\t\t\begin{equation}\n\t\t\t\t\lim_{N\to \infty} \mathbb{E} \left(F\left( u_N(W)\right) - F\left( u_N(V) \right)\right) = 0.\label{eqn:enough}\n\t\t\t\end{equation} \n\t\end{theorem}\n\t\begin{proof}\t\t\n\t\tAs in \cite{TaoVu2012}, where the authors also used the following technique to analyze\n\t\tfluctuations of determinants, we show that the effect of substituting $W_{ij}$ in place of $V_{ij}$ in $V$ is\n\t\tnegligible, so that after making $N^2$ replacements we conclude the theorem.
\\\\\n\n\t\t\n\t\t\\noindent Fix $(i,j)$ and let\n\t\t$E^{(ij)}$ be the matrix whose elements are $E^{(ij)}_{kl} = \\delta_{ik}\\delta_{jl}$.\n\t\tLet $W_1$ and $W_2$ be two adjacent matrices in the swapping process described above. \n\t\tSince $W_1, W_2$ differ in just the $(i,j)$ and $(j,i)$ coordinates, we may write\n\t\t\t$$\n\t\t\t\tW_1 = Q + \\frac{1}{\\sqrt{N}} U, \\ \\ \\ \\ \\\n\t\t\t\tW_2 = Q + \\frac{1}{\\sqrt{N}} \\tilde{U}\n\t\t\t$$\n\t\twhere $Q$ is a matrix with $Q_{ij} = Q_{ji} = 0$, and\n\t\t\t$$\n\t\t\t\tU = u_{ij}E^{(ij)} + u_{ji}E^{(ji)}\\ \\ \\ \\ \\ \\ \n\t\t\t\t\\tilde{U} = \\tilde{u}_{ij}E^{(ij)} + \\tilde{u}_{ji}E^{(ji)}.\n\t\t$$\n\t\tImportantly $U,\\tilde{U}$ satisfy the same moment matching conditions\n\t\twe have imposed on $\\tilde{W}_\\tau$ and $W$. \n\t\tNow by the fundamental theorem of calculus, we have for any symmetric matrix $W$,\n\t\t\t\\begin{equation}\n\t\t\t\\label{log via fundamental theorem}\n\t\t\t\\log\\left|\\det(W + {\\rm i}\\eta_0)\\right| = \n\t\t\t\t\\sum_{k=1}^N \\log\\left| x_k + {\\rm i}\\eta_0\\right| = \\log\\left|\\det(W + {\\rm i})\\right| - N\\,\n\t\t\t\t \\Im \\int_{\\eta_0}^1\n\t\t\t\ts_W\\left({\\rm i}\\eta \\right){\\rm d}\\eta. 
\n\t\t\t\\end{equation}\n\t\t\tFrom the central limit theorems for linear statistics of Wigner matrices on macroscopic scales \\cite{LytPas2009}, \n\t\t\t $(\\log\\left|\\det(W + {\\rm i})\\right|-\\mathbb{E}(\\log\\left|\\det(W + {\\rm i})\\right|))\/\\sqrt{\\log N}$ \n\t\t\tconverges to $0$ in probability (the same result holds with $W$ replaced with $V$), and from Lemma \\ref{expectation} (which clearly holds with $1$ in place of $\\tau$), \n\t\t\t$(\\mathbb{E}(\\log\\left|\\det(W + {\\rm i})\\right|)-\\mathbb{E}(\\log\\left|\\det(V + {\\rm i})\\right|))\/\\sqrt{\\log N}\\to 0$.\n\t\t\tTherefore (\\ref{eqn:enough}) is equivalent to\n\t\t\t\t\\begin{equation}\n\t\t\\label{key expression}\n\t\t\t\\lim_{N\\to \\infty} \\mathbb{E} \\left(\\widetilde F\\left(N\\,\n\t\t\t\t \\Im \\int_{\\eta_0}^1\n\t\t\t\ts_W\\left({\\rm i}\\eta \\right){\\rm d}\\eta\\right) - \\widetilde F\\left(N\\,\n\t\t\t\t \\Im \\int_{\\eta_0}^1\n\t\t\t\ts_V\\left({\\rm i}\\eta \\right){\\rm d}\\eta\\right)\\right)= 0,\n\t\t\t\\end{equation}\n\t\t\twhere\n\t\t\t$$\n\t\t\t\\widetilde F(x)=F\\left(\\frac{\\mathbb{E}(\\log\\left|\\det(W + {\\rm i})\\right|)+c_N-x}{\\sqrt{\\log N}}\\right).\n\t\t\t$$\n\n\t\tWe now expand $s_{W_1}$ and $s_{W_2}$ around $s_{Q}$, and then\n\t\tto Taylor expand $\\widetilde F$.\n\t\tSo let \n\t\t\t\\[ R =R(z)= \\left(Q-z\\right)^{-1} \\text{ and } S=S(z)=\\left(W_1 - z\\right)^{-1}. 
\]\n\t\tBy the resolvent expansion\n\t\t\t\[ \n\t\t\tS = R - N^{-1\/2}RUR + \hdots + N^{-2}(RU)^4R - N^{-5\/2}(RU)^5 S,\n\t\t\t\]\n\t\twe can write\n\t\t\t\[N \int_{\eta_0}^1 s_{W_1}({\rm i}\eta){\rm d}\eta = \int_{\eta_0}^1\text{Tr}\left(S({\rm i}\eta)\right){\rm d}\eta = \n\t\t\t\int_{\eta_0}^1 \text{Tr}\left(R({\rm i}\eta)\right){\rm d}\eta +\n\t\t\t\left( \sum_{m=1}^4 N^{-m\/2}\hat{R}^{(m)} - N^{-5\/2}\Omega\right)\n\t\t\t\t:= \hat{R} + \xi \]\n\t\twhere \n\t\t\t\[ \hat{R} = \int_{\eta_0}^1 \text{Tr}\left(R({\rm i}\eta)\right){\rm d}\eta, \quad \hat{R}^{(m)} = (-1)^m \int_{\eta_0}^1 \text{Tr}\left((R({\rm i}\eta)U)^mR({\rm i}\eta)\right){\rm d}\eta \n\t\t\t\quad \text{and} \quad\n\t\t\t\Omega = \int_{\eta_0}^1 \text{Tr} \left( (R({\rm i}\eta)U)^5S({\rm i}\eta) \right){\rm d}\eta.\]\n\t\tThis gives us an expansion of $s_{W_1}$ around $s_{Q}$. \n\t\tNow Taylor expand $\widetilde F(\hat{R} + \xi)$ as\n\t\t\t\begin{equation} \label{taylor expansion} \widetilde F\left(\hat{R}+\xi\right) = \widetilde F\left(\hat{R}\right) + \n\t\t\t\t\widetilde F'\left(\hat{R}\right)\xi + \hdots + \frac{1}{5!}\widetilde F^{(5)}\left(\hat{R}+\xi'\right)\xi^5 = \sum_{m=0}^5 N^{-m\/2}A^{(m)}\n\t\t\t\end{equation}\n\t\twhere $\xi'$ lies between $0$ and $\xi$, and we have introduced the notation $A^{(m)}$ in order to arrange terms according to\n\t\tpowers of $N$. For example\n\t\t\t$$\n\t\t\t\tA^{(0)} = \widetilde F\left(\hat{R}\right),\ \ \n\t\t\t\tA^{(1)} = \widetilde F'\left(\hat{R}\right)\hat{R}^{(1)}, \ \ \n\t\t\t\tA^{(2)} = \widetilde F'\left(\hat{R}\right)\hat{R}^{(2)} + \frac{1}{2}\widetilde F''\left(\hat{R}\right)\left(\hat{R}^{(1)}\right)^2.\n\t\t\t$$\n\t\tMaking the same expansion for $W_2$, we record our two expansions as\n\t\t\t\[ \widetilde F\left(\hat{R} + \xi_i\right) = \sum_{m=0}^5 N^{-m\/2} A^{(m)}_i, \quad i = 1, 2, \]\n\t\twith $\xi_i$ corresponding to $W_i$.
With this notation, we have\n\t\t\t\begin{align*} \n\t\t\t\t\mathbb{E}\left(\widetilde F\left(\hat{R} + \xi_1\right)\right) - \mathbb{E} \n\t\t\t\t\left(\widetilde F\left(\hat{R} + \xi_2\right)\right)\n\t\t\t\t&= \mathbb{E} \left(\sum_{m=0}^5 N^{-m\/2}\left( A_1^{(m)} - A_2^{(m)} \right)\right) .\n\t\t\t\end{align*}\n\t\tNow only the first three moments of $U,\tilde{U}$ appear in the terms corresponding to $m=1, 2, 3$, so\n\t\tby the moment\n\t\tmatching assumption (\ref{moment matching assumption 3}), all of these terms are identically zero. \n\t\tNext, consider $m = 4$. Every term with first, second, and third moments of\n\t\t$U$ and $\tilde{U}$ is again zero, and what remains is\n\t\t\t\[ \mathbb{E} \left(\widetilde F'(\hat{R}) \left( \hat{R}_1^{(4)} - \hat{R}_2^{(4)}\right)\right). \] \n\t\t\noindent So we can discard $A^{(4)}$ if \n\t\t\t\begin{equation}\label{4th moment}\n\t\t\t \int_{\eta_0}^1 \left|\mathbb{E} \left( \n\t\t\t \t{\rm Tr}\left( (RU)^4R \right) - {\rm Tr}\left( (R\tilde{U})^4R\right) \right) \right| {\rm d}\eta \n\t\t\t \end{equation}\n\t\tis small. To see that this is in fact the case, we expand the traces, and apply Theorem \ref{local law}\n\t\talong with our fourth moment matching assumption (\ref{moment matching assumption 4}). \n\t\tSpecifically,\n\t\t\t\[ \text{Tr}\left( (RU)^4R\right) = \sum_j \left(\sum_{i_1, \hdots, i_8} \n\t\t\t\tR_{ji_1}U_{i_1i_2}R_{i_2i_3}\hdots U_{i_7i_8}R_{i_8j}\right). \]\n\t\tWriting the corresponding Tr for $W_2$ and applying the moment matching assumption, we see that we can\n\t\tbound (\ref{4th moment}) by\n\t\t\t\[ {\rm O}(\tau) \int_{\eta_0}^1\sum_j \sum_{i_1, \hdots, i_8} \mathbb{E} \left(\n\t\t\t\t\left| R_{ji_1}R_{i_2i_3}R_{i_4i_5}R_{i_6i_7}R_{i_8j} \right| \right){\rm d}\eta. 
\]\n\t\tTo bound the terms in the sum, we need to count the number of diagonal and off-diagonal terms in each product.\n\t\tTo do this, let us say $U_{pq}, \tilde{U}_{pq}$ and $U_{qp}, \tilde{U}_{qp}$ are the only non-zero entries of \n\t\t$U, \tilde{U}$. Then\n\t\teach of the sums over $i_1, \hdots, i_8$ is just a sum over $p, q$, and when $j \notin \{p,q\}$, \n\t\t$R_{ji_1}$ and $R_{i_8j}$ are certainly off-diagonal entries of $R$. This means we can apply Cauchy-Schwarz\n\t\tto write that for any $\gamma > 0$,\n\t\t\t\begin{align*}\n\t\t\t\t{\rm O}(\tau) \int_{\eta_0}^1\sum_{j \notin \{p, q\}} \sum_{i_1, \hdots, i_8} \mathbb{E} \left(\n\t\t\t\t\t\left| R_{ji_1}R_{i_2i_3}R_{i_4i_5}R_{i_6i_7}R_{i_8j} \right| \right){\rm d}\eta \n\t\t\t\t= {\rm O}\left (\tau N^{1 + 2\gamma} \int_{\eta_0}^1 \frac{1}{N\eta}{\rm d}\eta\right) = \n\t\t\t\t\t{\rm O}\left (N^{2\gamma - \epsilon} \log(N)\right).\n\t\t\t\end{align*}\n\t\tSimilarly, \n\t\t\t\begin{align*}\n\t\t\t\t{\rm O}(\tau) \int_{\eta_0}^1\sum_{j \in \{p, q\}} \sum_{i_1, \hdots, i_8} \mathbb{E} \left(\n\t\t\t\t\t\left| R_{ji_1}R_{i_2i_3}R_{i_4i_5}R_{i_6i_7}R_{i_8j} \right| \right){\rm d}\eta \n\t\t\t\t= {\rm O}\left (\tau N^{\epsilon \/2} \right) = \n\t\t\t\t\t{\rm O}\left (N^{- \epsilon\/2} \right).\n\t\t\t\end{align*}\n\t\tSince $A^{(4)}$ has a pre-factor of $N^{-2}$ in (\ref{taylor expansion}), and the above holds\n\t\tfor every choice of $\gamma > 0$, in our entire \n\t\tentry swapping scheme starting from $V$ and ending with $W$, the corresponding error\n\t\tis ${\rm o}(1)$.\\\n\t\t\n\t\t\noindent Lastly we comment on the error term $A^{(5)}$. All terms in $A^{(5)}$ not involving \n\t\t$\Omega$ can be dealt\n\t\twith as above. The only term involving $\Omega$ is $\widetilde F'(\hat{R})\Omega$, and to deal with this,\n\t\twe can expand the expression for $\Omega$ as above.
We do not have any\n\t\tmoment matching condition for the fifth moments of $U, \tilde{U}$, but (\ref{subgaussian}) means\n\t\tthat their fifth\n\t\tmoments are bounded, which is enough for our purpose since $A^{(5)}$ has a pre-factor of $N^{-5\/2}$ above. \n\t\t\end{proof}\t\n\t\t\n\t\subsection{Proof of Theorem \ref{main theorem}. }\label{sub:end}\n\tIn this section we first prove Proposition \ref{variance} and, using Lemma \n\t\ref{expectation}, we\n\tconclude the proof of Theorem \ref{main theorem}.\n\t\n\t\n\t\n\t\n\t\begin{proposition}\label{prop:variance} Recall $\tau=N^{-{\varepsilon}}$.\n\t\label{variance} There exist ${\varepsilon}_0,C$ such that for any fixed $0<{\varepsilon}<{\varepsilon}_0$, for large enough $N$, we have\n\t\[ {\rm Var}\left( \sum_k\log|x_k(0)+{\rm i}\tau|\right) \leq C (1+{\varepsilon} \log N).\]\n\end{proposition}\t\n\begin{proof}\nWe outline two proofs, both trivial extensions of existing linear statistics asymptotics on global scales to the case of almost macroscopic scales. The tool for this extension is the rigidity estimate from \cite{ErdYauYin2012Rig}: for any $c,D>0$, there exists $N_0$ such that for any $N\geq N_0$ and $k\in\llbracket 1,N\rrbracket$ we have\n\begin{equation}\label{eqn:rigidity}\n\mathbb{P}\left(|x_k-\gamma_k|>N^{-\frac{2}{3}+c}\min(k,N+1-k)^{-\frac{1}{3}}\right)\leq N^{-D}.\n\end{equation}\n\n\n\noindent For the first proof, we use (\ref{eqn:rigidity}) to bound all the error terms in the proof of \cite[Theorem 3.6]{LytPas2009} (these error terms all depend on \cite[Theorem 3.5]{LytPas2009}, which can be improved via \n(\ref{eqn:rigidity}) to ${\rm Var}(u_N(t))\leq N^c(1+|t|)$ and ${\rm Var}\left(\mathcal{N}_N(\varphi)\right)\leq N^c\|\varphi\|_{\rm Lip}^2$).
What we obtain is that if $\\varphi$ (possibly depending on $N$) satisfies $\\int|t|^{100}\\hat \\varphi(t) 5\/\\tau$, integration by parts shows $\\left|\\hat{\\varphi}_N(\\xi)\\right| = {\\rm O}\\left (\\frac{1}{\\xi^2\\tau}\\right)$. \n\tWhen $5 < \\xi < 5\/\\tau$, first note\n\t\t\\begin{align*}\n\t\t\\int_0^{\\frac{5}{\\tau}} \\sin\\left(\\xi \\tau x\\right) \\frac{x}{x^2 + 1} \\, {\\rm d}x &= C + \n\t\t\t\\int_1^{\\frac{5}{\\tau}} \\frac{\\sin\\left(\\xi \\tau x\\right)}{x} \\,{\\rm d}x \n\t\t\t= C + \\int_{\\xi \\tau}^{1} \\frac{\\sin y}{y} \\,{\\rm d}y + \\int_1^{5\\xi} \\frac{\\sin y}{y}\\,{\\rm d}y.\n\t\t\\end{align*}\n\tUsing $|\\sin y| < |y|$, we see that the first term is ${\\rm O}(1)$, and integrating by parts, we see\n\tthat the second term is ${\\rm O}(1)$ as well. This means \n\t\t\\[ \\int |\\xi| \\left| \\hat{\\varphi}_N(\\xi) \\right|^2{\\rm d}\\xi \\leq C +C \n\t\t\\int_{5}^{\\frac{5}{\\tau}} \\frac{1}{\\xi} \\,{\\rm d}\\xi = {\\rm O}\\left (1+|\\log \\tau|\\right), \n\t\t\\]\n\twhich concludes the proof.\\\\\n\t\n\\noindent The second proof is similar but more direct. 
Theorem 3 in \\cite{KhoKhoPas} implies that for $z_1=\\mathrm{i}\\eta_1,z_2=\\mathrm{i}\\eta_2$ at macroscopic distance from the real axis, and $\\eta_1=\\im z_1>0,\\eta_2=\\im z_2<0$, we have\n\t$$\n\t\\left|{\\rm Cov}\\left(\\sum_k\\frac{1}{z_1-x_k},\\sum_k\\frac{1}{z_2-x_k}\\right)\\right|\\leq\\frac{C}{(\\eta_1-\\eta_2)^2}+f(z_1,z_2)+\\OO(N^{-1\/2}),\n\t$$\n\twhere $f$ is a function uniformly bounded on any compact subset of $\\mathbb{C}^2$.\n\tUsing (\\ref{eqn:rigidity}), one easily obtains that the formula above holds uniformly for $|\\im z_1|,|\\im z_2|>N^{-1\/10}$, with the deteriorated error term $\\OO(N^{-1\/10})$, for example.\t\nNote that\n$$\n\\log\\left|\\det(W + {\\rm i}\\eta)\\right| = \\log\\left|\\det(W + {\\rm i})\\right| - N\\,\n\t\t\t\t \\Im \\int_{\\eta}^1\n\t\t\t\ts_W\\left({\\rm i}x \\right){\\rm d}x, \n$$\nand $\\log\\left|\\det(W + {\\rm i})\\right|$ has fluctuations of order 1 due to the above macroscopic central limit theorems. 
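The covariance bound above yields the logarithmic variance estimate used in the next step by a direct computation; a sketch, keeping only the leading $C\/(\eta_1+\eta_2)^2$ term for heights $\eta_1,\eta_2\in[\eta,1]$ on opposite sides of the real axis:

```latex
\[
\iint_{[\eta,1]^2}\frac{{\rm d}\eta_1\,{\rm d}\eta_2}{(\eta_1+\eta_2)^2}
=\int_\eta^1\left(\frac{1}{\eta+\eta_2}-\frac{1}{1+\eta_2}\right){\rm d}\eta_2
=\log\frac{(1+\eta)^2}{4\eta}
\leq|\log\eta|,
\]
```

for $0<\eta\leq 1$, since $2\log(1+\eta)\leq\log 4$.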
\nFor $\\eta>N^{-1\/10}$, the variance of the above integral can be bounded by \n$\n\\iint_{[\\eta,1]^2}\\frac{1}{|\\eta_1+\\eta_2|^2}\\,{\\rm d} \\eta_1{\\rm d} \\eta_2\\leq C|\\log \\eta|,\n$\nwhich concludes the proof.\n\t\\end{proof}\n\t\n\t\n\t\n\n\n\n\\noindent From (\\ref{GOE CLT}) and Proposition \\ref{smoothing}, for some explicit deterministic $c_N$ we have\n\\begin{equation}\\label{eqn:1}\n\\frac{ \t\t\t\\sum_{k =1}^N \\log \\left|x_k(\\tau) + {\\rm i} \\eta_0\\right|+c_N}{\\sqrt{\\log N}}\\to\\mathscr{N}(0,1),\n\\end{equation}\nand Proposition \\ref{prop:advection} implies that\n$$\n\\frac{ \t\t\t\\sum_{k =1}^N \\log \\left|y_k(\\tau) + {\\rm i} \\eta_0\\right|+c_N}{\\sqrt{\\log N}}\n+\\frac{ \t\t\t\\sum_{k =1}^N \\log \\left|x_k(0) + z_\\tau\\right|-\\sum_{k =1}^N \\log \\left|y_k(0) + z_\\tau\\right|}{\\sqrt{\\log N}}\n\\to\\mathscr{N}(0,1).\n$$\n\n\n\t\\noindent Lemma \\ref{expectation} and Proposition \\ref{variance} show that the second term above, call it $X$, satisfies $\\mathbb{E}(X^2)\\leq C(1+{\\varepsilon}\\log N)$.\nA left (right) transition point of \\(f\\) is a \\(t \\in I \\setminus \\operatorname{supt}(f)\\) such that for every \\(\\epsilon > 0\\),\n\\((t,t+\\epsilon) \\cap \\operatorname{supt}(f) \\ne \\emptyset\\)\n(respectively \\((t-\\epsilon,t) \\cap \\operatorname{supt}(f) \\ne \\emptyset\\)). 
\nAn \\emph{orbital} of \\(f\\) is a component of its support.\nAn orbital of \\(f\\) is \\emph{positive} if \\(f\\) moves elements of the orbital to\nthe right;\notherwise it is negative.\nIf \\(f\\) has only finitely many orbitals, then the left (right) transition points\nof \\(f\\) are precisely the left (right) end points of its orbitals.\n\nA precursor to the notion of a \\emph{geometrically fast} generating set is that of a\n\\emph{geometrically proper} generating set.\nA set \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is\n\\emph{geometrically proper} if there is no element of \\(I\\) which is\na left transition point of more than one element of \\(X\\)\nor a right transition point of more than one element of \\(X\\).\nObserve that any geometrically proper generating set with only finitely many transition points\nis itself finite.\nFurthermore, geometrically proper sets are equipped with a canonical ordering\ninduced by the usual ordering on the least transition points of their elements.\nWhile the precise definition of \\emph{geometrically fast}\nwill be postponed until Section \\ref{FastBumpsSec},\nthe following statements describe the key features of the definition:\n\\begin{itemize}\n\n\\item geometrically fast generating sets are geometrically proper.\n\n\\item if \\(\\{a_i \\mid i < n\\}\\) is geometrically proper, then\nthere is a \\(k \\geq 1\\) such that \n\\(\\{a_i^k \\mid i < n\\}\\) is geometrically fast.\n\n\\item if \\(\\{a_i^k \\mid i < n\\}\\) is geometrically fast and \\(k \\leq k_i\\) for \\(i < n\\), then\n\\(\\{a_i^{k_i} \\mid i < n\\}\\) is geometrically fast.\n\n\\end{itemize}\n\\noindent\nOur main result is that the isomorphism types of groups with geometrically fast generating sets\nare determined by their qualitative dynamics.\nSpecifically, we will associate a \\emph{dynamical diagram} to each geometrically fast set\n\\(\\{a_i \\mid i < n\\} \\subseteq {\\operatorname{Homeo}_+(I)}\\) which has finitely many transition points.\nRoughly 
speaking, this is a record of the relative order of the orbitals\nand transition points of the various \\(a_i\\), as well as the orientation of their orbitals.\nIn the following theorem \\(M_X\\) is a certain finite set of points chosen from the orbitals\nof elements of \\(X\\) and \\(M \\gen{X} = \\{t g \\mid t \\in M_X \\textrm{ and } g \\in \\gen{X}\\}\\).\nThese points will be chosen such that any nonidentity element of \\(\\gen{X}\\) moves a point in \\(M\\gen{X}\\).\n\n\\begin{thm} \\label{CombToIso}\nIf two geometrically fast sets \\(X , Y \\subseteq {\\operatorname{Homeo}_+(I)}\\) have only finitely many transition points\nand have isomorphic dynamical diagrams, then the induced\nbijection between \\(X\\) and \\(Y\\) extends to an isomorphism of\n\\(\\gen{X}\\) and \\(\\gen{Y}\\) (i.e. \\(\\gen{X}\\) is \\emph{marked isomorphic} to \\(\\gen{Y}\\)).\nMoreover, there is an order preserving bijection\n\\(\\theta : M \\gen{X} \\to M \\gen{Y}\\) such that\n\\(f \\mapsto f^\\theta\\) induces the isomorphism \n\\(\\gen{X} \\cong \\gen{Y}\\).\n\\end{thm}\n\nWe will also establish that under some circumstances the map \\(\\theta\\) can be extended to a\ncontinuous order preserving surjection \\(\\hat \\theta : I \\to I\\).\n\n\\begin{thm} \\label{SemiConjThm}\nFor each finite dynamical diagram \\(D\\), there is a geometrically fast \\(X_D \\subseteq {\\operatorname{PL}_+(I)}\\)\nsuch that if \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is geometrically fast and has dynamical diagram \\(D\\),\nthen there is a marked isomorphism \\(\\phi: \\gen{X} \\to \\gen{X_D}\\) and a continuous\norder preserving surjection \\(\\hat \\theta : I \\to I\\) such that\n\\(f \\hat \\theta = \\hat \\theta \\phi (f)\\) for all \\(f \\in \\gen{X}\\).\n\\end{thm}\n\nTheorem \\ref{CombToIso} has two immediate consequences.\nThe first follows from the readily verifiable fact that any dynamical diagram can\nbe realized inside of \\(F\\) (see, e.g., \\cite[Lemma 4.2]{CFP}).\n\n\\begin{cor} 
\\label{EmbeddInF}\nAny finitely generated\nsubgroup of \\({\\operatorname{Homeo}_+(I)}\\) which admits a geometrically fast generating set\nembeds into Thompson's group \\(F\\).\n\\end{cor}\n\\noindent\nSince by \\cite{brin+squier} \\(F\\) does not contain nontrivial free products of groups,\nsubgroups of \\({\\operatorname{Homeo}_+(I)}\\) which admit geometrically fast generating sets are not free products.\nIt should also be remarked that while our motivation comes from studying\nthe groups \\(F\\) and \\({\\operatorname{PL}_+(I)}\\), the conclusion of Corollary \\ref{EmbeddInF} remains valid if\n\\(F\\) is replaced by, e.g. \\(\\operatorname{Diff}^\\infty_+ (I)\\).\n\n\\begin{cor} \\label{AlgFast}\nIf \\(\\{f_i \\mid i < n\\}\\) is geometrically fast, then\n\\(\\gen{f_i \\mid i < n}\\) is marked isomorphic to\n\\(\\gen{f_i^{k_i} \\mid i < n}\\) for any choice of \\(k_i \\geq 1\\).\n\\end{cor}\n\nIt is natural to ask how restrictive having a geometrically fast or geometrically\nproper generating set is.\nThe next theorem shows that many finitely generated subgroups of \\({\\operatorname{PL}_+(I)}\\)\nin fact do have at least a geometrically proper generating set.\n\n\\begin{thm} \\label{GeoProperGen}\nEvery \\(n\\)-generated one-orbital subgroup of \\({\\operatorname{PL}_+(I)}\\)\neither contains an isomorphic copy of \\(F\\) or else admits an\n\\(n\\)-element geometrically proper generating set.\n\\end{thm}\n\n\\noindent\nNotice that every subgroup of \\({\\operatorname{Homeo}_+(I)}\\) is contained in a direct product of\none-orbital subgroups of \\({\\operatorname{Homeo}_+(I)}\\). 
\nThus if one's interest lies in studying the structure of subgroups of\n\\({\\operatorname{PL}_+(I)}\\) which do not contain copies of \\(F\\), then\nit is typically possible to restrict one's attention to groups\nadmitting geometrically proper generating sets.\nThe hypothesis of not containing an isomorphic copy of \\(F\\) in Theorem \\ref{GeoProperGen}\ncannot be eliminated.\nThis is a consequence of the following theorem and the fact that there are finite\nindex subgroups of \\(F\\) which are not isomorphic to \\(F\\) (see \\cite{bleak+wassink}).\n\n\\begin{thm} \\label{NoGeoProperGen}\nIf a finite index subgroup of \\(F\\) is isomorphic to \\(\\gen{X}\\) for some geometrically proper\n\\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\), then it is isomorphic to \\(F\\).\n\\end{thm}\n\\noindent\nWe conjecture, however, that every finitely generated subgroup of \\(F\\) is bi-embeddable\nwith a subgroup admitting a geometrically fast generating set.\n\nWhile the results of this paper do of course readily adapt to \\({\\operatorname{Homeo}_+(\\R)} \\cong {\\operatorname{Homeo}_+(I)}\\),\nit is important to keep in mind that \\(\\pm \\infty\\) must be allowed as possible\ntransition points when applying the definition of geometric properness and hence\ngeometric fastness.\nFor example, it is easy to establish that \\(\\gen{t + \\sin (t), t + \\cos (t)}\\) contains\na free group using the \\emph{ping-pong lemma} stated above\n(the squares of the generators generate a free group).\nMoreover, once we define \\emph{geometrically fast} in Section \\ref{FastBumpsSec},\nit will be apparent that the squares of the generators satisfy all of the requirements\nof being geometrically fast except that the set is not geometrically proper\n(since, e.g., \\(\\infty\\) is a right transition point of both functions).\nAs noted above, this group does not embed into \\(F\\) and thus\ndoes not admit a geometrically fast (or even a geometrically proper) generating set.\nSee Example 
\\ref{InfiniteBumpEx} below for a more detailed discussion of a related example.\n\nThe paper is organized as follows.\nWe first review some standard definitions, terminology and notation in Section \\ref{DefSec}.\nIn Section \\ref{FastBumpsSec}, we will give a formal definition of \\emph{geometrically fast}\nand a precise definition of what is meant by a \\emph{dynamical diagram}.\nSection \\ref{GeoFastCriteriaSec} gives a reformulation of \\emph{geometrically fast}\nfor finite subsets of \\({\\operatorname{Homeo}_+(I)}\\) which facilitates algorithmic checking.\nThe proof of Theorem \\ref{CombToIso} is then divided between Sections\n\\ref{PingPongSec} and \\ref{CombToIsoSec}.\nThe bulk of the work is in\nSection \\ref{PingPongSec}, which uses an analog of the \\emph{ping-pong argument}\nto study the dynamics of geometrically fast sets of one orbital homeomorphisms.\nSection \\ref{CombToIsoSec} shows how this analysis implies \nTheorem \\ref{CombToIso} and how to derive its corollaries.\nIn Section \\ref{SemiConjSec}, we will prove Theorem \\ref{SemiConjThm}.\nThe group \\(F_n\\), which is the \\(n\\)-ary analog of Thompson's group \\(F\\), is\nshown to have a geometrically fast generating set in Section \\ref{FnSec}.\nSection \\ref{ExcisionSec} examines when bumps in\ngeometrically fast generating sets are extraneous and\ncan be excised without affecting the marked isomorphism type.\nProofs of Theorems \\ref{GeoProperGen} and \\ref{NoGeoProperGen} are given in Section \\ref{GeoProperSec}.\nFinally, the concept of \\emph{geometrically fast}\nis abstracted in Section \\ref{AbstractPingPongSec},\nwhere a generalization of Theorem \\ref{CombToIso} is stated and proved, as well as corresponding embedding\ntheorems for Thompson's groups \\(F\\), \\(T\\), and \\(V\\).\nThis generalization in particular covers infinite geometrically fast subsets of \\({\\operatorname{Homeo}_+(I)}\\).\nEven in the context of geometrically fast sets \\(X \\subseteq 
{\\operatorname{Homeo}_+(I)}\\) with only finitely many\ntransition points, this abstraction gives a new way of understanding \\(\\gen{X}\\)\nin terms of symbolic manipulation.\n\n\n\\section[Preliminaries]{Preliminary definitions, notation and conventions}\n\n\\label{DefSec}\n\nIn this section we collect a number of definitions and conventions\nwhich will be used extensively in later sections.\nThroughout this paper, the letters \\(i,j,k,m,n\\) will be assumed to range\nover the nonnegative integers unless otherwise stated.\nFor instance, we will write \\((a_i \\mid i < k)\\) to denote a sequence\nwith first entry \\(a_0\\) and last entry \\(a_{k-1}\\).\nIn particular, all counting and indexing starts at 0 unless stated otherwise.\nIf \\(f\\) is a function and \\(X\\) is a subset of the domain of \\(f\\), we will write \\(f \\restriction X\\) to\ndenote the restriction of \\(f\\) to \\(X\\).\n\nAs we have already mentioned,\n\\({\\operatorname{Homeo}_+(I)}\\) will be used to denote the\nset of all orientation preserving homeomorphisms of \\(I\\);\n\\({\\operatorname{PL}_+(I)}\\) will be used to denote the set of all piecewise linear elements\nof \\({\\operatorname{Homeo}_+(I)}\\). \nThese groups will act on the right.\nIn particular, \\(tg\\) will denote the result of applying a homeomorphism\n\\(g\\) to a point \\(t\\).\nIf \\(f\\) and \\(g\\) are elements of a group, we will\nwrite \\(f^g\\) to denote \\(g^{-1}fg\\).\n\nRecall from the introduction that\nif \\(f\\) is in \\({\\operatorname{Homeo}_+(I)}\\), then its \\emph{support} is defined to\nbe \\(\\operatorname{supt}(f):=\\{ t \\in I \\mid tf \\ne t\\}\\).\nThe support of a subset of \\({\\operatorname{Homeo}_+(I)}\\) is the union of the supports of its\nelements. 
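Since the right-action conventions above (writing \(tg\) for application and \(f^g = g^{-1}fg\) for conjugation) differ from the more common left-action notation, a quick executable illustration may help; the homeomorphisms below are hypothetical examples of orientation-preserving maps of \([0,1]\), chosen only because they are easy to invert:

```python
# Right-action convention: t(fg) = (tf)g, so a word acts left to right,
# and conjugation is f^g = g^{-1} f g.

def apply_word(t, *maps):
    """Apply maps to the point t from left to right."""
    for m in maps:
        t = m(t)
    return t

def conjugate(f, g, g_inv):
    """Return the homeomorphism f^g = g^{-1} f g (right-action order)."""
    return lambda t: apply_word(t, g_inv, f, g)

# Illustrative orientation-preserving homeomorphisms of I = [0, 1].
f = lambda t: t ** 2        # fixes 0 and 1, pushes the interior left
g = lambda t: t ** 0.5
g_inv = lambda t: t ** 2    # inverse of g on [0, 1]

fg = conjugate(f, g, g_inv)
# The defining identity t f^g = ((t g^{-1}) f) g, checked at a sample point:
assert abs(fg(0.3) - g(f(g_inv(0.3)))) < 1e-12
```

Under this convention composition in a word is read left to right, which matches the displayed formulas.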
\nA left (right) transition point of \\(f\\) is a \\(t \\in I \\setminus \\operatorname{supt}(f)\\) such that for every \\(\\epsilon > 0\\),\n\\((t,t+\\epsilon) \\cap \\operatorname{supt}(f) \\ne \\emptyset\\)\n(respectively \\((t-\\epsilon,t) \\cap \\operatorname{supt}(f) \\ne \\emptyset\\)). \nAn \\emph{orbital} of \\(f\\) is a component of its support.\nAn orbital of \\(f\\) is \\emph{positive} if \\(f\\) moves elements of the orbital to\nthe right;\notherwise it is negative.\nIf \\(f\\) has only finitely many orbitals, then the left (right) transition points\nof \\(f\\) are precisely the left (right) end points of its orbitals.\nAn \\emph{orbital} of a subset of \\({\\operatorname{Homeo}_+(I)}\\)\nis a component of its support.\n\nAn element of \\({\\operatorname{Homeo}_+(I)}\\) with one orbital will be referred to as a \\emph{bump function}\n(or simply a \\emph{bump}).\nIf a bump \\(a\\) satisfies \\(t a > t\\) on its support,\nthen we say that \\(a\\) is \\emph{positive};\notherwise we say that \\(a\\) is \\emph{negative}.\nIf \\(f \\in {\\operatorname{Homeo}_+(I)}\\), then \\(b \\in {\\operatorname{Homeo}_+(I)}\\) is a \\emph{signed bump of \\(f\\)} if\n\\(b\\) is a bump which agrees with \\(f\\) on its support.\nIf \\(X\\) is a subset of \\({\\operatorname{Homeo}_+(I)}\\), then a bump \\(a\\)\nis \\emph{used in} \\(X\\)\nif \\(a\\) is positive and\nthere is an \\(f\\) in \\(X\\) such that \\(f\\)\ncoincides with either \\(a\\) or \\(a^{-1}\\) on the support of \\(a\\).\nA bump \\(a\\) is used in \\(f\\) if it is used in \\(\\{f\\}\\).\nWe adhere to the convention that only positive\nbumps are used by functions to avoid ambiguities in some statements.\nObserve that if \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is such that\nthe set \\(A\\) of bumps used in \\(X\\) is finite, then\n\\(\\gen{X}\\) is a subgroup of \\(\\gen{A}\\).\n\nIf \\((g_i \\mid i < n)\\) and \\((h_i \\mid i < n)\\) are two generating sequences for groups,\nthen we will say that 
\\(\\gen{g_i \\mid i < n}\\) is \\emph{marked isomorphic} to\n\\(\\gen{h_i \\mid i < n}\\) if the map \\(g_i \\mapsto h_i\\) extends to an isomorphism\nof the respective groups.\nIf \\(X\\) is a finite geometrically proper subset of \\({\\operatorname{Homeo}_+(I)}\\), then\nwe will often identify \\(X\\) with its enumeration in which the minimum transition\npoints of its elements occur in increasing order.\nWhen we write \\(\\gen{X}\\) is marked isomorphic to \\(\\gen{Y}\\), we are\nmaking implicit reference to these canonical enumerations of \\(X\\) and \\(Y\\).\n\nAt a number of points in the paper it will be important to distinguish between\nformal syntax (for instance words) and objects (such as group elements) to which they refer.\nIf \\(A\\) is a set, then a \\emph{string}\nof elements of \\(A\\) is a finite sequence of elements of \\(A\\).\nThe length of a string \\(\\str{w}\\) will be denoted \\(|\\str{w}|\\).\nWe will use \\(\\varepsilon\\) to denote the string of length 0.\nIf \\(\\str{u}\\) and \\(\\str{v}\\) are two strings,\nwe will use \\(\\str{uv}\\) to denote\ntheir concatenation;\nwe will say that \\(\\str{u}\\) is a \\emph{prefix} of \\(\\str{uv}\\)\nand \\(\\str{v}\\) is a \\emph{suffix} of \\(\\str{uv}\\).\nIf \\(A\\) is a subset of a group, then a \\emph{word} (in \\(A\\))\nis a string of elements of\n\\(A^\\pm := A \\cup A^{-1}\\).\nA \\emph{subword} of a word \\(\\str {w}\\) must preserve the order from \\(\\str {w}\\),\nbut does not have to consist of consecutive symbols from \\(\\str {w}\\).\nWe write \\(\\str {w}^{-1}\\) for the formal inverse of \\(\\str {w}\\): the product of\nthe inverses of the symbols in \\(\\str {w}\\) in reverse order.\n\nOften strings have an associated evaluation (e.g. a word represents an element of a group).\nWhile the context will often dictate whether we are working with a string or\nits evaluation, we will generally use the typewriter font (e.g. 
\\(\\str{w}\\)) for strings\nand symbols in the associated alphabets and standard math font (e.g. \\(w\\)) for the\nassociated evaluations.\n\nIn Section \\ref{GeoProperSec}, we will use the notion of the \\emph{left (right) germ}\nof a function \\(f \\in {\\operatorname{Homeo}_+(I)}\\) at an \\(s \\in I\\) which is fixed by \\(f\\)\n(left germs are undefined at 0 and right germs are undefined at 1).\nIf \\(0 \\leq s < 1\\), then define the \\emph{right germ of \\(f\\) at \\(s\\)} to be the set of all \\(g \\in {\\operatorname{Homeo}_+(I)}\\) such\nthat for some \\(\\epsilon > 0\\), \\(f \\restriction (s,s+\\epsilon) = g \\restriction (s,s+\\epsilon)\\);\nthis will be denoted by \\(\\gamma_s^+(f)\\).\nSimilarly if \\(0 < s \\leq 1\\), then one defines the \\emph{left germ of \\(f\\) at \\(s\\)};\nthis will be denoted by \\(\\gamma_s^-(f)\\).\nThe collections \n\\[\n\\{\\gamma_s^+(f) \\mid f \\in {\\operatorname{Homeo}_+(I)} \\textrm{ and } sf = s\\}\n\\]\n\\[\n\\{\\gamma_s^-(f) \\mid f \\in {\\operatorname{Homeo}_+(I)} \\textrm{ and } sf = s\\}\n\\]\nform groups and\nthe functions \\(\\gamma_s^+\\) and \\(\\gamma_s^-\\) are homomorphisms defined\non the subgroup of \\({\\operatorname{Homeo}_+(I)}\\) consisting of those functions which fix \\(s\\).\n\n\n \n\\section[Fast collections of bumps]{Fast collections of bumps and their dynamical diagrams}\n\n\\label{FastBumpsSec}\n\nWe are now ready to turn to the definition of \\emph{geometrically fast} in the context of finite subsets of\n\\({\\operatorname{Homeo}_+(I)}\\).\nFirst we will need to develop some terminology.\nA \\emph{marking} of a geometrically proper collection of bumps \\(A\\)\nis an assignment of a \\emph{marker} \\(t \\in \\operatorname{supt}(a)\\) to each \\(a\\) in \\(A\\).\nIf \\(a\\) is a positive bump with orbital \\((x,y)\\) and marker \\(t\\), then we define\nits \\emph{source} to be the interval \\(\\operatorname{src}(a) := (x,t)\\)\nand its \\emph{destination} to be the interval 
\\(\\operatorname{dest}(a) := [t a,y)\\).\nWe also set \\(\\operatorname{src}(a^{-1}) := \\operatorname{dest}(a)\\) and \\(\\operatorname{dest} (a^{-1}) := \\operatorname{src}(a)\\).\nThe source and destination of a bump are collectively called its \\emph{feet}. \nNote that there is a deliberate asymmetry in this definition:\nthe source of a positive bump is an open interval whereas the destination is half open.\nThis choice is necessary so that for any \\(t \\in \\operatorname{supt} (a)\\), there is a unique \\(k\\) such that\n\\(t a^k\\) is not in the feet of \\(a\\), something which is a key feature of the definition.\n\nA collection \\(A\\) of bumps is \\emph{geometrically fast}\nif there is a marking of \\(A\\) for which its feet form a pairwise disjoint family\n(in particular we require that \\(A\\) is geometrically proper).\nThis is illustrated in Figure \\ref{GeoFastFig}, where the feet of\n\\(a_0\\) are \\((p,q)\\) and \\([r,s)\\) and the feet of \\(a_1\\) are\n\\((q,r)\\) and \\([s,t)\\).\n\\newcommand{\\hashlabel}[1]\n{\n\\xy\n(0,0); (0,-2)**@{-}; (0,-4)*{\\scriptstyle #1};\n\\endxy\n}\n\\begin{figure}\n\\[\n\\xy\n(0,36); (0,0)**@{-}; (36,0)**@{-}; (0,0); (36,36)**@{-};\n(16,15.6)*{\\dvarbump{4}{16}{19}}; (9,19)*{a_0};\n(22,21.6)*{\\dvarbump{2}{14}{16}}; (17.5,27.5)*{a_1};\n(14,14);(14,20)**@{.};(20,20)**@{.};\n(20,25.5)**@{.};(25.5,25.5)**@{.};\n(6.57,4.1)*{\\hashlabel{p}};\n(14.02,11.55)*{\\hashlabel{q}};\n(20.0,17.55)*{\\hashlabel{r}};\n(25.55,23.15)*{\\hashlabel{s}};\n(30,27.55)*{\\hashlabel{t}};\n\\endxy\n\\]\n\\caption{A geometrically fast set of bumps}\\label{GeoFastFig}\n\\end{figure}\nBeing geometrically fast is precisely the set of dynamical requirements made on the set\n\\(\\{a_i \\mid i < 3\\}\\) of homeomorphisms mentioned in the introduction.\nWe do not require here that \\(A\\) is finite and we will explicitly state finiteness as\na hypothesis when it is needed.\nNotice however that, since pairwise disjoint families of intervals\nin 
\\(I\\) are at most countable, any geometrically fast set of bumps is\nat most countable.\nThe following are readily verified and can be used axiomatically to\nderive most of the lemmas in Section \\ref{PingPongSec}\n(specifically Lemmas \\ref{LocRedBasics}--\\ref{FellowTraveler3}):\n\\begin{itemize}\n\n\\item for all \\(a \\in A^\\pm\\), \\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(a)\\) and if\n\\(x \\in \\operatorname{supt}(a)\\) then there exists a \\(k\\) such that \\(xa^k \\in \\operatorname{dest}(a)\\);\n\n\\item if \\(a \\ne b \\in A^\\pm\\), then \\(\\operatorname{dest}(a) \\cap \\operatorname{dest}(b) = \\emptyset\\);\n\n\\item if \\(a \\in A^\\pm\\) and \\(x \\in \\operatorname{supt}(a)\\), then\n\\(x a \\in \\operatorname{dest}(a)\\) if and only if \\(x \\not \\in \\operatorname{src}(a) := \\operatorname{dest}(a^{-1})\\).\n\n\\item if \\(a,b \\in A^\\pm\\), then \\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(b)\\) or\n\\(\\operatorname{dest}(a) \\cap \\operatorname{supt}(b) = \\emptyset\\).\n\n\\end{itemize}\nThis axiomatic viewpoint will be discussed further in Section \\ref{AbstractPingPongSec}.\n\nA set \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is geometrically fast\nif it is geometrically proper and the set of bumps used in \\(X\\) is geometrically fast.\nNote that while geometric properness is a consequence of the disjointness of the feet\nif \\(X\\) uses only finitely many bumps, it is an additional requirement in general.\nThis is illustrated in the next example.\n\n\\begin{example} \\label{InfiniteBumpEx}\nConsider the following homeomorphism of \\({\\mathbf R}\\):\n\\[\nt \\gamma = \n\\begin{cases}\n3t & \\textrm{ if } 0 \\leq t \\leq 1\/2 \\\\\n(t+4)\/3 & \\textrm{ if } 1\/2 \\leq t \\leq 2 \\\\\nt & \\textrm{ otherwise}\n\\end{cases}\n\\]\nDefine \\(\\alpha , \\beta \\in {\\operatorname{Homeo}_+(\\R)}\\) by \n\\((t+2p) \\alpha = t \\gamma + 2p\\) and \\((t+2p+1) \\beta = t \\gamma + 2p+1\\)\nwhere \\(p \\in 
{\\mathbf Z}\\) and \\(t \\in [0,2]\\).\nThus the bumps used in \\(\\alpha\\) are obtained by translating \\(\\gamma\\) by even integers;\nthe bumps in \\(\\beta\\) are the translates of \\(\\gamma\\) by odd integers.\nIf we assign the marker \\(1\/2\\) to \\(\\gamma\\) and mark the translation of \\(\\gamma\\) by \\(p\\)\nwith \\(p + 1\/2\\), then it can be seen that the feet of \\(\\alpha\\) and \\(\\beta\\) are the intervals\n\\(\\{(p,p+1\/2) \\mid p \\in {\\mathbf Z}\\} \\cup \\{[p+1\/2,p+1) \\mid p \\in {\\mathbf Z}\\}\\),\nwhich is a pairwise disjoint family.\nThus the bumps used in \\(\\{\\alpha,\\beta\\}\\) are geometrically fast.\nSince \\(\\infty\\) is a right transition point of both \\(\\alpha\\) and \\(\\beta\\), \n\\(\\{\\alpha,\\beta\\}\\) is not geometrically proper and hence not fast.\nIn fact, it follows readily from the formulation of the classical \\emph{ping-pong lemma}\nin the introduction that \\(\\gen{\\alpha,\\beta}\\) is free.\n\\end{example}\n\nObserve that if \\(X\\) is geometrically proper, each of its elements uses only finitely many bumps,\nand the set of transition points of \\(X\\) is discrete, \nthen there is a map \\(f \\mapsto k(f)\\) of \\(X\\) into the positive integers such that\n\\(\\{f^{k(f)} \\mid f \\in X\\}\\) is geometrically fast.\nTo see this, start with a marking such that the closures of the sources of the bumps used in\n\\(X\\) are disjoint;\npick \\(f \\mapsto k(f)\\) sufficiently large so that all of the feet become disjoint.\nAlso notice that if \\(\\{f^{k(f)} \\mid f \\in X\\}\\) is geometrically fast and if \\(k(f) \\leq l(f)\\) for \\(f \\in X\\),\nthen \\(\\{f^{l(f)} \\mid f \\in X \\}\\) is geometrically fast as well.\n\nIf \\(X\\) is a geometrically fast generating set with only finitely many transition points,\nthen the \\emph{dynamical diagram} \\(D_X\\) of \n\\(X\\) is the edge labeled vertex ordered directed graph defined as follows:\n\\begin{itemize}\n\n\\item the vertices of \\(D_X\\) are the feet of 
\\(X\\) with the order\ninduced from the order of the unit interval;\n\n\\item the edges of \\(D_X\\) are the signed bumps of \\(X\\) directed\nso that the source (destination) of the edge is the source (destination) of the\nbump;\n\n\\item the edges are labeled by the elements of \\(X\\) that they come from.\n\n\\end{itemize}\n\\noindent\nWe will adopt the convention that dynamical diagrams are necessarily finite.\nThe dynamical diagram of a generating set for the Brin-Navas group \\(B\\)\n\\cite{MR2160570} \\cite{MR2135961}\nis illustrated in the left half of Figure \\ref{BNFigure};\nthe generators are \\(f = a_0^{-1} a_2\\) and \\(g = a_1^{-1}\\),\nwhere \\((a_i \\mid i < 3)\\) is the geometrically fast generating sequence illustrated in\nFigure \\ref{TrackingPoint}.\nWe have found that when drawing the dynamical diagram \\(D_X\\) of a given \\(X\\), it is more\n{\\ae}sthetic whilst being unambiguous to collapse\npairs of vertices \\(u\\) and \\(v\\) of \\(D_X\\) such that:\n\\begin{itemize}\n\n\\item \\(v\\) is the immediate successor of \\(u\\) in the order on \\(D_X\\),\n\n\\item \\(u\\)'s neighbor is below \\(u\\), and \\(v\\)'s neighbor is above \\(v\\).\n\n\\end{itemize}\nAdditionally, arcs can be drawn as over or under arcs to indicate their direction,\neliminating the need for arrows.\nThis is illustrated in the right half of Figure \\ref{BNFigure}.\nThe result qualitatively resembles the graphs of the homeomorphisms rotated so that\nthe line \\(y = x\\) is horizontal.\n\nAn isomorphism between dynamical diagrams is a directed graph isomorphism\nwhich preserves the order of the vertices and\ninduces a bijection between the edge labels\n(i.e. 
two directed edges have equal labels before applying the isomorphism if and only if\nthey have equal labels after applying the isomorphism).\nNotice that such an isomorphism is unique if it exists --- there is at most one order preserving bijection\nbetween two finite linear orders.\n\n\\begin{figure}\n\\[\n\\xy\n(0,5)*{\n\\xymatrix{\n{\\bullet} &\n{\\bullet} &\n{\\bullet} \\ar@\/^1.5pc\/[ll]^f &\n{\\bullet} \\ar@\/^1.5pc\/[rr]^f &\n{\\bullet} \\ar@\/^1.5pc\/[lll]^g&\n{\\bullet}\n}};\n(-2,10); (63,10)**@{.};\n(40,27)*{}; \n(50,18)*{};\n\\endxy\n\\quad\n\\quad\n\\xy\n(0,5)*{\n\\xymatrix{\n{\\bullet} &\n{\\bullet} &\n{\\bullet} \\ar@{-}@\/^1.5pc\/[ll]^f \\ar@{-}@\/^1.5pc\/[rr]^f&\n{\\bullet} \\ar@{-}@\/^1.5pc\/[ll]^g&\n{\\bullet} \n}};\n(-2,10); (51,10)**@{.};\n(40,27)*{}; \n(50,18)*{};\n\\endxy\n\\]\n\\caption{The dynamical diagram for the Brin-Navas generators, with an illustration\nof the contraction convention \\label{BNFigure}}\n\\end{figure}\n\\begin{figure}\n\\[\n\\xy\n(0,36); (0,0)**@{-}; (36,0)**@{-}; (0,0); (36,36)**@{-};\n(10,9.6)*{\\dvarbump{2}{14}{16}}; (5,15)*{a_0};\n(18,17.6)*{\\dvarbump{2}{14}{16}}; (13.5,23.5)*{a_1};\n(26,25.6)*{\\dvarbump{2}{14}{16}}; (22,32)*{a_2};\n(10,10);(10,15.25)**@{.};(15.25,15.25)**@{.};(15.25,21)**@{.};(21,21)**@{.};(21,26.2)**@{.};(26.2,26.2)**@{.};\n\\endxy\n\\]\n\\caption{A point is tracked through a fast transition chain}\\label{TrackingPoint}\n\\end{figure}\nObserve that the (uncontracted) dynamical diagram of any geometrically\nfast \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) which has finitely many transition points has the property that\nall of its vertices have total degree 1.\nMoreover, any finite edge labeled vertex ordered directed graph in which each vertex has total\ndegree 1 is isomorphic to the dynamical diagram of some geometrically fast \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\)\nwhich has finitely many transition points (see the proof of Theorem \\ref{SemiConjThm} in 
Section\n\\ref{SemiConjSec}).\nThus we will write \\emph{dynamical diagram} to mean a finite edge labeled vertex ordered\ndirected graph in which each vertex has total degree 1.\nThe edges in a dynamical diagram will be referred to as \\emph{bumps} and the vertices\nin a dynamical diagram will be referred to as \\emph{feet}.\nTerms such as \\emph{source}, \\emph{destination}, \\emph{left\/right foot} will be given the obvious meaning\nin this context.\n\nNow let \\(A\\) be a geometrically fast set of positive bump functions.\nAn element of \\(A\\) is \\emph{isolated} (in \\(A\\)) if\nits support contains no transition points of \\(A\\).\nIn the dynamical diagram of \\(A\\), this corresponds to a bump whose source and destination are\nconsecutive feet.\nThe next proposition shows that we may always eliminate isolated bumps in \\(A\\) by\nadding new bumps to \\(A\\).\nThis will be used in Section \\ref{SemiConjSec}\n\n\\begin{figure}\n\\[\n\\xy\n\\xymatrix{\n{\\bullet} \\ar@\/^2.5pc\/[rrrrr]^a &\n{\\bullet} \\ar@\/^1.0pc\/[rr]^{b_0} &\n{\\bullet} \\ar@\/^1.0pc\/[rr]^{b_1} &\n{\\bullet} &\n{\\bullet} &\n{\\bullet} &\n}\n\\endxy\n\\]\n\\caption{The bump \\(a\\) is made nonisolated by the addition of bumps \\(b_0\\) and \\(b_1\\).\n\\label{IsolatedFixFig}}\n\\end{figure}\n\n\\begin{prop} \\label{IsolatedFixProp}\nIf \\(A \\subseteq {\\operatorname{Homeo}_+(I)}\\) is a geometrically fast set of positive bump functions,\nthen there is a geometrically fast \\(B \\subseteq {\\operatorname{Homeo}_+(I)}\\) such that \\(A \\subseteq B\\)\nand \\(B\\) has no isolated bumps.\nMoreover, if \\(A\\) is finite, then \\(B\\) can be taken to be finite as well.\n\\end{prop}\n\n\\begin{proof}\nIf \\(a \\in A\\) is isolated, let \\(b_0\\) and \\(b_1\\) be a geometrically fast pair of bumps with supports contained in\n\\(\\operatorname{supt}(a) \\setminus (\\operatorname{src}(a) \\cup \\operatorname{dest}(a))\\) such that neither \\(b_0\\) nor \\(b_1\\) is isolated in 
\\(\\{b_0,b_1\\}\\);\nsee Figure \\ref{IsolatedFixFig}.\nSince the feet of \\(A\\) are disjoint, so are the feet of \\(A \\cup \\{b_0,b_1\\}\\) and \\(a\\) is no longer isolated\nin \\(A \\cup \\{b_0,b_1\\}\\).\nLet \\(B\\) be the result of adding such a pair of bumps for each isolated bump in \\(A\\).\n\\end{proof}\n\n\n\\section[Criteria for geometric fastness]{An algorithmically check-able criteria for geometric fastness}\n\\label{GeoFastCriteriaSec}\n\nIn this section we will consider geometrically proper sets which have finitely many transition points and\ndevelop a characterization of when they are geometrically fast.\nThis characterization moreover allows one to determine algorithmically when such sets are geometrically fast.\nIt will also provide a canonical marking of geometrically fast sets with finitely many transition points.\nWe need the following refinement of the notion of a \\emph{transition chain}\nintroduced in \\cite{MR2466019}.\nLet \\(A \\subseteq {\\operatorname{Homeo}_+(I)}\\)\nbe a finite geometrically proper set of positive bump functions. 
\n\nA sequence \\((a_i \\mid i\\le k)\\) of nonisolated elements of \\(A\\) is a\n\\emph{stretched transition chain of \\(A\\)} if:\n\\begin{enumerate}\n\n\\item \\label{TransitionChain}\nfor all \\(i C_{\\min} \\prod C\\) holds if either\n\\(C'_{\\max} > C'_{\\min} \\prod C'\\) or \\(C''_{\\max} > C''_{\\min} \\prod C''\\).\nIf \\(C'_{\\max} > C'_{\\min} \\prod C'\\), then\n\\[\nC_{\\max} \\geq C'_{\\max} > C'_{\\min} \\prod C' =\nC_{\\min} \\prod C' \\prod C'' =C_{\\min} \\prod C\n\\]\nsince \\(C'_{\\max}\\) is the least transition point of \\(C''\\) and hence a lower bound for the support\nof \\(\\prod C''\\).\nIf \\(C'_{\\max} \\leq C'_{\\min} \\prod C'\\) but \\(C''_{\\max} > C''_{\\min} \\prod C''\\), then\n\\[\nC_{\\max} = C''_{\\max} > C''_{\\min} \\prod C'' \\geq C'_{\\min} \\prod C' \\prod C'' = C_{\\min} \\prod C\n\\]\nsince \\(C''_{\\min}\\) is the greatest transition point of \\(C'\\) and thus\nan upper bound for the support of \\(\\prod C'\\).\n\\end{proof}\n\nObserve that the proof of Proposition \\ref{FastCriterion}\ngives an explicit construction\nof a marking of a family \\(A\\) of positive bumps.\nThis marking has the property that if \\(A\\) is geometrically fast,\nthen it is witnessed as such by the marking.\nWe will refer to this marking as the \\emph{canonical marking} of \\(A\\).\n\nFinally, let us note that Proposition \\ref{FastCriterion} gives us a means\nto algorithmically check whether a set of positive bumps \\(A\\) is fast.\nSpecifically, perform the following sequence of steps:\n\\begin{itemize}\n\n\\item determine whether \\(A\\) is geometrically proper;\n\n\\item if so, partition the non isolated elements of\n\\(A\\) into maximal stretched transition chains;\n\n\\item for each maximal stretched transition chain \\(C\\) of \\(A\\),\ndetermine whether \\(C_{\\max} \\leq C_{\\min} \\prod C\\).\n\n\\end{itemize}\nThis is possible provided we are able to perform the following basic queries:\n\\begin{itemize}\n\n\\item test for equality 
among the transition points of elements of \\(A\\);\n\n\\item determine the order of the transition points of elements of \\(A\\);\n\n\\item determine the truth of \\(C_{\\max} \\leq C_{\\min} \\prod C\\) whenever \\(C\\) is a stretched transition chain.\n\n\\end{itemize}\n\n\n\n\\section{The ping-pong analysis of geometrically fast sets of bumps}\n\\label{PingPongSec}\n\nIn this section, we adapt the ping-pong argument to the setting of fast families of\nbump functions.\nWhile the culmination will be Theorem \\ref{Faithful} below, the lemmas we will develop will\nbe used in subsequent sections.\nThey also readily adapt to the more abstract setting of Section \\ref{AbstractPingPongSec}.\n\nFix, until further notice, a (possibly infinite) geometrically fast collection \\(A\\) of positive bumps\nequipped with a marking; in particular we will write \\emph{word} to mean \\emph{\\(A\\)-word}.\nCentral to our analysis will be the notion of a \\emph{locally reduced word}.\nA word \\(\\str{w}\\) is \\emph{locally reduced} at \\(t\\) if it is freely reduced and whenever\n\\(\\str{ua}\\) is a prefix of \\(\\str{w}\\) for \\(a \\in A^\\pm\\),\n\\(tua \\ne tu\\).\nIf \\(\\str{w}\\) is locally reduced at every element of a set \\(J \\subseteq I\\), then we write that\n\\(\\str{w}\\) is \\emph{locally reduced on \\(J\\)}.\n\nThe next lemma collects a number of useful observations about locally\nreduced words; we omit the obvious proofs.\nRecall that if \\(\\str{u}\\) and \\(\\str{v}\\) are freely reduced, then the free\nreduction of \\(\\str{uv}\\) has the form \\(\\str{u}_0 \\str{v}_0\\) where \\(\\str{u}=\\str{u}_0 \\str{w}\\),\n\\(\\str{v}=\\str{w}^{-1}\\str{v}_0\\), and \\(\\str{w}\\) is the longest common suffix of\n\\(\\str{u}\\) and \\(\\str{v}^{-1}\\).\nIn particular, if \\(\\str{u}\\), \\(\\str{v}\\), and \\(\\str{w}\\) are freely reduced words and the\nfree reductions of \\(\\str{uv}\\) and \\(\\str{uw}\\) coincide, then \\(\\str{v} = 
\\str{w}\\).\n\n\\begin{lemma}\\label{LocRedBasics} All of the following are true:\n\\begin{itemize}\n\n\\item For all \\(x\\in I\\) and all words \\(\\str{w}\\), there is a subword\n\\(\\str{v}\\) of \\(\\str{w}\\) which is locally reduced at \\(x\\) so that\n\\(xv=xw\\).\n\n\\item For all \\(x\\in I\\) and words \\(\\str{u}\\) and \\(\\str{v}\\), if \\(\\str{u}\\) is\nlocally reduced at \\(x\\), and \\(\\str{v}\\) is locally reduced at \\(xu\\), then\nthe free reduction of \\(\\str{uv}\\) is locally reduced at \\(x\\).\n\n\\item For all \\(x\\in I\\) and words \\(\\str{w}\\), if \\(\\str{w}\\) is locally\nreduced at \\(x\\) and \\(\\str{w}=\\str{uv}\\), then \\(\\str{u}\\) is locally reduced at\n\\(x\\) and \\(\\str{v}\\) is locally reduced at \\(xu\\).\n\n\\item For all \\(x\\in I\\) and words \\(\\str{w}\\), if \\(\\str{w}\\) is locally\nreduced at \\(x\\), then \\(\\str{w}^{-1}\\) is locally reduced at \\(xw\\).\n\n\\end{itemize}\n\\end{lemma}\n\\noindent\n(Recall here our convention that a \\emph{subword} is not required to consist\nof consecutive symbols of the original word.)\nFor \\(x\\in I\\), we use \\(x\\gen{A}\\) to denote the orbit\nof \\(x\\) under the action of \\(\\gen{A}\\) and for\n\\(S\\subseteq I\\), we let \\(S\\gen{A}\\) be the union of\nthose \\(x\\gen{A}\\) for \\(x\\in S\\).\n\nA marker \\(t\\) of \\(A\\) is \\emph{initial} if whenever \\(s < t\\) is the marker of \\(a \\in A\\),\nthen \\(t \\ne sa\\).\nIf \\(A\\) is finite and we are working with the canonical marking, then the initial markers\nare precisely the markers of the initial intervals.\nLet \\(M_A\\) be the set of initial markers of \\(A\\).\nWe will generally suppress the subscript if the meaning is clear from the context;\nin particular we will write \\(M\\gen{A}\\) for \\(M_A\\gen{A}\\).\nNotice that every marker is contained in \\(M \\gen{A }\\).\n\nAside from developing lemmas for the next section,\nthe goal of this section is to prove that the action of\n\\(\\gen{A}\\) on 
\\(M\\gen{A}\\) is faithful.\nThe next lemma is the manifestation of the \\emph{ping-pong argument}\nin the context in which we are working.\nIf \\(\\str{w} \\ne \\varepsilon\\) is a word, the \\emph{source} (\\emph{destination}) of\n\\(\\str{w}\\) is the source of the first\n(destination of the last) symbol in \\(\\str{w}\\).\nThe source and destination of \\(\\varepsilon\\) are \\(\\emptyset\\).\n\n\\begin{lemma} \\label{PingPong}\nIf \\(x \\in I\\) and \\(\\str{w} \\ne \\varepsilon\\)\nis a word which is locally reduced at \\(x\\), then\neither \\(x \\in \\operatorname{src}(\\str{w})\\) or \\(xw \\in \\operatorname{dest}(\\str{w})\\).\n\\end{lemma}\n\n\\begin{proof}\nThe proof is by induction on the length of \\(\\str{w}\\).\nWe have already noted\nthat if \\(\\str{w} = \\str{a}\\) and \\(t \\in \\operatorname{supt}(a)\\),\nthen \\(ta \\in \\operatorname{dest}(a)= \\operatorname{dest}(\\str{w})\\) if and only if \\(t \\not \\in \\operatorname{src}(a) = \\operatorname{src}(\\str{w})\\).\nNext suppose that \\(\\str{w}\\) has length at least 2, \\(x \\not \\in \\operatorname{src}(\\str{w})\\),\nand let \\(\\str{v}\\) be a (possibly empty) word such that \\(\\str{w} = \\str{vab}\\)\nfor \\(a,b \\in A^\\pm\\).\nSince \\(\\str{w}\\) is locally reduced, \\(b \\ne a^{-1}\\)\nand thus the destination of \\(a\\) is not the source of \\(b\\).\nSince \\(A\\) is geometrically fast, the destination of \\(a\\) is disjoint from the source of\n\\(b\\).\nBy our inductive hypothesis, \\(y = x v a\\) is in\n\\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(b) \\setminus \\operatorname{src}(b)\\).\nThus \\(x w = x v ab = y b\\) is in the destination of \\(b\\).\n\\end{proof}\n\nIf \\(\\str{w}\\) is a word, define \\(J(\\str{w}) :=\\operatorname{supt}(a)\\setminus \\operatorname{src}(a)\\)\nwhere \\(\\str{a}\\) is the first symbol of\n\\(\\str{w}\\).\nNotice that if \\(\\str{w}\\) is locally reduced at \\(x\\), then ``\\(x \\in J(\\str{w})\\)'' is equivalent to\n``\\(x \\not 
\\in \\operatorname{src}(\\str{w})\\)''.\nThe following lemma is easily established by induction on the length of \\(\\str{w}\\)\nusing Lemma \\ref{PingPong}.\n\n\\begin{lemma} \\label{threaded}\nIf \\(\\str{w}\\) is a word and \\(x \\in J(\\str{w})\\), then \\(\\str{w}\\) is locally reduced\nat \\(x\\) if and only if \\(\\str{w}\\) is freely reduced and \n\\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(b)\\) whenever \\(\\str{ab}\\) are consecutive symbols in \\(\\str{w}\\).\nIn particular, if \\(\\str{w}\\) is freely reduced, then \\(\\str{w}\\) is locally reduced on \\(J(\\str{w})\\)\nprovided \\(\\str{w}\\) is locally reduced at some element of \\(J(\\str{w})\\).\n\\end{lemma}\n\nWhen applying Lemma \\ref{PingPong}, it will be useful to be able to assume\nthat \\(x\\) is not in \\(\\operatorname{src}(\\str{w})\\).\nNotice that if \\(x\\) is not in the feet of any elements of \\(A\\), then this is automatically true\n(for instance this is true if \\(x \\in M\\)).\nThe next lemma captures an important consequence of Lemma \\ref{PingPong}.\n\n\\begin{lemma}\\label{Collision}\nSuppose \\(x_0,x_1 \\in I\\), \\(\\str{u}_i\\) is locally reduced at \\(x_i\\) and\n\\(x_i \\not \\in \\operatorname{src}(\\str{u}_i)\\).\nIf \\(|\\str{u}_0| \\leq |\\str{u}_1|\\) and \\(x_0 u_0 = x_1 u_1\\),\nthen \\(\\str{u}_0\\) is a suffix of \\(\\str{u}_1\\).\nIn particular if \\(t \\in I\\) is not in any of the feet of \\(A\\)\nand \\(\\str{u}\\) and \\(\\str{v}\\) are words\nthat are locally reduced at \\(t\\) with \\(tu=tv\\), then \\(\\str{u}=\\str{v}\\).\n\\end{lemma}\n\n\\begin{proof}\nThe main part of the lemma is proved by induction on \\(|\\str{u}_0|\\).\nIf \\(\\str{u}_0 = \\varepsilon\\), this is trivially true.\nNext suppose that \\(\\str{u}_i \\str{a}_i\\) is locally\nreduced at \\(x_i\\) and \\(x_i \\not \\in \\operatorname{src}(\\str{u}_i \\str{a}_i)\\).\nIf \\(x_0 u_0 a_0 = x_1 u_1 a_1\\),\nthen by Lemma \\ref{PingPong},\n\\(\\str{a}_0 = \\str{a}_1\\).\nWe are 
now finished by applying our\ninduction hypothesis to conclude that \\(\\str{u}_0\\) is a suffix of \\(\\str{u}_1\\).\n\nIn order to see the second conclusion, let \\(t\\), \\(\\str{u}\\) and \\(\\str{v}\\) be given such that\n\\(x:= tu = tv\\) and assume without loss of generality that \\(|\\str{u}| \\leq |\\str{v}|\\).\nBy the main assertion of the lemma, \\(\\str{v} = \\str{wu}\\) for some \\(\\str{w}\\).\nSince \\(tw = t\\), Lemma \\ref{PingPong} implies \\(\\str{w} = \\varepsilon\\).\n\\end{proof}\n\nIf \\(x \\in \\operatorname{src}(a)\\) for some \\(a \\in A^\\pm\\), then \\(x \\in \\operatorname{dest}(a^{-1})\\).\nThis suggests that we have ``arrived at'' \\(x\\) by applying a locally reduced word to some other point.\nMoreover \\(a^{-1}\\) is the unique element \\(b\\) of \\(A^\\pm\\) such that \\(x \\in \\operatorname{dest}(b)\\).\nThus we may attempt to ``trace back'' to where \\(x\\) ``came from.''\nThis provides a recursive definition of a sequence which starts at\n\\(\\str{a}^{-1}\\) and grows to the left, possibly infinitely far.\nThis gives rise to the notion of a \\emph{history} of a point \\(x \\in I\\), which\nwill play an important role in the\nproof of Theorem \\ref{Faithful} below and also in Section \\ref{AbstractPingPongSec}.\nIf \\(t \\in I\\) is not in \\(\\operatorname{dest}(a)\\) for any \\(a \\in A^\\pm\\), then we say that\n\\(t\\) has \\emph{trivial history} and define \\(\\tilde t := \\{a \\in A : t \\in \\operatorname{supt}(a)\\}\\).\nIf \\(x \\in I\\), define \\(\\eta(x)\\) to be the set of all strings of the following form:\n\\begin{itemize}\n\n\\item words \\(\\str{u}\\) such that for some \\(t \\in I\\),\n\\(tu = x\\), \\(\\str{u}\\) is locally reduced at \\(t\\), and \\(t \\not \\in \\operatorname{src}(\\str{u})\\);\n\n\\item strings \\(\\str{\\tilde t u}\\) such that \\(t\\) has trivial history, \\(\\str{u}\\) is locally\nreduced at \\(t\\), and \\(tu = x\\).\n\n\\end{itemize}\n\\noindent\nNotice that if \\(\\str{w}\\) is a 
word, then \\(\\str{w}^{-1}\\) is in \\(\\eta(x)\\) if and only if\n\\(\\str{w}\\) is locally reduced at \\(x\\) and \\(xw \\not \\in \\operatorname{dest}(\\str{w})\\).\n\nWe will refer to elements of \\(\\eta(x)\\) as \\emph{histories} of \\(x\\).\nWe will say that \\(x\\) has \\emph{finite history} if \\(\\eta(x)\\) is finite.\nThe following is easily established using Lemmas \\ref{LocRedBasics} and \\ref{Collision};\nthe proof is omitted.\n\n\\begin{lemma} \\label{BasicHistory}\nThe following are true for each \\(x \\in I\\):\n\\begin{itemize}\n\n\\item\n\\(\\eta(x)\\) is closed under taking suffixes;\n\n\\item\nfor each \\(n\\),\n\\(\\eta(x)\\) contains at most one sequence of length \\(n\\);\n\n\\item\nif \\(\\str{v}\\) is a word in \\(\\eta(x)\\), then \\(\\eta(xv^{-1}) = \\{\\str{u} : \\str{uv} \\in \\eta(x)\\}\\).\n\n\\end{itemize}\n\\end{lemma}\n\\noindent\nIt is useful to think of \\(\\eta(x)\\) as the suffixes of a single sequence\nwhich is either finite or grows infinitely to the left.\n\nIn what follows, we will typically use \\(s\\) and \\(t\\) to denote elements of \\(I\\)\nwith finite history and \\(x\\) and \\(y\\) for arbitrary elements of \\(I\\).\nThe following is a key property of having a trivial history.\n\n\\begin{lemma}\\label{DisjointOrbits}\nIf \\(s\\ne t\\) have trivial history, then\n\\(s\\gen{A}\\) and \\(t\\gen{A}\\) are disjoint.\n\\end{lemma}\n\n\\begin{proof}\nIf the orbits intersect, then \\(t=sw\\) for some word \\(\\str{w}\\).\nBy Lemma \\ref{LocRedBasics} we can take \\(\\str{w}\\) to be locally reduced\nat \\(s\\).\nBy Lemma \\ref{PingPong}, \\(sw\\) is in the destination of \\(\\str{w}\\).\nBut \\(t = sw\\) has trivial history, which is impossible.\n\\end{proof}\n\nRecall that the set of\nfreely reduced words in a given generating set has the structure of\na rooted tree with the empty word as root and where ``prefix of'' is\nsynonymous with ``ancestor of.''\nThe \\emph{ping-pong argument} discovers orbits that reflect this 
structure.\nDefine a labeled directed graph on \\(I\\) by putting an arc with label \\(a\\) from \\(x\\) to\n\\(xa\\) whenever \\(a \\in A^\\pm\\) and \\(xa \\ne x\\).\nThe second part of Lemma \\ref{Collision} asserts that if \\(x\\) is in the orbit of a point \\(t\\) with trivial\nhistory, then there is a unique path in this graph connecting \\(t\\) to \\(x\\).\nIt follows that if there is a path between two elements of \\(I\\) with finite history, it is unique,\nyielding the following lemma.\n\n\\begin{lemma}\\label{TreeAction} If\n\\(s,t \\in I\\) have finite histories,\nthen there is at most one word \\(\\str{w}\\) which is\nlocally reduced at \\(s\\) so that \\(sw=t\\).\n\\end{lemma}\n\nNotice that the assumption of finite history in this lemma is necessary.\nFor instance, if we consider the positive bumps \\(a_0\\) and \\(a_1\\) in Figure \\ref{TrackingPoint},\nthere must be an \\(x \\in \\operatorname{supt}(a_0) \\cap \\operatorname{supt}(a_1)\\) such that \\(xa_0 a_1^{-1} = x\\).\nThis follows from the observation that if \\(s < t\\) are,\nrespectively, the left transition point of \\(a_1\\)\nand the right transition point of \\(a_0\\), then\n\\[s a_0 a_1^{-1} > s a_1^{-1} = s \\qquad \\textrm{ and } \\qquad t a_0 a_1^{-1} = t a_1^{-1} < t\\]\nwhich implies the existence of the desired \\(x\\) by applying the Intermediate Value Theorem\nto \\(t \\mapsto t a_0 a_1^{-1} - t\\).\n\nGiven two points \\(x,y \\in I\\) and a word \\(\\str{w}\\), it will be useful to find a single\nword \\(\\str{w}'\\) which is locally reduced at \\(x\\) and \\(y\\) and which satisfies\n\\(xw' = xw\\) and \\(yw' = yw\\).\nThe goal of the next set of lemmas is to provide a set of sufficient conditions for the existence of such a \\(\\str{w}'\\).\nIt will be convenient to introduce some additional terminology at this point.\nIf \\(x \\in I\\), then\nwe say that \\(\\str{w}\\) is a \\emph{return word} for \\(x\\) if\n\\(xw = x\\) and \\(\\str{w} \\ne \\varepsilon\\);\na \\emph{return 
prefix} for \\(x\\) is a prefix which is a return word.\nWe will see that ``\\(\\str{w}\\) does not have a return prefix for \\(s\\)'' is a useful hypothesis.\nThe next lemma provides some circumstances under which this is true.\n\n\\begin{lemma} \\label{NoReturn}\nIf \\(s \\in I\\) has finite history,\n\\(\\str{u}\\) is locally reduced at \\(s\\),\nand \\(\\str{w}\\) is a word of length less than \\(|\\str{u}|\\),\nthen \\(\\str{uw}\\) has no return prefix for \\(s\\).\n\\end{lemma}\n\n\\begin{proof}\nNotice that it suffices to prove that \\(\\str{uw}\\)\nis not a return word for \\(s\\).\nIf it were, then there would be a locally reduced subword\n\\(\\str{v}\\) of \\(\\str{w}^{-1}\\) such that \\(su = sv\\).\nSince \\(|\\str{v}| \\leq |\\str{w}| < |\\str{u}|\\),\nthis would contradict Lemma \\ref{TreeAction}.\n\\end{proof}\n\n\\begin{lemma} \\label{FellowTraveler1}\nSuppose that \\(\\str{w} \\ne \\varepsilon\\) is a word and \\(x \\in J(\\str{w})\\).\nIf \\(\\str{w}\\) has no return prefix for \\(x\\) and \\(\\str{w'}\\) is locally reduced at\n\\(x\\) with \\(xw' = xw\\), then:\n\\begin{itemize}\n\n\\item \\(J(\\str{w}') = J(\\str{w})\\);\n\n\\item \\(\\str{w}'\\) is locally reduced on \\(J(\\str{w})\\);\n\n\\item if \\(y \\in J(\\str{w})\\), then \\(yw' = yw\\).\n\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nThe proof is by induction on the length of \\(\\str{w}\\).\nIf \\(\\str{w}\\) has length \\(1\\), then there is nothing to show.\nSuppose now that \\(\\str{w} = \\str{ub}\\) for some \\(b \\in A^\\pm\\) and \\(\\str{u} \\ne \\varepsilon\\).\nLet \\(\\str{u}'\\) be locally reduced at \\(x\\) such that \\(xu' = xu\\).\nBy our inductive assumption, \\(J:= J(\\str{u}') = J(\\str{u}) = J(\\str{w})\\),\n\\(\\str{u}'\\) is locally reduced on \\(J\\) and if\n\\(y \\in J\\), then \\(yu' = yu\\).\nBy our assumption, \\(xu' = xu \\ne x\\) and so \\(\\str{u}' \\ne \\varepsilon\\).\nIf \\(\\str{u}'\\str{b}\\) is not freely reduced, then its free reduction 
\\(\\str{w}'\\)\nsatisfies that \\(xw' = xu'b = xub = xw \\ne x\\).\nIn particular, \\(\\str{w}' \\ne \\varepsilon\\) and retains the first symbol of \\(\\str{u}'\\).\nFurthermore, since \\(\\str{u}'\\) is locally reduced on \\(J\\) and since\n\\(\\str{w}'\\) is a prefix of \\(\\str{u}'\\), \\(\\str{w}'\\) is also locally reduced on \\(J\\).\nAlso, if \\(y \\in J\\), then \\(yw = yub = yu'b = yw'\\).\n\nSuppose now that \\(\\str{u}'\\str{b}\\) is freely reduced.\nBy Lemma \\ref{PingPong}, \\(yu = yu' \\in \\operatorname{dest}(\\str{u}')\\) for all \\(y \\in J\\).\nIf \\(\\operatorname{dest}(\\str{u}')\\) is disjoint from \\(\\operatorname{supt}(b)\\), then\n\\(yu' = yu'b = yub = yw\\) for all \\(y \\in J\\).\nSince \\(\\str{u}'\\) is locally reduced, we are again done in this case.\nIn the remaining case, \\(\\operatorname{dest}(\\str{u}') \\subseteq \\operatorname{supt}(b)\\), so that\n\\(\\str{w}' = \\str{u}'\\str{b}\\) is locally reduced at all \\(y \\in J\\).\nSince for all \\(y \\in J\\), \\(yu = yu' \\ne yu'b = yub = yw\\), we have that\n\\(\\str{w}'\\) is locally reduced on \\(J\\).\nClearly \\(J(\\str{w}') = J(\\str{u}') = J(\\str{w})\\) and we are finished.\n\\end{proof}\n\\noindent\nLemma \\ref{FellowTraveler1} has two immediate consequences which will be easier to\napply directly.\n\n\\begin{lemma} \\label{FellowTraveler2}\nIf \\(\\str{w}\\) is a word and\nthere is an \\(s \\in J:=J(\\str{w})\\) with finite history such that\n\\(\\str{w}\\) is a minimal return word for \\(s\\),\nthen \\(w\\) is the identity on \\(J\\).\n\\end{lemma}\n\n\\begin{proof}\nLet \\(\\str{w} = \\str{ua}\\) and let \\(\\str{u}'\\) be locally reduced at \\(s\\) with \\(su' = su\\).\nSince \\(sw = su'a = s\\), it must be that \\(\\str{u}' = \\str{a}^{-1}\\).\nBy Lemma \\ref{FellowTraveler1},\n\\(t u' = tu\\) whenever \\(t \\in J\\).\nThus \\(tw = tu' a = ta^{-1} a = t\\) for all \\(t \\in J\\).\n\\end{proof}\n\n\\begin{lemma} \\label{FellowTraveler3}\nIf \\(s,t \\in 
I\\) have finite histories and \\(\\eta(s) = \\eta(t)\\), then\nany return word for \\(s\\) is a return word for \\(t\\).\nIf moreover \\(s\\) and \\(t\\) have trivial history and\n\\(\\str{w} \\ne \\varepsilon\\) is not a return word for \\(s\\),\nthen there is an \\(a \\in A^\\pm\\) such that \\(\\{sw,tw\\} \\subseteq \\operatorname{dest}(a)\\).\n\\end{lemma}\n\n\\begin{proof}\nIf there is a minimal return prefix of \\(\\str{w}\\) for \\(s\\), then by Lemma \\ref{FellowTraveler2},\nit is also a return prefix for \\(t\\).\nBy iteratively removing minimal return prefixes for \\(s\\), \nwe may assume \\(\\str{w}\\) has no return prefixes for \\(s\\).\nObserve that since \\(\\eta(s) = \\eta(t)\\), \\(\\{s,t\\} \\subseteq J(\\str{w})\\).\nLemma \\ref{FellowTraveler1} now yields the desired conclusion.\n\\end{proof}\n\n\nThe following theorem shows that the restriction of the action of \\(\\gen{A}\\) to \\(M\\gen{A}\\) is faithful.\n\n\\begin{thm}\\label{Faithful}\nSuppose that \\(A \\subseteq {\\operatorname{Homeo}_+(I)}\\)\nis a (possibly infinite) geometrically fast set of positive bump functions, equipped with a marking.\nIf \\(g \\in \\gen{A}\\) is not the identity, then\nthere is a \\(y \\in M \\gen{A}\\) such that \\(y g \\ne y\\).\n\\end{thm}\n\n\\begin{remark}\nIf \\(A\\) is finite and equipped with the canonical marking, then\nthe cardinality of \\(M\\) is the sum of the number of maximal stretched transition chains in \\(A\\)\nand the number of isolated elements of \\(A\\).\n\\end{remark}\n\n\\begin{proof}\nFirst observe that if there is an \\(x \\in I\\) such that \\(xg \\ne x\\), then by continuity of \\(g\\),\nthere is an \\(x \\in I\\) such that \\(xg \\ne x\\) and \\(x\\) is not in the orbit of a transition point or\nmarker (orbits are countable and neighborhoods are uncountable).\nFix such an \\(x\\) and a word \\(\\str{w}\\) representing \\(g\\) for the duration of the proof.\nThe proof of the theorem breaks into two cases, depending on whether 
\\(x\\) has\nfinite history.\n\nWe will first handle the case in which \\(x\\) has trivial history;\nthis will readily yield the more general case in which \\(x\\) has finite history.\nSuppose that \\(x \\not \\in \\operatorname{src}(a)\\) for all \\(a \\in A\\).\nLet \\(y < x\\) be maximal such that \\(y\\) is either a transition point or a marker.\n\n\\begin{claim} \\label{NotSplit}\n\\(y\\) has trivial history and\n\\(\\tilde x = \\tilde y\\).\n\\end{claim}\n\n\\begin{proof}\nFirst suppose that \\(y\\) is the marker of some \\(a \\in A\\).\nNotice that by our assumption of maximality of \\(y\\), the right transition point\nof \\(a\\) is greater than \\(y\\).\nIn this case, both \\(x\\) and \\(y\\) are in the support of \\(a\\).\nFurthermore, observe that \\(y\\) is not in the foot of any \\(b \\in A\\).\nTo see this, notice that this would only be possible if \\(y\\) is in the right foot of some \\(b\\).\nHowever since \\(x\\) is not in the right foot of \\(b\\),\nthe right transition point of \\(b\\) would then be less than \\(x\\),\nwhich would contradict the maximal choice of \\(y\\).\nFinally, if \\(b \\in A \\setminus \\{a\\}\\), the maximal choice of \\(y\\) implies that\n\\(x\\) is in the support of \\(b\\) if and only if \\(y\\) is.\n\nIf \\(y\\) is a transition point of some \\(a \\in A\\), then \\(y\\) must be the right transition point\nof \\(a\\) since otherwise our maximality assumption on \\(y\\) would imply that\n\\(x\\) is in the left foot of \\(a\\), contrary to our assumption that \\(x\\) has trivial history.\nIn this case neither \\(x\\) nor \\(y\\) are in the support of \\(a\\).\nIf \\(b \\in A \\setminus \\{a\\}\\), then our maximality assumption on \\(y\\) implies\nthat \\(\\{x,y\\}\\) is either contained in or disjoint from the support of \\(b\\).\nTo see that \\(y\\) has trivial history, observe that the only way a transition point\ncan be in a foot is for it to be the left endpoint of a right foot.\nIf \\(y\\) is the left endpoint of a 
right foot,\nthen our maximal choice of \\(y\\) would mean that \\(x\\) is also in this foot, which is contrary\nto our assumption.\nThus \\(y\\) must have trivial history.\n\\end{proof}\n\nBy Claim \\ref{NotSplit} and Lemma \\ref{FellowTraveler3}, \\(yw \\ne y\\).\nIf \\(y\\) is a marker, we are done.\nIf \\(y\\) is a transition point of some \\(a \\in A\\), then as noted above it is the right transition point of \\(a\\).\nIf \\(s\\) is the marker of \\(a\\), then \\(sa^k \\to y\\) and by continuity \\(sa^k w \\to yw\\).\nThus for large enough \\(k\\), \\(sa^k w \\ne sa^k\\).\nSince \\(sa^k \\in M\\gen{A}\\), we are done in this case.\n\nNow suppose that \\(x\\) has finite history and let \\(\\str{\\tilde{t}u} \\in \\eta(x)\\) with \\(t \\in I\\) and \\(tu = x\\).\nBy definition of \\(\\eta(x)\\), \\(t\\) has trivial history.\nSince \\(xw \\ne x\\), we have that \\(tuw \\ne tu\\) and hence \\(tuwu^{-1} \\ne t\\).\nIt follows from the previous case\nthat there is an \\(s \\in M\\gen{A}\\) such that \\(s uwu^{-1} \\ne s\\).\nWe now have that \\(y :=su\\) is in \\(M\\gen{A}\\) and satisfies \\(yw \\ne y\\) as desired.\n\nFinally, suppose that \\(\\eta(x)\\) is infinite.\nLet \\(\\str{u} \\in \\eta(x)\\) be longer than \\(\\str{w}\\), let \\(s\\) be the marker\nfor the initial symbol of \\(\\str{u}\\), and set \\(y := xu^{-1}\\) and \\(t := su\\).\nSince \\(\\str{u}\\) is locally reduced at \\(y\\) by assumption,\nLemma \\ref{threaded} implies that \\(\\str{u}\\) is locally reduced at \\(s\\).\nBy Lemma \\ref{NoReturn}, \\(\\str{uw}\\) has no return prefix for \\(s\\).\nLet \\(\\str{v}\\) be locally reduced at \\(s\\) such that \\(suw = sv\\).\nApplying Lemma \\ref{FellowTraveler1} to \\(\\str{uw}\\), \\(s\\) and \\(\\str{v}\\), we\ncan conclude that \\(J(\\str{v}) = J(\\str{uw})\\), that\n\\(\\str{v}\\) is locally reduced at \\(y\\), and \\(yuw = yv\\).\nNotice that since \\(xw \\ne x\\), \\(yuw = yv \\ne yu\\) and in particular \\(\\str{v} \\ne \\str{u}\\).\nBy 
Lemma \\ref{TreeAction}, we have that \\(t = su \\ne sv = suw = tw\\).\nThis finishes the proof of Theorem \\ref{Faithful}.\n\\end{proof}\n\nWe finish this section with two lemmas which concern multi-orbital homeomorphisms but \nwhich otherwise fit the spirit of this section.\nThey will be needed in Section \\ref{ExcisionSec}.\n\n\\begin{lemma} \\label{FindPrefix}\nSuppose that \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is geometrically fast and equipped with a fixed marking.\nLet \\(A\\) be the set of bumps used in \\(X\\) and \\(s \\in M\\).\nIf \\(\\str{w}\\) is an \\(X\\)-word and \\(\\str{u}\\) is an \\(A\\)-word which is locally reduced at \\(s\\)\nand satisfies \\(su = sw\\), then for every prefix \\(\\str{u}'\\) of \\(\\str{u}\\), there is a prefix \\(\\str{w}'\\) of\n\\(\\str{w}\\) such that \\(su' = sw'\\). \n\\end{lemma}\n\n\\begin{proof}\nLet \\(\\str{w}_i\\) be the prefix of \\(\\str{w}\\) of length \\(i\\) for \\(i \\leq |\\str{w}|\\) and let\n\\(\\str{u}_i\\) be the unique word which is locally reduced at \\(s\\) such that\n\\(su_i = sw_i\\).\nNotice that if \\(\\str{u}_{i+1} \\ne \\str{u}_i\\), then \\(\\str{u}_{i+1}\\)\nis obtained by inserting or deleting a single symbol at\/from the end of \\(\\str{u}_i\\).\nIt follows that all prefixes of \\(\\str{u}\\) occur among the \\(\\str{u}_i\\)'s.\n\\end{proof}\n\nIf \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\), then an element \\(s\\) of \\(I\\) is defined to have finite history with respect to \\(X\\)\nif it has finite history with respect to the set of bumps used in \\(X\\).\nThe meaning of \\emph{return word} is unchanged in the context of \\(X\\)-words.\n\n\\begin{lemma} \\label{X_to_A}\nSuppose that \\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) is geometrically fast, equipped with a fixed marking, and that \\(A\\)\ndenotes the set of bumps used in \\(X\\).\nLet \\(\\str{w} \\ne \\varepsilon\\) be an \\(X\\)-word and \\(\\str{a} \\in A^\\pm\\) be a signed bump of\nthe first symbol of 
\\(\\str{w}\\).\nIf \\(s \\in J: = J(\\str{a})\\) and \\(\\str{w}\\) has no proper return prefix for \\(s\\),\nthen there is an \\(A\\)-word \\(\\str{v}\\) which begins with \\(\\str{a}\\) and is such that\n\\(v\\) and \\(w\\) coincide on \\(J\\).\n\\end{lemma}\n\n\\begin{proof}\nThe proof is by induction on the length of \\(\\str{w}\\).\nObserve that the lemma is trivially true if \\(\\str{w}\\) has length at most \\(1\\).\nTherefore suppose that \\(\\str{w} = \\str{u g}\\) with \\(g \\in X^\\pm\\) and \\(\\str{u} \\ne \\varepsilon\\).\nLet \\(\\str{v}'\\) be an \\(A\\)-word which begins with \\(\\str{a}\\) and is such that \\(v'\\) and \\(u\\) agree on \\(J\\).\nSince \\(\\str{u}\\) has no return prefix, Lemma \\ref{FindPrefix} implies that \\(\\str{v}'\\) has no return prefix.\nLet \\(\\str{v}''\\) be a subword of \\(\\str{v}'\\) which is locally reduced at \\(s\\) and satisfies\n\\(sv'' = sv'\\).\nBy Lemma \\ref{FellowTraveler1}, \\(\\str{v}''\\) is locally reduced on \\(J\\) and\n\\(J(\\str{v}'') = J(\\str{v}') = J\\).\nIn particular, \\(J u \\subseteq \\operatorname{dest}(\\str{v}'')\\).\nIf the support of \\(g\\) is disjoint from \\(\\operatorname{dest}(\\str{v}'')\\), set \\(\\str{v}:=\\str{v}'\\).\nOtherwise,\nlet \\(b \\in A^\\pm\\) be the signed bump of \\(g\\) such that \\(\\operatorname{dest}(\\str{v}'') \\subseteq \\operatorname{supt}(b)\\)\nand set \\(\\str{v}:=\\str{v}' \\str{b}\\).\nObserve that \\(\\str{v}\\) satisfies the conclusion of the lemma.\n\\end{proof}\n\n\n\n\\section[The isomorphism theorem]{The isomorphism theorem for geometrically fast generating sets}\n\n\\label{CombToIsoSec}\n\nAt this point we have developed all of the tools needed to prove Theorem \\ref{CombToIso},\nwhose statement we now recall.\n\n\\begin{customthm}{\\ref{CombToIso}}\nIf two geometrically fast sets \\(X , Y \\subseteq {\\operatorname{Homeo}_+(I)}\\) have only finitely many transition points\nand have isomorphic dynamical diagrams, then the induced\nbijection 
between \\(X\\) and \\(Y\\) extends to an isomorphism of\n\\(\\gen{X}\\) and \\(\\gen{Y}\\) (i.e. \\(\\gen{X}\\) is \\emph{marked isomorphic} to \\(\\gen{Y}\\)).\nMoreover, there is an order preserving bijection\n\\(\\theta : M \\gen{X} \\to M \\gen{Y}\\) such that\n\\(f \\mapsto f^\\theta\\) induces the isomorphism \n\\(\\gen{X} \\cong \\gen{Y}\\).\n\\end{customthm}\n\nObserve that it is sufficient to prove this theorem in the special case when\n\\(X\\) and \\(Y\\) are finite geometrically fast collections of positive bumps:\nif \\(A\\) and \\(B\\) are the bumps used in \\(X\\) and \\(Y\\) respectively,\nthen the dynamical diagrams of \\(A\\) and \\(B\\) are isomorphic and\nthe isomorphism of \\(\\gen{A}\\) and \\(\\gen{B}\\) restricts to \na marked isomorphism of \\(\\gen{X}\\) and \\(\\gen{Y}\\).\n\nFix, for the moment, a finite geometrically fast set of positive bumps \\(X\\).\nAs we have noted, it is a trivial matter, given a word \\(\\str{w}\\) and a \\(t \\in M\\),\nto find a subword \\(\\str{w}'\\) which is locally reduced at \\(t\\) and satisfies \\(t w = tw'\\).\nTheorem \\ref{CombToIso} will fall out of an analysis of a question of\nindependent interest:\nhow does one determine \\(\\str{w}'\\) from \\(\\str{w}\\)\nand \\(t\\) using only the dynamical diagram of \\(X\\)?\nToward this end, we define \\(\\str{\\tilde{t}w}\\) to be a \\emph{local word} if \\(t \\in M\\) and \\(\\str{w}\\) is a word.\n(Notice that \\(t \\mapsto \\tilde t\\) is injective on \\(M\\);\nthe reason for working with \\(\\tilde t\\) is in anticipation\nof a more general definition in Section \\ref{AbstractPingPongSec}.)\nA local word \\(\\str{\\tilde{t}w}\\) is \\emph{freely reduced} if \\(\\str{w}\\) is.\nIt will be convenient to adopt the convention that \\(\\operatorname{dest}(\\str{\\tilde{t}}) = \\{t\\}\\) if\n\\(t \\in M\\).\nDefine \\(\\Lambda = \\Lambda_X\\) to be the set of all freely reduced local words \\(\\str{\\tilde{t}w}\\)\nsuch that if \\(\\str{ab}\\) are consecutive 
symbols in \\(\\str{\\tilde{t}w}\\), then\nthe destination of \\(a\\) is between \\(\\operatorname{src}(b)\\) and \\(\\operatorname{dest}(b)\\) in the diagram's ordering.\nNotice that the assertion that \\(\\str{\\tilde{t}w}\\) is in \\(\\Lambda\\) can be formulated as an\nassertion about \\(\\str{w}\\), the element of \\(X\\) which has \\(t\\) as a marker and\nthe dynamical diagram of \\(X\\).\n\nEvery local word \\(\\str{\\tilde{t}w}\\)\ncan be converted into an element of \\(\\Lambda\\) by iteratively\nremoving symbols by the following procedure:\nif \\(\\str{ab}\\) is the first consecutive pair in \\(\\str{\\tilde{t}w}\\) which witnesses that it\nis not in \\(\\Lambda\\), then:\n\\begin{itemize}\n\n\\item if \\(b = a^{-1}\\) then delete the pair \\(\\str{ab}\\);\n\n\\item if \\(b \\ne a^{-1}\\) then delete \\(\\str{b}\\).\n\n\\end{itemize}\nObserve that since the first symbol of a local word is not removed by this procedure,\nthe result is still a local word.\nThe \\emph{local reduction} of a local word \\(\\str{\\tilde{t}w}\\) is the result of applying\nthis procedure to \\(\\str{\\tilde{t}w}\\) until it terminates at an element of \\(\\Lambda\\).\nThe following lemma admits a routine proof by induction, which we omit.\n\n\\begin{lemma} \\label{LambdaLem}\nSuppose that \\(X\\) is a geometrically fast set of positive bumps.\nIf \\(t \\in M\\) and \\(\\str{w}\\) is a word,\nthen \\(\\str{w}\\) is locally reduced at \\(t\\) if and only if \\(\\str{\\tilde{t}w}\\)\nis in \\(\\Lambda\\).\nMoreover, if \\(\\str{w}'\\) is such that \\(\\str{\\tilde{t}w}'\\)\nis the local reduction of \\(\\str{\\tilde{t}w}\\),\nthen \\(tw'\\) and \\(tw\\) coincide.\n\\end{lemma} \n\nNext, order \\(A^\\pm\\) so that if \\(a,b \\in A^\\pm\\), then \\(a\\) is less than \\(b\\) if every element\nof \\(\\operatorname{dest}(a)\\) is less than every element of \\(\\operatorname{dest}(b)\\).\nOrder \\(\\Lambda\\) with the \\emph{reverse lexicographic order}:\nif \\(\\str{uw}\\) and 
\\(\\str{vw}\\) are in \\(\\Lambda\\) and the last symbol of\n\\(\\str{u}\\) is less than that of \\(\\str{v}\\), then we declare\n\\(\\str{uw}\\) less than \\(\\str{vw}\\).\nRecall that the \\emph{evaluation map} on \\(\\Lambda\\) is the function\nwhich assigns the value \\(tu\\) to each string \\(\\str{\\tilde{t}u} \\in \\Lambda\\).\n(This is well defined since \\(t \\mapsto \\str{\\tilde{t}}\\) is injective on \\(M\\).)\nThis order is chosen so that the following lemma is true.\n\n\\begin{lemma} \\label{RevLex}\nThe evaluation map defined on \\(\\Lambda\\) is order preserving.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that \\(\\str{uw}\\) and \\(\\str{vw}\\) are in \\(\\Lambda\\) and the last symbol of \n\\(\\str{u}\\) is less than the last symbol of \\(\\str{v}\\).\nObserve that by Lemma \\ref{PingPong}, the evaluation of \\(\\str{u}\\)\nis an element of its destination.\nThus if the destination of \\(\\str{u}\\) is less than the destination of \\(\\str{v}\\),\nthen this is true of their evaluations as well.\nSince \\(t \\mapsto tw\\) is order preserving, we are done.\n\\end{proof}\n\nNow we are ready to prove Theorem \\ref{CombToIso}.\n\\begin{proof}[Proof of Theorem \\ref{CombToIso}]\nAs noted above, we may assume that \\(X\\) and \\(Y\\) are geometrically fast families of\npositive bump functions with isomorphic dynamical diagrams.\nBy Theorem \\ref{Faithful}, we know that \\(\\gen{X} \\restriction \\big(M\\gen{X}\\big)\\)\nis marked isomorphic to \\(\\gen{X}\\); similarly \\(\\gen{Y} \\restriction \\big(M\\gen{Y}\\big)\\) is marked\nisomorphic to \\(\\gen{Y}\\).\nIt therefore suffices to define an order preserving bijection \\(\\theta: M\\gen{X} \\to M\\gen{Y}\\)\nsuch that \\(\\theta(sa) = \\theta(s) \\tau (a)\\), where \\(s \\in M\\gen{X}\\) and \\(a \\in X\\) and\nwhere \\(\\tau : X \\to Y\\) is the bijection induced by the isomorphism of the dynamical diagrams of \\(X\\) and \\(Y\\).\n\nDefine \\(\\mu : M_X \\to M_Y\\) by \\(\\mu(s) = t\\) if \\(s\\) is 
the marker for \\(a \\in X\\) and \\(t\\) is the marker\nfor \\(\\tau(a) \\in Y\\).\nLet \\(\\lambda\\) denote the translation of local \\(X\\)-words into local \\(Y\\)-words\ninduced by \\(\\mu\\) and \\(\\tau\\).\nDefine \\(\\theta : M\\gen{X} \\to M\\gen{Y}\\) so that \\(\\theta(tu)\\) is the evaluation of \\(\\lambda (\\str{tu})\\)\nfor \\(\\str{\\tilde{t}u} \\in \\Lambda_X\\).\nThis is well defined by Lemmas \\ref{DisjointOrbits}, \\ref{TreeAction}, and \\ref{LambdaLem}.\nBy Lemma \\ref{RevLex} and the fact that\n\\(\\lambda\\) preserves the reverse lexicographic order,\n\\(\\theta\\) is order preserving.\n\nNow suppose that \\(s \\in M\\gen{X}\\) and \\(a \\in X^\\pm\\).\nFix \\(\\str{\\tilde{t}u} \\in \\Lambda_X\\) such that \\(s = tu\\) and let\n\\(\\str{\\tilde{t}v} \\in \\Lambda_X\\) be the local reduction of \\(\\str{\\tilde{t}ua}\\).\nObserve that on one hand \\(\\theta(sa) = \\theta (tv)\\) is the evaluation of \\(\\lambda(\\str{\\tilde{t}v})\\).\nOn the other hand \\(\\theta(s)\\tau(a)\\) is the evaluation of \\(\\lambda(\\str{\\tilde{t}ua})\\).\nSince \\(\\lambda\\) is induced by an isomorphism of dynamical diagrams, it satisfies\nthat \\(\\str{w}'\\) is the local reduction of \\(\\str{w}\\) if and only if \\(\\lambda(\\str{w}')\\) is the local\nreduction of \\(\\lambda(\\str{w})\\).\nIn particular, \\(\\lambda(\\str{\\tilde{t}v})\\) is the local reduction of \\(\\lambda(\\str{\\tilde{t}ua})\\).\nBy Lemma \\ref{LambdaLem}, these local \\(Y\\)-words have the same evaluation and\ncoincide with \\(\\theta(sa)\\) and \\(\\theta(s) \\tau(a)\\), respectively.\nThis completes the proof of Theorem \\ref{CombToIso}.\n\\end{proof}\n\nAs we noted in the introduction Theorem \\ref{CombToIso} has two immediate consequences.\nFirst, geometrically fast sets \\(X = \\{f_i \\mid i < n\\} \\subseteq {\\operatorname{Homeo}_+(I)}\\) with finitely\nmany transition points are \\emph{algebraically fast}:\nif \\(1 \\leq k_i\\) for each \\(i < n\\), then\n\\(\\gen{f_i \\mid 
i < n}\\) is marked isomorphic to \\(\\gen{f^{k_i}_i \\mid i < n}\\).\nThe reason for this is that the dynamical diagrams associated to\n\\(\\{f_i \\mid i < n\\}\\) and \\(\\{f_i^{k_i} \\mid i < n\\}\\) are isomorphic.\nSecond, since the dynamical diagram of any geometrically fast set with finitely many transition points\ncan be realized by a geometrically fast subset of \\(F\\) (see, e.g., \\cite[Lemma 4.2]{CFP}),\nevery group admitting a finite geometrically fast generating set can be embedded into \\(F\\).\nThe notion of \\emph{history} in the previous section is revisited in Section \\ref{AbstractPingPongSec}\nwhere it is used to prove a relative of Theorem \\ref{CombToIso}.\n\n\nMore evidence of the restrictive nature of geometrically fast generating sets\ncan be found in \\cite{kim+koberda+lodha} where groups generated by stretched\ntransition chains \\(C\\) as defined in Section \\ref{FastBumpsSec} are\nconsidered under the weaker assumption\n that consecutive pairs of elements in \\(C\\) are geometrically fast.\nGroups generated by such a \\(C\\) with \\(n\\) elements are called\n{\\itshape \\(n\\)-chain groups}.\nIt is proven in\n\\cite{kim+koberda+lodha} that every \\(n\\)-generated subgroup of\n\\({\\operatorname{Homeo}_+(I)}\\) is a subgroup of an \\((n+2)\\)-chain group.\nAnother result of \\cite{kim+koberda+lodha} is that for each\n\\(n\\ge3\\), there are uncountably many isomorphism types of\n\\(n\\)-chain groups. 
\nBy contrast, Theorem \\ref{CombToIso} (with Corollary \\ref{Excision} below) implies that\nthe number of isomorphism types of groups with finite, geometrically fast generating sets is\ncountable because the number of isomorphism types of dynamical\ndiagrams is countable.\n\n\n\n\\section[semi-conjugacy]{Minimal representations of\ngeometrically fast groups and topological semi-conjugacy}\n\n\\label{SemiConjSec}\n\nTheorem \\ref{CombToIso} partitions the subgroups of \\({\\operatorname{Homeo}_+(I)}\\)\ngenerated by geometrically fast sets with finitely many transition points: two such sets are\nconsidered equivalent if their dynamical diagrams are isomorphic.\nIn this section we show that each class contains a (nonunique)\nrepresentative \\(Y\\) so that for each \\(X\\) in the class there is a\nmarked isomorphism \\(\\phi : \\gen{X} \\to \\gen{Y}\\) which is\ninduced by a semi-conjugacy on \\(I\\).\nSpecifically, the bijection \\(\\theta : M\\gen{X} \\to M\\gen{Y}\\) of Theorem \\ref{CombToIso}\nextends to a continuous order preserving surjection \\(\\hat{\\theta}:I \\to I\\)\nso that for all \\(f\\in \\gen{X}\\) we have \\(f \\hat{\\theta} =\\hat{\\theta}\\phi(f)\\).\nNotice that in this situation, the graph of \\(\\phi(f)\\) is the image of the graph of \\(f\\) under the\ntransformation \\((x,y) \\mapsto (x\\hat\\theta,y\\hat\\theta)\\).\nWe will refer to such a \\(Y \\subseteq {\\operatorname{Homeo}_+(I)}\\) as \\emph{terminal}.\nTheorem \\ref{SemiConjThm} can now be stated as follows.\n\n\\begin{thm}\nEach dynamical diagram \\(D\\) can be realized by a terminal \\(X_D \\subseteq {\\operatorname{PL}_+(I)}\\).\n\\end{thm}\n\n\\begin{proof}\nAs in the proof of Theorem \\ref{CombToIso}, it suffices to prove the theorem\nunder the assumption that all bumps in \\(D\\) are positive and all labels are distinct.\nFurthermore, by Proposition \\ref{IsolatedFixProp}, we may assume that \\(D\\) has no isolated bumps.\nLet \\(n\\) denote the number of bumps of \\(D\\), set 
\\(\\ell := 1\/(2n)\\) and \n\\[\n\\mathcal{J} : = \\{ [i \\ell, (i+1) \\ell) \\mid 0 \\leq i < 2n \\},\n\\]\nobserving that \\(\\mathcal{J}\\) has the same cardinality as the set of feet of \\(D\\).\nOrder \\(\\mathcal{J}\\) by the order on the left endpoints of its elements.\nIf \\(i < 2n\\), we will say that the \\(i{}^{\\textrm{th}}\\) interval in \\(\\mathcal{J}\\) \\emph{corresponds} to the \\(i{}^{\\textrm{th}}\\) foot\nof \\(D\\).\n\nFor \\(i < n\\), let \\((x_i,s_i)\\) and \\((t_i,y_i)\\) be the intervals in \\(\\mathcal{J}\\) which correspond to the left and right feet of\nthe \\(i{}^{\\textrm{th}}\\) bump of \\(D\\), respectively.\nNote that since \\(D\\) has no isolated bumps, \\(s_i < t_i\\).\nDefine \\(b_i\\) to be the bump which has support \\((x_i,y_i)\\), maps \\(s_i\\) to \\(t_i\\) and is linear on\n\\((x_i,s_i)\\) and \\((s_i,y_i)\\) --- see Figure \\ref{PLBumpFig}.\n\\begin{figure}\n\\[\n\\xy\n(0,0); (30,30)**@{-};\n(3,3); (11,19)**@{-}; (27,27)**@{-}; (11,11); (11,19)**@{.};\n(19,19)**@{.};\n(3,3); (3,1)**@{-}; (3,-1)*{x_i};\n(11,11); (11,9)**@{-}; (11,7)*{s_i};\n(19,19); (19,17)**@{-}; (19,15)*{t_i};\n(27,27); (27,25)**@{-}; (27,23)*{y_i};\n\\endxy\n\\]\n\\caption{The function \\(b_i\\).\\label{PLBumpFig}}\n\\end{figure}\nIf we assign \\(b_i\\) the marker \\(s_i\\), then \nthe feet of \\(b_i\\) are either in \\(\\mathcal{J}\\) or are the interior of an element of \\(\\mathcal{J}\\);\nin particular, the feet of \\(X_D = \\{b_i \\mid i < n\\}\\) are disjoint.\nThus \\(X_D\\) is geometrically fast and has dynamical diagram isomorphic to \\(D\\).\n\nNotice that the feet \\((x_i,s_i)\\) and \\([t_i,y_i)\\) of \\(b_i\\)\nare each intervals of length \\(\\ell\\) contained in\n\\(I\\) while the middle interval \\([s_i,t_i)\\) is of length \\(m\\ell\\) for some positive integer \\(m\\).\nMoreover, since \\(D\\) has no isolated bumps,\nthere is an interval of \\(\\mathcal{J}\\) between \\(s_i\\) and \\(t_i\\);\nin particular, \\(t_i - s_i \\geq 
\\ell\\).\nIt follows that the slope of the graph of \\(b_i\\)\non its source \\((x_i,s_i)\\) and the slope of the graph of \\(b_i^{-1}\\)\non its source \\([t_i,y_i)\\) are both at least 2.\n\n\n\\begin{claim}\\label{OrbitDensity}\nIf \\(X_D\\) is the set of positive bumps constructed above, then \\(M\\gen{X_D}\\) is dense in \\(I\\).\n\\end{claim}\n\n\\begin{remark}\nNote that we are working under the assumption that \\(D\\) has no isolated elements.\nIf \\(D\\) has isolated bumps, then \\(M\\gen{X_D}\\) can not be dense.\n\\end{remark}\n\n\\begin{proof} \nSince every transition point of \\(X_D\\) is in the closure of \\(M\\gen{X_D}\\),\nit suffices to show that if \\(0 \\leq p < q \\leq 1\\), then \n\\((p,q)f\\) contains an endpoint of an interval in \\(\\mathcal{J}\\) for some \\(f \\in \\gen{X_D}\\).\nThe proof is by induction on the minimum \\(k \\geq 0\\) such that \\(\\ell 2^{-k} < q-p\\).\nObserve that if \\(k = 0\\), then \\(q-p > \\ell\\) and thus \\((p,q)\\) must contain an endpoint of\nan interval in \\(\\mathcal{J}\\).\n\nNext observe that if \\((p,q)\\) does not contain an endpoint of an element of \\(\\mathcal{J}\\),\nthen \\((p,q)\\) is contained in the foot of some \\(b_i\\) for \\(i < n\\).\nIf \\((p,q) \\subseteq \\operatorname{src}(b_i)\\), then since the derivative of \\(b_i\\) is at least 2 on its source,\nit follows that \\((p,q)b_i\\) is at least twice as long as \\((p,q)\\).\nBy our induction hypothesis, there is an \\(f \\in \\gen{X_D}\\) such that\n\\((p,q)b_i f\\) contains an endpoint of \\(\\mathcal{J}\\).\nSimilarly, if \\((p,q) \\subseteq \\operatorname{src}(b_i^{-1})\\), then \\((p,q)b_i^{-1}\\) is at least twice\nas long as \\((p,q)\\) and we can find an \\(f \\in \\gen{X_D}\\) such that\n\\((p,q)b_i^{-1} f\\) contains an endpoint of \\(\\mathcal{J}\\).\n\\end{proof}\n\nIn order to see that \\(X_D\\) is terminal, let\n\\(X \\subseteq {\\operatorname{Homeo}_+(I)}\\) be geometrically fast, have finitely many transition points, and have a 
dynamical diagram\nisomorphic to \\(D\\).\nLet \\(\\theta : M\\gen{X} \\to M\\gen{X_D}\\) be order preserving and satisfy that\n\\(t f \\theta = t \\theta \\phi(f)\\) for all \\(t \\in M\\gen{X}\\) and \\(f \\in \\gen{X}\\),\nwhere \\(\\phi : \\gen{X} \\to \\gen{X_D}\\) is the marked isomorphism.\nDefine \\(\\hat \\theta : I \\to I\\) by\n\\[\nx \\hat \\theta := \\sup \\{t\\theta \\mid t \\in M\\gen{X} \\textrm{ and } t \\leq x\\}\n\\]\nwhere we adopt the convention that \\(\\sup \\emptyset = 0\\).\nClearly \\(\\hat \\theta : I \\to I\\) is order preserving and extends \\(\\theta\\).\nIn particular its range contains \\(M\\gen{X_D}\\), which by\nClaim \\ref{OrbitDensity} is dense in \\(I\\).\nIt follows that \\(\\hat \\theta\\) is a continuous surjection (\\emph{any} order preserving map from \\(I\\) to \\(I\\) with\ndense range is a continuous surjection).\nThat \\(x f \\hat \\theta = x \\hat \\theta \\phi (f)\\) follows from\nthe fact that this is true for \\(x \\in M\\gen{X}\\) and from the continuity of \\(f\\) and \\(\\phi(f)\\).\nThis completes the proof that \\(X_D\\) is terminal.\n\\end{proof}\n\n\\section{Fast generating sets for the groups \\(F_n\\)}\n\\label{FnSec}\n\nIn this section we will give explicit generating sets\nfor some well known variations of Thompson's group \\(F\\).\nFirst notice that since \\((0,1)\\) is homeomorphic to \\({\\mathbf R}\\) by an order preserving map,\nall of the analysis of geometrically fast subsets of \\({\\operatorname{Homeo}_+(I)}\\) transfers to \\({\\operatorname{Homeo}_+(\\R)}\\)\n(with the caveat that \\(\\pm \\infty\\) must be considered as possible\ntransition points of elements of \\({\\operatorname{Homeo}_+(\\R)}\\); see Example \\ref{x+1_x^3_ex} below).\nFix an integer \\(n\\ge2\\).\nFor \\(i < n\\), let \\(g_i\\) be a homeomorphism from \\({\\mathbf R}\\) to itself defined by:\n\\[\ntg_i := \\begin{cases}\nt & \\textrm{if } t\\le i \\\\\ni+n(t-i) & \\textrm{if } i\\le t\\le i+1 \\\\\nt+(n-1) & \\textrm{if } i+1 \\le 
t.\n\\end{cases}\n\\]\nIn words, \\(g_i\\) is the identity below \\(i\\), has constant slope\n\\(n\\) on the interval \\([i,i+1]\\), and is translation by \\(n-1\\)\nabove \\(i+1\\).\nWe will use \\(F_n\\) to denote \\(\\gen{g_i \\mid i < n}\\).\nThe group \\(F_2\\) is one of the standard\nrepresentations of Thompson's group \\(F\\).\n(The more common representation of \\(F\\) is as a set of piecewise linear\nhomeomorphisms of the unit interval \\cite[\\S1]{CFP}.)\n\nThe groups \\(F_n\\) \\((n \\ge 2)\\) are discussed in\n\\cite[\\S4]{brown:finiteprop} where \\(F_n\\) is denoted\n\\(F_{n,\\infty}\\), and in \\cite[\\S2]{brin+fer} where \\(F_n\\) is\ndenoted \\(F_{n,0}\\).\nThe standard infinite presentation of \\(F_n\\) is given in \\cite[Cor. 2.1.5.1]{brin+fer}.\nIt follows easily from that\npresentation that the commutator quotient of \\(F_n\\) is a free abelian group of rank \\(n\\).\nIn particular the \\(F_n\\)'s are pairwise nonisomorphic.\n\nWe now describe an alternate generating set for \\(F_n\\) which \nconsists of \\(n\\) positive bump functions and is geometrically fast.\nFor \\(i 0\\), and let \\(x\\) be a point that contributes\nto \\(N(X)\\) with \\(g\\ne h\\) two functions in \\(X\\) with \\(x\\) a\ncommon left transition point or common right transition point of\nboth \\(g\\) and \\(h\\).\nSince \\(x\\in (0,1)\\), our assumption on the support of \\(G\\)\nimplies that there is an \\(f\\in X\\) with \\(xf \\ne x\\).\nLet \\(T\\) be the set of transition points of\n\\(g\\) that are moved by \\(f\\).\nSince each point in \\(T\\) is moved to infinitely\nmany places by powers of \\(f\\) and since the set of\ntransition points of elements of \\(X\\) is finite, we can find a\npower \\(f^k\\) of \\(f\\) so that no element of \\(T{f^k}\\) is a\ntransition point of an element of \\(X\\). 
\nLet \\(X'\\) be obtained from \\(X\\) by replacing \\(g\\) by \\(g^{f^k}\\).\nWe have arranged that \\(|X'|=|X|\\),\nthat \\(\\gen{X'} = \\gen{X}=G\\),\nthat there is only one element of \\(X'\\) outside the kernel of \\(\\varpi\\),\nand that \\(N(X')0\\) so that \\(g\\) moves all points in \\((0,\\epsilon)\\)\nand \\((1-\\epsilon,1)\\). It follows that for some \\(xy\\), and for this \\(h\\) we have \n\\(\\phi(g)\\) commutes with \\(\\phi(g)^h\\), but not with \\(h\\).\nNow, \\(g\\) commutes with \\(\\phi^{-1} (\\phi(g)^h)\\), but not with \\(\\phi^{-1}(h)\\), however, \\(\\phi^{-1}(\\phi(g)^h) = g^{\\phi^{-1}(h)}\\),\ncontradicting Claim \\ref{EquivCommute}.\n\\end{proof}\n\nAt this point we know that, by the geometric properness of \\(X\\), for each coordinate \\(i\\in\\{0,1\\}\\)\nthere is a unique element \\(h_i\\in X\\) so that\n\\(i\\) is in the extended support of \\(h_i\\) and so that \\(h_0\\ne h_1\\).\nThe fact that \\(h_0\\) and \\(h_1\\) are distinct and unique implies that\nthe image of \\(\\pi_H\\) is the product of the images of \\(\\pi_0\\) and \\(\\pi_1\\).\n\n\\begin{claim}\\label{claim_noFiniteGenSet}\nFor each element \\(h\\) of \\(X\\) whose extended support contains 0 or 1,\nthere is no finite subset \\(T\\) of the kernel of \\(\\pi_{H}\\) so that\n\\(\\ker(\\pi_{H}) \\subseteq \\gen{T\\cup \\{h\\}}\\).\n\\end{claim}\n\n\\begin{proof}\nLet \\(h\\) be given and suppose without loss of generality that \\(0\\) is\nin the extended support of \\(h\\).\nAs noted above, \\(1\\) can not be in the extended support of \\(h\\). 
\nSince \\(\\gen{X}'\\subseteq \\ker(\\pi_{H})\\),\nthere are nontrivial elements in \\(\\ker(\\pi_{H})\\).\nBecause the orbital of \\(H\\) is \\((0,1)\\),\nthere are points arbitrarily close to 0 and 1 moved by elements of\n\\(\\ker (\\pi_{H})\\).\nIt follows that if \\(T\\subseteq\n\\ker(\\pi_{H})\\) is finite, then there is a neighborhood of \\(1\\)\nfixed by all elements of \\(\\gen{T\\cup \\{h\\}}\\) and thus\n\\(\\gen{T\\cup \\{h\\}}\\) cannot contain all of \\(\\ker(\\pi_{H})\\).\n\\end{proof}\n\nIn order to finish the proof, it suffices to show that \\(\\pi_G\\left(\\phi^{-1}(h_0)\\right)\\) and\n\\(\\pi_G\\left(\\phi^{-1}(h_1)\\right)\\) generate the image of \\(\\pi_G\\) and that\neach has exactly one (necessarily different) nonzero coordinate. \nSince, by the proof of Claim \\ref{NonCyclicImage},\n\\(\\phi\\) induces an isomorphism \\(G\/G' \\cong H\/\\ker (\\pi_H)\\), it follows that\n\\(\\{\\pi_G(\\phi^{-1}(h_i)) \\mid i < 2\\}\\) must generate \\(G\/G'\\).\nOn the other hand, by Claims \\ref{BothGermsNontrivial} and \\ref{claim_noFiniteGenSet},\none coordinate\nof \\(\\pi_G(\\phi^{-1}(h_i))\\) must be 0 for each \\(i \\in \\{0,1\\}\\).\nThis shows that these two elements, which generate the group \\(K\\), together make a set of the form \\(\\left\\{(p,0),(0,q)\\right\\}\\) for some non-zero \\(p\\) and \\(q\\), and therefore \\(G \\cong F\\).\n\\end{proof}\n\n\\begin{remark} \nThe group \\(E=\\{(p,q)\\in{\\mathbf Z}\\times{\\mathbf Z}\\mid p+q \\equiv0 \\mod2\\}\\)\nis a subgroup of \\({\\mathbf Z} \\times {\\mathbf Z}\\) which is not of the form \\(P \\times Q\\) and hence\n\\(\\pi^{-1}_F (E)\\) is a finite index subgroup of \\(F\\) which is not isomorphic to \\(F\\).\nIn particular, there are finite index subgroups of \\(F\\) which do not admit\ngeometrically proper generating sets.\n\\end{remark}\n\n\\section{Abstract Ping-Pong Systems}\n\n\\label{AbstractPingPongSec}\n\nIn this section we will abstract the analysis of 
geometrically\nfast systems of bumps in previous sections to\nthe setting of permutations of a set \\(S\\).\n(By \\emph{permutation} of \\(S\\) we simply mean a bijection from \\(S\\) to \\(S\\).)\nOur goal will be to state the analog of Theorem \\ref{CombToIso} and its consequences.\nThe proofs are an exercise for the reader.\n\nSuppose now that \\(A\\) is a collection of permutations of a set \\(S\\) such that\n\\(A \\cap A^{-1} = \\emptyset\\).\nA \\emph{ping-pong system} on \\(A\\) is an assignment \\(a \\mapsto \\operatorname{dest}(a)\\) of sets\nto each element of \\(A^\\pm\\) such that whenever \\(a\\) and \\(b\\) are in \\(A^\\pm\\) and \\(s \\in S\\):\n\\begin{itemize}\n\n\\item \\label{basic_Dp}\n\\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(a)\\) and if \\(s \\in \\operatorname{supt}(a)\\), then\nthere is an integer \\(k\\) such that \\(s a^k \\in \\operatorname{dest} (a)\\);\n\n\\item \\label{ping-pong_cond}\nif \\(s \\in \\operatorname{supt}(a)\\), then \\(s a \\in \\operatorname{dest}(a)\\) if and only if \\(s \\not \\in \\operatorname{src}(a) : = \\operatorname{dest}(a^{-1})\\);\n\n\\item if \\(a \\ne b\\), then \\(\\operatorname{dest}(a) \\cap \\operatorname{dest}(b)\\) is empty;\n\n\\item if \\(\\operatorname{dest}(a) \\cap \\operatorname{supt}(b) \\ne \\emptyset\\), then \\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(b)\\).\n\n\\end{itemize}\n\\noindent\nThe following lemma summarizes some immediate consequences of this definition.\n\\begin{lemma}\nGiven a set \\(S\\) and a collection \\(A\\) of permutations of \\(S\\)\nequipped with a ping pong system, the following are true:\n\\begin{itemize}\n\n\\item if \\(a \\in A\\) and \\(s \\in \\operatorname{supt} (a)\\), then there is a unique \\(k \\in {\\mathbf Z}\\) such that:\n\\[\ns a^k \\in \\operatorname{supt}(a) \\setminus (\\operatorname{src}(a) \\cup \\operatorname{dest}(a))\\]\n\n\\item if \\(a \\in A\\), then \\(\\operatorname{dest}(a) a \\subseteq 
\\operatorname{dest}(a)\\);\n\n\\end{itemize}\nIn particular, all elements of \\(A\\) have infinite order.\n\\end{lemma}\nAs remarked in Section \\ref{FastBumpsSec}, geometrically fast sets of bumps admit\na ping-pong system.\nThe meanings of \\emph{source}, \\emph{destination}, and \\emph{locally reduced word}\nall readily adapt to this new context.\nFurthermore, the proofs of Lemmas \\ref{LocRedBasics}--\\ref{FellowTraveler3} given in Section \\ref{PingPongSec}\nuse only the axiomatic properties of a ping-pong system and thus these lemmas \nare valid in the present context.\nThe next example is simplistic, but it will serve to illustrate a number of points in this section.\n\n\\begin{example} \\label{PSL2Example}\nView the real projective line \\(\\mathbf{P}\\) as \\({\\mathbf R} \\cup \\{\\infty\\}\\) and \\(\\operatorname{PSL}_2({\\mathbf Z})\\) as a group of\nfractional linear transformations of \\(\\mathbf{P}\\).\nThe homeomorphisms \\(\\alpha\\) and \\(\\beta\\) of \\(\\mathbf{P}\\) defined by\n\\[\nt\\alpha := t + 1 \\qquad t \\beta := \\frac{t}{1-t}\n\\]\ngenerate \\(\\operatorname{PSL}_2({\\mathbf Z})\\).\nIf we take \\(A = \\{\\alpha^2,\\beta^2\\}\\), then\n\\[\n\\operatorname{src}(\\alpha^2) := (-\\infty,-1) \\qquad \\qquad \\operatorname{dest}(\\alpha^2) := [1,\\infty)\n\\]\n\\[\n\\operatorname{src}(\\beta^2) := (0,1) \\qquad \\qquad \\operatorname{dest}(\\beta^2) := [-1,0) \n\\]\ndefines a ping-pong system.\nIt is well known that \\(\\gen{\\alpha^2,\\beta^2}\\) is free;\nin fact this is one of the classical applications of the Ping-Pong Lemma.\n\\end{example}\n\nIn order to better understand \\(\\gen{A}\\) when \\(A\\) is a set of permutations admitting a\nping-pong system, it will be helpful to represent \\(\\gen{A}\\) as a family of homeomorphisms\nof a certain space \\(K_A\\).\nThis compact space can be thought of as a space of \\emph{histories} in the sense\nof Section \\ref{PingPongSec}. 
\nIf \\(S\\) is the underlying set which elements of \\(A\\) permute,\nlet \\(M = M_A\\) denote the collection of all sets of the form\n\\[\n\\tilde s: = \\{a \\in A \\mid s \\in \\operatorname{supt}(a)\\}\n\\]\nwhere \\(s \\in S \\setminus \\bigcup \\{\\operatorname{dest}(a) \\mid a \\in A^\\pm\\}\\).\nElements of \\(M\\) will play the same role as the \ninitial markers of a geometrically fast collection of bumps.\n\n\\begin{example} \\label{PSL2Marker}\nContinuing with Example \\ref{PSL2Example}, \\(M\\) consists of two points:\n\\(\\widetilde 0 = \\{\\alpha\\}\\) and \\(\\widetilde \\infty = \\{\\beta\\}\\).\nWe can also restrict the action of \\(\\operatorname{PSL}_2({\\mathbf Z})\\) on \\(\\mathbf{P}\\) to the irrationals.\nIn this case \\(M\\) is empty.\n\\end{example}\n\nIt will be convenient to define \\(\\operatorname{dest}(\\tilde s) := \\bigcap \\{\\operatorname{supt}(a) \\setminus \\operatorname{src}(a) \\mid a \\in \\tilde s\\}\\) and\n\\(\\operatorname{supt}(\\tilde s) := \\emptyset\\).\nDefine \\(K_A\\) to be all \\(\\eta\\) such that:\n\\begin{itemize}\n\n\\item \\(\\eta\\) is a suffix closed family of finite \nstrings in the alphabet \\(A^\\pm \\cup M\\);\n\n\\item if \\(\\str{ab}\\) are consecutive symbols of an element of \\(\\eta\\), then\n\\(\\operatorname{dest}(a) \\subseteq \\operatorname{supt}(b) \\setminus \\operatorname{src}(b)\\);\n\n\\item for each \\(n\\), there is at most one element of \\(\\eta\\) of length \\(n\\);\n\n\\item if \\(\\str{w} \\in \\eta\\) and \\(\\eta\\) does not contain a symbol from \\(M\\),\nthen \\(\\str{w}\\) is a proper suffix of an element of \\(\\eta\\).\n\n\\end{itemize}\n\\noindent\nThe second condition implies that elements of \\(\\eta\\) are freely reduced since if \\(b = a^{-1}\\),\nthen \\(\\operatorname{src}(b) = \\operatorname{dest}(a)\\).\nObserve that if \\(\\str{w}\\) is in \\(\\eta\\), the only occurrence of an element of \\(M\\) in \\(\\str{w}\\) must\nbe as the first symbol of \\(\\str{w}\\) 
(and there need not be any occurrence of an element of \\(M\\) in \\(\\str{w}\\)).\n\nNotice that every \\(\\eta \\in K_A\\) has at least one element other than \\(\\varepsilon\\) and that all elements\nof \\(\\eta\\) of positive length must have the same final symbol.\nWe define \\(\\operatorname{dest}(\\eta) := \\operatorname{dest}(a)\\) where \\(\\str{a}\\) is the final symbol of every\nelement of \\(\\eta\\) other than \\(\\varepsilon\\).\nWe topologize \\(K_A\\) by declaring that\n\\([\\str{w}] : = \\{\\eta \\in K_A \\mid \\str{w} \\in \\eta\\}\\) is closed and open.\nNotice that if \\(\\eta\\) is finite, it is an isolated point of \\(K_A\\).\n\n\\begin{prop}\n\\(K_A\\) is a Hausdorff space and if \\(A\\) is finite, then \\(K_A\\) is compact.\n\\end{prop}\n\n\\noindent\nEach \\(a \\in A^\\pm\\) defines a homeomorphism \\(\\hat a : K_A \\to K_A\\) by:\n\\[\n\\eta \\hat a := \n\\begin{cases}\n\\{\\str{ua} \\mid \\str{u} \\in \\eta\\} \\cup \\{\\varepsilon\\} & \\textrm{ if } \\operatorname{dest}(\\eta) \\subseteq \\operatorname{supt}(a) \\setminus \\operatorname{src}(a) \\\\\n\\{\\str{u} \\mid \\str{ua}^{-1} \\in \\eta\\} & \\textrm{ if } \\operatorname{dest} (\\eta) = \\operatorname{src}(\\str{a}) \\\\\n\\eta & \\textrm{ if } \\operatorname{dest}(\\eta) \\cap \\operatorname{supt}(\\str{a}) = \\emptyset\n\\end{cases}\n\\]\nThus \\(\\eta \\hat a\\) is obtained by \nappending \n\\(\\str{a}\\) to the end of every element of \\(\\eta\\),\ncollecting the local reductions, and possibly including \\(\\varepsilon\\).\nSet \\(\\hat A = \\{\\hat a: a \\in A\\}\\).\n\nWe say that a ping-pong system on \\(A\\) is \\emph{faithful} if\n\\(\\Lambda_A : = \\{\\eta \\in K_A : \\eta \\textrm{ is finite}\\}\\) is dense in \\(K_A\\) (i.e.\nwhenever \\(\\str{w}\\) is in some \\(\\eta \\in K_A\\), there is a finite \\(\\eta' \\in K_A\\)\nwhich has \\(\\str{w}\\) as an element).\n\n\\begin{example} \\label{NonfaithfulExample}\nAs noted above, if we restrict the elements of 
\\(\\operatorname{PSL}_2({\\mathbf Z})\\) to the irrationals,\nthen \\(M = \\emptyset\\) and in particular the system is not faithful.\nOn the other hand, \n\\[\n\\operatorname{dest}(\\alpha^4) := (2,\\infty) \\cap S \\qquad \\qquad \\operatorname{dest}(\\alpha^{-4}) := (-\\infty,-2) \\cap S\n\\]\n\\[\n\\operatorname{dest}(\\beta^4) := (-1\/2,0) \\cap S \\qquad \\qquad \\operatorname{dest}(\\beta^{-4}) := (0,1\/2) \\cap S.\n\\]\ndefines a ping-pong system in which \\(M\\) contains a single element \\(\\{\\alpha,\\beta\\}\\).\n\\end{example}\n\n\\noindent\nWhile not every ping-pong system is faithful,\nthe reader is invited to verify that if \\(A\\) admits a ping-pong system, then\n\\(\\{a^2 \\mid a \\in A\\}\\) admits a faithful ping-pong system.\n\n\\begin{thm}\nIf \\(A\\) is a set of permutations which admits a ping-pong system, then\n\\(a \\mapsto \\hat a\\) extends to an epimorphism of \\(\\gen{A}\\) onto \\(\\gen{\\hat A}\\).\nIf the ping-pong system is faithful, then the epimorphism is an isomorphism.\n\\end{thm}\n\nThe map \\(x \\mapsto \\eta(x)\\) defined in Section \\ref{PingPongSec} adapts \\emph{mutatis mutandis}\nto define a map \\(s \\mapsto \\eta(s)\\) from \\(S\\) into \\(K_A\\).\nIt is readily verified that if \\(a \\in A\\) and \\(s \\in S\\), then \\(\\eta(sa) = \\eta(s)\\hat a\\).\nThe existence of a faithful ping-pong system also has the following structural consequence\nwhich follows readily from the abstract form of Lemma \\ref{TreeAction}.\n\n\\begin{prop}\nIf \\(A\\) admits a faithful ping-pong system, then \\(\\gen{A}\\) is torsion free.\n\\end{prop}\n\\noindent\nThis shows in particular that \\(\\operatorname{PSL}_2({\\mathbf Z}) = \\gen{\\alpha,\\beta}\\) --- which contains elements of finite order such as\n\\(t \\mapsto -1\/t\\) --- does not admit a ping-pong system.\n\nA \\emph{blueprint} for a ping-pong system is a pair \\(\\mathfrak{B} = (\\str{B},\\str{supt})\\)\nsuch that:\n\\begin{itemize}\n\n\\item \\(\\str{B}\\) is a set and 
\\(\\str{supt}\\) is a binary relation on \\(\\str{B}\\)\nwhich is interpreted as a set-valued function:\n\\(\\str{b} \\in \\str{supt}(\\str{a})\\) if \\((\\str{a}, \\str{b}) \\in \\str{supt}\\);\n\n\\item if \\(\\str{a} \\in \\str{B}\\) and \\(\\str{supt}( \\str{a})\\) is nonempty, then \\(\\str{a} \\in \\str{supt}(\\str{a})\\);\n\n\\item if \\(\\str{a} \\in \\str{A}\\), then there is a unique \\(\\str{a}^{-1} \\in \\str{A} \\setminus \\{\\str{a}\\}\\) with\n\\(\\str{supt} (\\str{a}^{-1}) = \\str{supt} (\\str{a})\\).\n\n\\end{itemize}\nAdditionally, setting \n\\(\\str{A} : = \\{\\str{a} \\in \\str{B} \\mid \\str{supt}(\\str{a}) \\ne \\emptyset\\}\\) we require that:\n\\begin{itemize}\n\n\\item if \\(\\str{b} \\ne \\str{c} \\in \\str{B} \\setminus \\str{A}\\), then \\(\\str{\\tilde b} \\ne \\str{\\tilde c}\\) where\n\\({\\str{\\tilde b}} := \\{\\str{a} \\in \\str{A} \\mid \\str{b} \\in \\str{supt}(\\str{a}) \\}\\).\n \n\\end{itemize}\nTwo blueprints are isomorphic if they are isomorphic as structures.\nIf \\(A\\) is a set of permutations which admits a ping-pong system, then the blueprint\n\\(\\mathfrak{B}_A = (\\str{B}_A,\\str{supt}_A)\\)\nfor the system is defined by\n\\(\\str{B}_A := \\{\\str{a} \\mid a \\in A^\\pm\\} \\cup \\{{\\str{\\tilde s}} \\mid \\tilde s \\in \\widetilde S\\}\\)\nwith \\(\\str{b} \\in \\str{supt}_A(\\str{a})\\) if \\(\\operatorname{dest}(b) \\subseteq \\operatorname{supt}(a)\\).\nAlso, if \\(\\mathfrak{B} = (\\str{A},\\str{supt})\\) is a blueprint for a ping-pong system, then\none defines \\(K_{\\mathfrak{B}}\\) and homeomorphisms \\({\\str{\\hat a}} : K_{\\mathfrak{B}} \\to K_{\\mathfrak{B}}\\)\nfor \\(\\str{a} \\in \\str{A}^{\\mathfrak{B}}\\) by a routine adaptation of the construction above.\nIn fact \\(K_A\\) can be regarded as factoring through its blueprint in the sense that\n\\(K_A = K_{\\mathfrak{B}_A}\\) modulo identifying \\(a\\) and \\(\\str{a}\\).\nIt is routine to check that the following theorem holds as 
well.\n\n\\begin{thm} \\label{AbsCombToIso}\nIf \\(\\mathfrak{B}_0\\) and \\(\\mathfrak{B}_1\\) are isomorphic blueprints for ping-pong systems,\nthen the isomorphism induces a homeomorphism \\(\\theta:K_{\\mathfrak{B}_0} \\to K_{\\mathfrak{B}_1}\\)\nsuch that \\(g \\mapsto g^\\theta\\) defines an isomorphism between\n\\(\\gen{\\hat {\\str{a}} \\mid \\str{a} \\in \\str{A}_0}\\) and \\(\\gen{\\hat {\\str{a}} \\mid \\str{a} \\in \\str{A}_1}\\).\n\\end{thm}\n\nIf \\(A\\) is a set of permutations which admits a faithful ping-pong system,\nthen the blueprint for \\(A\\) and for \\(\\{a^{k(a)} \\mid a \\in A\\}\\) are canonically\nisomorphic whenever \\(a \\mapsto k(a)\\) is an assignment of a positive integer to each element of \\(A\\).\n\n\\begin{cor}\nAny set of permutations \\(A\\) which admits a faithful ping-pong system is an algebraically\nfast generating set for \\(\\gen{A}\\).\n\\end{cor}\n\nIf the system is not faithful, then \\(\\{a^2 \\mid a \\in A\\}\\) may contain new markers, as\nwas illustrated in Example \\ref{NonfaithfulExample}.\nNotice that in this example,\nboth \\(\\gen{\\alpha^2,\\beta^2}\\) and \\(\\gen{\\alpha^4,\\beta^4}\\) are free --- and hence marked isomorphic ---\neven though the blueprints associated to the ping-pong systems are not isomorphic.\n\nA blueprint \\(\\mathfrak{B}\\) is (cyclically) orderable if there is a (cyclic) ordering on \\(\\str{B}\\)\nsuch that for all \\(\\str{a} \\in \\str{A}\\), \n\\(\\str{supt}(\\str{a})\\) is an interval in the (cyclic) ordering with endpoints \\(\\operatorname{src}(\\str{a})\\) and \\(\\operatorname{dest}(\\str{a})\\).\nIt is readily checked that the (cyclic) order on \\(\\str{B}\\)\ninduces a reverse lexicographic (cyclic) order on \\(K_{\\mathfrak{B}}\\) which is preserved by the\nhomeomorphisms \\(\\hat {\\str{a}}\\) for \\(\\str{a} \\in \\str{A}\\).\n\n\\begin{cor}\nIf \\(A\\) is a finite set of permutations which admits a faithful ping-pong system, then\n\\(\\gen{A}\\) embeds into 
Thompson's group \\(V\\).\nIf the blueprint of \\(A\\) is cyclically orderable, then \\(\\gen{A}\\) embeds into\n\\(T\\).\nIf the blueprint of \\(A\\) is orderable, then \\(\\gen{A}\\) embeds into \\(F\\). \n\\end{cor}\n\n\\begin{proof}\nIt is readily verified that any finite blueprint can be realized as the blueprint of\na ping-pong system of a finite subset of \\(V\\).\nMoreover, if the blueprint is cyclically orderable (or orderable), then it can be realized\nby elements of \\(T\\) (respectively \\(F\\)). \nThe corollary follows by Theorem \\ref{AbsCombToIso}.\n\\end{proof}\n\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\n\\providecommand{\\MRhref}[2]{\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nHypertensive disorders of pregnancy are a class of high blood pressure disorders that occur during the second half of pregnancy, which include gestational hypertension, preeclampsia and severe preeclampsia. They are characterized by a diastolic blood pressure higher than 90 mm Hg and\/or a systolic blood pressure higher than 140 mm Hg and they are often accompanied by proteinuria. These disorders affect about 10\\% of pregnant women around the world, with preeclampsia occurring in 2--8\\% of all pregnancies \\citep{timokhina2019}. These disorders represent one of the leading causes \nof maternal and fetal morbidity and mortality, contributing to 7--8\\% of maternal death worldwide \\citep{dolea2003,shah2009,mcclure2009}. The World \nHealth Organization estimates that the incidence of preeclampsia is seven \ntimes higher in developing countries than in developed countries. 
However, the occurrence of these diseases appears under-reported in low and middle income countries, implying that the true incidence is unknown \\citep{igberase2006,malik2018}. While there is evidence that hypertensive disorders of pregnancy are related to the development of cardiac dysfunctions both in the mother and in the child \\citep{bellamy2007,davis2012, ambrovzic2020dynamic, garcia2020maternal, aksu2021cardiac, demartelly2021long}, there is no common agreement on the relation between the severity of hypertension and cardiac dysfunction \\citep{tatapudi2017} and echocardiography is not included in the baseline evaluation of hypertensive disorders of pregnancy. Further investigations on these disorders are needed, especially for developing countries, where women often give birth at a younger age than in developed countries. \n\nThe goal of this work is to detect which cardiac function is altered and \nunder which hypertensive disorders by relying on a principled Bayesian nonparametric approach. An interesting case-control study to explore the relation between cardiac dysfunction and hypertensive disorders is provided by \\citet{data}, where the measures of ten different cardiac function indexes were recorded in four groups of pregnant women in India. Groups of women are characterized by different hypertensive disorder diagnoses, which are naturally ordered based on the severity of the diagnosed disorder: healthy (C), gestational hypertension (G), mild preeclampsia (M) and severe preeclampsia (S). \nHypertensive diagnoses are used as identifiers for what we call populations of patients, and we also refer to the cardiac function indexes as response variables. For each response variable we want to determine a partition of the four populations of patients. This amounts to identifying similarities between different hypertensive disorders, with respect to each cardiac index.
Supposing, for instance, that the selected partition assigns all the populations to the same cluster, one can conclude that no \nalteration is shown for the corresponding cardiac index across different hypertensive diseases.\n\nOur goal of identifying a partition of the four patients' populations for each of the ten responses can be rephrased as a problem of multiple model selection: we want to select the most plausible partition for each cardiac index. \nFrequentist hypothesis testing does not deal with more than two \npopulations in a straightforward way\nand pairwise comparisons may lead \nto conflicting conclusions. Conversely, a Bayesian approach yields the posterior distribution on the space of partitions, which can be used for simultaneous comparisons. \nMoreover, the presence of $M=10$ jointly tested cardiac indexes requires performing model selection ten times. Once again, a Bayesian approach is preferable because, as observed for instance by \\citet{scott2006}, it does not require the introduction of a penalty term for multiple comparisons, thanks to the penalty built into the prior distribution. \n\nHere we design\na Bayesian nonparametric model that is tailored to deal with both a collection of ordered populations and the multivariate information of the response variables, while preserving the typical flexibility of nonparametric models and producing easily interpretable results.\nWhen applied to the dataset on transthoracic echocardiography results for a cohort of Indian pregnant women in Section \\ref{s:results}, our model effectively identifies \nmodified cardiac functions in hypertensive patients compared to healthy subjects and progressively increased alterations with the severity of the disorder, in addition to other more subtle findings.
The observed data $X_{i,j,m}$ represent the measurement of the $m$-th response variable (cardiac index) on the $i$-th individual (pregnant woman) in the $j$-th population (hypertensive disorder) \nand, as in standard univariate ANOVA models, they are assumed to be partially exchangeable across disorders.\nThis means that for every $m\\in\\{1,\\ldots,M\\}$, \nthe law of $(\\,(X_{i,1,m})_{i\\ge 1},\\ldots,(X_{i,J,m})_{i\\ge 1})$ is invariant with respect to permutations within each sequence of random variables, namely for any positive integers $n_1,\\ldots,n_J$ \n\\[\n(\\,(X_{i,1,m})_{i=1}^{n_1},\\ldots,(X_{i,J,m})_{i=1}^{n_J})\\stackrel{\\mbox{\\scriptsize{d}}}{=} \n(\\,(X_{{\\sigma_1(i)},1,m})_{i=1}^{n_1},\\ldots,(X_{{\\sigma_J(i)},J,m})_{i=1}^{n_J})\n\\]\nfor all permutations $\\sigma_j$ of $(1,\\ldots,n_j)$, with $j=1,\\ldots,J$. This is a natural generalization of exchangeability to tackle heterogeneous data and, by de Finetti's representation theorem, it amounts to assuming the existence of a collection of (possibly dependent) random probability measures $\\{\\pi_{j,m}:\\:\\: j=1,\\ldots,J\\;\\: m=1,\\ldots,M\\}$ such that\n\\[\nX_{i,j,m} \\mid \\pi_{j,m} \\stackrel{\\mbox{\\scriptsize{\\rm iid}}}{\\sim} \\pi_{j,m} \\qquad i=1,\\ldots,n_j\n\\] \nHence, for any two populations $j\\ne j'$, homogeneity corresponds to $\\pi_{j,m}=\\pi_{j',m}$ (almost surely). However, a reliable assessment of this type of homogeneity is troublesome when having just few patients per \ndiagnosis, as it happens in the mild preeclampsia subsample. \nWithout relying on simplifying parametric assumptions, a small sub-sample size may not be sufficiently informative to infer equality of entire unknown distributions. 
\nTo overcome this issue, without introducing parametric assumptions, we resort to an alternative weaker notion of homogeneity between populations $j$ and $j'$: we only require the conditional means of the two populations to (almost surely) coincide\n\\begin{equation}\n\t\\label{eq:equality_means}\n\t\\mathbb{E}(X_{i,j,m}\\mid\\pi_{j,m})=\\mathbb{E}(X_{i,j',m}\\mid\\pi_{j',m}).\n\\end{equation}\nAccording to this definition, the detection of heterogeneities in cardiac function indexes amounts to inferring which cardiac indexes have means that differ \nacross diagnoses, as it is done in standard parametric ANOVA models.\nBesides clustering populations according to \\eqref{eq:equality_means}, it is also of interest to cluster patients, both within and across different groups, once the effect of the specific hypertensive disorder is taken \ninto account. This task may be achieved by assuming a model that decomposes the observations as \n\\begin{equation}\n\t\\label{eq:decomp_x}\n\tX_{i,j,m}=\\theta_{j,m}+\\varepsilon_{i,j,m}\\qquad\n\t\\varepsilon_{i,j,m}|(\\xi_{i,j,m},\\sigma_{i,j,m}^2)\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim} \\mbox{N}(\\xi_{i,j,m},\\sigma_{i,j,m}^2)\n\\end{equation} \nand the $\\xi_{i,j,m}$ have a symmetric distribution around the origin, in order to ensure $E(\\xi_{i,j,m})=0$. In view of this decomposition, we \nwill let $\\theta_{j,m}$ govern the clustering of populations while the $(\\xi_{i,j,m},\\sigma^2_{i,j,m})$'s determine the clustering of individuals, namely patients, \nafter removing the effect of the specific hypertensive disorder. In order to pursue this, for each cardiac index $m$, we will specify a\nhierarchical process prior for $(\\xi_{i,j,m},\\sigma^2_{i,j,m})$ that is suited to infer the clustering structure both within and across different \nhypertensive disorders for a specific cardiac index. 
In particular, we will deploy a novel instance of the hierarchical Dirichlet process, introduced in \\cite{teh2006}, which we name \\textit{symmetric} to highlight its centering at $0$.\n\n\nEarly examples of Bayesian nonparametric models for ANOVA can be found in \\cite{cifreg78} and \\cite{mulierepetrone}, while the first popular proposal, due to \\citet{de2004}, uses the dependent Dirichlet process (DDP) \\citep{maceachern2000dependent} and is therefore termed ANOVA-DDP. This model is mainly tailored to estimate populations' probability distributions, while we draw inferences over clusters of populations' means and obtain estimates of the unknown distributions as a by-product. Moreover, the ANOVA-DDP of \\citet{de2004} was not introduced as a model selection procedure. A popular Bayesian nonparametric model that does cluster \npopulations and can be used for model selection is the nested Dirichlet process of \\citet*{rodriguez2008}. As shown in \\citet{camerlenghi2019}, such a prior is biased towards homogeneity, in the sense that even a single \ntie between populations $j$ and $j'$, namely $X_{i,j,m}=X_{i',j',m}$ for some $i$ and $i'$, entails $\\pi_{j,m}=\\pi_{j',m}$ (almost surely). In order to overcome such a drawback, a novel class of nested, and more flexible, priors has been proposed in \n\\citet{camerlenghi2019}. See also \\cite{soriano_ma} for related work. Interesting alternatives that extend the analysis to more than two populations can be found in \\cite{ChristMa}, \\cite*{Rebaudo} and in \\cite*{Beraha}. Another similar proposal is the one by \\citet{gutierrez2019bayesian}, whose model identifies differences between the cases' distributions and that of the control group. These models imply that two populations belong to the same cluster if they share the entire distribution. However, as already mentioned, distribution-based clustering is not ideal when dealing with scenarios such as that of the hypertensive dataset.
Further evidence will be provided in Section~\\ref{ss:sim}, through simulation \nstudies. In addition, note that all these contributions deal with only one response variable and would need to be suitably generalized to fit \nthe setup of this paper. As far as the contributions treating multiple response variables are concerned, uses of nonparametric priors for multiple \ntesting can be found, for instance, in \\citet{gopalan1998}, \\citet*{do2005bayesian}, \\citet{dahl2007multiple}, \\citet*{guindani2009}, \\citet{martin2012} and more recently in \\citet*{cipolli2016bayesian}, who propose an approximate finite P\\'olya tree multiple testing procedure to compare two samples' locations, and in \\citet{dentitwo}. However, in all these contributions, models are developed directly over summaries of the original data (e.g. averages, z-scores) and, as such, do not allow one to draw inferences on the entire distributions or on clusters of subjects. \n\nThe outline of the paper is as follows. In Section~\\ref{s:model} we introduce the model, which makes use of an original hierarchical prior structure for symmetric distributions (Section~\\ref{ss:s-HDP}). In Section~\\ref{s:partition} we derive the prior law of the random partitions induced by \nthe model, a key ingredient for the Gibbs sampling scheme devised in Section~\\ref{s:inference}. In Section~\\ref{s:results}, we first present a series of simulation studies that highlight the behaviour of the model before applying it to obtain our results on cardiac dysfunction in hypertensive disorders. Section~\\ref{s:conclusion} contains some concluding remarks.
As Supplementary Material we provide the datasets and Python codes, some further background material and details about the derivation of \nthe posterior sampling scheme as well as additional simulation studies and results on the application, including an analysis of prior sensitivity.\n\n\\section{The Bayesian nonparametric model}\n\\label{s:model}\n\nThe use of discrete nonparametric priors for Bayesian model-based clustering has become standard practice. The Dirichlet process (DP) \\citep{ferguson1973} is the most popular instance, and clustering is typically addressed by resorting to a mixture model, which with our data structure amounts to\n\\begin{equation*}\n\t\\label{eq:mixture_kernel}\n\tX_{i,j,m}|\\psi_{i,j,m}\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim} k(X_{i,j,m};\\psi_{i,j,m}),\\qquad\n\t\\psi_{i,j,m}|\\tilde p_{j,m}\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim} \\tilde p_{j,m} \n\\end{equation*}\nfor $m=1,\\ldots,M$, $j=1,\\ldots,J$ and $i=1,\\ldots,n_j$. Here $k(\\,\\cdot\\,;\\,\\cdot\\,)$ is some kernel and the $\\tilde p_{j,m}$'s are discrete random probability measures. Hence, the $\\psi_{i,j,m}$'s may exhibit ties. The model specification for $\\tilde p_{j,m}$ will be tailored to address the following goals: (i) cluster the $J$ probability distributions based on their means; (ii) cluster the observations $X_{i,j,m}$ according to the ties induced on the $\\psi_{i,j,m}$'s by the $\\tilde p_{j,m}$'s for \na given fixed $j$ and across different $j$'s. These two issues will be targeted separately: we first design a clustering scheme for the populations, through the specification of a prior on the means of the $X_{i,j,m}$'s and, then, we cluster the data using a hierarchical DP having a specific invariance structure that is ideally suited to the application at hand. 
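
Since the $\tilde p_{j,m}$'s are discrete, the $\psi_{i,j,m}$'s exhibit ties with positive probability, and these ties are what drive model-based clustering. This mechanism can be visualized through the P\'olya-urn (Chinese restaurant) marginal of a single DP; the following Python sketch is purely illustrative (a generic DP with a placeholder Gaussian base measure, not the sampler used in the paper).

```python
import random

def crp_draws(alpha, n, base_sampler, rng):
    """Polya-urn marginal of a DP(alpha, P0): draw i repeats one of the
    previous draws with probability i/(alpha + i) and is a fresh atom
    from the base measure otherwise, so ties arise with positive
    probability."""
    psis = []
    for i in range(n):
        if psis and rng.random() < i / (alpha + i):
            psis.append(rng.choice(psis))      # tie with an earlier draw
        else:
            psis.append(base_sampler(rng))     # fresh atom from P0
    return psis

rng = random.Random(0)
psis = crp_draws(1.0, 30, lambda r: r.gauss(0.0, 1.0), rng)
n_clusters = len(set(psis))                    # ties => fewer than 30
```

The expected number of distinct values grows only logarithmically in the sample size, which is why a handful of clusters typically emerges.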
\n\n\\subsection{The prior on disease-specific locations}\n\\label{ss:like}\nAs a model for the observations we consider a nonparametric mixture of Gaussian distributions specified as\n\\begin{equation} \n\t\\label{eq:model}\n\tX_{i,j,m}\\,|\\,(\\bm{\\theta}_m,\\bm{\\xi}_m,\\bm{\\sigma}^2_m)\n\t\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim} \\mathcal{N}(\\theta_{j,m}+\\xi_{i,j,m},\\sigma_{i,j,m}^2)\n\\end{equation}\nwhere $\\bm{\\theta}_m=(\\theta_{1,m},\\ldots,\\theta_{J,m})$, $\\bm{\\xi}_m=(\\xi_{1,1,m},\\ldots,\\xi_{n_1,1,m},\\xi_{1,2,m},\\ldots,\n\\xi_{n_J,J,m})$, with a similar definition for the vector $\\bm{\\sigma}_m^2$, and $\\mathcal{N}(\\,\\mu,\\,\\sigma^2\\,)$ denotes a normal distribution with mean $\\mu$ and variance $\\sigma^2$. The assumption in \\eqref{eq:model} clearly reflects \\eqref{eq:decomp_x}. Moreover, in order to account for the two levels of clustering we are interested in, we will assume that \n\\begin{equation}\n\t\\label{eq:model_parameters}(\\bm{\\theta}_{1},\\ldots,\\bm{\\theta}_{M})\\sim P,\\qquad\n\t(\\xi_{i,j,m},\\sigma_{i,j,m}^2)\\,|\\,\\tilde{q}_{j,m}\\stackrel{\\mbox{\\scriptsize{\\rm iid}}}{\\sim} \\tilde q_{j,m}\\quad (i=1,\\ldots,n_j)\n\\end{equation}\nwhere $\\tilde q_{1,m},\\ldots,\\tilde q_{J,m}$ are discrete random probability measures independent of $(\\bm{\\theta}_{1},\\ldots,\\bm{\\theta}_{M})$. Thus, the likelihood corresponds to\n\\begin{equation}\\label{eq1}\n\t\\prod_{m=1}^M\\,\\prod_{j=1}^J \\,\n\t\\prod_{i=1}^{n_j}\n\t\\frac{1}{\\sigma_{i,j,m}}\\:\n\t\\varphi\\Big(\\frac{x_{i,j,m}-\\theta_{j,m}-\\xi_{i,j,m}}{\\sigma_{i,j,m}}\\Big)\n\t\\:\\tilde{q}_{j,m}(\\mathrm{d}\\xi_{i,j,m},\n\t\\mathrm{d}\\sigma_{i,j,m})\n\\end{equation}\nwith $\\varphi$ denoting the standard Gaussian density. Relevant inferences can be carried out if one is able to marginalize this expression with respect to both $(\\bm{\\theta}_{1},\\ldots,\\bm{\\theta}_{M})$ and $(\\tilde q_{1,m},\\ldots,\\tilde{q}_{J,m})$ for each \n$m=1,\\ldots,M$. 
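
To fix ideas on a single factor of \eqref{eq1}: if $\tilde q_{j,m}$ were a fixed discrete measure with finitely many atoms $(\xi_l,\sigma_l^2)$ and weights $w_l$, the factor would reduce to a finite Gaussian location-scale mixture. The Python sketch below is only a hedged illustration (the atoms and weights are made up, chosen symmetric so that the implied error density is symmetric about $\theta_{j,m}$).

```python
import math

def normal_pdf(x, mean, var):
    """Gaussian density, i.e. (1/sigma) * phi((x - mean)/sigma)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_factor(x, theta, atoms):
    """One likelihood factor when the mixing measure is finite discrete:
    sum_l w_l * N(x; theta + xi_l, sigma2_l).
    `atoms` is a list of (weight, xi, sigma2) triples (illustrative)."""
    return sum(w * normal_pdf(x, theta + xi, s2) for w, xi, s2 in atoms)

# toy symmetric mixing measure: atoms at +/-1 with equal weights
atoms = [(0.5, -1.0, 0.4), (0.5, 1.0, 0.4)]
value = mixture_factor(0.9, 0.2, atoms)
```

With symmetric atoms the density of $X - \theta$ is symmetric about zero, which is exactly the identifiability requirement discussed below for the error terms.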
\n\nThis specification allows us to address the model selection problem in the following way. If $\\mathcal{M}^m$ stands for the space of all partitions of the $J$ populations for the $m$-th cardiac function index, then $\\mathcal{M}^m=\\{M_b^m \\,:\\, b=1,\\ldots,\\mbox{card}(\\mathcal{P}_J)\\}$ where $\\mathcal{P}_J$ is the collection of all possible partitions of $[J]=\\{1,\\ldots,J\\}$. In our specific case, $J=4$ and $\\mbox{card}(\\mathcal{P}_J)=15$, thus we have 15 competing models per cardiac index. Each competing model corresponds to a specific partition in $\\mathcal{M}^m$. In particular, the partition arises from ties between the population-specific means in $\\bm{\\theta}_m$ and, hence, the distribution $P$ in \\eqref{eq:model_parameters} needs to associate positive probabilities to ties between the parameters within the vector $\\bm{\\theta}_m$, for each $m=1,\\ldots,M$. \n\n\nLet us start by considering as the distribution $P$ a well-known and effective clustering prior, i.e. a mixture of DPs in the spirit of \\cite{antoniak1974mixtures}, namely\n\\begin{equation}\n\t\\label{eq2}\n\t\\begin{aligned}\n\t\t\\theta_{j,m}\\mid \\tilde p_m \\enskip&\\stackrel{\\mbox{\\scriptsize{\\rm iid}}}{\\sim}\\enskip \\tilde p_m \\qquad &j=1,\\ldots,J \\\\\n\t\t\\tilde p_m\\mid\\omega\\enskip &\\stackrel{\\mbox{\\scriptsize{\\rm iid}}}{\\sim} \\enskip \\mbox{DP}(\\omega,G_{m})\\qquad& m=1,\\ldots,M \\\\\n\t\t\\omega \\enskip &\\sim \\enskip p_\\omega&\n\t\\end{aligned}\n\\end{equation}\nwhere DP$(\\omega,G_m)$ denotes the DP with concentration parameter $\\omega$ and non-atomic baseline probability measure $G_m$ and $p_{\\omega}$ is a probability measure on $\\mathds{R}^+$. The discreteness of the DP implies the presence (with positive probability) of ties within the vector of locations $\\bm{\\theta}_m$ associated to a certain cardiac index $m$, as desired. 
The ties give rise to a random partition: as shown in \\cite{antoniak1974mixtures}, the probability of observing a specific partition of the elements in $\\bm{\\theta}_m$ consisting of $k\\le J$ distinct values with respective frequencies $n_1,\\ldots,n_k$ coincides with\n\\begin{equation}\n\t\\label{eq:eppf_dir}\n\t\\Pi_k^{(J)}(n_1,\\ldots,n_k)=\\frac{\\omega^k}{(\\omega)_J}\\,\\prod_{i=1}^k (n_i-1)!\n\\end{equation}\nwhere $(\\omega)_J=\\Gamma(\\omega+J)\/\\Gamma(\\omega)$. The use of a shared concentration parameter over \\eqref{eq:eppf_dir} to address multiple model selection has already been successfully employed in \\cite{moser2021multiple}, where parameters of a probit model are clustered. When there is no pre-experimental information available on competing partitions, the use of \\eqref{eq:eppf_dir} as a prior for model selection has some relevant benefits. Indeed, it induces borrowing of strength across diagnoses and, since $\\omega$ is random, it generates borrowing of information also across cardiac indexes, thus improving the Bayesian learning mechanism. These two features can also be given a frequentist interpretation in terms of desirable penalties.\nAs a matter of fact, the procedure penalizes for the multiplicity of the model selections that are performed. The penalty is to be understood \nin the following way: as $J$ and\/or $M$ increase, the prior odds change in favor of less complex models. For more details on this, see \\cite{scott2010bayes}. Summing up, the mixture of DPs automatically induces a prior distribution on $\\{\\mathcal{M}^m:m=1,\\ldots,M\\}$, that arises from \\eqref{eq:eppf_dir} combined with the prior $p_\\omega$ on $\\omega$, and it presents desirable properties for model selection \nthat can be interpreted either in terms of borrowing of information or in terms of penalties. 
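
The EPPF \eqref{eq:eppf_dir} is straightforward to evaluate numerically. The following Python sketch (illustrative code, not the paper's implementation) computes it and checks that it sums to one over the $15$ set partitions of $J=4$ elements.

```python
from math import gamma

def dp_eppf(omega, block_sizes):
    """EPPF of the DP: omega^k * prod_i (n_i - 1)! / (omega)_J,
    with (omega)_J = Gamma(omega + J) / Gamma(omega)."""
    J, k = sum(block_sizes), len(block_sizes)
    prod = 1
    for n in block_sizes:
        for r in range(1, n):
            prod *= r                          # accumulates (n - 1)!
    return omega ** k * prod * gamma(omega) / gamma(omega + J)

def set_partitions(items):
    """Enumerate all set partitions of a list (to check normalization)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

total = sum(dp_eppf(0.7, [len(b) for b in p]) for p in set_partitions([1, 2, 3, 4]))
```

That the total equals one is the classical identity relating the DP EPPF to the rising factorial via unsigned Stirling numbers of the first kind.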
\n\nHowever, in the analysis of hypertensive disorders, some prior information on competing models is available, and this is not yet incorporated in \\eqref{eq:eppf_dir}. In fact, as already mentioned, there is a natural order of the diagnoses, which is given by the severity of the disorders, i.e.\\ C, G, M, S. Partitions that do not comply with this ordering, e.g. $\\{\\{C, S\\},\\{G\\},\\{M\\}\\}$, should be excluded from the support of the prior. Thus, we consider a prior over $\\mathcal{M}^m$ that associates zero probability to partitions that do not respect the natural order of the diagnoses and a probability proportional to that in \\eqref{eq:eppf_dir} for the remaining partitions, i.e.\\\n\\begin{equation}\\label{eq:prior2}\n\t\\mathbb{P}(M_b^m\\mid \\omega) \\propto \\begin{cases}\n\t\\Pi_k^{(J)}(n_1,\\ldots,n_k)&\\text{if $M_b^m$ is compatible with the natural order}\\\\\n0 & \\text{otherwise}\\end{cases}\n\\end{equation} \nThis amounts to a distribution $P$ for $(\\bm{\\theta}_{1},\\ldots,\\bm{\\theta}_{M})$ given by \n\\begin{equation}\n\t\\label{eqnew}\n\t\\begin{aligned}\n\t\t(\\theta_{1,m},\\ldots,\\theta_{J,m})\\mid \\omega \\enskip&\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim}\\enskip P_{\\omega, G_m} \\qquad m=1,\\ldots,M \\\\\n\t\t\\omega \\enskip &\\sim \\enskip p_\\omega&\n\t\\end{aligned}\n\\end{equation}\nwhere $P_{\\omega, G_m}$ is the distribution obtained by sampling a partition according to \\eqref{eq:prior2} and associating to each cluster a unique value sampled from $G_m$. 
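
For $J=4$ ordered diagnoses, the partitions compatible with the natural order are exactly those whose blocks are contiguous runs of C, G, M, S (eight in total), and the prior renormalizes the DP EPPF over them. An illustrative Python sketch of this construction (not the paper's code):

```python
from math import gamma

def dp_eppf(omega, block_sizes):
    """DP EPPF: omega^k * prod (n_i - 1)! / (omega)_J."""
    J, k = sum(block_sizes), len(block_sizes)
    prod = 1
    for n in block_sizes:
        for r in range(1, n):
            prod *= r
    return omega ** k * prod * gamma(omega) / gamma(omega + J)

def ordered_partitions(labels):
    """Partitions whose blocks are contiguous runs of the ordered
    labels: the only ones compatible with the severity order."""
    if not labels:
        yield []
        return
    for i in range(1, len(labels) + 1):
        for rest in ordered_partitions(labels[i:]):
            yield [labels[:i]] + rest

def constrained_prior(omega, labels):
    """DP EPPF restricted to order-compatible partitions, renormalized."""
    parts = list(ordered_partitions(labels))
    raw = [dp_eppf(omega, [len(b) for b in p]) for p in parts]
    z = sum(raw)
    return {tuple(tuple(b) for b in p): r / z for p, r in zip(parts, raw)}

prior = constrained_prior(1.0, ["C", "G", "M", "S"])
```

With four ordered labels there are $2^3=8$ choices of block boundaries, so the support drops from the $15$ unconstrained partitions to $8$.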
\nUsing \\eqref{eqnew} as a prior for the disease-specific locations, we preserve the desirable properties of the mixture of DPs mentioned before, while incorporating prior information on the severity of the diseases.\n\nAs detailed in the next section, we further define random probability measures $\\tilde q_{j,m}$ that satisfy the symmetry condition\n\\begin{equation}\n\t\\label{eq:symmetry}\n\t\\tilde q_{j,m}(A\\times B ) \n\t= \\tilde \n\tq_{j,m}((-A)\\times B )\\qquad\\qquad \\text{a.s.}\n\\end{equation}\nfor any $A$ and $B$. This condition ensures that the parameters $\\theta_{j,m}$, for $j=1,\\ldots,J$ and $m=1,\\ldots,M$, in \\eqref{eq:model} are identified, namely $\\mathbb{E}(X_{i,j,m}\\mid \\boldsymbol{\\theta}_{m}, \\, \\tilde q_{j,m} ) = \\theta_{j,m}$ with probability one. This identifiability property is crucial to make inference over the location parameters $\\bm{\\theta}_m$'s. Similar model specifications for discrete \nexchangeable data have been proposed and studied in \\cite{dalal1979nonparametric}, \\cite{doss1984}, \\cite{diaconis1986} and \\cite*{ghosal1999}, of which \\eqref{eq1} represents a generalization to \ndensity functions and partially exchangeable data. \n\n\\subsection{The prior for the error terms}\n\\label{ss:s-HDP}\nWhile the clustering of populations is governed by \\eqref{eq:prior2}, we use a mixture of hierarchical discrete processes for the error terms. This has the advantage of modeling the clustering of the observations, both within and across different samples, once the disease-specific effects are accounted for. This clustering structure allows us to model heterogeneity across patients in a much more realistic way than standard ANOVA models based on the assumption of normality. Cardiac indexes may be influenced by a number of factors that are not directly observed in the study, such as pre-existing conditions \\citep{hall2011heart} and psychosocial factors \\citep{pedersen2017psychosocial}. 
These unobserved relevant factors may be shared across patients with the same or a different diagnosis and may also result in outliers. To take into account this latent heterogeneity of the data, we introduce the hierarchical\nsymmetric DP that satisfies the symmetry condition in \\eqref{eq:symmetry} and, moreover, allows us to model heterogeneous data similarly to the \nhugely popular hierarchical DP \\citep{teh2006}.\n\nThe basic building block of the proposed prior is the invariant Dirichlet \nprocess, which was introduced for a single population ($J=1$) in an exchangeable framework by \\cite{dalal1979}. Such a modification of the DP satisfies a symmetry condition, in the sense that it is a random probability measure that is invariant with respect to a chosen group of transformations $\\mathcal{G}$. A more formal definition and detailed description of the invariant DP can be found in Section A of the Supplement. For our purposes it is enough to consider the specific case of the symmetric Dirichlet process, \nwhich can be constructed through a symmetrization of a Dirichlet process. Consider a non-atomic probability measure $P_0$ on $\\mathbb{R}$\nand let $\\tilde Q_0 \\sim \\text{DP}(\\alpha,P_0)$. \nIf \n\\begin{equation}\n\t\\label{eq:inv_DP}\n\t\\tilde Q(A) = \\frac{\\tilde Q_0(A) + \\tilde Q_0(-A)}{2}\\quad \\quad \\forall A \\in \\mathcal{B}(\\mathbb{R})\n\\end{equation}\nwhere $-A=\\{x\\in \\mathbb{R}: -x \\in A\\}$, then $\\tilde Q$ is symmetric about 0 (almost surely) and termed symmetric DP, in symbols $\\tilde Q\\sim\\text{s-DP}(\\alpha\\,,\\,P_0)$. 
For convenience and without loss of generality, we assume that $P_0$ is symmetric: this implies that $P_0$ is the expected value of $\\tilde Q$, making it an interpretable parameter.\nThe random probability measure $\\tilde Q$ is the basic building block of the hierarchical process that we use to model the heterogeneity of the error terms across different populations, $j=1,\\ldots,J$, in such a way that clusters identified by the unique values can be shared within and across populations. \nThis prior is termed \\textit{symmetric hierarchical Dirichlet process} (s-HDP) and described as\n\\begin{equation}\n\t\\label{eq:s-hdp_def}\n\t\\begin{split}\n\t\t\\tilde q_{j,m}\\mid\\gamma_{j,m}, \\, \n\t\t\\tilde \n\t\tq_{0,m}\\enskip\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim}\\enskip\\text{s-DP}(\\gamma_{j,m},\\tilde q_{0,m})\\\\\n\t\t\\tilde q_{0,m}\\mid\\alpha_{m}\\enskip\\stackrel{\\mbox{\\scriptsize{\\rm ind}}}{\\sim}\\enskip\\text{s-DP}(\\alpha_{m},P_{0,m})\n\t\\end{split}\n\\end{equation}\nwhere $\\gamma_{j,m}$ and $\\alpha_{m}$ are positive parameters and $P_{0,m}$ is a non-atomic probability distribution symmetric about 0. We use the \nnotation $(\\tilde q_{1,m},\\ldots,\\tilde q_{J,m})\\sim \\mbox{s-HDP}(\\bm{\\gamma}_m,\\alpha_m,P_{0,m})$, where $\\bm{\\gamma}_m=(\\gamma_{1,m},\\ldots,\\gamma_{J,m})$. This definition clearly ensures the validity of \\eqref{eq:symmetry}. \nA graphical model representation of the overall proposed model is displayed in Figure~\\ref{fig:figure1}. 
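
A truncated stick-breaking sketch of the s-HDP in Python makes both the symmetrization \eqref{eq:inv_DP} and the sharing of atoms within and across populations explicit. This is only an illustrative sketch: the truncation level is arbitrary and a standard Gaussian stands in for $P_{0,m}$.

```python
import random

def stick_break(conc, atom_sampler, trunc, rng):
    """Truncated stick-breaking draw of a DP: weights
    v_l * prod_{r<l}(1 - v_r) with v_l ~ Beta(1, conc)."""
    w, a, rem = [], [], 1.0
    for _ in range(trunc):
        v = rng.betavariate(1.0, conc)
        w.append(rem * v)
        a.append(atom_sampler(rng))
        rem *= 1.0 - v
    w.append(rem)                      # leftover mass on one extra atom
    a.append(atom_sampler(rng))
    return w, a

def symmetrize(w, a):
    """Q(A) = (Q0(A) + Q0(-A))/2: each atom theta with weight w
    becomes the pair +/-theta with weight w/2 each."""
    return [x / 2 for x in w] * 2, a + [-t for t in a]

def s_hdp_draw(gammas, alpha, base_sampler, trunc, rng):
    """Sketch of (q_1,...,q_J) ~ s-HDP: a shared symmetrized DP draw
    q0, then each q_j is a symmetrized DP with discrete base q0, so
    atoms are shared within and across populations."""
    w0, a0 = symmetrize(*stick_break(alpha, base_sampler, trunc, rng))
    pick = lambda r: r.choices(a0, weights=w0)[0]   # atom drawn from q0
    draws = [symmetrize(*stick_break(g, pick, trunc, rng)) for g in gammas]
    return (w0, a0), draws

rng = random.Random(3)
(w0, a0), qs = s_hdp_draw([1.0, 1.0, 1.0, 1.0], 2.0,
                          lambda r: r.gauss(0.0, 1.0), 50, rng)
mean_q0 = sum(wi * ai for wi, ai in zip(w0, a0))    # zero by symmetry
```

By construction every population-level measure has mean zero and its support is contained in the (sign-symmetric) support of the shared measure, mirroring the role of $\tilde q_{0,m}$ in the display above.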
\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}[-latex ,auto ,node distance =2 cm and 2cm ,on grid ,\n\t\t\tsemithick ,\n\t\t\tstate\/.style ={ circle ,top color =white , bottom color = white ,\n\t\t\t\tdraw,black , text=black , minimum width =1 cm},\n\t\t\ttransition\/.style = {rectangle, draw=black!50, thick, minimum width=6.1cm, minimum height = 5.8cm},\n\t\t\ttransition3\/.style = {rectangle, draw=black!50, thick, minimum width=4cm, minimum height = 3.8cm}, \n\t\t\ttransition2\/.style = {rectangle, draw=black!50, thick, minimum width=6.5cm, minimum height = 8cm}, scale=0.8]\n\t\t\t\\node[state] (C){$\\boldsymbol{\\theta_{m}}$};\n\t\t\t\\node[draw=none] (A) [below =of C] { };\n\t\t\t\\node[draw=none] (B) [below =of A] {};\n\t\t\t\\node[circle ,top color =white , bottom color = white ,\n\t\t\tdraw,black , text=black , minimum width =1 cm] (D) [left =2 cm of C] {${\\omega}$};\n\t\t\t\\node[circle ,top color =white , bottom color = ashgrey ,\n\t\t\tdraw,black , text=black , minimum width =1 cm] (E) [below = of B] {${X_{i,j,m}}$};\n\t\t\t\\node[state] (F) [ right =of B] {$\\varepsilon_{i,j,m}$};\n\t\t\t\\node[state] (G) [above =of F] {${\\tilde q_{j,m}}$};\n\t\t\t\\node[state] (Z) [above =of G] {${\\tilde q_{0,m}}$};\n\t\t\t\\node[state] (X) [right =of Z] {${\\alpha_{m}}$};\n\t\t\t\\node[state] (Y) [right =of G] {${\\gamma_{j,m}}$};\n\t\t\t\\node[transition] (H)[ below =0.02cm of F] { };\n\t\t\t\\node[transition3] (J)[ above right =1.5cm of E] { };\n\t\t\t\\node[transition2] (L)[ below =1cm of G] { };\n\t\t\n\t\t\t\\path (D) edge node[right] {} (C);\n\t\t\t\\path (C) edge node[below] {} (E);\n\t\t\t\\path (G) edge node[below] {} (F);\n\t\t\t\\path (F) edge node[below] {} (E);\n\t\t\t\\path (X) edge node[below] {} (Z);\n\t\t\t\\path (Y) edge node[below] {} (G);\n\t\t\t\\path (Z) edge node[below] {} (G);\n\t\t\\end{tikzpicture}\n\t\t\\caption{\\small Graphical representation of the model. 
Each node represents a random variable and each rectangle denotes conditional i.i.d. replications of the model within the rectangle.}\n\t\t\\label{fig:figure1}\n\t\\end{center}\n\t\\vspace{-0.5 cm}\n\\end{figure}\n\nStill referring to the decomposition of the observations into disease-specific locations and an error term, i.e. $X_{i,j,m}=\\theta_{j,m}+\\varepsilon_{i,j,m}$, it turns out that the $\\varepsilon_{i,j,m}$'s are drawn from a symmetric hierarchical DP mixture (s-HDP mixture) with a normal kernel. Hence, the patients' clusters are identified through the $\\varepsilon_{i,j,m}$, which, according to \\eqref{eq:model}, are conditionally independent draws from a $\\mathcal{N}(\\xi_{i,j,m},\\sigma_{i,j,m}^2)$ distribution, given $(\\xi_{i,j,m},\\sigma_{i,j,m}^2)$. The choice of the specific invariant DP is aimed at ensuring that $\\mathbb{E}(\\varepsilon_{i,j,m}|\\tilde \nq_{j,m})=0$. \nThe clusters identified by the s-HDP mixture can be interpreted as representing common unobserved factors across patients, once the disease-specific locations have been accounted for. Indeed, for any pair of patients, we may consider the decomposition $X_{i,j,m}-X_{i',j',m}=\\,\\Delta^{(m)}_{\\theta} + \\Delta^{(m)}_{\\xi} + (e_{i,j,m}-e_{i',j',m})$ where $\\Delta^{(m)}_{\\theta}=\\theta_{j,m} - \\theta_{j',m}$, $\\Delta^{(m)}_{\\xi}=\\xi_{i,j,m} - \\xi_{i',j',m}$ and $e_{i,j,m}$ and $e_{i',j',m}$ are independent and normally distributed random variables with zero mean and variances $\\sigma_{i,j,m}^2$ and $\\sigma_{i',j',m}^2$, respectively. \n\nHence, patients' clustering reflects the residual heterogeneity that is not captured by the disease-specific component $\\Delta^{(m)}_{\\theta}$ and is related to the subject-specific locations $\\Delta^{(m)}_{\\xi}$ and to the zero-mean error component $(e_{i,j,m}-e_{i',j',m})$. \nIn view of this interpretation, using an s-HDP mixture over error terms offers a three-fold advantage. 
Firstly, the presence of clearly separated clusters of patients within and across populations will indicate the presence of unobserved relevant factors which affect the cardiac response variables. Secondly, single patients with very low probabilities of co-clustering \nwith all other subjects will have to be interpreted as outliers. \nFinally, the estimated clustering structure can also be used to check whether the relative effect of a certain disease (with respect to another) is fully explained by the corresponding $\\Delta^{(m)}_{\\theta}$. To clarify this last point consider two diseases: if the posterior co-clustering probabilities among patients sharing the same disease are different between the two populations, this will indicate that different diagnoses not only have an influence on disease-specific locations (which is measured by $\\Delta^{(m)}_{\\theta}$), but they also have \nan impact on the shape of the distribution of the corresponding cardiac index. More details on this can be found in Section D of the Supplement.\n\n\n\\section{Marginal distributions and random partitions}\n\\label{s:partition}\nAs emphasized in the previous sections, ties among the $\\theta_{j,m}$'s and the $(\\xi_{i,j,m},\\sigma_{i,j,m}^2)$'s are relevant for inferring the clustering structure both among the populations (hypertensive diseases) and among the individual units (patients). Indeed, for each $m$ (cardiac index) they induce a random partition that emerges as a composition of two partitions generated respectively by the prior in \\eqref{eqnew} and the s-HDP. The laws of these random partitions are not only crucial to understand the clustering mechanism, but also necessary in order to derive posterior sampling schemes. In this section such a law is derived and used to compute the predictive distributions that, jointly with the likelihood, determine the full conditionals of the Gibbs sampler devised in Section~\\ref{s:inference}. 
\nTo reduce the notational burden, in this and the following section we remove the dependence of observations and parameters on the specific response variable $m$, and denote with $\\phi_{i,j}$ the pair $(\\xi_{i,j},\\sigma_{i,j}^2)$ and with $\\boldsymbol{\\phi}$ the collection $(\\phi_{1,1},\\ldots,\\phi_{n_1,1},\\phi_{1,2},\\ldots, \\phi_{n_J,J})$. \n \nConditionally on $\\omega$, the law of the partition in \\eqref{eq:prior2} leads to the following predictive distribution for the disease-specific locations\n\\[\n\\theta_j\\,|\\,\\omega,\\theta_1,\\ldots,\\theta_{j-1}\\:\\sim\\:\na_j(\\omega,\\theta_1,\\ldots,\\theta_{j-1})\\,\\delta_{\\theta_{j-1}}+\n\\left[1-a_j(\\omega,\\theta_1,\\ldots,\\theta_{j-1})\\right] G\n\\]\nwhere \n\\begin{equation}\n\\label{eq:a}\na_j(\\omega,\\theta_1,\\ldots,\\theta_{j-1}) = \\frac{\\sum_{(*_j)}\\Pi^{(J)}_{k}(n_1,\\ldots,n_k)}{\\sum_{(\\Delta_j)}\\Pi^{(J)}_{k}(n_1,\\ldots,n_k)}\n\\end{equation}\nwhere the sum in the denominator runs over the set of partitions consistent with the one generated by\n \t$(\\theta_1,\\ldots,\\theta_{j-1})$ and the sum in the numerator runs over the subset of those \n \tpartitions in which, additionally, $\\theta_j=\\theta_{j-1}$.
For $j=4$, the predictive equals\n\\[\n\\theta_{4}\\mid \\omega,\\theta_{1}, \\theta_{2},\\theta_{3} \\:\\sim\\:\n\\begin{cases} \n\t\\frac{3}{\\omega+3}\\delta_{\\theta_{3}} + \n\t\\frac{\\omega}{\\omega+3}G &\\mbox{if }\\theta_{1}=\\theta_{2}=\\theta_{3}\\\\\n\t\\frac{2}{\\omega+2}\\delta_{\\theta_{3}} + \n\t\\frac{\\omega}{\\omega+2}G &\\mbox{if }\\theta_{1}\\neq\\theta_{2}=\\theta_{3}\\\\\n\t\\frac{1}{\\omega+1}\\delta_{\\theta_{3}} + \n\t\\frac{\\omega}{\\omega+1}G&\\mbox{otherwise}\n\t\\end{cases}\n\\]\nExplicit expressions for the function $a$, for $j=1,2,3$, can be easily computed using \\eqref{eq:a} and \\eqref{eq:prior2} and are provided in Section B of the Supplement.\n\nMoving to second-level partitions induced by the s-HDP, we recall that the key concept for studying random partitions on multi-sample data is the \\textit{partially exchangeable partition probability function} (pEPPF). See, e.g., \\cite*{lnp2014} and \\citet{camerlenghi2019b}. The pEPPF returns the probability of \na specific multi-sample partition and represents the appropriate generalization of the well-known single-sample EPPF, which in the DP case corresponds to \\eqref{eq:eppf_dir}.\nDiscreteness of the s-HDP $(\\tilde q_1,\\ldots,\\tilde q_J)$ in \\eqref{eq:s-hdp_def} induces a partition of the elements of \n$\\boldsymbol{\\phi}$ into equivalence classes identified by the distinct values. Taking into account the underlying partially exchangeable structure, such a random partition is characterized by the pEPPF\n\\begin{equation}\n\t\\label{eq:peppf}\n\t\\tilde \\Pi_k^{(N)}(\\bm{n}_1,\\ldots,\\bm{n}_J)=\\mathbb{E}\\left(\\int_{\\Phi^k}\n\t\\prod_{j=1}^J \\prod_{h=1}^k\\tilde q_{j}^{n_{j,h}}(\\mathrm{d}\\phi_h)\\right)\n\\end{equation}\nwhere $\\bm{n}_{j}=(n_{j,1},\\ldots,n_{j,k})$ are non-negative integers, for any $j=1,\\ldots,J$, such that $n_{j,h}$ is the number of elements in $\\bm{\\phi}$ corresponding to population $j$ and belonging to cluster $h$.
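Returning to the $j=4$ predictive displayed above, its three branches depend only on the tie pattern among $(\theta_1,\theta_2,\theta_3)$, which makes a quick numerical sanity check possible. The following sketch is ours (function name and the use of exact fractions are illustrative assumptions, not part of the paper):

```python
from fractions import Fraction

def theta4_weights(theta1, theta2, theta3, omega):
    """Weight on delta_{theta_3} vs. the base measure G in the j = 4
    predictive, following the displayed case analysis (a sketch)."""
    if theta1 == theta2 == theta3:
        r = 3                       # all three previous locations tied
    elif theta1 != theta2 and theta2 == theta3:
        r = 2                       # only theta_2 and theta_3 tied
    else:
        r = 1                       # theta_3 alone in its cluster
    a = Fraction(r, omega + r)      # mass placed on delta_{theta_3}
    return a, 1 - a                 # remaining mass goes to G
```

For instance, with $\omega=1$ and all three previous locations tied, the weight on $\delta_{\theta_3}$ is $3/4$, matching the first branch of the display.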
Thus $\\sum_{j=1}^J n_{j,h}\\ge 1$ for any $h=1,\\ldots,k$, $\\sum_{h=1}^kn_{j,h}=n_j$ and $\\sum_{h=1}^k \\sum_{j=1}^J n_{j,h}=N$.\nThe determination of probability distributions of this type is challenging, and only recently have the first explicit instances appeared in the literature. See, e.g., \\citet{lnp2014}, \\citet{camerlenghi2019} and \\citet{camerlenghi2019b}. With respect to the hierarchical case considered in \\citet{camerlenghi2019b}, the main difference is that here we have to take into account the specific structure \\eqref{eq:inv_DP} of the $\\tilde q_{j}$. The almost sure symmetry of the process generates a natural random matching between sets in the induced partition. Therefore, instead of studying the marginal law in \\eqref{eq:peppf}, we derive the joint law of the partition and of the random matching. Formally, consider a specific partition $\\{A_1^{+},A_1^{-},\\ldots,A_k^{+},A_k^{-}\\}$ of $\\boldsymbol{\\phi}$, such that, for $h=1,\\ldots,k$, all the elements in $A_h^+$ belong to $\\mathbb{R}^+\\times\\mathbb{R}^+$, all the elements in $A_h^-$ belong to $\\mathbb{R}^-\\times\\mathbb{R}^+$ and, if $\\phi_{i,j}\\in A_h^+$ and $\\phi_{i',j'}\\in A_h^-$, then the element-wise absolute values of $\\phi_{i,j}$ and $\\phi_{i',j'}$ are equal. Denote with $n^+_{j,h}$ the number of elements in $A_h^{+}\\cap\\{\\phi_{i,j},i=1,\\ldots,n_j\\}$ and with $n^-_{j,h}$ the number of elements in $A_h^{-}\\cap\\{\\phi_{i,j},i=1,\\ldots,n_j\\}$. The probability of observing $\\{A_1^{+},A_1^{-},\\ldots,A_k^{+},A_k^{-}\\}$ is\n\\begin{equation}\n\t\\label{eq:peppfsym}\n\t\\ddtilde{\\Pi}_{k}^{(N)}(\\boldsymbol{n_1}^+,\\boldsymbol{n_1}^-,\\ldots,\\boldsymbol{n_J}^+,\\boldsymbol{n_J}^-) = \\mathbb{E}\\left(\\int_{\\Phi^k}\\prod_{j=1}^J\\prod_{h=1}^k{\\tilde q_{j}}^{n^{+}_{j,h}+n^{-}_{j,h}}(\\mathrm{d}\\phi_h)\\right)\n\\end{equation} \nwith $\\boldsymbol{n_j^+}=(n_{j,1}^+,\\ldots,n_{j,k}^+)$.
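The bookkeeping behind the counts $n^{+}_{j,h}$ and $n^{-}_{j,h}$ amounts to grouping the pairs $\phi_{i,j}=(\xi_{i,j},\sigma^2_{i,j})$ by $(|\xi_{i,j}|,\sigma^2_{i,j})$ and by the sign of $\xi_{i,j}$. A minimal sketch (the data layout and function name are our own assumptions):

```python
def matched_partition_counts(phi):
    """phi: dict mapping population j to a list of (xi, sigma2) pairs, xi != 0.
    Returns (classes, n_plus, n_minus): `classes` lists the distinct
    (|xi|, sigma2) values indexing the matched sets A_h^+ / A_h^-, while
    n_plus[j][h] and n_minus[j][h] count elements with xi > 0 and xi < 0."""
    classes = []                     # distinct (|xi|, sigma2) values
    index = {}                       # (|xi|, sigma2) -> class label h
    n_plus = {j: {} for j in phi}
    n_minus = {j: {} for j in phi}
    for j, pairs in phi.items():
        for xi, s2 in pairs:
            key = (abs(xi), s2)
            if key not in index:     # first time this absolute value appears
                index[key] = len(classes)
                classes.append(key)
            h = index[key]
            target = n_plus[j] if xi > 0 else n_minus[j]
            target[h] = target.get(h, 0) + 1
    return classes, n_plus, n_minus
```

For example, $\phi_{1,1}=(1,2)$ and $\phi_{2,1}=(-1,2)$ fall into the matched pair $A_1^{+}$, $A_1^{-}$, contributing one unit each to $n^{+}_{1,1}$ and $n^{-}_{1,1}$.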
As for the determination of \\eqref{eq:peppfsym}, a more intuitive understanding may be gained if one considers its corresponding Chinese restaurant franchise (CRF) metaphor, which displays a variation of both the standard Chinese restaurant franchise of \\cite{teh2006} and the skewed Chinese restaurant process of \\cite*{iglesias2009nonparametric}. Figure~\\ref{fig:figure2} provides a graphical representation. \n\\begin{figure}\n\t\\centering\n\t\\resizebox{14cm}{!}{\n\t\t\\begin{tikzpicture}[-latex, auto, node distance = 2.4 cm and 3.4cm, \n\t\ton grid,\n\t\tsemithick,\n\t\tstate\/.style ={circle, top color = white, bottom color = white,\n\t\t\tdraw, black, text=black, minimum width = 0.5 cm},\n\t\tfill fraction\/.style n args={2}{path picture={\n\t\t\t\t\\fill[#1] (path picture bounding box.south west) rectangle\n\t\t\t\t($(path picture bounding box.north west)!#2!(path picture bounding box.north east)$);}},\n\t\ttransition\/.style = {rectangle, draw=black!50, thick, minimum width=17cm, minimum height = 6.7cm} ]\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (A)\n\t\t{\\Large$\\boldsymbol{\\phi^{**}_1}$\\quad $-\\boldsymbol{\\phi^{**}_1}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (B) [right =of A]\n\t\t{\\Large$\\boldsymbol{\\phi^{**}_2}$\\quad $-\\boldsymbol{\\phi^{**}_2}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (C) [right =of B] \t{\\Large$\\boldsymbol{\\phi^{**}_3}$\\quad $-\\boldsymbol{\\phi^{**}_3}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (D) [right =of C] \t{\\Large$\\boldsymbol{\\phi^{**}_1}$\\quad $-\\boldsymbol{\\phi^{**}_1}$};\n\t\t\\node[circle, fill, draw, scale=0.3] (M) [right =1.5 cm of D] { };\n\t\t\\node[circle, fill, draw, scale=0.3] (N) [right =0.5 cm of M] { };\n\t\t\\node[circle, fill, draw, scale=0.3] (O) [right =0.5 cm of N] { };\n\t\t\\node[draw=none,fill=none] (E) [above left =1.5 cm of A] 
{\\large$\\phi_{1,1}$};\n\t\t\\node[draw=none,fill=none] (F) [above left =1.55 cm of B] {\\large$\\phi_{2,1}$};\n\t\t\\node[draw=none,fill=none] (G) [below left =0.60 cm of E] {\\large$\\phi_{3,1}$};\n\t\t\\node[draw=none,fill=none] (H) [left =1.56 cm of A] {\\large$\\phi_{4,1}$};\n\t\t\\node[draw=none,fill=none] (I) [above left =1.55 cm of C] {\\large$\\phi_{5,1}$};\n\t\t\\node[draw=none,fill=none] (L) [above right=1.55 cm of B] {\\large$\\phi_{6,1}$};\n\t\t\\node[draw=none,fill=none] (P) [below left =1.35 cm of A] { };\n\t\t\\node[draw=none,fill=none] (Q) [above right=1.50 cm of A] {\\large$\\phi_{7,1}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (AA) \n\t\t[below =3.2cm of A]\n\t\t{\\Large$\\boldsymbol{\\phi^{**}_3}$\\quad $-\\boldsymbol{\\phi^{**}_3}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (BB) \n\t\t[below =3.2cm of B]\n\t\t{\\Large$\\boldsymbol{\\phi^{**}_1}$\\quad $-\\boldsymbol{\\phi^{**}_1}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (CC) \n\t\t[below =3.2cm of C] \t{\\Large$\\boldsymbol{\\phi^{**}_4}$\\quad $-\\boldsymbol{\\phi^{**}_4}$};\n\t\t\\node[circle, draw, scale=0.7, fill fraction={darkgray}{0.5}] (DD) \n\t\t[below =3.2cm of D]\n\t\t{\\Large$\\boldsymbol{\\phi^{**}_5}$\\quad $-\\boldsymbol{\\phi^{**}_5}$};\n\t\t\\node[circle, fill, draw, scale=0.3] (MM) [right =1.5 cm of DD] { };\n\t\t\\node[circle, fill, draw, scale=0.3] (NN) [right =0.5 cm of MM] { };\n\t\t\\node[circle, fill, draw, scale=0.3] (OO) [right =0.5 cm of NN] { };\n\t\t\\node[draw=none,fill=none] (EE) [above left =1.5 cm of AA] {\\large$\\phi_{4,2}$};\n\t\t\\node[draw=none,fill=none] (FF) [above left =1.55 cm of BB] {\\large$\\phi_{2,2}$};\n\t\t\\node[draw=none,fill=none] (GG) [below left =0.60 cm of EE] {\\large$\\phi_{3,2}$};\n\t\n\t\t\\node[draw=none,fill=none] (II) [above left =1.55 cm of CC] {\\large$\\phi_{5,2}$};\n\t\t\\node[draw=none,fill=none] (LL) [above right=1.55 cm of CC] 
{\\large$\\phi_{6,2}$};\n\t\t\\node[draw=none,fill=none] (PP) [below left =1.35 cm of AA] { };\n\t\t\\node[draw=none,fill=none] (QQ) [above right=1.5 cm of AA] {\\large$\\phi_{7,2}$};\n\t\t\\node[transition] (R) [above left=2.2 cm of CC]{ };\n\t\\end{tikzpicture}\n}\n\\caption{{Chinese restaurant franchise representation of the symmetric hierarchical DP for $J=2$ populations. Each circle represents a table.}}\n\\label{fig:figure2}\n\\end{figure}\nThe scheme is as follows: there are $J$ restaurants\nsharing the same menu and the customers are identified by their choice of \n$\\phi_{i,j}$ but, unlike in the usual CRF, at each table two \\textit{symmetric dishes} are served. Denote with $\\phi^*_{t,j}=(\\xi^*_{t,j},\\, \\sigma_{t,j}^{2*})$ and $-\\phi^*_{t,j}=(-\\xi^*_{t,j},\\, \\sigma_{t,j}^{2*})$ the two dishes served at table $t$ in restaurant $j$, with \n$\\phi^{**}_{h}=(\\xi^{**}_{h},\\, \\sigma^{**2}_{h})$ and $-\\phi^{**}_{h}=(-\\xi^{**}_{h},\\, \\sigma^{**2}_{h})$ the $h$-th pair of dishes in the menu and with $n_{j,h}^{+}$ and $n_{j,h}^{-}$ the number of customers in restaurant $j$ eating dish $\\phi^{**}_{h}$ and $-\\phi^{**}_{h}$, respectively. This means that two options are available to a customer entering restaurant \n$j$: she\/he will either sit at an already occupied table, with probability proportional to the number of customers at that table, or will sit at a new table with probability proportional to the concentration parameter $\\gamma_j$. In the former case, the customer will choose the dish $\\phi^*_{t,j}$ with probability $1\/2$ and $-\\phi^*_{t,j}$ otherwise. In the latter \ncase, the customer will eat a dish served at another table of the franchise with probability proportional to half the number of tables that serve that dish, or will make a new order with probability proportional to the concentration parameter $\\alpha$.
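The seating scheme just described can be simulated forward. The following sketch covers a single restaurant under our own naming conventions; in a full franchise the `menu` and per-dish table counts would be shared across all $J$ restaurants:

```python
import random

def simulate_symmetric_crf(n_customers, gamma_j, alpha, base_draw, rng):
    """Forward simulation of one restaurant of the symmetric CRF (a sketch).
    Each table serves the symmetric pair (phi*, -phi*); `base_draw` samples
    a new dish value from the base measure P_0."""
    tables = []        # number of customers at each table
    table_dish = []    # menu index h of the dish pair at each table
    menu = []          # distinct dish values phi**_h
    dish_tables = []   # number of tables serving each dish pair
    seats = []         # (table index, sign) for every customer
    for _ in range(n_customers):
        # occupied table w.p. prop. to its size; new table w.p. prop. to gamma_j
        t = rng.choices(range(len(tables) + 1), weights=tables + [gamma_j])[0]
        if t == len(tables):                          # open a new table
            # existing dish w.p. prop. to half its table count; new dish w.p. prop. to alpha
            h = rng.choices(range(len(menu) + 1),
                            weights=[x / 2 for x in dish_tables] + [alpha])[0]
            if h == len(menu):                        # brand-new order from P_0
                menu.append(base_draw(rng))
                dish_tables.append(0)
            tables.append(0)
            table_dish.append(h)
            dish_tables[h] += 1
        tables[t] += 1
        sign = rng.choice([1, -1])                    # phi* or -phi*, each w.p. 1/2
        seats.append((t, sign))
    return seats, tables, table_dish, menu
```

The sign drawn at the last step is what distinguishes this scheme from the standard CRF: the table determines the dish pair, and the customer then picks one of the two symmetric dishes uniformly.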
In view of this scheme, the probability in \\eqref{eq:peppfsym} turns out to be\n\\[\n\\ddtilde{\\Pi}_{k}^{(N)}(\\boldsymbol{n_1^+},\\ldots,\\boldsymbol{n_J^-}) = 2^{-N}\\bar\\Pi_{k}^{(N)}(\\boldsymbol{n_1^+}+\\boldsymbol{n_1^-},\\ldots,\\boldsymbol{n_J^+}+\\boldsymbol{n_J^-}) \n\\]\nand $\\bar{\\Pi}_k^{(N)}$ on the right-hand side is the pEPPF of the hierarchical DP derived in \\citet{camerlenghi2019b}, namely\n\\[\n\\bar{\\Pi}_k^{(N)}(\\bm{n}_1,\\ldots,\\bm{n}_J)=\\Biggl(\\prod_{j=1}^J \\frac{\\prod_{h=1}^k (\\gamma_j)_{n_{j,h}}}{(\\gamma_j)_{n_j}}\\Biggr)\\:\n\\sum\\limits_{\\boldsymbol{\\ell}}\\frac{\\alpha^k}{(\\alpha)_{|\\boldsymbol{\\ell}|}}\\:\n\\prod_{h=1}^k (\\ell_{\\bullet,h} -1)!\\prod_{j=1}^J \\mathbb{P}(K_{n_{j,h}}=\\ell_{j,h})\n\\] \nwhere the sum runs over all $\\ell_{j,h}$ in $\\{1,\\ldots,n_{j,h}\\}$, if $n_{j,h}\\ge 1$, while the corresponding factor equals $1$ if $n_{j,h}=0$, whereas $\\ell_{\\bullet,h}=\\sum_{j=1}^J \\ell_{j,h}$ and $|\\boldsymbol{\\ell}| = \\sum_{j=1}^J \\sum_{h=1}^k \\ell_{j,h}$. Note that the latent variable $\\ell_{j,h}$ is the number of tables in restaurant $j$ serving the $h$-th pair of dishes. Moreover, $K_{n_{j,h}}$ is a random variable denoting the number of distinct clusters, out of $n_{j,h}$ observations generated by a DP with parameter $\\gamma_j$ and diffuse baseline $P_0$, and it is well known that \n\\[\n\\mathbb{P}(K_{n_{j,h}}=\\ell_{j,h})=\\frac{\\gamma_j^{\\ell_{j,h}}}\n{(\\gamma_j)_{n_{j,h}}}\\:\n|\\mathfrak{s}(n_{j,h},\\ell_{j,h})|\n\\] \nwhere $|\\mathfrak{s}(n_{j,h},\\ell_{j,h})|$ is the signless Stirling number of the first kind.
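The building block $\mathbb{P}(K_n=\ell)$ is easy to evaluate exactly via the standard recursion $|\mathfrak{s}(n,\ell)|=|\mathfrak{s}(n-1,\ell-1)|+(n-1)\,|\mathfrak{s}(n-1,\ell)|$. A sketch (function names are ours):

```python
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def stirling1_unsigned(n, l):
    """Signless Stirling number of the first kind |s(n, l)|."""
    if n == 0 and l == 0:
        return 1
    if n == 0 or l == 0 or l > n:
        return 0
    return stirling1_unsigned(n - 1, l - 1) + (n - 1) * stirling1_unsigned(n - 1, l)

def rising(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    return prod(x + i for i in range(n))

def prob_K(n, l, gamma):
    """P(K_n = l): number of distinct clusters among n observations
    from a DP with concentration gamma and diffuse baseline."""
    return gamma**l / rising(gamma, n) * stirling1_unsigned(n, l)
```

Since $\sum_{\ell} \gamma^{\ell}\,|\mathfrak{s}(n,\ell)| = (\gamma)_n$, the probabilities sum to one over $\ell=1,\ldots,n$, which provides a convenient check of any implementation.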
\nIn view of this, one can deduce the predictive distribution\n\\begin{equation*}\n\\begin{split}\n\t\\mathbb{P}(\\phi_{n_j+1,j}\\in \\cdot&\\,|\\, \\boldsymbol{\\phi}) =\n\t\\frac{\\gamma_j}{n_j+\\gamma_j}\n\t\\sum\\limits_{\\boldsymbol{\\ell}}\n\t\\frac{\\alpha}{|\\boldsymbol{\\ell}| + \\alpha}\\pi(\\boldsymbol{\\ell}\\,|\\,\\boldsymbol{\\phi})P_0(\\cdot)\\\\\n\t&+\\sum\\limits_{h=1}^{k}\n\t\\left[\\frac{n_{j,h}^+ + n_{j,h}^-}{n_j+\\gamma_j} + \n\t\\frac{\\gamma_j}{n_j+\\gamma_j}\n\t\\sum\\limits_{\\boldsymbol{\\ell}}\n\t\\frac{\\ell_{\\bullet,h}}{|\\boldsymbol{\\ell}| + \\alpha}\\pi(\\boldsymbol{\\ell}\\,|\\,\\boldsymbol{\\phi})\\right]\n\t\\Bigg(\\frac{\\delta_{\\phi^{**}_{h}}(\\cdot)+\\delta_{-\\phi^{**}_{h}}(\\cdot)}{2}\t\\Bigg)\n\\end{split}\n\\end{equation*}\nwhere \n\\begin{equation*}\n\\pi(\\boldsymbol{\\ell}\\,|\\,\\boldsymbol{\\phi})\\propto\n\\frac{\\alpha^k}{(\\alpha)_{|\\boldsymbol{\\ell}|}}\n\\prod_{h=1}^k (\\ell_{\\bullet,h} -1)!\\prod_{j=1}^J\n\\frac{\\gamma_j^{\\ell_{j,h}}}\n{(\\gamma_j)_{n_{j,h}^+ + n_{j,h}^-}}\\:\n|\\mathfrak{s}(n_{j,h}^+ + n_{j,h}^-,\\ell_{j,h})|\\mathds{1}_{\\{1,\\ldots,n_{j,h}^+ + n_{j,h}^-\\}}(\\ell_{j,h})\n\\end{equation*}\nis the posterior distribution of the latent variables $\\ell_{j,h}$, and $\\mathds{1}_A$ is the indicator function of the set $A$.\n\n\\section{Posterior inference}\n\\label{s:inference}\nThe findings of the previous section are the key ingredients to perform posterior inference with a marginal Gibbs sampler.\nThe output of the sampler is structured into three levels: the first produces posterior probabilities on partitions of disease-specific locations; the second generates density estimates; the third provides clusters of patients.
\nFor notational simplicity, we omit\nthe dependence on $m$, except for the description of the sampling step that generates $\\omega$.\nRecall that ${\\boldsymbol{\\theta}}=(\\theta_{1},\\ldots,\\theta_{J}) $ and \n$\\boldsymbol{\\phi}=\\{(\\phi_{1,j},\\ldots,\\phi_{n_j,j}):\\,j=1,\\ldots,J\\}$, with $\\phi_{i,j}=(\\xi_{i,j},\\sigma_{i,j}^2)$. The target distribution of \nthe sampler is the joint distribution of $\\bm{\\theta}$, $\\bm{\\phi}$ and $\\omega$ conditionally on the observed data $\\bm{X}$.\n\n\\textbf{Sampling $\\boldsymbol{\\phi}$.} In view of the CRF representation of the \ns-HDP, $t_{i,j}$ stands for the label of the table where the $i$-th customer in restaurant $j$ sits \nand $h_{t,j}$ for the dish label served at table $t$ in restaurant $j$; with $\\boldsymbol{t}$ and $\\boldsymbol{h}$ we denote the corresponding arrays. \nMoreover, define the assignment variable\n$s_{i,j}=\\mathds{1}(\\phi_{i,j}=\\phi^*_{t_{i,j},j})-\n\\mathds{1}(\\phi_{i,j}=-\\phi^*_{t_{i,j},j})$, with $\\boldsymbol{s}$ the corresponding array. In order to generate $\\boldsymbol{\\phi}$, we need to sample \n\\begin{itemize}\n\\item[(i)] $(t_{i,j}, s_{i,j})$ for $i=1,\\ldots,n_j$ and $j=1,\\ldots,J$;\n\\item[(ii)] $h_{t,j}$ for $t \\in \\boldsymbol{t}$ and $j=1,\\ldots,J$;\n\\item[(iii)] $\\phi^{**}_{h}$ for $h \\in \\boldsymbol{h}$. \n\\end{itemize}\n\nNote that, using the latent allocation indicators in $\\boldsymbol{t}$ and $\\boldsymbol{h}$, the sampling scheme is more efficient than sampling directly from the full conditional of each $\\phi_{i,j}$, since the algorithm can update more than one parameter simultaneously \\citep{neal2000}.
Define $\\varepsilon_{i,j} = X_{i,j} - \\theta_{j}$ and denote with $h(\\varepsilon_{i,j}|\\phi^*)$ the conditional normal density of $\\varepsilon_{i,j}$\ngiven $\\phi^*=(\\xi^*,\\sigma^{2*})$, while the marginal density\nis\n\\[\n\\bar h (\\varepsilon_{i,j}) = \\int h(\\varepsilon_{i,j}|\\phi) P_{0}(d\\phi)\n\\]\n\nTo sample $(t_{i,j}, s_{i,j})$ from their joint full conditional, we first sample $t_{i,j}$ from\n\\[\nP(t_{i,j}=t\\mid \\boldsymbol{t}^{-(i,j)}, \\boldsymbol{h}^{-(i,j)}, \\boldsymbol{\\phi}^{*-({i,j})},\\boldsymbol{\\phi}^{**-({i,j})},\\varepsilon_{i,j}) \\propto\n\\begin{cases} n_{t,j}^{-(i,j)}\\,p_{\\mbox{\\footnotesize old}}(\\varepsilon_{i,j}|\\phi^*_{t,j})\n& \\mbox{if } t\\in \\boldsymbol{t}^{-(i,j)}\n\\\\ \n\\gamma_{j}\\,p_{\\mbox{\\footnotesize new}}(\\varepsilon_{i,j}|{\\boldsymbol{\\phi}^{**-({i,j})}})\n& \\mbox{if } t=t^{\\mbox{\\footnotesize new}}\\end{cases}\n\\]\nwhere $\\boldsymbol{t}^{-(i,j)}$, $\\boldsymbol{h}^{-(i,j)}$, $\\boldsymbol{\\phi}^{*-({i,j})}$ and $\\boldsymbol{\\phi}^{**-({i,j})}$ coincide with the arrays $\\boldsymbol{t}$, $\\boldsymbol{h}$, $\\boldsymbol{\\phi}^*$ and $\\boldsymbol{\\phi}^{**}$ after having removed the entries corresponding to the $i$-th customer in restaurant $j$.
Moreover\n\\[\np_{\\mbox{\\footnotesize old}}(\\varepsilon_{i,j} | \\phi^*_{t,j}) \n=\n\\frac{1}{2}h(\\varepsilon_{i,j}|\\phi^*_{t,j}) + \\frac{1}{2}h(\\varepsilon_{i,j}|-\\phi^*_{t,j})\n\\]\nand\n\\[\np_{\\mbox{\\footnotesize new}}(\\varepsilon_{i,j}|{\\boldsymbol{\\phi}^{**-({i,j})}}) \n= \\sum\\limits_{h=1}^{k^{-(i,j)}}\\frac{\\ell_{\\bullet,h}}{|\\boldsymbol{\\ell}|+ \\alpha}\\left\\{\\frac{1}{2}h(\\varepsilon_{i,j}|\\phi^{**}_h)+\\frac{1}{2}h(\\varepsilon_{i,j}|-\\phi^{**}_h)\\right\\}+\\frac{\\alpha}{|\\boldsymbol{\\ell}|+ \\alpha} \\bar h (\\varepsilon_{i,j})\n\\]\nThen we sample $s_{i,j}$ from its full conditional\n\\[\np(s_{i,j}=s\\mid\\boldsymbol{\\phi}^*, t_{i,j},\\varepsilon_{i,j}) \\propto \\begin{cases} h(\\varepsilon_{i,j}|\\phi^*_{t_{i,j}}) & \\mbox{if } s=1 \\\\ h(\\varepsilon_{i,j}|-\\phi^*_{t_{i,j}}) & \\mbox{if } s=-1\\end{cases}\n\\]\nThe conditional distribution of $h_{t,j}$ is\n\\[\np(h_{t,j}=h\\mid \\boldsymbol{t},\\boldsymbol{h}^{-(t,j)},\\boldsymbol{\\phi}^{**-(t,j)},\\boldsymbol{s},\\boldsymbol{\\varepsilon}) \n\\propto\n\\begin{cases} \\ell_{\\bullet,h}^{-(t,j)}\\prod\\limits_{\\{(i,j):\\:t_{i,j}=t\\}}\nh(s_{i,j}\\,\\varepsilon_{i,j}|\\phi^{**}_{h})\n& \\mbox{if } h \\in \\boldsymbol{h}^{-(t,j)} \\\\ \n\\alpha\\,\\displaystyle\\int \\prod\\limits_{\\{(i,j):\\:t_{i,j}=t\\}} h(s_{i,j}\\,\\varepsilon_{i,j}|\\phi) P_{0}(d\\phi) & \\mbox{if } h=h^{\\mbox{\\footnotesize new}}\\end{cases}\n\\]\n\nFinally, when $P_{0}$ is conjugate with respect to the Gaussian kernel, the full conditional distribution of $\\phi_h^{**}$ is obtained in closed form as the posterior distribution of a Gaussian model, using as observations the collection $\\{\\,(s_{i,j}\\,\\varepsilon_{i,j}):\\: h_{t_{i,j},j} = h\\}$.
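Note that the two-component mixture $p_{\text{old}}$ is symmetric in $\varepsilon_{i,j}$ by construction, which can be checked directly. A minimal sketch (function names are ours):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def p_old(eps, xi_star, var_star):
    """0.5 N(eps | xi*, sigma^2*) + 0.5 N(eps | -xi*, sigma^2*):
    the kernel used when joining an occupied table serving (phi*, -phi*)."""
    return (0.5 * normal_pdf(eps, xi_star, var_star)
            + 0.5 * normal_pdf(eps, -xi_star, var_star))
```

The symmetry $p_{\text{old}}(\varepsilon)=p_{\text{old}}(-\varepsilon)$ is what keeps the table-level kernel centered at zero, consistently with $\mathbb{E}(\varepsilon_{i,j}\,|\,\tilde q_{j})=0$.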
\n\n\\textbf{Sampling $\\boldsymbol{\\theta}$.} When sampling the disease-specific location parameters, one can rely on \na Chinese restaurant process restricted to those partitions that are consistent with the ordering of the diseases.\nThus, in order to generate $\\boldsymbol{\\theta}$, we first sample the labels $\\boldsymbol{t}_{\\theta}=\\{t_{1},\\ldots,t_{J}\\}$, where $t_{j}$ is the label of the table where the $j$-th customer sits. Then, we sample the dish $\\theta^*_{t}$ associated with table $t$ for all $t\\in\\boldsymbol{t}_{\\theta}$. If $z_{i,j} = X_{i,j} - \\xi_{i,j}$, the conditional density of $\\bm{z}_{j}=(z_{1,j},\\ldots,z_{n_j,j})$ associated with the location parameter $\\theta^*$,\ngiven $\\bm{\\sigma}_{j}=(\\sigma_{1,j},\\ldots,\\sigma_{n_j,j})$, is\n\\[\nf_{\\theta^*}(\\bm{z}_{j}|\\bm{\\sigma}_{j})=\\frac{1}{(2\\pi)^{n_j\/2}\\prod\\limits_{i=1}^{n_j} \\sigma_{i,j}}\\:\n\\exp\\left\\{-\\frac{1}{2}\\sum\\limits_{i=1}^{n_j}\\frac{(z_{i,j}-\\theta^*)^2}{\\sigma_{i,j}^2}\n\\right\\}\n\\]\nUnder the prior in \\eqref{eqnew}, the full conditional distribution of $\\boldsymbol{t}_{\\theta}$ is given by\n\\[\n\\begin{split}\n\tp(t_{j}=t\\mid t_1,\\ldots,&t_{j-1},\\theta_{j-1},\\bm{z}_{j}, \\bm{\\sigma}_{j})\\\\\n\t&\\propto\n\t\\begin{cases} \n\t\ta(\\omega,\\theta_1,\\ldots,\\theta_{j-1})\\,f_{\\theta_{j-1}}(\\bm{z}_{j}|\\bm{\\sigma}_{j}) \n\t\t& \\mbox{if } t=t_{j-1} \\\\[4pt] \n\t\t[1 - a(\\omega,\\theta_1,\\ldots,\\theta_{j-1})] \\, \n\t\t\\displaystyle\\int f_{\\theta}(\\bm{z}_{j}|\\bm{\\sigma}_{j})\\, G (d\\theta)& \\mbox{if } t=t^{\\mbox{\\footnotesize new}}\\\\[4pt]\n\t\t0&\\mbox{otherwise}\n\t\\end{cases}\n\\end{split}\n\\]\nFinally, when $G$ is conjugate with respect to the Gaussian kernel, the full conditional distribution of $\\theta_{t}^{*}$, given $\\{\\bm{z}_{j}:\\: t_{j}=t\\}$, is obtained in closed form using conjugacy of the Normal-Normal model.\n\n\\textbf{Sampling the concentration parameter.} The concentration parameter
$\\omega$ can be sampled through an importance sampling step, using as importance distribution the prior $p_{\\omega}$ over $\\omega$. Denoting with $M_m$ the selected partition for $\\bm{\\theta}_{m}$ and with $T_m$ the number of clusters in $M_m$, we have \n\\[\np(\\omega\\,|\\,M_m:m=1,\\ldots,M)\\propto p_{\\omega} (\\omega) \\,\\frac{\\omega^{\\sum_{m=1}^M T_m - M}}{(\\omega + 2)^{M} (\\omega^2 + \\omega + 3)^{M}}.\n\\]\n\n\\section{Results}\n\\label{s:results}\n\\subsection{Simulation studies}\n\\label{ss:sim}\nWe perform a series of simulation studies with two main goals. First, in the context of small sample sizes, we aim to highlight the drawbacks of clustering populations based on their entire distributions, as compared to our proposal. Second, we check the model's ability to detect the presence of underlying relevant factors, in the sense described in Section~\\ref{ss:s-HDP}. \n\nTo accomplish the first goal, we compare the results obtained using our model with the nested Dirichlet process (NDP) \\citep{rodriguez2008}, arguably the most popular Bayesian model for clustering populations. Mimicking the real hypertensive dataset, we simulate data for four samples, ideally corresponding to four diseases, with respective sample sizes \nof 50, 19, 9 and 22, as in the real data investigated in Section \\ref{ss:real}. Since the NDP does not accommodate the joint treatment of multiple response variables, we consider a single response variable to ensure a fair comparison.
The observations are sampled from the following distributions and 100 simulation studies are performed.\n\\begin{equation*}\n\\begin{split}\n\tX_{i,1}\\,\\overset{iid}{\\sim}0.5\\,\\mathcal{N}(\\,0,\\,0.5\\,)+0.5\\,\\mathcal{N}(\\,2,\\,0.5\\,) \\qquad&\\text{for}\\enskip i=1,\\ldots,n_1\\\\\n\tX_{i,2}\\,\\overset{iid}{\\sim}0.5\\,\\mathcal{N}(\\,2,\\,0.5\\,)+0.5\\,\\mathcal{N}(\\,4,\\,0.5\\,)\\qquad&\\text{for}\\enskip i=1,\\ldots,n_2\\\\\n\tX_{i,3}\\,\\overset{iid}{\\sim}0.5\\,\\mathcal{N}(\\,4,\\,0.5\\,)+0.5\\,\\mathcal{N}(\\,6,\\,0.5\\,) \\qquad&\\text{for}\\enskip i=1,\\ldots,n_3\\\\\n\tX_{i,4}\\,\\overset{iid}{\\sim}0.5\\,\\mathcal{N}(\\,6,\\,0.5\\,)+0.5\\,\\mathcal{N}(\\,8,\\,0.5\\,) \\qquad&\\text{for}\\enskip i=1,\\ldots,n_4\\\\\n\\end{split}\n\\end{equation*}\nNote that the true data generating process corresponds to samples from distinct distributions with pairwise sharing of a mixture component. Alternative scenarios are considered in the additional simulation studies that can be found in Section D of the Supplement.\n\nThe implementation of the NDP was carried out through the marginal sampling scheme proposed in \\cite{zuanetti2018clustering}, which is suitably extended \nto accommodate hyperpriors on the concentration parameters of the NDP. To simplify the choice of the hyperparameters, as suggested by \\citet[p.~535 and p.~551--554]{gelman2013bayesian} we estimate both models over standardized data. For our model, we set $G_m=\\mathcal{N}(0,\\,1)$ and $P_{0,m}=\\text{NIG}(\\mu=0, \\,\\tau=1,\\, \\alpha=2,\\,\\beta=4)$. Here, $\\text{NIG}(\\mu, \\,\\tau,\\, \\alpha,\\,\\beta)$ indicates a normal inverse gamma distribution. The base distribution for the NDP is $\\text{NIG}(\\mu=0, \\,\\tau=0.01,\\, \\alpha=3,\\,\\beta=3)$, as in \\cite{rodriguez2008}. 
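The generating mechanism displayed above translates into a few lines of code. The sketch below is ours: we read the second argument of $\mathcal{N}(\cdot,\cdot)$ as a variance, and the seed and function name are illustrative assumptions.

```python
import random

def simulate_populations(sizes=(50, 19, 9, 22), seed=0):
    """Four samples; population j is an equal-weight mixture of
    N(2(j-1), 0.5) and N(2j, 0.5), with 0.5 read as a variance."""
    rng = random.Random(seed)
    sd = 0.5 ** 0.5                       # standard deviation of each component
    data = []
    for j, n_j in enumerate(sizes, start=1):
        # pick a component w.p. 1/2, then draw from it
        data.append([rng.gauss(2 * (j - 1) if rng.random() < 0.5 else 2 * j, sd)
                     for _ in range(n_j)])
    return data
```

Consecutive populations share exactly one mixture component (e.g. $\mathcal{N}(2,0.5)$ for populations 1 and 2), which is the pairwise-sharing structure the study is designed around.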
Finally, we use gamma priors with shape 3 and rate 3 for all concentration parameters, which is a common choice.\nFor each simulation study, we perform 10,000 iterations of the MCMC algorithms with the first 5,000 used as burn-in.\n\n\\begin{table}[t]\n\t\t\\caption{Simulation studies summaries.}\n\t\\label{tab:table1}\n\\begin{center}\n\t\t\\begin{tabular}{lcccccc}\n\t\t\t&\\multicolumn{3}{c}{\\textbf{sHDP}}& \\multicolumn{3}{c}{\\textbf{NDP}}\\\\\\hline\\hline\n\t\t\t&MAP&Average&Median&MAP&Average&Median\\\\\n\t\t\tPartitions& count& post. prob.&post. prob.& count& post. prob.& post. prob.\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1},\\textcolor{burntorange}{2},\\textcolor{bostonuniversityred}{3},\\textcolor{burgundy}{4}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1}\\}\\{\\textcolor{burntorange}{2},\\textcolor{bostonuniversityred}{3},\\textcolor{burgundy}{4}\\}&0&0.000&0.000&2&0.020&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1},\\textcolor{burntorange}{2}\\}\\{\\textcolor{bostonuniversityred}{3},\\textcolor{burgundy}{4}\\}&0&0.000&0.000&\\textbf{72}&\\textbf{0.695}&\\textbf{0.860}\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{bostonuniversityred}{3},\\textcolor{burgundy}{4}\\}\\{\\textcolor{burntorange}{2}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1}\\}\\{\\textcolor{burntorange}{2}\\}\\{\\textcolor{bostonuniversityred}{3},\\textcolor{burgundy}{4}\\}&0&0.027&0.007&3&0.035&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1},\\textcolor{burntorange}{2},\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burgundy}{4}\\}&0&0.000&0.000&5&0.061&0.000\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{burgundy}{4}\\}\\{\\textcolor{burntorange}{2},\\textcolor{bostonuniversityred}{3}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1}\\}\\{\\textcolor{burntorange}{2},\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burgundy}{4}\\}&1&0.054&0.015&0&
0.014&0.000\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burntorange}{2},\\textcolor{burgundy}{4}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{burntorange}{2},\\textcolor{burgundy}{4}\\}\\{\\textcolor{bostonuniversityred}{3}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1}\\}\\{\\textcolor{burntorange}{2},\\textcolor{burgundy}{4}\\}\\{\\textcolor{bostonuniversityred}{3}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1},\\textcolor{burntorange}{2}\\}\\{\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burgundy}{4}\\}&0&0.004&0.000&18&0.175&0.032\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burntorange}{2}\\}\\{\\textcolor{burgundy}{4}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{1},\\textcolor{burgundy}{4}\\}\\{\\textcolor{burntorange}{2}\\}\\{\\textcolor{bostonuniversityred}{3}\\}&0&0.000&0.000&0&0.000&0.000\\\\\\hline\n\t\t\t\\{\\textcolor{aoe}{1}\\}\\{\\textcolor{burntorange}{2}\\}\\{\\textcolor{bostonuniversityred}{3}\\}\\{\\textcolor{burgundy}{4}\\}&\\textbf{99}&\\textbf{0.915}&\\textbf{0.954}&0&0.000&0.000\n\t\\end{tabular}\n\\end{center}\n\\vspace{-\\baselineskip}\n\\end{table}\n\nTable~\\ref{tab:table1} displays summaries of the results on population clustering; darker rows correspond to partitions that are not consistent with the natural ordering of the diseases. The true clustering structure is given by the finest partition. \nAs already observed in \\cite{rodriguez2008}, the NDP tends to identify fewer, rather than more, clusters, due to the presence of small sample sizes.
Using the \\textit{maximum a posteriori} estimate, our model correctly \nidentifies the true partition in 99 out of 100 simulation studies and a partition with three elements or more in 100 out of 100 simulation studies. The \nsame counts for the NDP are, respectively, 0 out of 100 and 21 out of 100. Analogous conclusions can be drawn by looking at posterior probability averages and medians across the 100 simulation studies (see Table~\\ref{tab:table1}), leaving \nno doubt as to which model is to be preferred under this scenario. \n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{7cm}\n\t\t\\centering\n\t\t\\includegraphics[width=7cm]{figures\/confmean26}\n\t\t\\caption{95\\% credible intervals for population-specific locations.}\n\t\t\\label{fig:fig3a}\n\t\\end{subfigure}\\hspace{0.2cm}%\n\t\\begin{subfigure}{7cm}\n\t\n\t\t\\centering\n\t\t\\includegraphics[width=7cm]{figures\/K_secondlevel_scenario0_shdp26}\n\t\t\\caption{Number of second-level clusters.}\n\t\t\\label{fig:fig3b}\n\t\\end{subfigure}\n\t\\begin{subfigure}{7cm}\n\t\n\t\t\\centering\n\t\t\\includegraphics[width=7cm, trim={2cm 0 2cm 0}]{figures\/cluster_scenario0_natord26}\n\t\t\\caption{Co-clustering.}\n\t\t\\label{fig:fig3c}\n\t\\end{subfigure}\\hspace{0.2cm}%\n\t\\begin{subfigure}{7cm}\n\t\n\t\t\\centering\n\t\t\\includegraphics[width=7cm, trim={2cm 0 2cm 0}]{figures\/cluster_scenario0_reord26}\n\t\t\\caption{Co-clustering.}\n\t\t\\label{fig:fig3d}\n\t\\end{subfigure}\n\\caption{Panel (a): Mean point estimates and 95\\% credible intervals for the four populations; vertical lines correspond to the true values. Panel (b): Posterior distribution on the number of second-level clusters.
Panels (c) and (d): heatmaps of second-level clustering, where darker colors correspond to a higher probability of co-clustering; in (c) patients are ordered by diagnosis, with the four black squares highlighting the within-sample probabilities, while in (d) patients are reordered based on co-clustering probabilities.}\n\\end{figure}\n\nFinally, we randomly select three simulation studies among the 100 to better understand the performance in estimating the other model parameters. Here we comment on one of the studies; the other two, which lead to similar results, are reported in Section D.1.1 of the Supplement. \nFigure~\\ref{fig:fig3a} shows point estimates and credible intervals for the population-specific location parameters $\\theta_1,\\theta_2,\\theta_3,\\theta_4$. The true means belong to the 95\\% credible intervals.\n\nMoreover, it turns out that the model is able to detect the presence of two clusters of subjects, leading to a posterior distribution for the number of clusters that is \nrather concentrated on the true value; see Figures~\\ref{fig:fig3b}--\\ref{fig:fig3d}. In addition, the point estimate for the subject partition, obtained by minimizing the Binder loss function, also contains two clusters, confirming the ability of the model to detect the underlying relevant factor. In Section D of the Supplement, a number of additional simulation studies are conducted, both using alternative specifications for the disorder-specific parameters and different data generating mechanisms: the results highlight the good performance of the model, which also appears able to detect outliers, to highlight non-location effects of the disorders, and to produce reliable output even under deviations from symmetry.\n\n\\subsection{Impact of hypertensive disorders on maternal cardiac dysfunction}\n\\label{ss:real}\nOur analysis is based on the dataset of \\cite{data}, which can be obtained from https:\/\/data.mendeley.com\/datasets\/d72zr4xggx\/1.
The dataset contains observations for $10$ cardiac function measurements collected through a prospective case-control study on women in the third trimester of pregnancy, divided into $n_1=50$ control cases (C), $n_2=19$ patients with gestational hypertension (G), $n_3=9$ patients with mild preeclampsia (M) and $n_4=22$ patients with severe preeclampsia (S). The cases are \nwomen admitted from 2012 to 2014 to King George Hospital in Visakhapatnam, India. The healthy sample is composed of normotensive pregnant women. All women with hypertension were on antihypertensive treatment with oral labetalol or nifedipine. Women with severe hypertension were treated with oral nifedipine, parenteral labetalol, or a combination of the two. For more details on the dataset, we refer to \\cite{tatapudi2017}. The prior specification is the same as in the previous section. Sections E.2 and E.3 of the Supplement contain a prior-sensitivity analysis and show rather robust results with respect to different prior specifications. Inference is based on 10,000 MCMC iterations with the \nfirst half used as burn-in. \n\n\\begin{table}\n\\caption{Posterior probabilities over partitions of means.
Maximum a posteriori probabilities are in \\textbf{bold}.}\n\\label{tab:table2}\n\\begin{center}\n\\resizebox{\\textwidth}{!}{\n\t\t\\begin{tabular}{lcccccccccc}\n\t\t\tpartitions&CI&CWI&LVMI&IVST&LVPW&EF&FS&EW&AW&E\/A\\\\\\hline\\hline\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.021&0.000&0.000&0.000&0.000&\\textbf{0.365}&\\textbf{0.303}&0.096&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.002&\\textbf{0.546}&0.001&0.083&0.016&0.078&0.190&0.021&0.036&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.002&0.000&0.001&0.000&0.000&0.037&0.038&0.072&0.076&0.049\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.001&0.139&0.001&0.019&0.024&0.028&0.078&0.042&0.232&0.055\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&\\textbf{0.463}&0.000&\\textbf{0.595}&0.000&0.000&0.276&0.045&\\textbf{0.498}&0.020&0.002\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.146&0.099&0.188&\\textbf{0.551}&\\textbf{0.672}&0.074&0.164&0.092&0.260&0.033\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\te
xtcolor{aoe}{C},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.233&0.000&0.107&0.000&0.000&0.083&0.062&0.114&0.091&0.371\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.133&0.216&0.108&0.347&0.288&0.060&0.121&0.065&\\textbf{0.287}&\\textbf{0.491}\\\\\n\t\t\t\\hline\n\t\t\t$\\sum\\log_{15} \\left(p_i ^{-p_i}\\right)$ &0.501&0.430&0.415&0.361&0.289&0.632&0.688&0.598&0.613&0.424\n\t\\end{tabular}}\n\\end{center}\n\\end{table} \nTable \\ref{tab:table2} displays the posterior distributions for the partitions of unknown disease-specific means along with the corresponding entropy measurements, that can be used as measures of uncertainty. 
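The entropy measure in the last row, $\sum_i\log_{15}\left(p_i^{-p_i}\right)=-\sum_i p_i\log_{15}p_i$, equals 0 for a point mass and 1 for a uniform distribution over the 15 partitions. As an illustration (not part of the original analysis), a minimal sketch of the computation, using the CI column of the table as input:

```python
import math

def partition_entropy(probs, base=15):
    """Normalized entropy -sum_i p_i log_base(p_i) of a posterior over
    `base` partitions; 0 = point mass, 1 = uniform (0*log 0 := 0)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Posterior probabilities from the CI column of the table, top to bottom
ci = [0.021, 0.002, 0.002, 0.000, 0.001, 0.463, 0.000, 0.146,
      0.000, 0.000, 0.000, 0.233, 0.000, 0.000, 0.133]

print(partition_entropy(ci))
```

Up to rounding of the tabulated probabilities, this reproduces the value 0.501 reported in the entropy row for CI.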
\nFirst note that if one takes also the ordering among distinct disease-specific locations into account, the posterior partition probabilities are, as desired, concentrated on specific orders of the associated unique values for all ten cardiac indexes. For instance, we have \n$\\mathbb{P}(\\{\\theta_{C,CI}=\\theta_{G,CI}=\\theta_{M,CI}\\}\\{\\theta_{S,CI}\\}\\mid X) =\\mathbb{P}(\\theta_{C,CI}=\\theta_{G,CI}=\\theta_{M,CI}>\\theta_{S,CI}\\mid X) = 0.463$. The ordered partitions with the highest posterior probability are displayed in Table~\\ref{tab:tableord}. \n\\begin{table}\n\\caption{Posterior probabilities over ordered partitions of means.}\n\\label{tab:tableord}\n\\begin{center}\n\t\\begin{tabular}{lcc} \n\t\t& ordered partition with& \\\\\n\t\tcardiac index& highest posterior probability&posterior prob\\\\\n\t\t\\hline\\hline\n\t\tCI&\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$>$\\{\\textcolor{burgundy}{S}\\}&0.463\\\\\n\t\tCWI&\\{\\textcolor{aoe}{C}\\}$<$\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}&0.546\\\\\n\t\tLVMI&\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$<$\\{\\textcolor{burgundy}{S}\\}&0.595\\\\\n\t\tIVST&\\{\\textcolor{aoe}{C}\\}$<$\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$<$\\{\\textcolor{burgundy}{S}\\}&0.548\\\\\n\t\tLVPW&\\{\\textcolor{aoe}{C}\\}$<$\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$<$\\{\\textcolor{burgundy}{S}\\}&0.671\\\\\n\t\tEF&\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}&0.365\\\\\n\t\tFS&\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}&0.303\\\\\n\t\tEW&\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$>$\\{\\textcolor{burgundy}{S}\\}&0.497\\\\\n\t\tAW&\\{\\textcolo
r{aoe}{C}\\}$<$\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}$<$\\{\\textcolor{burgundy}{S}\\}&0.256\\\\\n\t\tE\/A&\\{\\textcolor{aoe}{C}\\}$>$\\{\\textcolor{burntorange}{G}\\}$>$\\{\\textcolor{bostonuniversityred}{M}\\}$>$\\{\\textcolor{burgundy}{S}\\}&0.466\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{figures\/confmean0}\n\t\\end{subfigure}\\hspace{0.05\\textwidth}%\n\t\\begin{subfigure}{.45\\textwidth}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figures\/confmean1}\n\t\\end{subfigure}\n\\caption{95\\% credible intervals for population-specific locations for CI and CWI}\n\\label{fig:fig5}\n\\end{figure}\n\nConsidering the posterior probabilities summarized in Table~\\ref{tab:table2} and in Table~\\ref{tab:tableord}, we find that the cardiac index (CI) is reduced in severe preeclampsia compared to all other patients, indicating reduced myocardial contractility in the presence of the most severe disorder. \nThe cardiac work index (CWI) is a good indicator to distinguish between cases and controls, but not among cases. The left ventricular mass index (LVMI) is increased in severe preeclampsia patients compared to other pregnant women, indicating ventricular remodelling. The interventricular septal thickness (IVST) and left ventricular posterior wall thickness (LVPW) differ both between cases and controls and between severe preeclampsia and the other disorders, indicating a progressive increase in these indexes with the severity of the disorder. 
The posterior probabilities associated with \nindexes of systolic function such as ejection fraction (EF) and fractional shortening (FS) are relatively concentrated on the partition of complete homogeneity, leading us to conclude that no differences are present among \npatients.\nAs for the parameters of the diastolic function, the posterior distribution for the E-wave indicator identifies a modified index in severe preeclampsia patients, while the mean E\/A ratio indicates a \ndecreasing diastolic function with the severity of the disorder. The posterior for the A-wave index is actually concentrated on three distinct partitions, leaving a relatively high uncertainty regarding the modifications of the index. However, considering jointly the three partitions with the highest posterior probability, differences are detected between controls and cases with a total posterior probability equal to 0.779. Figure~\\ref{fig:fig5} shows point estimates and credible intervals for disorder-specific location parameters for the first two cardiac indexes. Analogous plots for all cardiac indexes can be found in Section E.1 of the Supplement.\n\nTable~\\ref{tab:table3} shows the results obtained using the prior in \\eqref{eq:eppf_dir}, instead of \\eqref{eq:prior2}. We remark that for all ten cardiac indexes, the posterior associates negligible probabilities to partitions that are in contrast with the natural order of the diagnoses. This is particularly reassuring in that the model, even without imposing such an order a priori, is able to single it out systematically across cardiac indexes.\nMoreover, we observe that the partitions identified by the MAP are the same as in Table~\\ref{tab:table2} for all cardiac indexes except AW. 
However, even under this alternative prior, the A-wave index is concentrated on the same three distinct partitions leading to the conclusion that there exists a difference between cases and control.\n\\begin{table}\n\t\\caption{Posterior probabilities over partitions of means. Maximum a posteriori probabilities are in \\textbf{bold}.}\n\t\\label{tab:table3}\n\t\\vspace{-0.3cm}\n\t\\begin{center}\n\\resizebox{\\textwidth}{!}{\n\t\t\\begin{tabular}{lcccccccccc}\n\t\t\tpartitions&CI&CWI&LVMI&IVST&LVPW&EF&FS&EW&AW&E\/A\\\\\\hline\\hline\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.019&0.000&0.000&0.000&0.000&\\textbf{0.332}&\\textbf{0.247}&0.078&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.002&\\textbf{0.643}&0.001&0.114&0.031&0.065&0.130&0.048&0.080&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.004&0.000&0.003&0.000&0.000&0.044&0.019&0.152&0.073&0.103\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G}\\}\n\t\t\t&0.004&0.000&0.000&0.000&0.000&0.037&0.105&0.013&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.002&0.065&0.002&0.047&0.078&0.027&0.036&0.063&\\textbf{0.424}&0.167\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&\\textbf{0.316}&0.000&\\textbf{0.527}&0.000&0.000&0.178&0.032&\\textbf{0.288}&0.002&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.023&0.000&0.
000&0.000&0.000&0.019&0.103&0.006&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.173&0.089&0.124&\\textbf{0.472}&\\textbf{0.594}&0.033&0.054&0.064&0.140&0.042\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\n\t\t\t&0.002&0.000&0.001&0.003&0.000&0.044&0.031&0.017&0.000&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.018&0.000&0.000&0.000&0.000&0.061&0.067&0.016&0.000&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G},\\textcolor{burgundy}{S}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.005&0.163&0.001&0.095&0.006&0.028&0.040&0.015&0.016&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C},\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.213&0.000&0.124&0.000&0.000&0.052&0.014&0.121&0.036&0.241\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.074&0.000&0.137&0.003&0.000&0.041&0.022&0.055&0.001&0.000\\\\\n\t\t\t\t\\rowcolor{shadecolor}\\{\\textcolor{aoe}{C},\\textcolor{burgundy}{S}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\n\t\t\t&0.014&0.000&0.000&0.000&0.000&0.011&0.067&0.004&0.000&0.000\\\\\n\t\t\t\\{\\textcolor{aoe}{C}\\}\\{\\textcolor{burntorange}{G}\\}\\{\\textcolor{bostonuniversityred}{M}\\}\\{\\textcolor{burgundy}{S}\\}\n\t\t\t&0.133&0.040&0.079&0.265&0.291&0.029&0.033&0.059&0.229&\\textbf{0.448}\\\\\n\t\t\t\\hline\n\t\t\t$\\sum\\log_{15} \\left(p_i 
^{-p_i}\\right)$&0.687&0.407&0.509&0.501&0.371&0.828&0.886&0.823&0.582&0.505\n\t\t\\end{tabular}}\n\t\\end{center}\n\\vspace{-0.1cm}\n\\end{table} \n\n\n\\begin{figure}[t]\n\t\\vspace{-0.1 cm}\n\t\\centering \n\t\\begin{subfigure}{.4\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[ width=\\linewidth]{figures\/EA}\n\t\t\\caption{density estimation}\n\t\\end{subfigure}\\hspace{0.05\\textwidth}%\n\t\\begin{subfigure}{.25\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[trim={3cm 1cm 3cm 1cm}, clip, width=\\linewidth]{figures\/cluster_EA}\n\t\t\\caption{co-clustering}\\label{fig:fig4b}\n\t\\end{subfigure}\\hspace{0.05\\textwidth}%\n\t\\begin{subfigure}{.25\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[trim={3cm 1cm 3cm 1cm}, clip, width=\\linewidth]{figures\/cluster_ord_EA}\n\t\t\\caption{co-clustering}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.4\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[ width=\\linewidth]{figures\/LVMI}\n\t\t\\caption{density estimation}\n\t\\end{subfigure}\\hspace{0.05\\textwidth}%\n\t\\begin{subfigure}{.25\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[trim={3cm 1cm 3cm 1cm}, clip, width=\\linewidth]{figures\/cluster_LVMI}\n\t\t\\caption{co-clustering}\\label{fig:fig4e}\n\t\\end{subfigure}\\hspace{0.05\\textwidth}%\n\t\\begin{subfigure}{.25\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[trim={3cm 1cm 3cm 1cm}, clip, width=\\linewidth]{figures\/cluster_ord_LVMI}\n\t\t\\caption{co-clustering}\\label{fig:fig4f}\n\t\\end{subfigure}\n\\vspace{-0.1 cm}\n\\caption{ Panels (a) and (d): density estimates. 
Panels (b)--(c) and (e)--(f): heatmaps of the posterior probabilities of co-clustering; in (b) and (e) patients are ordered based on the diagnosis and the four black squares highlight the within-sample probabilities; in (c) and (f) patients are reordered based on co-clustering probabilities.}\n\\label{fig:fig4}\n\\vspace{-0.1 cm}\n\\end{figure} \n\nAs far as prediction and second-level clustering are concerned, Figure~\\ref{fig:fig4} displays the density estimates and the heatmap of co-clustering probabilities between pairs of patients for the E\/A ratio and LVMI. Figure~\\ref{fig:fig4b} shows that co-clustering probabilities are similar within and across diagnoses, indicating that the effect of the diseases on the distribution of the cardiac index is mostly explained through shifts between disease-specific locations. Moreover, Figure~\\ref{fig:fig4b} suggests the presence of three outliers that have a low probability of co-clustering with all the other subjects and that would be missed by a model using a more traditional ANOVA structure. On the other hand, Figure~\\ref{fig:fig4e} shows a slightly different pattern for co-clustering probabilities in the fourth square, which suggests that the heterogeneity between severe preeclampsia patients and the other patients is not entirely explained by shifts in disease-specific locations. Finally, Figure~\\ref{fig:fig4f} suggests the presence of an underlying relevant factor. The corresponding figures for all ten response variables are reported in Section E.1 of the Supplement and can be used for prediction and for a graphical analysis aimed at checking for the presence of underlying relevant factors, outliers and differences across diseases distinct from shifts between disease-specific locations.\n\nOur results are consistent with almost all of the findings in \\cite{tatapudi2017}, where results were obtained through a series of independent frequentist tests. 
However, importantly, we are able to provide more insights thanks to the simultaneous comparison approach and the latent clustering of subjects. For instance, considering the response LVMI, \cite{tatapudi2017} detected a significant increase in cases compared to controls and an increase in severe preeclampsia compared to gestational hypertensive and mild preeclampsia patients. Such results do not clarify whether a modification exists between the control group and gestational hypertensive patients or between the latter and mild preeclampsia patients. Moreover, in contrast to our analysis, their results do not provide any information concerning the presence of underlying common factors, outliers or distributional effects (different from shifts in locations).\n\n\\section{Concluding remarks}\n\\label{s:conclusion}\nWe designed a Bayesian nonparametric model to detect clusters of hypertensive disorders over different cardiac function indexes and found modified cardiac functions in hypertensive patients compared to healthy subjects as well as progressively increased alterations with the severity of the disorder. The proposed model also has application potential beyond the \nconsidered setup when the goal is to cluster populations according to multivariate information: it borrows strength across response variables, preserves the flexibility intrinsic to nonparametric models, and correctly detects partitions of populations even in the presence of small sample sizes, \nwhen alternative distribution-based clustering models tend to underestimate the number of clusters. The key component of the model is the s-HDP, a hierarchical nonparametric structure for the error terms that offers flexibility and serves as a tool to investigate the presence of unobserved factors, outliers and effects other than changes in locations. 
Interesting extensions of the model include generalizations to other types of invariances in order to accommodate identifiability in generalized linear models, for instance in the presence of count data and a log link function, as well as extensions to other types of processes beyond the Dirichlet process.\n\n\\section*{Acknowledgements}\nMost of the paper was completed while B. Franzolini was a Ph.D. student at Bocconi University, Milan.\n A. Lijoi and I. Pr\\\"unster are partially supported by MIUR, PRIN Project 2015SNS29B.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\nThe authors acknowledge K. Ishii and K. Tomiyasu for their useful remarks. The experiments at the Materials and Life Science Experimental Facility at J-PARC were performed under a user program (Proposal No. 2013A0052). M.F. is supported by a Grant-in-Aid for Scientific Research (A) (16H02125).\n\n\t\\begin{figure}[t]\n\t\\begin{center}\n\t\\includegraphics[width=75mm]{Fig3_dispersion_v1.pdf}\n\t\\caption{(Color online)~Momentum-dependence of (a) peak-position and (b) intensity in Sr$_3$Ir$_2$O$_7$. The gray lines are the results from RIXS [\\ref{Kim2012}]. 
The intensity is not corrected for $|{\\rm f}({\\bf Q})|^2$ or the absorption coefficient.}\n\t\\label{dispersion}\n\t\\end{center}\n\t\\end{figure}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction and Vocabulary }\n\\label{sec:vocabulary}\n\nOne of the important issues of contemporary physics is the\nunderstanding of strong interactions and in particular the study\nof the properties of \\textbf{strongly interacting matter} -- a system of strongly interacting particles in equilibrium.\nThe advent of the quark model of hadrons and the development of \nthe commonly accepted theory of strong interactions, \\textbf{quantum chromodynamics (QCD)},\nnaturally led to expectations that matter at very high densities\nmay exist in a state of quasi-free quarks and gluons, the \\textbf{quark-gluon\nplasma (QGP)}.\n\n\n\n\\begin{figure}[hbt]\n\n\\centering\n\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/v_qgp_hic.jpg}\n\n\\caption{\n\\textit{Left:} \n Artistic sketch of the two phases of strongly interacting matter, hadron-resonance gas and quark-gluon plasma.\n \\textit{Middle:}\nPhase diagram of QCD in temperature $T$ and baryon chemical potential $\\mu_B$, and the region covered by running or planned experiments~\\cite{Alemany:2019vsk}. The density range covered by LHC, LHC-FT and SPS experiments is indicated by the shaded areas in the figure. The lower boundary of the grey and blue shaded area follows the chemical freeze-out.\nThe upper boundary relates to the parameters at the early stage of the collisions. The potential deconfinement critical point is labelled with d-CP, the onset of deconfinement with OD. The black line at small temperatures and high densities shows the nuclear liquid-gas transition, also ending in a critical point n-CP. The density range of other experiments is indicated in the bar below the figure. 
This\nincludes RHIC at BNL, NICA at JINR, SIS100 at FAIR, J-PARC-HI at J-PARC, the Nuclotron at JINR (NUCL), and HIAF at HIRFL.\n \\textit{Right:} \n Evolution of a heavy-ion collision at high energies. Successive snapshots of a central collision are shown versus time. \n }\n \\label{fig:v_qgp}\n\\end{figure}\n\n\nDoes the QGP exist in nature?\nHow does the transition proceed from a low-density state of strongly interacting matter, \nin which quarks and gluons are confined in hadrons, to the QGP? \nIs it similar to the transition from liquid water to water vapour along\na first order transition line ending in a second order critical point and followed by a\ncross-over transition, see illustration plots in Fig.~\\ref{fig:v_qgp}?\n\nThe study of high energy collisions of two atomic nuclei gives us\na unique possibility to address these issues in well controlled\nlaboratory experiments. \nThis is because it is observed that a system of strongly \ninteracting particles created in \\textbf{central heavy ion collisions} is close to (at least local) equilibrium.\nWhat does the transition from a non-equilibrium system created in \\textbf{inelastic proton-proton interactions}\nto the equilibrium system in central heavy ion collisions look like?\n\nThese questions have motivated broad\nexperimental and theoretical efforts for about 50 years. 
\nSystematic measurements of particle production properties in nucleus-nucleus (A+A) collisions at different collision energies and for different masses of\ncolliding nuclei have been performed.\nBy changing collision energy and nuclear mass number \none changes macroscopic parameters of the created system -- its volume, energy, and net baryon number.\nThis allows one to move across the phase diagram and look for the theoretically predicted boundaries of equilibration and matter phases, see illustration plots in Fig.~\\ref{fig:v_qgp}.\nConsequently, several physics phenomena might be \nobserved when studying nuclear collisions at high energies experimentally. \nThese are:\n\\begin{enumerate}[(i)]\n\\item \n\\textbf{onset of fireball} -- beginning of creation of large-volume ($\\gg 1$~fm$^3$) strongly interacting matter,\n\n\\item\n\\textbf{onset of deconfinement} -- beginning of QGP creation with increasing collision energy,\n\n\\item \n\\textbf{deconfinement critical point} -- a hypothetical end point of the first order transition line to quark-gluon plasma that has properties of a second order phase transition. \n\\end{enumerate} \n\nThese phenomena are expected to lead to rapid changes of hadron production properties -- the \\textbf{critical structures} -- \nwhen changing collision energy and\/or nuclear mass number of the colliding nuclei. \n\n\n\n\n\n\\section{ Strongly interacting matter in heavy ion collisions}\n\\label{sec:strongly}\n\n\n\\textbf{Strongly interacting matter.}\nThe equation of state defines the macroscopic properties of matter in equilibrium. It is a subject of statistical mechanics. 
The first step in this modelling is to clarify the types of particle species and \ninter-particle interactions.\nOne should also choose an appropriate statistical ensemble\nwhich fixes the boundary conditions and conserves the corresponding global physical quantities, like energy and conserved charges.\nStrongly interacting matter at high energy density can be formed at the early stages of\nrelativistic \nA+A collisions.\nAs mentioned in the introduction, the questions --\n\\begin{enumerate}[(i)]\n\\item\nWhat types of particles should be considered as fundamental?\n\\item\nWhat are the composite objects?\n\\item\nWhat are the fundamental forces between the matter constituents?\n\\item\nWhat are the conserved charges?\n\\end{enumerate}\n-- should be addressed.\nAnswers to these questions are changing with time as our knowledge about basic properties of elementary particles and their interactions increases.\n\nThe first model of strongly interacting matter at high energy density was formulated in 1950 by Fermi~\\cite{fermi:1950jd}.\nIt assumes that a system created in high energy proton-proton (p+p) interactions emits pions like black-body radiation, i.e., pions are treated as non-interacting particles,\nand the pion mass $m_\\pi\\cong 140$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace is neglected compared to the high temperature of the system.\nThe pressure $p$ and energy density $\\varepsilon$ can then be represented by the following functions of the temperature $T$ \n(the system of units with $h\/(2\\pi)=c=k_B=1$ will be used),\n\\eq{\\label{p}\np(T)~=~\\frac{\\sigma}{3}T^4~,~~~~~\\varepsilon(T)~\\equiv~T\\frac{dp}{dT}~-~p~=\\sigma T^4~,\n}\nwhere $\\sigma= \\pi^2g\/30$ is the so-called Stefan-Boltzmann constant, with $g$ being the degeneracy factor (the number of spin and isospin states), and $g=3$ counting the three isospin states $\\pi^+,~\\pi^0,~\\pi^-$. 
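For illustration, the Fermi-model formulas above can be evaluated numerically; a minimal sketch (not from the original text) computes $p$ and $\varepsilon$ for a massless pion gas at $T=150$~MeV, converting from MeV$^4$ to MeV\/fm$^3$ with $(\hbar c)^3$, $\hbar c \approx 197.3$~MeV~fm:

```python
import math

HBARC = 197.327  # hbar*c in MeV*fm

def pion_gas(T, g=3):
    """Black-body pion gas: p = (sigma/3) T^4, eps = sigma T^4,
    with sigma = pi^2 g / 30; returns (p, eps) in MeV/fm^3."""
    sigma = math.pi**2 * g / 30.0
    eps = sigma * T**4 / HBARC**3
    return eps / 3.0, eps

p, eps = pion_gas(150.0)  # T = 150 MeV
print(p, eps)             # eps is roughly 65 MeV/fm^3, and eps/p = 3
```

The ratio $\varepsilon\/p=3$ is the ultra-relativistic (massless) equation of state built into these formulas.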
\n\n\\vspace{0.2cm}\n\\textbf{Hadrons and resonances.}\nThe study of particle production in high energy collisions started in the 1950s with discoveries\nof the lightest hadrons -- $\\pi$, $K$, and $\\Lambda$\n-- in cosmic-ray experiments. Soon after, with the rapid advent of particle accelerators,\nnew particles were discovered almost day by day.\nThe main feature of the strong interactions appears to be the creation of ever new types of particle species -- hadrons and resonances -- when increasing the collision energy. A huge number (several hundreds) of different hadron and resonance species are known today. The simplest statistical model treats hadron matter, i.e. a system of strongly interacting particles at not too high energy density, as a mixture of ideal gases of different hadron-resonance species. \n\n\\vspace{0.2cm}\n\\textbf{Hadron-resonance gas.}\nIn the grand canonical ensemble the pressure \nfunction is then written as\n\\begin{eqnarray}\n\\label{p-id}\np^{\\rm id}(T,\\mu) \n= \\sum_i\\frac{g_i}{6\\pi^2}\\int d m\\, f_i(m)\\int_0^{\\infty} \\frac{k^4dk}{\\sqrt{k^2+m^2}}\n\\left[ \\exp\\left(\\frac{\\sqrt{k^2+m^2} - \\mu_i}{T}\\right)+\\eta_i\\right]^{-1}~,\n\\end{eqnarray}\nwhere $g_i$ is the degeneracy factor of the $i^{\\textrm{th}}$ particle and the normalized\nfunction $f_i(m)$\ntakes into account the Breit-Wigner shape of resonances with finite width\n$\\Gamma_i$ around their average mass $m_i$. 
For the stable hadrons,\n$f_i(m)=\\delta(m-m_i)$.\nThe sum over $i$ in Eq.~\\eqref{p-id} \nis taken over all\nnon-strange and strange hadrons listed in the Particle Data Tables.\nNote that in the equation\n$\\eta_i =-1$ and $\\eta_i = 1$ for bosons\nand fermions, respectively, while $\\eta_i = 0$ corresponds to the Boltzmann approximation.\nThe chemical potential for the $i^{\\textrm{th}}$ hadron is given by\n\\begin{equation}\n\\mu_i\\ =\\ b_i\\,\\mu_B\\, +\\, s_i\\,\\mu_S\\, +\\, q_i\\,\\mu_Q\n\\label{eq:mui}\n\\end{equation}\nwith $b_i = 0,\\, \\pm 1$, $s_i = 0,\\, \\pm 1,\\, \\pm 2,\\, \\pm 3$, and\n$q_i = 0,\\, \\pm 1,\\, \\pm 2$\nbeing the corresponding baryonic number, strangeness, and electric charge of\nthe $i^{\\textrm{th}}$ hadron. Hadrons composed of charmed and beauty quarks are rather heavy and thus rare in the hadron-resonance gas, and their contribution to the thermodynamical functions is often neglected. \nChemical potentials are denoted as $\\mu\\equiv (\\mu_B,\\mu_S,\\mu_Q)$ and correspond to the conservation of net-baryon number, strangeness, and electric charge in the hadron-resonance gas.\nThe entropy density $s$, net-charge densities $n_i$ (with $i=B,Q,S$), and energy density $\\varepsilon$ are calculated from the pressure function (\\ref{p-id}) according to the standard thermodynamic identities:\n\\eq{\\label{therm-id}\ns(T,\\mu)=\\left(\\frac{\\partial p}{\\partial T}\\right)_\\mu~,~~~\nn_i(T,\\mu)=\\left(\\frac{\\partial p}{\\partial \\mu_i}\\right)_T~,~~~~ \\varepsilon(T,\\mu)=Ts+\\sum_in_i\\mu_i-p~.\n}\n\nNote that only one chemical potential $\\mu_B$ is considered as an independent variable in fits of the model to particle multiplicities produced in A+A reactions. 
The other two, $\\mu_S$ and $\\mu_Q$, should be found, at each pair of $T$ and $\\mu_B$, from the requirements that the net-strangeness density equals zero, $n_S=0$, and the ratio of the net-electric charge density, $n_Q$, to $n_B$ equals the ratio of the number of protons, $Z$, to the number of all nucleons, $A$ (protons and neutrons) in the colliding nuclei, $n_Q\/n_B=Z\/A$. \nEquations (\\ref{p-id}-\\ref{therm-id}) define the ideal hadron-resonance gas \nmodel. In spite of evident simplifications, this model rather successfully fits the rich data on mean multiplicities of hadrons measured in central A+A collisions at high energies. \n\n\\vspace{0.2cm}\n\\textbf{Hagedorn model.} \nIs there an upper limit for the masses of mesonic and baryonic resonances? \nIn 1965 Hagedorn formulated a statistical model assuming an exponentially increasing spectrum of hadron-resonance states at large masses~\\cite{Hagedorn:1965st}:\n\\eq{\\label{H}\n\\rho(m)~\\cong~C\\,m^{-a}\\,\\exp\\left(\\frac{m}{T_H}\\right)~, \n}\nwhere $C$, $a$, and $T_H$ are the model parameters. \nAt that time the number of experimentally detected hadron-resonance states was much smaller than it is today. Nevertheless, Hagedorn made the brave assumption that these $m$-states interpolate the low-mass spectrum and extend to $m\\rightarrow \\infty$, and that their density at large $m$ behaves as in Eq.~(\\ref{H}). \nThe pressure function at $\\mu=0$ then becomes\n\\eq{ \\label{pH}\np(T)~=~T\\int_0^\\infty dm\\,\\rho(m)\\,\\phi_m(T)~,\n}\nwith the function $\\phi_m(T)$ behaving at $m\/T \\gg 1$ as\n\\eq{ \\label{large-m}\n \\phi_m(T)~\\cong ~ g\\,\\left(\\frac{mT}{2\\pi}\\right)^{3\/2}\\,\\exp\\left(-\\,\\frac{m}{T}\\right)~.\n}\nThe result (\\ref{large-m}) can be found from Eq.~(\\ref{p-id}) after $k$-integration. 
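The quality of this large-$m$ approximation can also be checked numerically against a direct $k$-integration of the Boltzmann ($\eta_i=0$, $\mu=0$) single-species version of Eq.~(\ref{p-id}); a minimal sketch (not from the original text; $g=1$, simple midpoint rule, masses and temperatures in MeV):

```python
import math

def phi_exact(m, T, g=1, n=200_000):
    """Numerical phi_m(T) = g/(6 pi^2 T) * integral of
    k^4 / sqrt(k^2+m^2) * exp(-sqrt(k^2+m^2)/T) over k (midpoint rule)."""
    kmax = m + 60.0 * T  # the integrand is negligible beyond this cutoff
    h = kmax / n
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        e = math.sqrt(k * k + m * m)
        s += k**4 / e * math.exp(-e / T)
    return g * s * h / (6.0 * math.pi**2 * T)

def phi_asym(m, T, g=1):
    """Leading large-m/T form: g * (m T / 2 pi)^(3/2) * exp(-m/T)."""
    return g * (m * T / (2.0 * math.pi)) ** 1.5 * math.exp(-m / T)

T = 150.0
for m in (1500.0, 3000.0):  # m/T = 10 and 20
    print(m / T, phi_exact(m, T) / phi_asym(m, T))  # ratio tends to 1 from above
```

Already at $m\/T=10$ the two expressions agree to within roughly 20\%, and the agreement improves with growing $m\/T$.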
At $m\/T\\gg1$ and $\\mu=0$\nboth quantum statistics and relativistic effects become negligible.\n\n\n There are two exponential functions in the integrand (\\ref{pH}): $\\exp(-m\/T)$ defines the exponentially decreasing contribution of each individual $m$-state, and $\\exp(m\/T_H)$ defines the exponentially increasing number of these $m$-states. It is clear that the $m$-integral\nin Eq.~(\\ref{pH}) exists only for $T \\le T_H$.\nTherefore, a new hypothetical\nphysical constant -- the limiting temperature\n$T_H$ -- was introduced. The numerical value of $T_H$ was estimated by Hagedorn from two sources: from the straightforward comparison of Eq.~(\\ref{H}) with the experimental mass spectrum of hadrons and resonances, $\\Delta N\/\\Delta m$, and from the inverse slope parameter of the transverse momentum spectra of final state hadrons in p+p interactions at high energies. Both estimates gave similar values, $T_H=150-160$~MeV. \nThe hadron states with large $m$ in the Hagedorn model are named the Hagedorn fireballs. These states were defined in a democratic (bootstrap) way:\nthe Hagedorn fireball consists of an arbitrary number of non-interacting Hagedorn fireballs, each of which in turn consists of ... \n\n\nIn the 1960s it was not clear up to what masses the hadron-resonance spectrum can be extended. \nThe answer to this question is still unclear today. The large (exponential) density of resonance states\n$\\rho(m)$ and the finite widths $\\Gamma(m)$ of these states make their experimental observation very problematic. \nMoreover, several conceptual problems of the Hagedorn model were obvious from the very beginning. The lightest hadron species, e.g., the pion, kaon, and proton, cannot be considered as composed of other (non-interacting) hadrons, and should therefore have their own (non-democratic) status. Besides, the fireballs are treated as point-like non-interacting objects. 
However, from nuclear physics it was already evident that at least protons and neutrons are (strongly) interacting particles: nucleons should have both attractive and repulsive interactions to be able to form stable nuclei. Most probably, similar interactions exist between other types of baryons. Evident physical arguments suggest that the same type of repulsive and attractive interactions should exist between anti-baryon species. \n\n\n\\vspace{0.2cm}\n\\textbf{Quark-gluon plasma.}\nThe quark model of hadron classification was proposed by \nGell--Mann~\\cite{GellMann:1964nj} and Zweig~\\cite{Zweig:352337} in 1964. It was the alternative to the bootstrap approach. Only three types of objects -- $u$, $d$, $s$ quarks and their anti-quarks -- were needed to construct the quantum numbers of all known hadrons and successfully predict several new ones. A 15-year \nperiod then started in which the idea of the existence of sub-hadronic particles -- quarks and gluons -- \nwas transformed into the fundamental theory of strong interactions,\nquantum chromodynamics (QCD). Soon after the discovery of the $J\/\\psi$-meson in 1974 three new types of quarks -- $c$, $b$, and $t$ -- were added to QCD. \nIn parallel, an important conjecture was formulated~\\cite{Ivanenko:1965dg,Itoh:1970uw} --\nmatter at high energy density, as in super-dense star cores, may consist of quasi-free Gell-Mann-Zweig quarks instead of\ndensely packed hadrons. Some years later Shuryak investigated the properties of QCD matter and came to a qualitatively similar conclusion: QCD matter at high temperature is best described by quark and gluon degrees of\nfreedom, and the name quark-gluon plasma was coined~\\cite{Shuryak:1977ut,Shuryak:1980tp}.\n\nQuestions concerning QGP properties and the properties of its transition\nto matter consisting of hadrons have been considered since the late 1970s (see, e.g., Ref.~\\cite{Cabibbo:1975ig}). 
The Hagedorn model was still rather popular at that time due to its successful phenomenological applications. For example, the temperature parameter $T_{\\rm ch}$ found from fitting the data on hadron multiplicities in p+p interactions and A+A collisions at high energies (the so-called chemical freeze-out temperature) was found to be close to the limiting Hagedorn temperature, $T_{\\rm ch} = 140-160~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace \\cong T_H $. \nHagedorn and Rafelski~\\cite{Hagedorn:1980cv}\nas well as Gorenstein, Petrov, and Zinovjev~\\cite{Gorenstein:1981fa} \nsuggested that the \nupper limit of the hadron temperature, the Hagedorn\ntemperature $T_H$, is not the limiting temperature but the transition temperature\nto the QGP, $T_C = T_H \\approx 150$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace. \nThe Hagedorn fireball was then interpreted\nas the quark-gluon bag formed in the early stage of the collision. It also had an exponential mass spectrum (\\ref{H}), as in the Hagedorn model,\nbut was not a point-like object. The average volume of the quark-gluon bag increases linearly with its mass.\nThis causes the excluded volume effects in the system of bags and leads to the transition to the high temperature QGP phase.\nNote that the first QCD-inspired estimate of the transition temperature to the\nQGP gave $T_C \\approx 500~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace$~\\cite{Shuryak:1977ut}, while the most recent QCD-based estimates obtain $T_C \\approx T_H \\approx 150~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace$~\\cite{Bazavov:2018mes}. 
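The existence of the limiting temperature implied by the integral in Eq.~(\ref{pH}) can be illustrated numerically. In the following Python sketch all parameter values are assumptions chosen for illustration only (only the convergence behaviour matters, so the normalizations $C$ and $g$ are set to unity); the mass integral is truncated at increasing upper limits, which saturates for $T<T_H$ but grows without bound for $T>T_H$.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (assumptions, not fits): T_H in GeV, a dimensionless,
# C and g set to unity since only the convergence behaviour matters here.
T_H, a, C, g = 0.155, 3.0, 1.0, 1.0

def integrand(m, T):
    """rho(m) * phi_m(T) of Eqs. (H) and (large-m) in the large-m Boltzmann limit."""
    rho = C * m**(-a) * np.exp(m / T_H)
    phi = g * (m * T / (2.0 * np.pi)) ** 1.5 * np.exp(-m / T)
    return rho * phi

# Truncate the m-integral of Eq. (pH) at increasing upper limits M (in GeV):
# one temperature below T_H, one above.
for T in (0.140, 0.170):
    tails = [quad(integrand, 1.0, M, args=(T,), limit=200)[0] for M in (20, 40, 80)]
    print(f"T = {T} GeV:", tails)
```

For $T=0.140$~GeV the successive truncations agree to many digits (the integral converges), whereas for $T=0.170$~GeV each doubling of the upper limit increases the result by many orders of magnitude, reproducing the statement that the pressure exists only for $T \le T_H$.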
\n\nMany physicists started to speculate that the QGP could be formed in\nA+A collisions at sufficiently high energies, in which one expects that strongly interacting matter of high energy density will be created.\nTherefore the QGP might be discovered in laboratory experiments.\n\n\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/v_hic_events.jpg}\n\\caption{\nTracks produced in nucleus-nucleus collisions recorded by heavy-ion experiments\nlocated in JINR Dubna (the SKM-200 streamer chamber~\\cite{Aksinenko:1980nm}), LBL Berkeley (the LBL streamer chamber~\\cite{Sandoval:1982cn}), CERN SPS (the NA35 streamer chamber~\\cite{Stock:1987rn}) and CERN SPS (the NA49 time projection chambers~\\cite{Afanasev:1999iu}), from left to right, respectively. \n }\n\\label{fig:s_events}\n\\end{figure}\n\n\\vspace{0.2cm}\n\\textbf{The first experiments.}\nIn parallel to the theoretical ideas and models, experimental studies of A+A\ncollisions were initiated in 1970 at the Synchrophasotron (JINR Dubna)~\\cite{Issinsky:1994it,Malakhov:2013zjq} and in 1975 at the Bevalac (LBL Berkeley)~\\cite{Alonso:1985awy}. Figure~\\ref{fig:s_events} (\\textit{first} and \\textit{second from left}) shows two examples of recorded collisions.\n\nSeveral effects were observed which could be attributed to a collective behaviour of the created system of hadrons. These are anisotropic and radial flow of particles~\\cite{Lisa:1994yr,Nagamiya:1981sd,Gustafsson:1984nd}, enhanced production of strange particles~\\cite{Anikina:1984cu} and suppressed production of pions~\\cite{Sandoval:1980bm}. They could only be\nexplained by\nassuming that strongly interacting matter is produced in the studied collisions~\\cite{Stock:1985xe,Danielewicz:1985hn}. \nIn what follows we use the term fireball as the notation for a large volume\n($\\gg 1$~fm$^3$) system consisting of strongly interacting particles in at least local equilibrium. 
They can be either hadrons and resonances or quarks and gluons. \n\n\n\\vspace{0.2cm}\n\\textbf{Initiating the hunt for QGP.}\nThe two findings,\n\\begin{enumerate}[(i)]\n\\item\ntheoretical: QCD matter at sufficiently high temperature is in the state of a QGP;\n\\item\nexperimental: strongly interacting matter is produced in heavy ion collisions at energies of several GeV per nucleon;\n\\end{enumerate}\nled activists of the field~\\cite{Satz:2016xba} to the important decision to collide heavy ions at the maximum possible energy with the aim to discover the QGP.\nIn the 1980s the maximum possible energy for heavy ion collisions was available at CERN, Geneva.\nThis is why heavy ion physics entered the Super Proton Synchrotron (SPS) program at CERN.\nThe \\textit{Workshop on future relativistic heavy ion experiments}, GSI Darmstadt, October 7-10, 1980, organized by Bock and Stock~\\cite{Bock:1981iyr}, with an opening talk by Willis and a summary talk by Specht, led to the formulation of the new program. 
Moreover, it initiated a series of \\textit{Quark Matter} conferences~\\cite{Satz:2016xba}.\n\n\n\\section{Evidence for the quark-gluon plasma}\n\\label{sec:q}\n\n\\textbf{Predicted QGP signals.}\nThe experimental search for a quark-gluon plasma in heavy ion collisions at the CERN SPS was shaped by several model predictions of possible QGP signals:\n\\begin{enumerate}[(i)]\n \\item\n suppressed production of charmonium states, in particular $J\/\\psi$ mesons~\\cite{Matsui:1986dk},\n \\item\n enhanced production of strange and multi-strange hadrons from the QGP~\\cite{Rafelski:1982pu},\n \\item \n characteristic radiation of photons and dilepton pairs from the QGP~\\cite{Shuryak:1980tp}.\n\\end{enumerate}\n\n\n\\vspace{0.2cm}\n\\textbf{Measurements at the CERN SPS.}\nThe search for the QGP at the CERN SPS was performed in two steps:\n\\begin{enumerate}[(i)]\n \\item \n In 1986-1987 oxygen and sulphur nuclei were accelerated to 200\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace.\n Data on collisions with various nuclear targets were recorded by seven experiments, NA34-2, NA35, NA36, NA38, WA80, WA85 and WA94. \n \\item\n In 1996-2003 lead and indium beams at 158\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace were collided with lead and indium targets. Data were recorded by nine experiments, NA44, NA45, NA49, NA50, NA52, NA57, NA60, WA97 and WA98. \n\\end{enumerate}\n\nFigure~\\ref{fig:s_events} shows a S+Au collision at 200\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace (\\textit{second from right}) and a Pb+Pb collision at 158\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace (\\textit{right}) recorded by the NA35 streamer chamber~\\cite{Stock:1987rn} and the NA49 time projection chambers~\\cite{Afanasev:1999iu}, respectively. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\linewidth,clip=true]{plots\/q_sm1.jpg}\n \\caption{Transverse energy distributions in S+A collisions at the top CERN SPS energy measured by HELIOS\/NA34-2~\\cite{Akesson:1988yt}.}\n \\label{fig:detadet}\n\\end{figure}\n\nAn estimate of the energy density in A+A\ncollisions can be obtained from measurements\nof the transverse energy production and the size of the collision system. Already in \nS collisions with heavy nuclei (see Fig.~\\ref{fig:detadet}) it was found that values above 1~GeV\/fm$^3$ were\nreached (NA34~\\cite{Akesson:1987kh,Akesson:1988yt}, NA35~\\cite{Heck:1988cm}, NA49~\\cite{Margetis:1994tt}).\nMoreover, the fireball showed an effective temperature increasing linearly with particle mass,\na characteristic of collective radial expansion (see Fig.~\\ref{fig:exp-stat} (\\textit{left})). Also, mean\nmultiplicities of produced hadrons are well reproduced by the statistical model~\\cite{Becattini:2003wp} (see Fig.~\\ref{fig:exp-stat} (\\textit{right})).\nThus conditions in collisions of heavy nuclei at the top energy of the CERN SPS are promising for the production of the QGP.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\linewidth,clip=true]{plots\/q_sm2.jpg}\n \\includegraphics[width=0.51\\linewidth,clip=true]{plots\/q_sm3.jpg}\n \\caption{\\textit{Left:} Inverse slope parameter (effective temperature) of the transverse mass distribution versus particle mass measured by WA97, NA44 and NA49~\\cite{Antinori:2000sb}. \\textit{Right:} Mean hadron multiplicities\n measured by NA49 compared to the statistical model fit~\\cite{Becattini:2003wp}. Pb+Pb collisions at the top CERN SPS energy. 
}\n \\label{fig:exp-stat}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\centering\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/q_signals1a.jpg}\n\\caption{\n\\textit{Left:} Ratio of $J\/\\psi$ meson to Drell-Yan muon pair production (data points) yields\ncompared to predictions (curves) of $J\/\\psi$ absorption by hadronic \nmatter~\\cite{Baglin:1990iv,Alessandro:2004ap} (NA38, NA50).\n\\textit{Center:} Comparison of \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace yield ratio in p+p, p+A and A+A collisions~\\cite{Bartke:1990cn,Appelshauser:1998vn}\n(NA35, NA49). \\textit{Right:} Comparison of the mid-rapidity ratios\nof hyperon production to number of wounded nucleons in p+Be, p+Pb and Pb+Pb\ncollisions~\\cite{Antinori:2006ij} (NA57). Top CERN SPS energy.\n }\n\\label{fig:q_signalsa}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\centering\n \\includegraphics[width=0.4\\linewidth,clip=true]{plots\/q_sm4.jpg}\n \\includegraphics[width=0.4\\linewidth,clip=true]{plots\/q_sm5.jpg}\n\\caption{\n\\textit{Left:} Direct photon signal observed in central Pb+Pb collisions~\\cite{Manko:1999tgz} (WA98).\n\\textit{Right:} Effective temperature of directly produced dimuons in In+In collisions as a function of dimuon mass~\\cite{Arnaldi:2008er} (NA60). 
Top CERN SPS energy.\n }\n\\label{fig:q_signalsb}\n\\end{figure}\n\n\\vspace{0.2cm}\n\\textbf{The QGP discovery.}\nResults on central collisions of medium size and heavy nuclei from the QGP search programme at the CERN SPS appeared to be consistent with the predictions for the QGP:\n\\begin{enumerate}[(i)]\n \\item\n The relative yield of $J\/\\psi$ mesons is significantly suppressed compared to that in p+p and p+A interactions\n (NA38~\\cite{Baglin:1990iv}, NA50~\\cite{Alessandro:2004ap}, NA60~\\cite{Arnaldi:2007zz}), as expected for the $J\/\\psi$ melting in a QGP\n (see Fig.~\\ref{fig:q_signalsa} (\\textit{left})).\n \\item\n The relative strangeness yield is consistent with the yield expected for the equilibrium QGP. Moreover, it is significantly enhanced compared to that in p+p and p+A \n interactions (see Fig.~\\ref{fig:q_signalsa} (\\textit{center}))\n (NA35~\\cite{Bartke:1990cn}, NA49~\\cite{Appelshauser:1998vn}). \n An even larger enhancement is measured for the relative yield of multi-strange hyperons (see Fig.~\\ref{fig:q_signalsa} (\\textit{right})),\n (WA97~\\cite{Andersen:1999ym}, NA57~\\cite{Antinori:2006ij}). \n Note that QGP formation was not expected in p+p and p+A collisions.\n \\item\n Spectra of directly produced dimuons (virtual photons) and photons emerge above a dominant background at large mass and transverse momentum, respectively, and show a thermal contribution with an effective temperature of about 200~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace. 
This is significantly larger than the expected transition temperature to the QGP (see Fig.~\\ref{fig:q_signalsb})\n (NA60~\\cite{Arnaldi:2008er} and WA98~\\cite{Manko:1999tgz}).\n \\end{enumerate}\n\n\n\\vspace{0.2cm}\n\\textbf{Standard model of heavy-ion collisions.}\nThese and other results\nestablished the standard picture of heavy-ion collisions~\\cite{Florkowski:2010zz}:\n\\begin{enumerate}[(i)]\n \\item \n High density strongly interacting matter is created at the early stage of heavy-ion collisions. Starting at SPS collision energies it is in the QGP phase.\n \\item\n The high-density matter enters a hydrodynamic expansion, cools down and emits photons and dileptons.\n \\item\n At the phase transition temperature, $T_C \\approx 150~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace$, hadrons are created.\n Statistical hadronization models fit hadron yields at this stage quite well.\n \\item \n The hadronic matter after hadronization is still dense enough to modify the hadron composition and continue expansion.\n \\item\n At sufficiently low densities the hadron interaction rate drops to zero (freeze-out):\n resonances decay and long-lived hadrons freely fly away, e.g., towards particle detectors at the CERN SPS.\n\\end{enumerate}\n\n\\vspace{0.2cm}\n\\textbf{Conclusion from the QGP-search.}\nThese major achievements were compiled by the heavy-ion community~\\cite{Heinz:2000bk} and led to the CERN press release --\non February 10, 2000 the CERN Director General Luciano Maiani said:\n\\textit{\nThe combined data coming from the seven experiments on CERN's Heavy Ion programme have given a clear picture of a new state of matter. This result verifies an important prediction of the present theory of fundamental forces between quarks. It is also an important step forward in the understanding of the early evolution of the universe. We now have evidence of a new state of matter where quarks and gluons are not confined. 
There is still an entirely new territory to be explored concerning the physical properties of quark-gluon matter. The challenge now passes to the Relativistic Heavy Ion Collider at the Brookhaven National Laboratory and later to CERN's Large Hadron Collider.}\n\nThis was in fact the moment when the majority of heavy-ion physicists moved to\nstudy heavy-ion collisions at much higher energies at the Relativistic Heavy Ion Collider (RHIC) of Brookhaven National Laboratory (BNL).\nRich and precise results obtained during the period of 2000-2010 at RHIC provided extensive information on the properties of the QGP. There were already no doubts about QGP formation at the early stage of A+A collisions at the CERN SPS, and all the more at RHIC energies. Two basic properties of the QGP were established at the RHIC BNL: jet quenching (deceleration of high momentum partons in the hot QGP) and a small ratio of the shear viscosity $\\eta$ to the entropy density $s$. It was estimated that $\\eta\/s\\cong 0.1$, i.e., the QGP appears to be an almost perfect liquid (see Refs.~\\cite{Adams:2005dq,Adcox:2004mh,Back:2004je,Arsene:2004fa} for details). \n\nThe situation after the announcement of the QGP discovery in 2000 at CERN was however rather confusing. Many were pretty sure about its formation in central Pb+Pb collisions at the CERN SPS, but unambiguous evidence of the QGP state was still missing. \nNeedless to say, the Nobel prize for the QGP discovery was not yet awarded. 
\nThis may be attributed to the difficulty of obtaining unique and quantitative predictions of\nthe expected QGP signals from QCD.\n\n\\vspace{0.2cm}\n\\textbf{Question marks.}\nLet us briefly discuss questions addressed to the two main signals of the QGP: the $J\/\\psi$ suppression and the strangeness enhancement.\n\n{\\bf The $J\/\\psi $ suppression.} The standard picture of $J\/\\psi$ production in collisions of hadrons and nuclei assumes\na two step process: \nthe creation of a $c\\overline{c}$ pair in hard parton collisions at the very early stage of the reaction and a subsequent formation of a bound charmonium state or two open charm hadrons. Furthermore, it was assumed that the initial yield of $c\\overline{c}$ pairs is proportional to the yield of Drell-Yan pairs. Then the $J\/\\psi $\/(Drell-Yan pairs) ratio is expected to be the same in p+p, p+A and A+A collisions, provided there are no other processes which can lead to $J\/\\psi$\ndisintegration and\/or creation.\nThe measured suppression of the ratio in p+A collisions relative to p+p interactions was interpreted as due to $J\/\\psi$ interactions with nucleons of the target nucleus and with hadronic secondaries (`co-movers'). \nIn central Pb+Pb collisions at 158\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace the suppression was found to be significantly stronger than expected in the models including nuclear and co-mover suppression. This anomalous $J\/\\psi$ suppression was interpreted as evidence of QGP creation in central Pb+Pb collisions at the top CERN SPS energy. 
\nHowever, the uncertainties related to the assumption $c\\overline{c} \\sim \\textrm{Drell-Yan pairs}$ and the estimates of the nuclear and co-mover suppression\nlead to uncertainties in the interpretation of the anomalous $J\/\\psi$ suppression as the QGP signal.\nMoreover, models of $J\/\\psi$ production in the later stages of the collision process have been developed:\n\\begin{enumerate}[(i)]\n\\item\nthe statistical model of $J\/\\psi$ production at the hadronization~\\cite{Gazdzicki:1999rk},\n\\item\nthe dynamical and statistical models of $J\/\\psi$ production via coalescence of $c\\overline{c}$ quarks~\\mbox{\\cite{Thews:2000rj,Levai:2000ne,BraunMunzinger:2000px,Gorenstein:2000ck}}.\n\\end{enumerate}\nClearly, in order to distinguish between different effects and verify the $J\/\\psi$ signal of the QGP creation, systematic data on open charm production are needed.\n\n\\vspace{0.2cm}\n\\textbf{Strangeness enhancement.}\nA fast equilibration of $s\\overline{s}$ pairs was predicted as a QGP signature~\\cite{Rafelski:1982pu,Rafelski:2019twp}. This is mainly because of the small mass of the strange quark, $m_s\\sim 100$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace, compared to the QGP temperature: $T \\ge T_C > m_s \\approx 100$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace. The estimated strangeness equilibration time was found to be similar to the lifetime of the QGP phase in heavy-ion collisions at high energies.\nIn fact, the strangeness yield measured in A+A collisions at the top SPS energy and above corresponds to the QGP equilibrium yield; for a recent review see Ref.~\\cite{Rafelski:2019twp}.\nMoreover, it was estimated that the strangeness equilibration time in the confined matter is about 10 times longer than the lifetime of the hadronic phase in A+A collisions. 
This is because the masses of strange hadrons, starting from the lightest one, the kaon ($m_K\\sim 500$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace), are much larger than the maximum temperature of the hadron-resonance gas, $T\\le T_C \\approx 150$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace. Thus a small yield of strangeness was expected for reactions in which no QGP is created: p+p and p+A interactions and A+A collisions at low energies. Consequently the enhanced production of strangeness was predicted as the next QGP signal~\\cite{Glendenning:1984ta}.\n \nThe strangeness enhancement is quantified by comparing a strange-hadron to pion ratio in A+A collisions with that in p+p interactions.\nIn particular a double ratio is calculated:\n\\eq{\\label{str-pion}\nR(\\sqrt{s_{NN}})= \\frac{\\langle K^+\\rangle_{AA}\/\\langle \\pi^+\\rangle_{AA}}{\\langle K^+\\rangle_{pp}\/\\langle \\pi^+\\rangle_{pp}}~,\n}\nwhere $\\langle\\ldots \\rangle_{AA}$ and $\\langle \\ldots \\rangle_{pp}$ denote the event averages of $K^+$ and $\\pi^+$ yields in, respectively, A+A collisions and p+p interactions at the same center of mass collision energy $\\sqrt{s_{NN}}$ of the nucleon pair. Ratios of different strange hadrons to pions \nwere considered, e.g., $\\langle K+\\overline{K}\\rangle\/\\langle \\pi\\rangle$, $\\langle \\Lambda\\rangle\/\\langle \\pi\\rangle$, $\\ldots$, $\\langle\\Omega\\rangle \/\\langle \\pi \\rangle $, \nand then analyzed by forming the double ratios $R$, similar to the one given by Eq.~(\\ref{str-pion}).\n The confrontation of this expectation with the data became possible for the\nfirst time in 1988, when results from the SPS and the AGS\nbecame available. NA35 reported~\\cite{Bartke:1990cn} that in central S+S collisions at 200\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace\nthe strangeness to pion ratio (\\ref{str-pion}) is indeed about two times higher than in nucleon-nucleon interactions at\nthe same energy per nucleon. 
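As a minimal numerical sketch, the double ratio of Eq.~(\ref{str-pion}) is simply a ratio of two yield ratios. The mean multiplicities used below are made-up illustrative numbers, not measured values; they are chosen only to produce a factor-two enhancement of the kind just described.

```python
def double_ratio(k_aa, pi_aa, k_pp, pi_pp):
    """Strangeness enhancement factor R of Eq. (str-pion):
    (<K+>/<pi+>)_AA divided by (<K+>/<pi+>)_pp at the same sqrt(s_NN)."""
    return (k_aa / pi_aa) / (k_pp / pi_pp)

# Hypothetical mean multiplicities, for illustration only.
R = double_ratio(k_aa=12.0, pi_aa=90.0, k_pp=0.2, pi_pp=3.0)
print(R)  # a factor-two enhancement
```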
But an even larger enhancement ($R= 14 - 5 $) \nwas measured at the AGS at 2$A -10A$~GeV~\\cite{Pinkenburg:2001fj,Abbott:1990mm}, demonstrating that strangeness enhancement {\\it increases} with {\\it decreasing} collision energy.\nMoreover, the enhancement factor (\\ref{str-pion}) should evidently go to infinity at the threshold energy of strange hadron production in nucleon-nucleon interactions. \nNote also that the strangeness neutrality introduced to statistical models using the canonical ensemble leads to a suppression of the relative yield of strange particles in systems with a low multiplicity of strangeness carriers~\\cite{Rafelski:1980gk}, e.g. p+p interactions at SPS energies. \nIn any case, the AGS measurements indicating a strangeness enhancement larger than that at the CERN SPS clearly show the difficulties in interpreting the strangeness enhancement as the QGP signal. \n\n\\vspace{0.2cm}\n\\textbf{New strategy.}\nDifficulties in the interpretation of the QGP signatures forced scientists to rethink the QGP-hunt strategy. \nThe emerging new strategy was similar to the one followed by physicists studying molecular liquids and gases. In these essentially simpler and familiar cases it is also sometimes difficult to distinguish the properties of a dense gas from those of a liquid. It is much easier to identify the effects of the liquid-gas transition. \nThus, if one believes that the QGP is formed in central Pb+Pb collisions at the top SPS energy, one should observe qualitative signals of the {\\it transition} to the QGP at a lower collision energy. \nSeveral such signals were predicted within the statistical model of the early stage~\\cite{Gazdzicki:1998vd}.\nTheir observation would serve as strong evidence of QGP creation in heavy-ion collisions at high enough collision energies.\n\nThis idea motivated some of us to propose the collision energy scan at the CERN SPS with the aim to search for the {\\it onset of deconfinement}. 
This was the beginning of the search for the critical structures in heavy-ion collisions; for details see the next section and Ref.~\\cite{Gazdzicki:2003fj}.\n\n\n\n\\section{ Critical structures }\n\\label{sec:cs}\n\n\n\\subsection{Evidence for the onset of deconfinement}\n\\label{sec:cs_ood}\n\n\n\\textbf{Predicted signals of the onset of deconfinement.}\nThe experimental search for the onset of deconfinement in heavy ion collisions at the CERN SPS was shaped by several model predictions of possible measurable signals:\n\\begin{enumerate}[(i)]\n \\item \n characteristic enhanced production of pions and suppression of the strangeness to pion ratio~\\cite{Gazdzicki:1998vd},\n \\item\n softening of collective flow of hadrons~\\cite{Gorenstein:2003cu,Csernai:1999nf,Bleicher:2000sx,Bleicher:2005tb}, which should be observed in hadron distributions in transverse~\\cite{Gorenstein:2003cu} and longitudinal momenta~\\cite{Bleicher:2005tb} and azimuthal angle~\\cite{Csernai:1999nf,Bleicher:2000sx}.\n\\end{enumerate}\n\n\\vspace{0.2cm}\n\\textbf{Measurements at the CERN SPS and RHIC BES.}\nThe search for the onset of deconfinement at the CERN SPS started \nin 1999 with the data taking on Pb+Pb collisions at 40\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace.\nThe data were registered by NA49, NA45, NA50 and NA57. \nIn 2000 a beam at 80\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace was delivered to NA49 and NA45.\nThe program was completed in 2002 by runs of NA49 and NA60 at 20\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace and 30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace.\nThus, together with the previously recorded data at 158\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace, NA49 gathered data at five collision energies. Other experiments collected data at two (NA50, NA57) or three (NA45, NA60) energies. 
In 2010 the beam energy scan (BES) program was started at RHIC with the aim of covering the low\nenergy range overlapping with the CERN SPS and providing important\nconsistency checks on the measurements.\n\n\n\\begin{figure}[hbt]\n\\centering\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/cs_ood_signals.jpg}\n\\caption{\nExamples of results illustrating the observation of the onset-of-deconfinement signals in central Pb+Pb (Au+Au) collisions~\\cite{Aduszkiewicz:2019zsv}, see text for details and more references.\n }\n\\label{fig:cs_ood_signals}\n\\end{figure}\n\n\\vspace{0.2cm}\n\\textbf{Discovery of the onset of deconfinement.}\nResults on the collision energy dependence of hadron production in central Pb+Pb collisions from the onset-of-deconfinement search programme at the CERN SPS~\\cite{Afanasiev:2002mx,Alt:2007aa} appeared to be consistent with the predicted signals (for a review see Ref.~\\cite{Gazdzicki:2010iv}):\n\\begin{enumerate}[(i)]\n\\item\nThe average number of pions per wounded nucleon, $\\langle N_\\pi \\rangle \/\\langle W \\rangle $, \nin low energy A+A collisions is smaller than the corresponding value in p+p reactions. This relation is, however, reversed at collision energies larger than $\\approx$~30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace, the so-called {\\bf kink} structure. \n \\item\n The collision energy dependence of the $\\langle K^+\\rangle_{AA}\/\\langle \\pi^+\\rangle_{AA}$ ratio shows the so-called \\textbf{horn} structure. Following a fast rise the ratio passes through a maximum in the SPS range, at approximately 30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace, and then decreases and settles to a plateau value at higher energies. This plateau was found to continue up to the RHIC and LHC energies.\n \\item\n The collision energy dependence of the inverse slope parameter of the transverse mass spectra, $T^*$, of charged kaons shows the so-called \\textbf{step} structure. 
Following a fast rise the $T^*$ parameter passes through a stationary region (or even a weak minimum for $K^-$), which starts at the low SPS energies, approximately 30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace, and then enters a domain of a steady increase above the top SPS energy.\n \\end{enumerate}\n \nFigure~\\ref{fig:cs_ood_signals} shows examples of the most recent plots~\\cite{Aduszkiewicz:2019zsv} illustrating the observation of the onset-of-deconfinement signals. As seen, data from the RHIC BES~I programme\n(2010-2014) and the LHC (see Ref.~\\cite{Aduszkiewicz:2019zsv} for references to the original experimental papers) confirm the NA49 results and their interpretation. \n\n\nTwo comments are appropriate here.\nThe strangeness to pion ratio, e.g., \n$\\langle K^+\\rangle_{AA}\/\\langle \\pi^+\\rangle_{AA}$,\nstrongly increases with collision energy in the hadron phase. This happens because \n$m_K\/T\\gg 1$, whereas $m_\\pi\/T\\cong 1$.\nThus, a much stronger increase with increasing temperature is expected for the multiplicities of heavy strange hadrons than for pions. The strangeness to pion ratio reaches its maximum inside the hadron phase at the onset of deconfinement. \nThe plateau-like behavior at high collision energies reflects the approximately constant value of the \nstrangeness to entropy ratio in the QGP. It equals the ratio of the degeneracy factor of strange quarks, \n\\eq{\\label{s-quark}\ng_s=\\frac{7}{8}\\cdot 2\\cdot 2\\cdot 3 =10.5~,\n}\nto the total degeneracy factor of the quark-gluon constituents in the QGP,\n\\eq{\\label{qg}\ng= 2\\cdot 8+ \\frac{7}{8}\\cdot 2\\cdot 2\\cdot 3 \\cdot 3 = 47.5~.\n}\nThese degeneracy factors count 2 spin states of quarks and gluons, 3 quark flavor states, 8 colour states of gluons and 3 colour states of quarks; one more factor of 2 appears due to antiquarks (the factor 7\/8 is due to the Fermi statistics of quarks). 
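The counting in Eqs.~(\ref{s-quark}) and (\ref{qg}) can be re-evaluated directly; the short Python check below simply repeats the arithmetic described above using exact rationals.

```python
from fractions import Fraction

# Strange quarks: Fermi factor 7/8, 2 spin states, 2 for quark+antiquark,
# 3 colour states, cf. Eq. (s-quark).
g_s = Fraction(7, 8) * 2 * 2 * 3

# Total: gluons (2 spin x 8 colour) plus u, d, s quarks
# (7/8 x 2 spin x 2 quark+antiquark x 3 colour x 3 flavours), cf. Eq. (qg).
g = 2 * 8 + Fraction(7, 8) * 2 * 2 * 3 * 3

print(float(g_s), float(g), float(g_s / g))
```

This reproduces $g_s = 10.5$ and $g = 47.5$, i.e., a strangeness fraction $g_s/g \approx 0.22$ of the QGP degrees of freedom.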
\nThe strangeness to entropy ratio in the HRG at the largest hadron temperature\n$T\\cong T_H\\cong 150$~MeV appears to be larger than this ratio in the QGP, which is approximately constant at all QGP temperatures $T\\ge T_H$. Therefore, \nthe transition region from hadron matter to the QGP \nreveals itself as the {\\it suppression} of the strangeness yield relative to the pion yield.\n\nThe second comment concerns the inverse slope parameter $T^*$ of the transverse mass ($m_T=\\sqrt{m^2+p_T^2}$) spectrum \n\\eq{\\label{pT}\n\\frac{dN}{m_T dm_T}\\sim \\exp\\left(-\\,\\frac{m_T}{T^*}\\right)~.\n}\nThe parameter $T^*$ is sensitive to both the thermal and collective motion transverse to the collision axis and behaves as\n\\eq{\\label{T*}\nT^*\\cong T+ \\frac{1}{2}m v_T^2~,\n}\nwhere $T$ is the temperature and $v_T$ is the transverse collective (hydrodynamic) velocity of the hadronic matter at the kinetic freeze-out. The parameter $T^*$ increases strongly with collision energy \nup to the energy $\\approx$~30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace. This is because an increasing collision energy leads to an increase of both terms in Eq.~(\\ref{T*}) -- the temperature $T$ and the velocity $v_T$ -- in the hadron phase ($v_T$ increases due to the increase of the pressure). At collision energies larger than $\\approx$~30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}}\\xspace the parameter $T^*$ is approximately independent of the collision energy in the SPS energy range. In this region one expects the transition between confined and deconfined matter. In the transition region both values -- $T$ and $v_T$ -- remain approximately constant, and this leads to the plateau-like structure in the energy dependence of the $T^*$ parameter. \nAt RHIC--LHC energies, the parameter $T^*$ again increases with collision energy. The early stage QGP pressure increases with collision energy, and thus $v_T$ in Eq.~(\\ref{T*}) increases too. 
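The linear mass dependence of the inverse slope implied by Eq.~(\ref{T*}) can be illustrated in a few lines of Python. The freeze-out temperature and flow velocity below are assumed round numbers chosen only to show the trend, not fitted values.

```python
# Eq. (T*): T* = T + (1/2) m v_T^2, so T* grows linearly with the particle mass m.
T = 0.120   # assumed kinetic freeze-out temperature, GeV
v_T = 0.5   # assumed transverse collective velocity, in units of c

masses = {"pion": 0.140, "kaon": 0.494, "proton": 0.938}  # GeV
for name, m in masses.items():
    T_star = T + 0.5 * m * v_T**2
    print(f"{name}: T* = {1000.0 * T_star:.0f} MeV")
```

Heavier particles acquire a larger effective temperature for the same collective velocity, which is the linear trend seen in the inverse slope versus particle mass data discussed earlier.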
\n\n\\vspace{0.1cm}\nThe workshop \\textit{Tracing the onset of deconfinement in nucleus-nucleus collisions}, ECT* Trento, April 24-29, 2004, \nsummarized the results from the energy scan programme at the CERN SPS and concluded that future measurements in the SPS energy range are needed~\\cite{Gazdzicki:2004gss}.\nThe goal is to search for the deconfinement critical point and study system size dependence of the onset of deconfinement. Possibilities to perform these measurements at the CERN SPS, FAIR SIS300 and RHIC were discussed. The event initiated a series of the \\textit{Critical point and onset of deconfinement} workshops.\n\n\n\\begin{figure}[hbt]\n\\centering\n \\includegraphics[width=0.45\\linewidth,clip=true]{plots\/cs_cp_hill.jpg}\n \\includegraphics[width=0.45\\linewidth,clip=true]{plots\/cs_cp_sigma.jpg}\n\\caption{\n \\textit{Left:}\n Sketch of the expected signal of the deconfinement critical point - a maximum of fluctuations in the (nuclear mass number)-(collision energy) plane.\n \\textit{Right:}\n Results from the NA61\\slash SHINE\\xspace two dimensional scan of energy and system size for (pion multiplicity)-(transverse momentum) fluctuations in terms of the strongly intensive quantity $\\Sigma[P_T,N]$~\\cite{Gazdzicki:2017zrq}.\n }\n\\label{fig:cs_cp_signals}\n\\end{figure}\n\n\\subsection{Searching for deconfinement critical point}\n\\label{sec:cs_cp}\n\n\n\\textbf{Predicted d-CP signals.}\nThe possible existence and location of the deconfinement critical point (d-CP) is a subject of\nvivid theoretical discussion, for a recent review see Ref.~\\cite{Bzdak:2019pkr}. 
\nThe experimental search for the d-CP\nin A+A collisions at the CERN SPS was shaped by several model predictions (for details see Ref.~\\cite{Bialas:1990xd}) of its potential signals:\n\\begin{enumerate}[(i)]\n \\item \n characteristic multiplicity fluctuations of hadrons~\\cite{Bialas:1990xd,Antoniou:2000ms,Hatta:2003wn,Antoniou:2006zb},\n \\item\n enhanced fluctuations of (pion multiplicity)-(transverse momentum)~\\cite{Stephanov:1999zu}.\n\\end{enumerate}\n\n\nThe signals were expected to have a maximum in the parameter space of collision energy and nuclear mass number of colliding nuclei - \\textbf{the hill of fluctuations}~\\cite{Gazdzicki:2006fy}.\nThis motivated NA61\\slash SHINE\\xspace to perform a two dimensional scan at the CERN SPS~\\cite{Gazdzicki:995681} in these two parameters, which are well controlled in laboratory experiments. \n\n\n\n\\begin{figure}[hbt]\n\\centering\n\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/cs_cp_exp.jpg}\n\n\n\\caption{\n Summary of data recorded by NA61\\slash SHINE\\xspace at the CERN SPS~(\\textit{left}) and STAR at RHIC~(\\textit{right}) relevant for the search for the deconfinement-CP, see text for details. \n }\n\\label{fig:cs_cp_exp}\n\\end{figure}\n\n\\vspace{0.2cm}\n\\textbf{Measurements at SPS and RHIC.}\nThe systematic search for the d-CP of strongly interacting matter was started in 2009 with the NA61\\slash SHINE\\xspace data taking on p+p interactions at six beam momenta in the range from 13\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace to 158\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace.\nIn the following years, data on Be+Be, Ar+Sc, Xe+La and Pb+Pb collisions were\nrecorded, see Fig.~\\ref{fig:cs_cp_exp}~(\\textit{left}) for an overview.\n\nIn 2010 the beam energy scan (BES-I and BES-II) with Au+Au collisions started at the BNL RHIC~\\cite{Odyniec:2019kfh}. The search for the deconfinement critical point has been the most important goal of this programme. 
Above the collision energy of $\\sqrt{s_{NN}} = 7.7~\\ensuremath{\\mbox{Ge\\kern-0.1em V}}\\xspace$ ($\\approx$~30\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace) the scan was conducted in the collider mode, whereas below it in the fixed target mode. The locations of the recorded data in the phase diagram are shown in Fig.~\\ref{fig:cs_cp_exp}~(\\textit{right}). \n\n\n\\vspace{0.2cm}\n\\textbf{Status of the d-CP search.}\nMany experimental results have already been obtained within the d-CP search programmes at SPS and RHIC, for a recent review see Ref.~\\cite{Czopowicz:2020twk}. Five of them were considered as possible indications of the d-CP and are presented and discussed in the following. \n\n\n\n\\begin{enumerate}[(i)]\n\\item\nA maximum of fluctuations is expected in a scan of the phase diagram (see Fig.~\\ref{fig:cs_cp_signals} (\\textit{left})). Measurements of (pion multiplicity)-(transverse momentum) fluctuations from NA61\\slash SHINE\\xspace shown in Fig.~\\ref{fig:cs_cp_signals} (\\textit{right}) do not show such a \nfeature~\\cite{Gazdzicki:2017zrq}.\n\\item\nThe energy dependence of fluctuations of conserved quantities such as the net baryon number is predicted to be sensitive to the presence of the d-CP. This holds in particular for higher moments. The scaled third and fourth moments of the net-proton multiplicity distribution in Au+Au collisions from the STAR experiment are plotted in Fig.~\\ref{fig:cs_cp_results1}~\\cite{Adam:2020unf}. 
The non-monotonic behaviour of the fourth moment in central collisions and its sign change around $\\sqrt{s_{NN}} \\approx 7$~\\ensuremath{\\mbox{Ge\\kern-0.1em V}}\\xspace is debated as a possible indication of the d-CP.\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.7\\linewidth,clip=true]{plots\/cs_cp_results1.jpg}\n\\caption{\n The energy dependence of the scaled third (\\textit{left}) and fourth (\\textit{right}) moments of the net-proton multiplicity distribution in central and peripheral Au+Au collisions from the STAR experiment~\\cite{Adam:2020unf}.\n }\n\\label{fig:cs_cp_results1}\n\\end{figure}\n\n\\item\nAt the d-CP the correlation length diverges and leads to power-law type fluctuations of the baryon number. These were investigated by the NA61\\slash SHINE\\xspace experiment by measuring the momentum bin size dependence of the scaled second factorial moment of the proton multiplicity distribution (intermittency study) in semi-central Ar+Sc collisions at $\\sqrt{s_{NN}} \\approx 17~\\ensuremath{\\mbox{Ge\\kern-0.1em V}}\\xspace$~\\cite{Mackowiak:2019qm}. While previous measurements by the NA49 experiment in Si+Si collisions indicated a signal, the new measurements \nshown in Fig.~\\ref{fig:cs_cp_results2} do not confirm the effect.\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.4\\linewidth,clip=true]{plots\/cs_cp_results2.jpg}\n\\caption{\n Scaled second factorial moment $\\Delta F_2$ (background subtracted) of the proton multiplicity distribution as a function of the number of subdivisions $M$ of transverse momentum space obtained in Ar+Sc collisions at $\\sqrt{s_{NN}} \\approx$ 17~\\ensuremath{\\mbox{Ge\\kern-0.1em V}}\\xspace~\\cite{Mackowiak:2019qm}.\n }\n\\label{fig:cs_cp_results2}\n\\end{figure}\n\n\\item\nThe ratio of yields of light nuclei production can be related to nucleon number fluctuations~\\cite{Liu:2019nii}. 
The measurements from STAR in central Au+Au collisions show strong collision energy dependence and peak at $\\sqrt{s_{NN}} \\approx 20 - 30~\\ensuremath{\\mbox{Ge\\kern-0.1em V}}\\xspace$~\\cite{Xu:2019qm}. These results are presented in Fig.~\\ref{fig:cs_cp_results3}. Such behaviour is not reproduced by model calculations without a d-CP and may thus be attributable\nto a critical point.\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.4\\linewidth,clip=true]{plots\/cs_cp_results3.jpg}\n\\caption{\n Energy dependence of the ratio of yields of light nuclei production in central Au+Au collisions~\\cite{Xu:2019qm} measured by the STAR experiment at RHIC. \n }\n\\label{fig:cs_cp_results3}\n\\end{figure}\n\n\\item\nThe energy and centrality dependence of short-range two-pion correlations as parameterized by source radius parameters determined from Bose-Einstein correlation analysis was used to search for indications of the d-CP~\\cite{Lacey:2014wqa,Adamczyk:2014mxp}.\nThe result for the difference $R^2_{out}$ - $R^2_{side}$ in Au+Au collisions at RHIC is shown in Fig.~\\ref{fig:cs_cp_results4}.\nA finite size scaling analysis of these results led to an estimate of the position of a d-CP at $T \\approx$~165~MeV and \n$\\mu_B$ $\\approx$ 95~MeV.\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.4\\linewidth,clip=true]{plots\/cs_cp_results4.jpg}\n\\caption{\n Centrality and energy dependence of the difference $R^2_{out}$ - $R^2_{side}$\n of radius parameters obtained from Bose-Einstein two-pion correlation analysis in Au+Au collisions from the PHENIX experiment at RHIC\n ~\\cite{Lacey:2014wqa,Adamczyk:2014mxp}.\n }\n\\label{fig:cs_cp_results4}\n\\end{figure}\n\n\\end{enumerate}\n\nThese observations, when interpreted as due to the d-CP, yield different estimates of the d-CP location on the\nphase diagram of strongly interacting matter, see Fig.~\\ref{fig:cs_cp_results_pd}~\\cite{Czopowicz:2020twk}.\nThus, as of now, the experimental 
results concerning the d-CP are inconclusive. New results from NA61\\slash SHINE\\xspace and STAR BES-II are expected within the coming years. \n\\vspace{0.2cm}\n\n\n\\textbf{Nuclear and deconfinement critical points.}\nThe nuclear critical point (n-CP) corresponds to the liquid-gas phase transition in the system of interacting nucleons and is located at small temperature $T_C\\approx 19$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace and large baryonic chemical potential $\\mu_B\\approx 915$~\\ensuremath{\\mbox{Me\\kern-0.1em V}}\\xspace, see Fig.~\\ref{fig:v_qgp} (\\textit{middle}) and Fig.~\\ref{fig:cs_cp_results_pd} for illustration.\n\nThe effect of the n-CP on fluctuations of conserved charges,\nbaryon number (B), electric charge (Q), and strangeness (S),\nwas studied in Refs.~\\cite{Vovchenko:2017ayq, Poberezhnyuk:2019pxs} within the \nHRG\nmodel with van der Waals interactions between baryons and between anti-baryons. \nThe second, third, and fourth order cumulants \n(susceptibilities) are\ncalculated in the grand canonical ensemble from the pressure function by taking the derivatives over the corresponding chemical potentials:\n\\eq{\n\\chi^{i}_{n}& =\\frac{\\partial ^{n}\\left( p\/T^{4}\\right) }{\\partial \\left( \\mu_{i} \n\/T\\right) ^{n}}~,\n}\nwhere $i$ stands for $B, Q, S$ and $n$ is the moment order. \n\nThe obtained results show that\nthe n-CP may significantly impact event-by-event fluctuations in A+A collisions even at high energies. \nThus, the nuclear-CP should be taken into account in future searches for the deconfinement-CP.\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.6\\linewidth,clip=true]{plots\/cs_cp_results_pd.jpg}\n\\caption{\nCompilation of theoretical predictions~\\cite{Stephanov:2007fk} and experimental hints~\\cite{Czopowicz:2020twk} on the location of the deconfinement critical\npoint, d-CP, in the phase diagram of strongly interacting matter. 
The position \nof the nuclear critical point, n-CP, as suggested by theoretical and experimental results is indicated for comparison.\n }\n\\label{fig:cs_cp_results_pd}\n\\end{figure}\n\n\n\\subsection{Indication of the onset of fireball}\n\\label{sec:cs_oof}\n\n\\textbf{Predictions of reference models on system-size dependence.}\nThere are two models often used to obtain reference predictions concerning the system-size dependence of hadron production properties~\\cite{Gazdzicki:2013lda} - the Wounded Nucleon Model (WNM)~\\cite{Bialas:1976ed} and the Statistical Model (SM)~\\cite{Rafelski:1980gk}. For the \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio at the CERN SPS energies they read: \n\\begin{enumerate}[(i)]\n\\item\nThe WNM prediction: the \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio is independent of the system size (number of wounded nucleons).\n\\item\nThe SM prediction: in the canonical formulation incorporating global quantum number conservation the \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio increases monotonically with the system size and approaches the limit given\nby the grand canonical approximation of the model. The rate of this increase is the fastest for small systems.\n\\end{enumerate}\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.47\\linewidth,clip=true]{plots\/cs_oof_system-size.jpg}\n \\includegraphics[width=0.45\\linewidth,clip=true]{plots\/cs_oof_energy.jpg}\n\\caption{\n Measurements of the \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio in p+p, Be+Be, Ar+Sc and Pb+Pb collisions: system size dependence at 150\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace~\\cite{Podlaski:2019} (\\textit{left}) and collision energy dependence~\\cite{Aduszkiewicz:2019zsv} (\\textit{right}). 
\n }\n\\label{fig:cs_oof_results}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.47\\linewidth,clip=true]{plots\/cs_oof_system-size_o.jpg}\n \\includegraphics[width=0.45\\linewidth,clip=true]{plots\/cs_oof_energy_o.jpg}\n\\caption{\n Measurements of the scaled variance $\\omega$ of the multiplicity distribution of negatively charged hadrons in inelastic p+p interactions and central Be+Be and Ar+Sc collisions~\\cite{AndreySeryakovfortheNA61\/SHINE:2017ygz}: system size dependence at 150\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace (\\textit{left}) and collision energy dependence (\\textit{right}). \n }\n\\label{fig:cs_oof_results_o}\n\\end{figure}\n\n\\vspace{0.2cm}\n\\textbf{Unexpected result of measurements.}\nMeasurements of the system size dependence of hadron production properties at different collision energies were carried out by NA61\\slash SHINE\\xspace, for details see Sec.~\\ref{sec:cs_ood}. \nFigures~\\ref{fig:cs_oof_results} and~\\ref{fig:cs_oof_results_o} show the unexpected result~\\cite{Podlaski:2019,AndreySeryakovfortheNA61\/SHINE:2017ygz}.\nThe \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio in Fig.~\\ref{fig:cs_oof_results} and the scaled variance of the multiplicity distribution at 150\\ensuremath{A\\,\\mbox{Ge\\kern-0.1em V}\\!\/\\!c}\\xspace in Fig.~\\ref{fig:cs_oof_results_o} are similar in inelastic p+p interactions and in central Be+Be collisions, whereas they are very different in central Ar+Sc collisions, where they are close to the values in central Pb+Pb collisions. Both reference models, WNM and SM, qualitatively disagree with the data. \nThe WNM seems to work in the collisions of light nuclei (up to Be+Be) and becomes qualitatively wrong for heavy nuclei (like Pb+Pb). On the contrary, the SM is approximately valid for collisions of heavy nuclei. However, its predictions\ndisagree with the data from p+p to Be+Be collisions. 
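\nFor reference, the scaled variance of the multiplicity distribution used above is defined as\n\\eq{\n\\omega[N]=\\frac{\\langle N^2\\rangle - \\langle N\\rangle^2}{\\langle N\\rangle}~,\n}\nwhere $\\langle \\cdot\\rangle$ denotes event-by-event averaging. One has $\\omega[N]=1$ for a Poisson multiplicity distribution, which makes deviations from unity a convenient measure of the strength of multiplicity fluctuations.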
\n\n\nThe rapid change of hadron production properties when moving from Be+Be to Ar+Sc collisions is interpreted and referred to as the onset of fireball. \nFrom Fig.~\\ref{fig:cs_oof_results}~(\\textit{right}) it follows that the increase of the \\ensuremath{\\textup{K}^+}\\xspace\/\\ensuremath{\\pi^+}\\xspace ratio depends on the collision energy. On the other hand,\nthe scaled variance $\\omega$ of the multiplicity distribution shows only a weak collision energy dependence \n(see Fig.~\\ref{fig:cs_oof_results_o}~(\\textit{right})).\nThe physics behind the onset of fireball is under discussion~\\cite{Motornenko:2018gdc}.\nWhereas one does not observe the formation of a fireball in the collisions of light nuclei at the SPS energies, a large system of strongly interacting matter is probably formed at LHC energies even in high multiplicity p+p events. \n\n\n\\vspace{0.3cm}\n\\textbf{In summary}, the scans in collision energy and nuclear mass number of colliding nuclei\nperformed at SPS and RHIC indicate four domains\nof hadron production separated by two thresholds: the onset of deconfinement and the\nonset of fireball. The sketch presented in Fig.~\\ref{fig:future}~(\\textit{left}) illustrates this conclusion.\n\n\n\\section{Plans for future measurements}\n\\label{sec:futures}\n\n\\begin{figure}[hbt]\n\n\\centering\n \\includegraphics[width=0.99\\linewidth,clip=true]{plots\/future_phases.jpg}\n \\caption{\\textit{Left:} \nTwo-dimensional scan conducted by NA61\\slash SHINE\\xspace varying collision energy and nuclear mass number of colliding nuclei indicates four domains of hadron\nproduction separated by two thresholds: the onset of deconfinement and the onset\nof fireball. 
The onset of deconfinement is well established in central Pb+Pb~(Au+Au)\ncollisions, while its presence in collisions of low mass nuclei, in particular in inelastic p+p\ninteractions, is questionable.\n\\textit{Right:} \nRegions in the phase diagram of strongly interacting matter studied by present (red) and future (green) heavy ion facilities.\n }\n\\label{fig:future}\n\\end{figure}\n\nLet us close by discussing possible future measurements which are suggested by this review and which should be considered as priorities: \n\\begin{enumerate}[(i)]\n\\item\nA collision energy scan in the onset of deconfinement region to measure open and hidden charm production in Pb+Pb collisions and establish the impact of the onset on the heavy quark sector. \nThis requires high statistics data collected with detectors optimized for open and hidden charm measurements. Detailed physics arguments and possible experimental set-ups are presented in Refs.~\\cite{Aduszkiewicz:2309890,Agnello:2018evr}.\n\\item\nA detailed study of the onset of fireball and its collision energy dependence in the onset of deconfinement region. The goal is to understand the underlying physics of this phenomenon, for details see Ref.~\\cite{Aduszkiewicz:2309890}. This requires a two dimensional scan in the nuclear mass number of the colliding nuclei and in collision energy performed with small steps in nuclear mass number.\n\\end{enumerate}\nConclusive results from the data recorded by NA61\\slash SHINE\\xspace and RHIC BES-II are needed to\nplan future measurements for the deconfinement-CP search.\n\nFigure~\\ref{fig:future}~(\\textit{right}) presents a compilation of present and future facilities and their region of coverage in the phase diagram of strongly interacting matter. 
\nCharm measurements are planned by NA61\\slash SHINE\\xspace~\\cite{Aduszkiewicz:2309890} and NA60+~\\cite{Agnello:2018evr} at the CERN SPS, and are considered by MPD~\\cite{Kekelidze:2018nyo} at NICA and J-PARC-HI~\\cite{Sako:2019hzh} at J-PARC.\nA detailed two dimensional scan is considered by NA61\\slash SHINE\\xspace at the CERN SPS~\\cite{Aduszkiewicz:2309890}.\n\n\n\\begin{acknowledgments} \nThe work was supported by the National Science Centre Poland grant 2018\\slash 30\\slash A\\slash ST2\\slash 00226 and the German Research Foundation grant GA1480\\slash 8-1. The work of M.I.G. is supported by the Program of Fundamental Research of the Department of Physics and Astronomy of National Academy of Sciences of Ukraine.\n\\end{acknowledgments}\n\n\\bibliographystyle{ieeetr}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdoin b/data_all_eng_slimpj/shuffled/split2/finalzzdoin new file mode 100644 index 0000000000000000000000000000000000000000..4ed8b4bd6e84ccba36ddea6a3f66095c94627650 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdoin @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nWhile data-driven methods for system analysis and control have become increasingly popular over recent years, only a few such methods give theoretical guarantees on, e.g., stability or constraint satisfaction of system variables~\\cite{Hou13,Recht18}.\nA control method that is naturally well-suited for achieving these objectives is model predictive control (MPC), which can handle nonlinear system dynamics, hard constraints on input, state and output, and takes performance criteria into account~\\cite{Rawlings09}.\nIt centers around the repeated online solution of an optimization problem over predicted future system trajectories.\nThus, for the implementation of MPC, a model of the plant is required, which is usually obtained from first principles or from measured data via system 
identification~\\cite{Ljung87}.\nAn appealing alternative is to implement an MPC controller directly from measured data, without prior knowledge of an accurate model.\nIn various recent works, learning-based or adaptive MPC schemes have been proposed, which improve an inaccurate initial model using online measurements~\\cite{adetola2011robust,Aswani13,tanaskovic2014adaptive,Berkenkamp17,Zanon19}, while giving guarantees on the resulting closed loop.\nSimilarly, MPC based on Gaussian Processes has received increasing attention~\\cite{Hewing18}, but proving desirable closed-loop properties remains an open issue.\nA different approach, which uses linear combinations of past trajectories to predict future trajectories, has been presented in~\\cite{salvador2018data}, but again no guarantees on, e.g., stability of the closed loop were given.\nThe design of purely data-driven MPC approaches with guarantees on stability and constraint satisfaction thus remains an open problem.\n\nIn this paper, we present a novel data-driven MPC scheme to control linear time-invariant (LTI) systems with stability and robustness guarantees for the closed loop.\nOur approach relies on a result from behavioral systems theory, which shows that the Hankel matrix consisting of a previously measured input-output trajectory spans the vector space of all trajectories of an LTI system, given that the input component is persistently exciting~\\cite{Willems05}.\nAlthough this result has found various applications in the field of system identification~\\cite{Katayama05,Markovsky05,Markovsky08}, it has only recently been used to develop data-driven methods for system analysis and control with theoretical guarantees.\nAn exposition of the main result of~\\cite{Willems05} in the classical state-space control framework and an extension to certain classes of nonlinear systems are provided in~\\cite{Berberich19}.\nFurther, the result is employed in~\\cite{Persis19} to design state- and output-feedback 
controllers and in~\\cite{Romer19} to verify dissipation inequalities from measured data, whereas~\\cite{Waarde19} investigates data-driven control without requiring persistently exciting data.\n\nMoreover, the recent contributions~\\cite{Yang15,Coulson19,Coulson19b} set up an MPC scheme based on~\\cite{Willems05}, but no guarantees on recursive feasibility or closed-loop stability can be given since neither terminal ingredients are included in the MPC scheme nor sufficient lower bounds on the prediction horizon are derived.\nIn the present paper, we propose a related MPC scheme, which utilizes terminal equality constraints, and we provide a theoretical analysis of various desirable properties of the closed loop.\nTo the best of our knowledge, this is the first analysis regarding recursive feasibility and stability\nof purely data-driven MPC.\nThe main advantage of the proposed MPC scheme over existing adaptive or learning-based methods such as~\\cite{adetola2011robust,Aswani13,tanaskovic2014adaptive,Berkenkamp17,Zanon19} is that it requires only an initially measured, persistently exciting data trajectory as well as an upper bound on the system order, but no (set-based) model description and no online estimation process.\nMoreover, since it relies on the data-driven system description from~\\cite{Willems05}, the presented scheme is inherently an output-feedback MPC scheme and does not require online state measurements.\\newpage\n\nAfter stating the required definitions and existing results in Section~\\ref{sec:setting}, we expand the nominal MPC scheme of~\\cite{Yang15,Coulson19} by terminal equality constraints in Section~\\ref{sec:tec}.\nUnder the assumption that the output of the plant can be measured exactly, we prove recursive feasibility, constraint satisfaction, and exponential stability of the scheme.\nIn Section~\\ref{sec:robust}, we propose a robust data-driven MPC scheme to account for bounded additive noise in both the initial data for prediction as 
well as the online measurements.\nUnder suitable assumptions on the system and design parameters, we prove that the closed loop under application of the scheme in a multi-step fashion is practically exponentially stable.\nIn Section~\\ref{sec:example}, we illustrate the advantages of the proposed scheme over the scheme without terminal constraints from~\\cite{Yang15,Coulson19,Coulson19b} by means of a numerical example.\nThe paper is concluded in Section~\\ref{sec:conclusion}.\n\n\\section{Preliminaries}\\label{sec:setting}\n\nLet $\\mathbb{I}_{[a,b]}$ denote the set of integers in the interval $[a,b]$.\nFor a vector $x$ and a positive definite matrix $P=P^\\top\\succ0$, we write $\\lVert x\\rVert_P=\\sqrt{x^\\top Px}$.\nFurther, we denote the minimal and maximal eigenvalues of $P$ by $\\lambda_{\\min}(P)$ and $\\lambda_{\\max}(P)$, respectively.\nFor two matrices $P_1=P_1^\\top,P_2=P_2^\\top$, we write $\\lambda_{\\min}(P_1,P_2)=\\min\\{\\lambda_{\\min}(P_1),\\lambda_{\\min}(P_2)\\}$, and similarly for $\\lambda_{\\max}(P_1,P_2)$.\nMoreover, $\\lVert x\\rVert_2$, $\\lVert x\\rVert_1$, and $\\lVert x\\rVert_\\infty$ denote the Euclidean, $\\ell_1$-, and $\\ell_\\infty$-norm of $x$, respectively.\nIf the argument is matrix-valued, then we mean the corresponding induced norm.\nFor $\\delta>0$, we define $\\mathbb{B}_\\delta=\\left\\{x\\in\\mathbb{R}^n\\mid \\lVert x\\rVert_2\\leq\\delta\\right\\}$.\nA sequence $\\{x_k\\}_{k=0}^{N-1}$ induces the Hankel matrix\n\\begin{align*}\nH_L&(x)\\coloneqq\\begin{bmatrix}x_0 & x_1& \\dots & x_{N-L}\\\\\nx_1 & x_2 & \\dots & x_{N-L+1}\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\nx_{L-1} & x_{L} & \\dots & x_{N-1}\n\\end{bmatrix}.\n\\end{align*}\nFor a stacked window of the sequence, we write\n\\begin{align*}\nx_{[a,b]}=\\begin{bmatrix}x_a\\\\\\vdots\\\\x_b\\end{bmatrix}.\n\\end{align*}\nWe denote by $x$ either the sequence itself or the stacked vector $x_{[0,N-1]}$ containing all of its components.\nWe 
consider the following standard definition of persistence of excitation.\n\n\\begin{definition}\nWe say that a sequence $\\{u_k\\}_{k=0}^{N-1}$ with $u_k\\in\\mathbb{R}^m$ is persistently exciting of order $L$ if $\\text{rank}(H_L(u))=mL$.\n\\end{definition}\n\nOur goal is to control an unknown LTI system, denoted by $G$, of order $n$ with $m$ inputs and $p$ outputs, using only measured input-output data.\n\n\\begin{definition}\\label{def:trajectory_of}\nWe say that an input-output sequence $\\{u_k,y_k\\}_{k=0}^{N-1}$ is a trajectory of an LTI system $G$, if there exists an initial condition $\\bar{x}\\in\\mathbb{R}^n$ as well as a state sequence $\\{x_k\\}_{k=0}^{N}$ such that\n\\begin{align*}\nx_{k+1}&=Ax_k+Bu_k,\\>\\>x_0=\\bar{x},\\\\\ny_k&=Cx_k+Du_k,\n\\end{align*}\nfor $k=0,\\dots,N-1$, where $(A,B,C,D)$ is a minimal realization of $G$.\n\\end{definition}\n\nNote that we define a trajectory of an LTI system as an input-output sequence that can be produced by a minimal realization, entailing controllability and observability of the system.\nExtending the results of this paper to systems whose input-output behavior cannot be explained via a minimal realization is an interesting issue for future research.\nThe following result lays the foundation of the present paper.\nIt shows that a Hankel matrix, involving a single persistently exciting trajectory, spans the vector space of all system trajectories of an LTI system.\nThe result originates from behavioral systems theory~\\cite{Willems05}, but we employ the formulation in the classical state-space control framework~\\cite{Berberich19}.\n\n\\begin{theorem}[\\cite{Berberich19}]\\label{thm:traj_rep}\nSuppose $\\{u_k^d,y_k^d\\}_{k=0}^{N-1}$ is a trajectory of an LTI system $G$, where $u^d$ is persistently exciting of order $L+n$.\nThen, $\\{\\bar{u}_k,\\bar{y}_k\\}_{k=0}^{L-1}$ is a trajectory of $G$ if and only if there exists $\\alpha\\in\\mathbb{R}^{N-L+1}$ such 
that\n\\begin{align}\\label{eq:thm_hankel}\n\\begin{bmatrix}H_L(u^d)\\\\H_L(y^d)\\end{bmatrix}\\alpha\n=\\begin{bmatrix}\\bar{u}\\\\\\bar{y}\\end{bmatrix}.\n\\end{align}\n\\end{theorem}\n\nRecently, Theorem~\\ref{thm:traj_rep} has received increasing attention to develop data-driven controllers~\\cite{Persis19}, verify dissipativity~\\cite{Romer19}, or to design MPC schemes~\\cite{Yang15,Coulson19,Coulson19b}.\nThis is due to the fact that~\\eqref{eq:thm_hankel} provides an appealing data-driven characterization of all trajectories of the unknown LTI system, without requiring any prior identification step.\nIn this paper, we use Theorem~\\ref{thm:traj_rep} to develop a data-driven MPC scheme with \\emph{provable}\nstability guarantees despite noisy measurements.\nNote that, if a sequence is persistently exciting of order $L$, then it is also persistently exciting of order $\\tilde{L}$ for any $\\tilde{L}\\leq L$.\nTherefore, Theorem~\\ref{thm:traj_rep} and hence all of our results hold true if $n$ is replaced by a (potentially rough) upper bound.\n\nAlthough we assume that only input-output data of the unknown system are available, we make extensive use of the fact that an input-output trajectory of length greater than or equal to $n$ induces a unique internal state in some minimal realization of the unknown system.\n\\begin{comment}\nThe following result shows that input and initial conditions suffice to fix an $\\alpha$, which predicts the corresponding output.\nThe following result is from data-driven simulation~\\cite{Markovsky08,Berberich19} and slightly reformulated.\n\\begin{proposition}\\label{prop:DD_sim}\nSuppose $\\{u_k^d,y_k^d\\}_{k=0}^{N-1}$ is a trajectory of an LTI system $G$, where $u$ is persistently exciting of order $L+n$.\nLet $\\{\\bar{u}_k,\\bar{y}_k\\}_{k=0}^{L-1}$ be an arbitrary trajectory of $G$.\nDenote by $\\{x_k^d\\}_{k=0}^{N-1}$ and $\\{\\bar{x}_k\\}_{k=0}^{L-1}$ the corresponding state trajectories in some minimal realization.\nIf 
$\\nu\\geq n$, then there exists an $\\alpha\\in\\mathbb{R}^{N-L+1}$ such that\n\\begin{align}\\label{eq:prop_DD_sim}\n\\begin{bmatrix}\nH_L\\left(u^d\\right)\\\\H_1\\left(x^d_{[0,N-L]}\\right)\n\\end{bmatrix}\\alpha=\n\\begin{bmatrix}\n\\bar{u}\\\\\\bar{x}_0\n\\end{bmatrix}.\n\\end{align}\nFurther, it holds that $\\bar{y}=H_L(y^d)\\alpha$.\n\\end{proposition}\n\\begin{proof}\nFollows directly from~\\cite[Proposition 1]{Markovsky08}.\n\\end{proof}\nSince the matrix on the left-hand-side of~\\eqref{eq:prop_DD_sim} will be essential for many of our arguments, we abbreviate it as \n\\begin{align}\\label{eq:Hux}\nH_{ux}\\coloneqq\\begin{bmatrix}\nH_L\\left(u^d\\right)\\\\H_1\\left(x^d_{[0,N-L]}\\right)\n\\end{bmatrix}.\n\\end{align}\nA simple choice to satisfy~\\eqref{eq:prop_DD_sim} is $\\alpha=H_{ux}^\\dagger\\begin{bmatrix}\\bar{u}\\\\\\bar{x}_0\\end{bmatrix}$, where $H_{ux}^\\dagger$ denotes the right-inverse of $H_{ux}$.y\n\\end{comment}\nWe employ MPC to stabilize a desired equilibrium of the system.\nSince a model of this system is not available, we define an equilibrium via input-output pairs.\n\n\\begin{definition}\\label{def:equil}\nWe say that an input-output pair $(u^s,y^s)\\in\\mathbb{R}^{m+p}$ is an equilibrium of an LTI system $G$, if the sequence $\\{\\bar{u}_k,\\bar{y}_k\\}_{k=0}^{n}$ with $(\\bar{u}_k,\\bar{y}_k)=(u^s,y^s)$ for all $k\\in\\mathbb{I}_{[0,n]}$ is a trajectory of $G$.\n\\end{definition}\n\nFor an equilibrium $(u^s,y^s)$, we define $u^s_n$ and $y^s_n$ as the column vectors containing $n$ times $u^s$ and $y^s$, respectively.\nWe assume that the system is subject to pointwise-in-time input and output constraints, i.e., $u_t\\in\\mathbb{U}\\subseteq\\mathbb{R}^m$, $y_t\\in\\mathbb{Y}\\subseteq\\mathbb{R}^p$ for all $t\\geq0$, and we assume $(u^s,y^s)\\in\\text{int}(\\mathbb{U}\\times\\mathbb{Y})$.\nThroughout this paper, $\\left\\{u_k^d,y_k^d\\right\\}_{k=0}^{N-1}$ denotes an a priori measured data trajectory of length $N$, which is used for 
prediction as in~\\eqref{eq:thm_hankel}.\nThe predicted input- and output-trajectories at time $t$ over some prediction horizon $L$ are written as $\\left\\{\\bar{u}_k(t),\\bar{y}_k(t)\\right\\}_{k=-n}^{L-1}$.\nNote that the time indices start at $k=-n$, since the last $n$ inputs and outputs will be used to induce a unique initial state at time $t$.\nFurther, the closed-loop input, the state in some minimal realization, and the output at time $t$ are denoted by $u_t$, $x_t$, and $y_t$, respectively.\n\n\\section{Nominal data-driven MPC}\\label{sec:tec}\nIn this section, we propose a simple, nominal data-driven MPC scheme with terminal equality constraints. \nThe scheme relies on noise-free measurements to predict future trajectories using Theorem~\\ref{thm:traj_rep} and is described in Section~\\ref{sec:tec_scheme}.\nUnder mild assumptions, we prove recursive feasibility, constraint satisfaction, and exponential stability of the closed loop in Section~\\ref{sec:stab}.\n \n\\subsection{Nominal MPC scheme}\\label{sec:tec_scheme}\nCommonly, MPC relies on a model of the plant to predict future trajectories and to optimize over them.\nTheorem~\\ref{thm:traj_rep} provides an appealing alternative to a model since~\\eqref{eq:thm_hankel} suffices to capture all system trajectories.\nThus, to implement a data-driven MPC scheme, one can simply replace the system dynamics constraint by the constraint that the predicted input-output trajectories satisfy~\\eqref{eq:thm_hankel}.\nTo be more precise, the proposed data-driven MPC scheme minimizes, at time $t$, given the last $n$ input-output pairs, the following open-loop 
cost\\\\\n\\begin{subequations}\n\\begin{align}\nJ_L(u_{[t-n,t-1]},y_{[t-n,t-1]},&\\alpha(t))=\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k(t),\\bar{y}_k(t)\\right),\\\\\\label{eq:MPC_modela}\n\\begin{bmatrix}\\bar{u}_{[-n,L-1]}(t)\\\\\\bar{y}_{[-n,L-1]}(t)\\end{bmatrix}&=\\begin{bmatrix}H_{L+n}(u^d)\\\\H_{L+n}(y^d)\\end{bmatrix}\\alpha(t),\\\\\n\\label{eq:MPC_initial_conditionsa}\n\\begin{bmatrix}\\bar{u}_{[-n,-1]}(t)\\\\\\bar{y}_{[-n,-1]}(t)\\end{bmatrix}&=\\begin{bmatrix}u_{[t-n,t-1]}\\\\y_{[t-n,t-1]}\\end{bmatrix}.\n\\end{align}\n\\end{subequations}\nAs described above, the constraint~\\eqref{eq:MPC_modela} replaces the system dynamics compared to classical model-based MPC schemes.\nFurther,~\\eqref{eq:MPC_initial_conditionsa} ensures that the internal state of the true trajectory aligns with the internal state of the predicted trajectory at time $t$.\nNote that the overall length of the trajectory $(\\bar{u}(t),\\bar{y}(t))$ is $L+n$ since the past $n$ elements $\\{\\bar{u}_k(t),\\bar{y}_k(t)\\}_{k=-n}^{-1}$ are used to specify the initial conditions in~\\eqref{eq:MPC_initial_conditionsa}.\nThese initial conditions are specified until time step $t-1$, since the input at time $t$ might already influence the output at time $t$, in case of a feedthrough-element of the plant.\nThe open-loop cost depends only on the decision variable $\\alpha(t)$, since $\\bar{u}(t)$ and $\\bar{y}(t)$ are fixed implicitly through the dynamic constraint~\\eqref{eq:MPC_modela}.\nThroughout the paper, we consider quadratic stage costs, which penalize the distance w.r.t. 
a desired equilibrium $(u^s,y^s)$, i.e.,\n\\begin{align*}\n\\ell(\\bar{u},\\bar{y})=\\lVert\\bar{u}-u^s\\rVert_R^2+\\lVert\\bar{y}-y^s\\rVert_Q^2,\n\\end{align*}\nwhere $Q,R\\succ0$.\nIn~\\cite{Yang15,Coulson19}, it was suggested to directly minimize the above open-loop cost subject to constraints on input and output.\nIt is well-known that MPC without terminal constraints requires a sufficiently long prediction horizon to ensure stability and constraint satisfaction~\\cite{grune2017nonlinear,grune2012nmpc}.\nWithout such an assumption, the application of MPC can even destabilize an open-loop stable system.\nThere are two main approaches in the literature to guarantee stability:\na) providing bounds on the minimal required prediction horizon~\\cite{grune2012nmpc} and b) including terminal ingredients such as terminal cost functions or terminal region constraints~\\cite{Mayne00}.\nBoth approaches are usually based on model knowledge and thus, it is not straightforward to use them in the present, purely data-driven setting.\n\nIn this paper, we consider a simple terminal equality constraint, which can be directly included into the data-driven MPC framework, and which guarantees exponential stability of the closed loop.\nTo this end, we propose the following data-driven MPC scheme with a terminal equality 
constraint.\n\\begin{subequations}\\label{eq:term_eq_MPC}\n\\begin{align}\\nonumber\nJ_L^*(u_{[t-n,t-1]}&,y_{[t-n,t-1]})=\\\\\n\\underset{\\substack{\\alpha(t)\\\\\\bar{u}(t),\\bar{y}(t)}}{\\min}\\>\\>&\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k(t),\\bar{y}_k(t)\\right)\\\\\\label{eq:MPC_model}\ns.t.&\\begin{bmatrix}\\bar{u}_{[-n,L-1]}(t)\\\\\\bar{y}_{[-n,L-1]}(t)\\end{bmatrix}=\\begin{bmatrix}H_{L+n}(u^d)\\\\H_{L+n}(y^d)\\end{bmatrix}\\alpha(t),\\\\\n\\label{eq:MPC_initial_conditions}\n&\\begin{bmatrix}\\bar{u}_{[-n,-1]}(t)\\\\\\bar{y}_{[-n,-1]}(t)\\end{bmatrix}=\\begin{bmatrix}u_{[t-n,t-1]}\\\\y_{[t-n,t-1]}\\end{bmatrix},\\\\\\label{eq:term_eq_MPC2}\n&\\begin{bmatrix}\\bar{u}_{[L-n,L-1]}(t)\\\\\\bar{y}_{[L-n,L-1]}(t)\\end{bmatrix}=\\begin{bmatrix}u^s_n\\\\y^s_n\\end{bmatrix},\\\\\\label{eq:term_eq_MPC3}\n&\\bar{u}_k(t)\\in\\mathbb{U},\\>\\>\\bar{y}_k(t)\\in\\mathbb{Y},\\>\\>k\\in\\mathbb{I}_{[0,L-1]}.\n\\end{align}\n\\end{subequations}\nThe terminal equality constraint~\\eqref{eq:term_eq_MPC2} implies that $\\bar{x}_L(t)$, which is the internal state predicted $L$ steps ahead corresponding to the predicted input-output trajectory, aligns with the steady-state $x^s$ corresponding to $(u^s,y^s)$, i.e., $\\bar{x}_L(t)=x^s$ in any minimal realization.\nWhile Problem~\\eqref{eq:term_eq_MPC} requires that $(u^s,y^s)$ is an equilibrium of the unknown system in the sense of Definition~\\ref{def:equil}, this requirement can be dropped when $(u^s,y^s)$ is replaced by an artificial equilibrium, which is also optimized online (compare~\\cite{limon2008mpc}).\nThe recent paper~\\cite{berberich2020tracking} extends the above MPC scheme to such a setting, thereby leading to a significantly larger region of attraction for the closed loop without requiring knowledge of a reachable equilibrium of the unknown system.\nAs in standard MPC, Problem~\\eqref{eq:term_eq_MPC} is solved in a receding horizon fashion, which is summarized in 
Algorithm~\\ref{alg:MPC}.\n\n\\begin{algorithm}\n\\begin{Algorithm}\\label{alg:MPC}\n\\normalfont{\\textbf{Data-Driven MPC Scheme}}\n\\begin{enumerate}\n\\item At time $t$, take the past $n$ measurements $u_{[t-n,t-1]}$, $y_{[t-n,t-1]}$ and solve~\\eqref{eq:term_eq_MPC}.\n\\item Apply the input $u_t=\\bar{u}_0^*(t)$.\n\\item Set $t=t+1$ and go back to 1).\n\\end{enumerate}\n\\end{Algorithm}\n\\end{algorithm}\n\nWith slight abuse of notation, we will denote the open-loop cost and the optimal open-loop cost of~\\eqref{eq:term_eq_MPC} by $J_L(x_t,\\alpha(t))$ and $J_L^*(x_t)$, respectively,\nwhere $x_t$ is the state in some minimal realization, induced by $u_{[t-n,t-1]}$, $y_{[t-n,t-1]}$.\n\n\\subsection{Closed-loop guarantees}\\label{sec:stab}\nWithout loss of generality, we assume for the analysis that $u^s=0$, $y^s=0$, and thus $x^s=0$.\nFurther, we define the set of initial states, for which~\\eqref{eq:term_eq_MPC} is feasible, by $\\mathbb{X}_L=\\left\\{x\\in\\mathbb{R}^n\\mid J_L^*(x)<\\infty\\right\\}$.\nTo prove exponential stability of the proposed scheme, we assume that the optimal value function of~\\eqref{eq:term_eq_MPC} is quadratically upper bounded.\nThis is, e.g., satisfied in the present linear-quadratic setting if the constraints are polytopic\\footnote{While~\\cite{Bemporad02} considered model-based linear-quadratic MPC, the result applies similarly to the present data-driven MPC setting since~\\eqref{eq:MPC_model} (together with the initial conditions~\\eqref{eq:MPC_initial_conditions}) describes the input-output behavior of the system exactly and thus, both settings are equivalent in the nominal case.}~\\cite{Bemporad02}.\n\\begin{assumption}\\label{ass:quad_upper_bound}\nThe optimal value function $J_L^*(x)$ is quadratically upper bounded on $\\mathbb{X}_L$, i.e., there exists $c_u>0$ such that $J_L^*(x)\\leq c_u\\lVert x\\rVert_2^2$ for all $x\\in\\mathbb{X}_L$.\n\\end{assumption}\n\nMoreover, we assume that the input $u^d$ generating the data 
used for prediction is sufficiently rich in the following sense.\n\n\\begin{assumption}\\label{ass:pe}\nThe input $u^d$ of the data trajectory is persistently exciting of order $L+2n$.\n\\end{assumption}\n\nNote that we assume persistence of excitation of order $L+2n$, although Theorem~\\ref{thm:traj_rep} requires only an order of $L+n$.\nThis is due to the fact that the reconstructed trajectories in~\\eqref{eq:term_eq_MPC} are of length $L+n$ (compared to length $L$ in Theorem~\\ref{thm:traj_rep}), since $n$ components are used to fix the initial conditions.\nFurthermore, due to the terminal constraints~\\eqref{eq:term_eq_MPC2}, the prediction horizon needs to be at least as long as the system order $n$.\n\n\\begin{assumption}\\label{ass:length_n}\nThe prediction horizon satisfies $L\\geq n$.\n\\end{assumption}\n\nThe following result shows that the MPC scheme based on~\\eqref{eq:term_eq_MPC} is recursively feasible, ensures constraint satisfaction, and leads to an exponentially stable closed loop.\n\n\\begin{theorem}\\label{thm:mpc_tec}\nSuppose Assumptions~\\ref{ass:quad_upper_bound},~\\ref{ass:pe} and~\\ref{ass:length_n} are satisfied.\nIf the MPC problem~\\eqref{eq:term_eq_MPC} is feasible at initial time $t=0$, then \n\\begin{itemize}\n\\item[(i)] it is feasible at any $t\\in\\mathbb{N}$,\n\\item[(ii)] the closed loop satisfies the constraints, i.e., $u_t\\in\\mathbb{U}$ and $y_t\\in\\mathbb{Y}$ for all $t\\in\\mathbb{N}$,\n\\item[(iii)] the equilibrium $x^s=0$ is exponentially stable for the resulting closed loop.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nRecursive feasibility (i) and constraint satisfaction (ii) follow from standard MPC arguments, i.e., by defining a candidate solution as the shifted, previously optimal solution and appending zero (compare~\\cite{Rawlings09}).\\\\\n\\textbf{(iii). 
Exponential Stability}\\\\\nDenote the standard candidate solution mentioned above by $\\bar{u}'(t+1),\\bar{y}'(t+1),\\alpha'(t+1)$.\nThe cost of this solution is\n\\begin{align*}\n&J_L(x_{t+1},\\alpha'(t+1))\\\\\n&=\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k'(t+1),\\bar{y}_k'(t+1)\\right)\n=\\sum_{k=1}^{L-1}\\ell\\left(\\bar{u}^*_k(t),\\bar{y}^*_k(t)\\right)\\\\\n&=J_L^*(x_t)-\\ell\\left(\\bar{u}_0^*(t),\\bar{y}_0^*(t)\\right).\n\\end{align*}\nHence, it holds that\n\\begin{align}\\label{eq:thm_eq0_proof1}\nJ_L^*(x_{t+1})\\leq J_L^*(x_t)-\\ell\\left(\\bar{u}_0^*(t),\\bar{y}_0^*(t)\\right).\n\\end{align}\nSince $x$ is the state of an observable (and hence detectable) minimal realization, there exists a matrix $P\\succ0$ such that $W(x)=\\lVert x\\rVert_P^2$ is an input-output-to-state stability (IOSS) Lyapunov function\\footnote{Note that, in~\\cite[Section 3.2]{cai2008input}, only strictly proper systems with $y=Cx$ are considered, while we allow for more general systems with $y=Cx+Du$.\nThe result from~\\cite{cai2008input} can be extended to $y=Cx+Du$ by considering a modified $\\tilde{B}=B+LD$ in~\\cite[Inequality (12)]{cai2008input}.}, which satisfies\n\\begin{align}\\label{eq:thm_eq0_proof2}\nW(Ax+Bu)-W(x)\\leq-\\frac{1}{2}\\lVert x\\rVert_2^2+c_1\\lVert u\\rVert_2^2 +c_2\\lVert y\\rVert_2^2,\n\\end{align}\nfor all $x\\in\\mathbb{R}^n,u\\in\\mathbb{R}^m,y=Cx+Du$, and for suitable $c_1,c_2>0$~\\cite{cai2008input}.\nDefine the candidate Lyapunov function $V(x)=\\gamma W(x)+J_L^*(x)$ for some $\\gamma>0$.\nNote that $V$ is quadratically lower bounded, i.e., $V(x)\\geq\\gamma W(x)\\geq\\gamma\\lambda_{\\min}(P)\\lVert x\\rVert_2^2$ for all $x\\in\\mathbb{X}_L$.\nFurther, $J_L^*$ is quadratically upper bounded by Assumption~\\ref{ass:quad_upper_bound}, i.e., $J_L^*(x)\\leq c_u\\lVert x\\rVert_2^2$ for all $x\\in\\mathbb{X}_L$.\nHence, we have\n\\begin{align*}\nV(x)=J_L^*(x)+\\gamma W(x)\\leq\\left(c_u+\\gamma\\lambda_{\\max}(P)\\right)\\lVert 
x\\rVert_2^2,\n\\end{align*}\nfor all $x\\in\\mathbb{X}_L$, i.e., $V$ is quadratically upper bounded.\nWe consider now\n\\begin{align*}\n\\gamma=\\frac{\\lambda_{\\min}(Q,R)}{\\max\\{c_1,c_2\\}}>0.\n\\end{align*}\nAlong the closed-loop trajectories, using both~\\eqref{eq:thm_eq0_proof1} as well as~\\eqref{eq:thm_eq0_proof2}, it holds that\n\\begin{align*}\nV(x_{t+1})-V(x_t)\\leq&\\>\\gamma\\left(-\\frac{1}{2}\\lVert x_t\\rVert^2_2+c_1\\lVert u_t\\rVert^2_2 +c_2\\lVert y_t\\rVert^2_2\\right)\\\\\n&-\\lVert u_t\\rVert_R^2-\\lVert y_t\\rVert_Q^2\\\\\n\\leq&-\\frac{\\gamma}{2}\\lVert x_t\\rVert^2_2.\n\\end{align*}\nIt follows from standard Lyapunov arguments with Lyapunov function $V$ that the equilibrium $x^s=0$ is exponentially stable with region of attraction $\\mathbb{X}_L$.\n\\end{proof}\n\nThe proof of Theorem~\\ref{thm:mpc_tec} applies standard arguments from model-based MPC with terminal constraints (compare~\\cite{Rawlings09}) to the data-driven system description derived in~\\cite{Willems05}, similar to the approaches of~\\cite{Yang15,Coulson19} which did however not address closed-loop guarantees. 
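As a concrete illustration of this data-driven system description, the following Python sketch builds the stacked Hankel matrices from a single simulated data trajectory and verifies that a fresh trajectory of the same system lies in their column span, which is the property imposed by the dynamics constraint of the MPC problem. The system matrices, dimensions, and data lengths below are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Illustrative SISO system for the sketch (A, B, C are assumptions, not from the paper)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

def simulate(u, x0):
    """Output sequence of the LTI system x+ = Ax + Bu, y = Cx for input sequence u."""
    x, ys = x0, []
    for uk in u:
        ys.append(C @ x)          # y_k = C x_k (no feedthrough in this example)
        x = A @ x + B * uk        # x_{k+1} = A x_k + B u_k
    return np.array(ys)

def hankel(w, L):
    """Hankel matrix H_L(w) with L (block) rows, as defined in the paper."""
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

rng = np.random.default_rng(0)
N, L, n = 60, 8, 2
u_d = rng.uniform(-1, 1, N)       # generically persistently exciting input data
y_d = simulate(u_d, np.zeros(2))

# Stacked Hankel matrices of depth L + n (the extra n rows carry the
# initial conditions, as in the dynamics constraint of the MPC scheme)
H = np.vstack([hankel(u_d, L + n), hankel(y_d, L + n)])

# Any other trajectory of the same system lies in the column span of H
u_new = rng.uniform(-1, 1, L + n)
y_new = simulate(u_new, np.zeros(2))
w_new = np.concatenate([u_new, y_new])

alpha, *_ = np.linalg.lstsq(H, w_new, rcond=None)
print(np.linalg.norm(H @ alpha - w_new))  # numerically zero
```

In the MPC scheme, the first $n$ entries of such a reconstructed trajectory are additionally fixed to the measured past and, for the terminal equality constraint, the last $n$ entries to the setpoint.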
\nTo handle the fact that the stage cost $\\ell$ is merely positive \\emph{semi}-definite in the state, detectability of the stage cost is exploited via an IOSS Lyapunov function~\\cite{cai2008input}, similar to~\\cite{grimm2005model}.\nAs we will see in Section~\\ref{sec:robust}, this analogy between model-based MPC and the proposed data-driven MPC scheme is only present in the nominal case, where the data is noise-free.\nFor the more realistic case of noisy output measurements, we develop a robust data-driven MPC scheme and we provide a novel theoretical analysis of the closed loop in Section~\\ref{sec:robust}, which is the main contribution of this paper.\n\n\\begin{remark}\\label{rk:complexity_nominal}\nWe would like to emphasize the simplicity of the proposed MPC scheme.\nWithout any prior identification step, a single measured data trajectory can be used directly to set up an MPC scheme for a linear system.\nCompared to other learning-based MPC approaches such as~\\cite{adetola2011robust,Aswani13,tanaskovic2014adaptive,Berkenkamp17,Zanon19}, which require initial model knowledge as well as an online estimation process, the complexity of~\\eqref{eq:term_eq_MPC} is similar to classical MPC schemes, which rely on full model knowledge.\nTo be more precise, the decision variables $\\bar{u}(t),\\bar{y}(t)$ can be replaced by $\\alpha(t)$ via~\\eqref{eq:MPC_model} (using a condensed formulation) and hence, since $\\alpha(t)\\in\\mathbb{R}^{N-L-n+1}$, Problem~\\eqref{eq:term_eq_MPC} contains in total $N-L-n+1$ decision variables.\nFor $u^d$ to be persistently exciting of order $L+2n$, it needs to hold that $N-L-2n+1\\geq m(L+2n)$.\nAssuming equality, Problem~\\eqref{eq:term_eq_MPC} hence has $m(L+2n)+n$ free parameters.\nOn the contrary, a condensed model-based MPC optimization problem contains $mL$ decision variables for the input trajectory (assuming that state measurements are available).\nThus, the online complexity of the proposed data-driven MPC approach is 
slightly larger ($2mn+n$ additional decision variables) than that of model-based MPC, but it does not require an a priori (offline) identification step.\nIt is worth noting that the difference in complexity is independent of the horizon $L$.\nMoreover, the proposed data-driven MPC is inherently an output-feedback controller since no state measurements are required for its implementation.\nFinally, as in model-based MPC, for convex polytopic (or quadratic) constraints $\\mathbb{U},\\mathbb{Y}$,~\\eqref{eq:term_eq_MPC} is a convex (quadratically constrained) quadratic program which can be solved efficiently.\n\\end{remark}\n\\section{Robust data-driven MPC}\\label{sec:robust}\nIn this section, we propose a multi-step robust data-driven MPC scheme and we prove practical exponential stability of the closed loop in the presence of bounded additive output measurement noise.\nThe scheme includes a slack variable, which is regularized in the cost and compensates noise both in the initial data $(u^d,y^d)$ used for prediction and in the online measurement updates $\\left(u_{[t-n,t-1]},y_{[t-n,t-1]}\\right)$. 
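Before presenting the scheme in detail, the following Python sketch illustrates why regularizing $\alpha$ helps when the output data are noisy. It is a simplified, soft-constrained predictor, not the optimization problem of this section; the system matrices, noise level, and ridge weight are illustrative assumptions.

```python
import numpy as np

# Illustrative SISO system for the sketch (not from the paper)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

def simulate(u, x0):
    x, ys = x0, []
    for uk in u:
        ys.append(C @ x)
        x = A @ x + B * uk
    return np.array(ys)

def hankel(w, L):
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

rng = np.random.default_rng(1)
N, L, n, eps = 200, 10, 2, 0.01   # eps plays the role of the noise bound
u_d = rng.uniform(-1, 1, N)
y_d_noisy = simulate(u_d, np.zeros(2)) + rng.uniform(-eps, eps, N)

Hu = hankel(u_d, L + n)
Hy = hankel(y_d_noisy, L + n)      # noisy Hankel matrix of the outputs

# Online data: n past steps fix the initial condition, L future inputs are given
u_all = rng.uniform(-1, 1, L + n)
y_all = simulate(u_all, np.zeros(2))              # ground truth, unknown in practice
y_past_noisy = y_all[:n] + rng.uniform(-eps, eps, n)

# Ridge-regularized fit of alpha: the input rows and the noisy past-output
# rows are treated as soft constraints; the small ridge term keeps alpha
# small and thereby limits how much the noise in Hy is amplified
G = np.vstack([Hu, Hy[:n]])
b = np.concatenate([u_all, y_past_noisy])
lam = 1e-4
alpha = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ b)

y_pred = Hy[n:] @ alpha            # predicted future outputs
print(np.max(np.abs(y_pred - y_all[n:])))  # small for a small noise level
```

A larger ridge weight trades fit of the (noisy) data constraints against a smaller $\lVert\alpha\rVert_2$, mirroring the role of the noise-dependent regularization of $\alpha$ in the cost of the robust scheme below.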
\nSection~\\ref{sec:scheme} contains the scheme, which is essentially a robust modification of the nominal scheme of Section~\\ref{sec:tec}, as well as detailed explanations of the key ingredients.\nIn Sections~\\ref{sec:Lyapunov_bound} and~\\ref{sec:prediction_error}, we prove two technical lemmas, which will be required for our main theoretical results.\nRecursive feasibility of the closed loop is proven in Section~\\ref{sec:robust_feasibility}.\nIn Section~\\ref{sec:robust_stability}, we show that, under suitable assumptions, the multi-step MPC scheme renders the closed loop practically exponentially stable.\nMoreover, if the noise bound tends to zero, then the region of attraction of the closed loop approaches the set of all initially feasible points.\nIn this section, we do not consider output constraints, i.e., $\\mathbb{Y}=\\mathbb{R}^p$.\nIn~\\cite{berberich2020constraints}, we recently extended the results of this section by incorporating tightened output constraints in order to guarantee closed-loop constraint satisfaction despite noisy data.\n\n\n\\subsection{Robust MPC scheme}\\label{sec:scheme}\n\nIn practice, the output of the unknown LTI system $G$ is usually not available exactly, but might be subject to measurement noise.\nThis implies that the stacked data-dependent Hankel matrices in~\\eqref{eq:thm_hankel} do not span the system's trajectory space exactly and thus, the output trajectories cannot be predicted accurately.\nMoreover, noisy output measurements enter the initial conditions in Problem~\\eqref{eq:term_eq_MPC}, which deteriorates the prediction accuracy even further.\nTherefore, a direct application of the MPC scheme of Section~\\ref{sec:tec} may lead to feasibility issues or render the closed loop unstable.\nIn this section, we tackle the issue of noisy measurements with a robust data-driven MPC scheme with terminal constraints.\nWe consider output measurements with bounded 
additive noise in the initially available data $\\tilde{y}_k^d=y_k^d+\\varepsilon_k^d$ as well as in the online measurements $\\tilde{y}_k=y_k+\\varepsilon_k$.\nWe make no assumptions on the nature of the noise, but we require that it is bounded as $\\lVert\\varepsilon_k^d\\rVert_{\\infty}\\leq\\bar{\\varepsilon}$ and $\\lVert\\varepsilon_k\\rVert_{\\infty}\\leq\\bar{\\varepsilon}$ for some $\\bar{\\varepsilon}>0$. \nThus, the present setting includes two types of noise.\nThe data used for the prediction via the Hankel matrices in~\\eqref{eq:thm_hankel} is perturbed by $\\varepsilon^d$, which can thus be interpreted as a multiplicative model uncertainty.\nOn the other hand, $\\varepsilon$ perturbs the online measurements and hence, the overall control goal is a noisy output-feedback problem.\n\nThe key idea to account for noisy measurements is to relax the equality constraint~\\eqref{eq:MPC_model}, where the relaxation parameter is penalized appropriately in the cost function.\nGiven a noisy initial input-output trajectory $\\left(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]}\\right)$ of length $n$, and noisy data $(u^d,\\tilde{y}^d)$, we propose the following robust modification of~\\eqref{eq:term_eq_MPC}.\n\\begin{subequations}\\label{eq:robust_MPC}\n\\begin{align}\\nonumber\nJ_L^*&\\big(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]}\\big)=\\\\\\nonumber\n\\underset{\\substack{\\alpha(t),\\sigma(t)\\\\\\bar{u}(t),\\bar{y}(t)}}{\\min}&\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k(t),\\bar{y}_k(t)\\right)+\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha(t)\\rVert_2^2+\\lambda_\\sigma\\lVert\\sigma(t)\\rVert_2^2\\\\\n\\label{eq:robust_MPC1} s.t.\\>\\> 
&\\>\\begin{bmatrix}\n\\bar{u}(t)\\\\\\bar{y}(t)+\\sigma(t)\\end{bmatrix}=\\begin{bmatrix}H_{L+n}\\left(u^d\\right)\\\\H_{L+n}\\left(\\tilde{y}^d\\right)\\end{bmatrix}\\alpha(t),\\\\\\label{eq:robust_MPC2}\n&\\>\\begin{bmatrix}\\bar{u}_{[-n,-1]}(t)\\\\\\bar{y}_{[-n,-1]}(t)\\end{bmatrix}=\\begin{bmatrix}u_{[t-n,t-1]}\\\\\\tilde{y}_{[t-n,t-1]}\\end{bmatrix},\\\\\\label{eq:robust_MPC3}\n&\\>\\begin{bmatrix}\\bar{u}_{[L-n,L-1]}(t)\\\\\\bar{y}_{[L-n,L-1]}(t)\\end{bmatrix}=\\begin{bmatrix}u^s_n\\\\y^s_n\\end{bmatrix},\\>\\>\\bar{u}_k(t)\\in\\mathbb{U},\\\\\n\\label{eq:robust_MPC5}\n&\\>\\lVert\\sigma_k(t)\\rVert_\\infty\\leq\\bar{\\varepsilon}\\left(1+\\lVert\\alpha(t)\\rVert_1\\right),\\>\\>k\\in\\mathbb{I}_{[0,L-1]}.\n\\end{align}\n\\end{subequations}\nCompared to the nominal MPC problem~\\eqref{eq:term_eq_MPC}, the output data trajectory $\\tilde{y}^d$ as well as the initial output $\\tilde{y}_{[t-n,t-1]}$, which is obtained via online measurements, have been replaced by their noisy counterparts.\nFurther, the following ingredients have been added:\n\\begin{itemize}\n\\item[a)] A slack variable $\\sigma$, bounded by~\\eqref{eq:robust_MPC5}, to account for the noisy online measurements $\\tilde{y}_{[t-n,t-1]}$ and for the noisy data $\\tilde{y}^d$ used for prediction, which can be interpreted as a multiplicative model uncertainty,\n\\item[b)] Quadratic regularization (i.e., \\emph{ridge regularization}) of $\\alpha$ and $\\sigma$ with weights $\\lambda_\\alpha\\bar{\\varepsilon},\\lambda_\\sigma>0$, i.e., the regularization of $\\alpha$ depends on the noise level.\n\\end{itemize}\nThe above $\\ell_2$-norm regularization for $\\alpha(t)$ implies that small values of $\\lVert\\alpha(t)\\rVert_2^2$ are preferred.\nSince the noisy Hankel matrix $H_{L+n}\\left(\\tilde{y}^d\\right)$ is multiplied by $\\alpha(t)$ in~\\eqref{eq:robust_MPC1}, this implicitly reduces the influence of the noise on the prediction accuracy.\nIntuitively, for increasing $\\lambda_\\alpha$, the term 
$\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha(t)\\rVert_2^2$ reduces the ``complexity'' of the data-driven system description~\\eqref{eq:robust_MPC1}, similar to regularization methods in linear regression, thus allowing for a tradeoff between tracking performance and the avoidance of overfitting.\nThe term $\\lambda_\\sigma\\lVert\\sigma(t)\\rVert_2^2$ yields small values for the slack variable $\\sigma(t)$, thus improving the prediction accuracy.\nFor our theoretical results, $\\lambda_\\sigma$ can be chosen to be zero since $\\sigma(t)$ is already rendered small by the constraint~\\eqref{eq:robust_MPC5}.\nHowever, as we discuss in more detail in Remark~\\ref{rk:sigma_bound}, the constraint~\\eqref{eq:robust_MPC5} is non-convex but can be neglected if $\\lambda_\\sigma$ is large enough.\n\nAn alternative to the present regularization terms is given by general quadratic regularization kernels, i.e., costs of the form $\\lVert\\alpha(t)\\rVert_{P_\\alpha}^2$, $\\lVert\\sigma(t)\\rVert_{P_\\sigma}^2$ for suitable matrices $P_\\alpha,P_\\sigma\\succ0$.\nFurther, in~\\cite{Coulson19,Coulson19b}, $\\ell_1$-regularizations of $\\alpha$ and $\\sigma$ were suggested and the resulting MPC scheme, without terminal equality constraints, was successfully applied to a nonlinear stochastic control problem.\nHowever, theoretical guarantees on closed-loop stability were not given.\nThroughout this paper, we consider simple quadratic penalty terms since this simplifies the arguments, but we conjecture that our theoretical results continue to hold for general norms $\\lVert\\alpha(t)\\rVert_p,\\lVert\\sigma(t)\\rVert_q$ with arbitrary $p,q=1,\\dots,\\infty$.\nAn interesting open question, which is beyond the scope of this paper, is to investigate the impact of particular choices of regularization norms on the practical performance of the presented MPC approach.\nThe choice of norms in the constraint~\\eqref{eq:robust_MPC5} is independent of the norms in the cost and essentially follows 
from the $\\ell_\\infty$-noise bound and the proofs of the value function upper bound (Lemma~\\ref{lem:value_fcn_upper_bound}) and recursive feasibility (Proposition~\\ref{prop:robust_rec_feas}).\n\nIn this section, we study the closed loop resulting from an application of~\\eqref{eq:robust_MPC} in an $n$-step MPC scheme (compare~\\cite{Gruene15,Worthmann17}).\nTo be more precise, we consider the scenario in which, after solving~\\eqref{eq:robust_MPC} online, the first $n$ computed inputs are applied to the system.\nThereafter, the horizon is shifted by $n$ steps, before the whole scheme is repeated (compare Algorithm~\\ref{alg:MPC_n_step}).\n\n\\begin{algorithm}\n\\begin{Algorithm}\\label{alg:MPC_n_step}\n\\normalfont{\\textbf{$n$-Step Data-Driven MPC Scheme}}\n\\begin{enumerate}\n\\item At time $t$, take the past $n$ measurements $u_{[t-n,t-1]}$, $\\tilde{y}_{[t-n,t-1]}$ and solve~\\eqref{eq:robust_MPC}.\n\\item Apply the input sequence $u_{[t,t+n-1]}=\\bar{u}_{[0,n-1]}^*(t)$ over the next $n$ time steps.\n\\item Set $t=t+n$ and go back to 1).\n\\end{enumerate}\n\\end{Algorithm}\n\\end{algorithm}\n\nAs we will see in the remainder of this section, for the considered setting with output measurement noise, the multi-step MPC scheme described in Algorithm~\\ref{alg:MPC_n_step} has superior theoretical properties compared to its corresponding $1$-step version.\nThis is mainly due to the terminal equality constraints~\\eqref{eq:robust_MPC3}, which complicate the proof of recursive feasibility, as in model-based robust MPC with terminal equality constraints and model mismatch.\nIn particular, we show in this section that, for an $n$-step MPC scheme with a terminal equality constraint, practical exponential stability can be proven.\nOn the other hand, we comment on the differences for the corresponding $1$-step MPC scheme in Section~\\ref{sec:robust_feasibility} (Remark~\\ref{rk:one_step}).\nIn particular, for a $1$-step MPC scheme relying on~\\eqref{eq:robust_MPC}, 
recursive feasibility holds only locally around $(u^s,y^s)$ and thus, only local stability can be guaranteed.\nNevertheless, as we will see in Section~\\ref{sec:example} for a numerical example, the practical performance of the $n$-step scheme is almost indistinguishable from the $1$-step scheme.\n\n\\begin{remark}\nIn the nominal case of Section~\\ref{sec:tec}, i.e., for $\\bar{\\varepsilon}=0$,~\\eqref{eq:robust_MPC5} implies $\\sigma=0$.\nFurther, the regularization of $\\alpha$ vanishes for $\\bar{\\varepsilon}=0$, and the system dynamics~\\eqref{eq:robust_MPC1} as well as the initial conditions~\\eqref{eq:robust_MPC2} approach their nominal counterparts.\nThus, for $\\bar{\\varepsilon}=0$, Problem~\\eqref{eq:robust_MPC} reduces to the nominal Problem~\\eqref{eq:term_eq_MPC}.\n\\end{remark}\n\n\\begin{remark}\\label{rk:sigma_bound}\nIf the constraint~\\eqref{eq:robust_MPC5} is neglected and the input constraint set $\\mathbb{U}$ is a convex polytope, then Problem~\\eqref{eq:robust_MPC} is a strictly convex quadratic program and can be solved efficiently.\nHowever, the constraint on the slack variable $\\sigma$ in~\\eqref{eq:robust_MPC5} is non-convex due to the dependence of the right-hand side on $\\lVert\\alpha(t)\\rVert_1$, making it difficult to implement~\\eqref{eq:robust_MPC} in an efficient way.\nAs will become clear later in this section,~\\eqref{eq:robust_MPC5} is required to prove recursive feasibility and practical exponential stability.\nIt may, however, be replaced by the (convex) constraint $\\lVert\\sigma_k(t)\\rVert_\\infty\\leq c\\cdot\\bar{\\varepsilon}$ for a sufficiently large constant $c>0$, retaining the same theoretical guarantees.\nGenerally, a larger choice of $c$ increases the region of attraction, but also the size of the exponentially stable set to which the closed loop converges.\nFurthermore, the constraint~\\eqref{eq:robust_MPC5} can be enforced implicitly by choosing $\\lambda_\\sigma$ large enough.\nIn simulation examples, it 
was observed that the constraint~\\eqref{eq:robust_MPC5} is usually satisfied (for suitably large choices of $\\lambda_\\sigma$) without enforcing it explicitly in the optimization problem and thus, it may in most cases be neglected in the online optimization.\n\\end{remark}\n\nAs in the previous section, we require that the measured input $u^d$ is persistently exciting of order $L+2n$ (Assumption~\\ref{ass:pe}).\nFurther, to establish a local upper bound on the optimal cost of~\\eqref{eq:robust_MPC} and to prove recursive feasibility, we require that the horizon $L$ is not shorter than twice the system's order, as captured in the following assumption.\n\n\\begin{assumption}\\label{ass:length_2n}\nThe prediction horizon satisfies $L\\geq2n$.\n\\end{assumption}\n\nIn some minimal realization, we denote the state trajectory corresponding to $(u^d,y^d)$ by $x^d$.\nAccording to~\\cite[Corollary 2]{Willems05}, Assumption~\\ref{ass:pe} implies that the matrix\n\\begin{align}\\label{eq:ass_pe_matrix}\nH_{ux}=\\begin{bmatrix}H_{L+n}\\left(u^d\\right)\\\\H_1\\left(x^d_{[0,N-L-n]}\\right)\\end{bmatrix}\n\\end{align}\nhas full row rank and thus admits a right-inverse $H_{ux}^\\dagger=H_{ux}^\\top\\left(H_{ux}H_{ux}^\\top\\right)^{-1}$.\nDefine the quantity\n\\begin{align}\nc_{pe}\\coloneqq\\left\\lVert H_{ux}^\\dagger\\right\\rVert_2^2.\n\\end{align}\nFor our stability results, we will require that $c_{pe}\\bar{\\varepsilon}$ is bounded from above by a sufficiently small number.\nEssentially, this corresponds to a quantitative ``persistence-of-excitation-to-noise''-bound.\nTo be more precise, abbreviate in the following $U=H_{L+n}(u^d)$ and suppose that\n\\begin{align}\\label{eq:ass_pe_quantitative}\n\\rho I_{m(L+n)}\\preceq UU^\\top\\preceq\\nu I_{m(L+n)}\n\\end{align}\nfor scalar constants $\\rho,\\nu>0$.\nFurther, define the quantity $c_{pe}^u=\\lVert U^\\dagger\\rVert_2^2=\\lVert U^\\top(UU^\\top)^{-1}\\rVert_2^2$.\nThen, it holds 
that\n\\begin{align}\\label{eq:ass_pe_bound}\nc_{pe}^u&\\leq \\left\\lVert U^\\top\\right\\rVert_2^2\\left\\lVert \\left(UU^\\top\\right)^{-1}\\right\\rVert_2^2\\\\\\nonumber\n&=\\lambda_{\\max}(UU^\\top)\\cdot\\lambda_{\\max}\\left((UU^\\top)^{-1}(UU^\\top)^{-1}\\right)\\\\\\nonumber\n&\\leq\\frac{\\lambda_{\\max}(UU^\\top)}{\\lambda_{\\min}(UU^\\top)^2} \\stackrel{\\eqref{eq:ass_pe_quantitative}}{\\leq}\\frac{\\nu}{\\rho^2}.\n\\end{align}\nThus, if a persistently exciting input $u^d$ is multiplied by a constant $c>1$, then $c_{pe}^u$ decreases proportionally to $\\frac{1}{c^2}$.\nFurther, the constant $\\rho$ can typically be chosen larger if the data length $N$ increases.\nThe same arguments can be carried out when assuming a bound of the form~\\eqref{eq:ass_pe_quantitative} for the matrix~\\eqref{eq:ass_pe_matrix}, but finding a suitable input which generates data achieving such a bound is less obvious.\nIt is well-known for classical definitions of persistence of excitation that larger excitation of the input implies larger excitation of the state.\nTherefore, we conjecture (and we have observed for various practical simulation examples) that $c_{pe}$ decreases with increasing data horizons $N$ and with multiplications of a persistently exciting input data trajectory $u^d$ by a scalar constant greater than one.\nThis means that, for a given noise level $\\bar{\\varepsilon}$, robust stability as guaranteed in the following sections can be obtained by choosing a large enough persistently exciting input $u^d$ and\/or a sufficiently large data horizon $N$.\n\nSimilar to Section~\\ref{sec:tec}, we denote the open-loop cost of the robust MPC problem~\\eqref{eq:robust_MPC} by $J_L\\left(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]},\\alpha(t),\\sigma(t)\\right)$, and the optimal cost by $J_L^*\\left(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]}\\right)$.\nMoreover, we assume for the analysis that $(u^s,y^s)=(0,0)$.\nFor the presented robust data-driven MPC scheme, setpoints 
$(u^s,y^s)\\neq (0,0)$ change mainly one quantitative constant in Lemma~\\ref{lem:value_fcn_upper_bound}.\nWe comment on the main differences in the case $(u^s,y^s)\\neq(0,0)$ in Section~\\ref{sec:robust_feasibility} (Remark~\\ref{rk:wlog_zero}).\n\n\\subsection{Local upper bound of Lyapunov function}\\label{sec:Lyapunov_bound}\n\nIn this section, we show that the optimal cost of~\\eqref{eq:robust_MPC} admits a quadratic upper bound, similar to the nominal case (cf. Assumption~\\ref{ass:quad_upper_bound}).\nIt is straightforward to see that such an upper bound can not be quadratic in the state $x$ of some minimal realization:\nthe optimal cost $J_L^*$ depends explicitly on $\\alpha^*(t)$ via $\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_2^2$, which in turn depends on the past $n$ inputs and outputs $(u_{[t-n,t-1]},y_{[t-n,t-1]})$ through~\\eqref{eq:robust_MPC1} and~\\eqref{eq:robust_MPC2}. \nEven if the current state is zero, i.e., $x_t=0$, these may in general be arbitrarily large and hence, $\\alpha$ and therefore also $J_L^*$ may be arbitrarily large.\nThus, $J_L^*$ does not admit an upper bound in the state $x_t$ of a minimal realization.\nTo overcome this issue, we consider a different (not minimal) state of the system, defined as\n\\begin{align*}\n\\xi_t\\coloneqq\\begin{bmatrix}u_{[t-n,t-1]}\\\\y_{[t-n,t-1]}\\end{bmatrix}.\n\\end{align*}\nFurther, we define the noisy version of $\\xi$ as\n\\begin{align*}\n\\tilde{\\xi}_t\\coloneqq\\begin{bmatrix}u_{[t-n,t-1]}\\\\\\tilde{y}_{[t-n,t-1]}\\end{bmatrix}\n=\\begin{bmatrix}u_{[t-n,t-1]}\\\\y_{[t-n,t-1]}+\\varepsilon_{[t-n,t-1]}\\end{bmatrix}.\n\\end{align*}\nDenote the (not invertible) linear transformation from $\\xi$ to an arbitrary but fixed state $x$ in some minimal realization by $T$, i.e., $x_t=T\\xi_t$.\nClearly, this implies $\\lVert x_t\\rVert_2^2\\leq\\lVert T\\rVert_2^2\\lVert\\xi_t\\rVert_2^2\\eqqcolon\\Gamma_x\\lVert\\xi_t\\rVert_2^2$.\nNote that $\\xi$ is the state of a detectable 
state-space realization and thus, there exists an IOSS Lyapunov function $W(\\xi)=\\lVert \\xi\\rVert_P^2$, similar to the proof of Theorem~\\ref{thm:mpc_tec}.\nFor some $\\gamma>0$, define $V_t\\coloneqq J_L^*(\\tilde{\\xi}_t)+\\gamma W(\\xi_t)$.\nThe following result shows that, for the state $\\xi$, a meaningful quadratic upper bound on $V$ can be proven.\n\n\\begin{lemma}\\label{lem:value_fcn_upper_bound}\nSuppose Assumptions~\\ref{ass:pe} and~\\ref{ass:length_2n} hold.\nThen, there exists a constant $c_3>0$ as well as a $\\delta>0$ such that, for all $\\xi_t\\in \\mathbb{B}_\\delta$, Problem~\\eqref{eq:robust_MPC} is feasible and $V$ is bounded as\n\\begin{align}\\label{eq:lem:value_fcn_bound}\n\\gamma\\lambda_{\\min}(P)\\lVert \\xi_t\\rVert^2_2\\leq V_t\\leq c_3\\lVert \\xi_t\\rVert_2^2+c_4,\n\\end{align}\nwhere $c_4=2np\\bar{\\varepsilon}^2\\lambda_\\sigma$.\n\\end{lemma}\n\\begin{proof}\nThe lower bound is trivial.\nFor the upper bound, we construct a feasible candidate solution to Problem~\\eqref{eq:robust_MPC} which brings the state $x$ in some minimal realization (and thus the output $y$) to zero in $L$ steps.\nObviously, we have $\\bar{u}_{[-n,-1]}(t)=u_{[t-n,t-1]}$ as well as $\\bar{y}_{[-n,-1]}(t)=\\tilde{y}_{[t-n,t-1]}$ by~\\eqref{eq:robust_MPC2}.\nBy assumption, we have $L\\geq2n$ as well as $0\\in\\text{int}(\\mathbb{U})$.\nThus, by controllability, there exists a $\\delta>0$ such that for any $x_t$ with $\\frac{1}{\\Gamma_x}\\lVert x_t\\rVert_2\\leq\\lVert\\xi_t\\rVert_2\\leq\\delta$, there exists an input trajectory $u_{[t,t+L-1]}\\in\\mathbb{U}^L$, which brings the state $x_{[t,t+L-1]}$ and the corresponding output $y_{[t,t+L-1]}$ to the origin in $L-n$ steps while satisfying \n\\begin{align}\\label{eq:thm_robust_proof_control}\n\\left\\lVert\\begin{bmatrix}u_{[t,t+L-1]}\n\\\\y_{[t,t+L-1]}\\end{bmatrix}\n\\right\\rVert_2^2&\\leq \\Gamma_{uy}\\lVert x_t\\rVert_2^2\n\\end{align}\nfor a suitable constant $\\Gamma_{uy}>0$.\nAs candidate input-output 
trajectories for~\\eqref{eq:robust_MPC}, we choose these $u,y$, i.e., $\\bar{u}_{[0,L-1]}(t)=u_{[t,t+L-1]},\\bar{y}_{[0,L-1]}(t)=y_{[t,t+L-1]}$.\nMoreover, $\\alpha(t)$ is chosen as\n\\begin{align}\\label{eq:lem_value_fcn_bound_alpha}\n\\alpha(t)=H_{ux}^\\dagger\\begin{bmatrix}u_{[t-n,t+L-1]}\\\\x_{t-n}\n\\end{bmatrix},\n\\end{align}\nwhere $H_{ux}$ is defined in~\\eqref{eq:ass_pe_matrix}.\nAs is described in more detail in~\\cite{Markovsky08,Berberich19}, the output of an LTI system is a linear combination of its initial condition and the input, and therefore, the above choice of $\\alpha(t)$ implies\n\\begin{align*}\n\\begin{bmatrix}H_{L+n}\\left(u^d\\right)\\\\H_{L+n}\\left(y^d\\right)\\end{bmatrix}\\alpha(t)&=\n\\begin{bmatrix}\\bar{u}_{[-n,L-1]}(t)\\\\y_{[t-n,t+L-1]}\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\\bar{u}_{[-n,L-1]}(t)\\\\\n\\bar{y}_{[-n,-1]}(t)-\\varepsilon_{[t-n,t-1]}\n\\\\\\bar{y}_{[0,L-1]}(t)\\end{bmatrix},\n\\end{align*}\nwhere $\\varepsilon_{[t-n,t-1]}$ is the true noise instance.\nFor the slack variable $\\sigma$, we choose \n\\begin{align}\\label{eq:lem_1_sigma}\n\\begin{split}\n\\sigma_{[-n,-1]}(t)&=H_n\\left(\\varepsilon^d_{[0,N-L-1]}\\right)\\alpha(t)-\\varepsilon_{[t-n,t-1]},\\\\\n\\sigma_{[0,L-1]}(t)&=H_L\\left(\\varepsilon^d_{[n,N-1]}\\right)\\alpha(t),\n\\end{split}\n\\end{align}\nwhich implies that~\\eqref{eq:robust_MPC1}-\\eqref{eq:robust_MPC3} are satisfied.\nFinally, writing $e_i$ for a row vector whose $i$-th component is equal to $1$ and which is zero otherwise, we obtain\n\\begin{align}\\label{eq:infty_bound}\n\\begin{split}\n\\lVert H_{L+n}(\\varepsilon^d)\\alpha(t)\\rVert_\\infty&=\\max_{i\\in\\mathbb{I}_{[1,p(L+n)]}}|e_iH_{L+n}(\\varepsilon^d)\\alpha(t)|\\\\\n&\\leq\\bar{\\varepsilon}\\lVert\\alpha(t)\\rVert_1.\n\\end{split}\n\\end{align}\nThis implies $\\lVert\\sigma(t)\\rVert_\\infty\\leq\\bar{\\varepsilon}\\left(\\lVert\\alpha(t)\\rVert_1+1\\right)$, which in turn proves that~\\eqref{eq:robust_MPC5} is satisfied.\n\nIn the 
following, we employ the above candidate solution to bound the optimal cost and thereby, the function $V$.\nDue to observability of the pair $(A,C)$, corresponding to the minimal realization with state $x$, it holds that\n\\begin{align}\\label{eq:lem_value_fcn_bound_ctrb}\n\\begin{bmatrix}\\bar{u}_{[-n,-1]}(t)\\\\x_{t-n}\\end{bmatrix}=\n\\underbrace{\\begin{bmatrix}\nI_{mn}&0\\\\M_1&\\Phi_\\dagger\n\\end{bmatrix}}_{M\\coloneqq}\\xi_t,\n\\end{align}\nwhere $\\Phi_\\dagger=(\\Phi^\\top\\Phi)^{-1}\\Phi^\\top$ is a left-inverse of the observability matrix $\\Phi$.\nThe lower block of~\\eqref{eq:lem_value_fcn_bound_ctrb} follows from observability and the linear system dynamics $x_{k+1}=Ax_k+Bu_k,\\>y_k=Cx_k+Du_k$ for $k\\in\\mathbb{I}_{[t-n,t-1]}$, which can be used to compute the matrix $M_1$ depending on $A,B,C,D$.\nHence, $\\alpha(t)$ can be bounded as\n\\begin{align}\\nonumber\n\\lVert&\\alpha(t)\\rVert_2^2\\stackrel{\\eqref{eq:lem_value_fcn_bound_alpha}}{\\leq}\\left\\lVert H_{ux}^\\dagger\\right\\rVert_2^2\n\\left(\\left\\lVert\\bar{u}_{[-n,L-1]}(t)\\right\\rVert_2^2\n+\\left\\lVert x_{t-n}\\right\\rVert_2^2\\right)\\\\\\nonumber\n&=\\left\\lVert H_{ux}^\\dagger\\right\\rVert_2^2\n\\left(\\left\\lVert\\bar{u}_{[0,L-1]}(t)\\right\\rVert_2^2\n+\\left\\lVert\\begin{bmatrix}\\bar{u}_{[-n,-1]}(t)\\\\x_{t-n}\\end{bmatrix}\\right\\rVert_2^2\\right)\\\\\\label{eq:lem_value_fcn_bound_3}\n&\\stackrel{\\eqref{eq:thm_robust_proof_control},\\eqref{eq:lem_value_fcn_bound_ctrb}}{\\leq}\\underbrace{\\left\\lVert H_{ux}^\\dagger\\right\\rVert_2^2}_{c_{pe}=}\n\\left(\\Gamma_{uy}\\lVert x_t\\rVert_2^2+\\lVert M\\rVert_2^2\\lVert\\xi_t\\rVert_2^2\\right).\n\\end{align}\nUsing standard norm equivalence properties, it holds for arbitrary $k\\in\\mathbb{N}$ that\n\\begin{align}\\label{eq:lem_constant}\n\\left\\lVert H_{k}\\left(\\varepsilon^d_{[0,N-L-n+k-1]}\\right)\\right\\rVert_2^2\n\\leq c_5k\\bar{\\varepsilon}^2,\n\\end{align}\nwhere $c_5\\coloneqq p(N-L-n+1)$.\nBased on the 
definition of $\\sigma(t)$ in~\\eqref{eq:lem_1_sigma}, and using~\\eqref{eq:lem_constant} as well as the inequality $(a+b)^2\\leq2(a^2+b^2)$, we can bound $\\sigma(t)$ in terms of $\\alpha(t)$ as\n\\begin{align}\\label{eq:lem_bound_sigma}\n\\lVert\\sigma(t)\\rVert_2^2&\\leq 2np\\bar{\\varepsilon}^2+c_5(L+2n)\\bar{\\varepsilon}^2\\lVert\\alpha(t)\\rVert_2^2.\n\\end{align}\nCombining the above inequalities, $V$ is upper bounded as\n\\begin{align}\n\\nonumber\n&V_t\\leq J_L(\\tilde{\\xi}_t,\\alpha(t),\\sigma(t))+\\gamma W(\\xi_t)\\\\\n\\nonumber\n&\\leq\n\\lambda_{\\max}(Q,R)\\Gamma_{uy}\\lVert x_t\\rVert_2^2+\\gamma\\lambda_{\\max}(P)\\lVert \\xi_t\\rVert_2^2\\\\\n\\nonumber\n&+\\left(\\lambda_\\alpha+c_5(L+2n)\\lambda_\\sigma\\bar{\\varepsilon}\\right)c_{pe}\\bar{\\varepsilon}\\left(\\Gamma_{uy}\\lVert x_t\\rVert_2^2+\\lVert M\\rVert_2^2\\lVert\\xi_t\\rVert_2^2\\right)\\\\\\nonumber\n&+2np\\bar{\\varepsilon}^2\\lambda_\\sigma.\n\\end{align}\nFinally, $x_t$ is bounded by $\\xi_t$ as $\\lVert x_t\\rVert_2^2\\leq\\Gamma_x\\lVert\\xi_t\\rVert_2^2$, which leads to $V_t\\leq c_3\\lVert \\xi_t\\rVert_2^2 +c_4$, where \n\\begin{align*}\nc_3&=\\lambda_{\\max}(Q,R)\\Gamma_{uy}\\Gamma_x+\\gamma\\lambda_{\\max}(P)\\\\\n&+\\left(\\lambda_\\alpha+c_5(L+2n)\\lambda_\\sigma\\bar{\\varepsilon}\\right)c_{pe}\\bar{\\varepsilon}\\left(\\Gamma_{uy}\\Gamma_x+\\lVert M\\rVert_2^2\\right),\\\\\nc_4&=2np\\bar{\\varepsilon}^2\\lambda_\\sigma.\n\\end{align*}\n\\end{proof}\nIn Section~\\ref{sec:tec}, we assumed that the optimal cost is quadratically upper bounded (cf. 
Assumption~\\ref{ass:quad_upper_bound}), which is not restrictive in the nominal linear-quadratic setting.\nLemma~\\ref{lem:value_fcn_upper_bound} proves that, under mild assumptions, the optimal cost of the robust MPC problem~\\eqref{eq:robust_MPC} admits (locally) a similar upper bound and can thus be seen as the robust counterpart of Assumption~\\ref{ass:quad_upper_bound}.\n\nThe term $c_4$ is solely due to the slack variable $\\sigma$.\nThis can be explained by noting that, for $\\xi_t=0$, $\\alpha(t)$, $\\bar{u}_{[0,L-1]}(t)$, $\\bar{y}_{[0,L-1]}(t)$ can all be chosen to be zero, as long as $\\sigma$ compensates the noise, i.e., $\\sigma_{[-n,-1]}(t)=-\\varepsilon_{[t-n,t-1]}$.\n\n\\subsection{Prediction error bound}\\label{sec:prediction_error}\nDenote the optimizers of~\\eqref{eq:robust_MPC} by $\\alpha^*(t),\\sigma^*(t),\\bar{u}^*(t),\\bar{y}^*(t)$, and the output trajectory resulting from an open-loop application of $\\bar{u}^*(t)$ by $\\hat{y}$.\nOne of the reasons why it is difficult to analyze the presented MPC scheme is the non-trivial relation between the predicted output $\\bar{y}^*(t)$ and the ``actual'' output $\\hat{y}$.\nIn the following, we derive a bound on the difference between the two quantities, which will play an important role in proving recursive feasibility\nand practical stability of the proposed scheme.\nFor an integer $k$, define constants $\\rho_{2,k},\\rho_{\\infty,k}$ such that\n\\begin{align*}\n\\rho_{2,k}&\\geq\\left\\lVert CA^k\\Phi_\\dagger\\right\\rVert_2^2,\\\\\n\\rho_{\\infty,k}&\\geq\\left\\lVert CA^k\\Phi_\\dagger\\right\\rVert_\\infty, \n\\end{align*}\nwhere $\\Phi_\\dagger$ is a left-inverse of the observability matrix $\\Phi$.\n\n\\begin{lemma}\\label{lem:prediction_error}\nIf~\\eqref{eq:robust_MPC} is feasible at time $t$, then the following inequalities hold for all $k\\in\\mathbb{I}_{[0,L-1]}$:\n\\begin{align}\n\\label{eq:lem_prediction_error1}\n\\lVert 
&\\hat{y}_{t+k}-\\bar{y}^*_k(t)\\rVert_2^2\\leq8c_5\\bar{\\varepsilon}^2\\lVert\\alpha^*(t)\\rVert_2^2+2\\lVert\\sigma_k^*(t)\\rVert_2^2\\\\\\nonumber\n&+\\rho_{2,n+k}\\left(16n\\bar{\\varepsilon}^2\\left(c_5\\lVert\\alpha^*(t)\\rVert_2^2+p\\right)+4\\lVert\\sigma_{[-n,-1]}^*(t)\\rVert_2^2\\right),\\\\\n\\label{eq:lem_prediction_error}\n\\lVert &\\hat{y}_{t+k}-\\bar{y}^*_k(t)\\rVert_\\infty\\leq\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_1+\\lVert\\sigma_k^*(t)\\rVert_\\infty \\\\\\nonumber\n&+\\rho_{\\infty,n+k}\\left(\\bar{\\varepsilon}\\left(\\lVert\\alpha^*(t)\\rVert_1+1\\right)+\\left\\lVert\\sigma_{[-n,-1]}^*(t)\\right\\rVert_\\infty\\right),\n\\end{align}\nwith $c_5$ from~\\eqref{eq:lem_constant}.\n\\end{lemma}\n\\begin{proof}\nWe show only~\\eqref{eq:lem_prediction_error} and note that~\\eqref{eq:lem_prediction_error1} can be derived following the same steps, using~\\eqref{eq:lem_constant} as well as the inequality $(a+b)^2\\leq2a^2+2b^2$.\nAs written above, $\\hat{y}$ is the trajectory resulting from an open-loop application of $\\bar{u}^*(t)$ and with initial conditions specified by $\\left(u_{[t-n,t-1]},\\hat{y}_{[t-n,t-1]}\\right)=\\left(u_{[t-n,t-1]},y_{[t-n,t-1]}\\right)$.\nOn the other hand, according to~\\eqref{eq:robust_MPC1}, $\\bar{y}^*(t)$ can be decomposed as \n\\begin{align*}\n\\bar{y}^*(t)=H_{L+n}\\left(\\varepsilon^d\\right)\\alpha^*(t)+H_{L+n}\\left(y^d\\right)\\alpha^*(t)-\\sigma^*(t).\n\\end{align*}\nIt follows directly from~\\eqref{eq:robust_MPC1} and~\\eqref{eq:robust_MPC2} that the second term on the right-hand side, $H_{L+n}\\left(y^d\\right)\\alpha^*(t)$, is a trajectory of $G$, resulting from an open-loop application of $\\bar{u}^*(t)$ and with initial output conditions \n\\begin{align*}\n\\tilde{y}_{[t-n,t-1]}+\\sigma_{[-n,-1]}^*(t)-H_n\\left(\\varepsilon^d_{[0,N-L-1]}\\right)\\alpha^*(t).\n\\end{align*}\nDefine\n\\begin{align*}\ny^-_{[t-n,t+L-1]}=\\hat{y}_{[t-n,t+L-1]}-H_{L+n}\\left(y^d\\right)\\alpha^*(t).\n\\end{align*}\nSince $G$ 
is LTI and $y^-$ contains the difference between two trajectories with the same input, we can assume $\\bar{u}^*(t)=0$ for the following arguments without loss of generality.\nHence, $y^-$ is equal to the output component of a trajectory $\\left(u^-,y^-\\right)$ with zero input and with initial trajectory\n\\begin{align}\\label{eq:lem_2_yminus}\n&\\begin{bmatrix}u^-_{[t-n,t-1]}\\\\y^-_{[t-n,t-1]}\\end{bmatrix}=\\\\\\nonumber\n&\\qquad\\begin{bmatrix}0\\\\H_n\\left(\\varepsilon^d_{[0,N-L-1]}\\right)\\alpha^*(t)-\\varepsilon_{[t-n,t-1]}-\\sigma_{[-n,-1]}^*(t)\\end{bmatrix}.\n\\end{align}\nThe relation to the internal state $x^-$ can be derived as\n\\begin{align*}\ny^-_{[t-n,t-1]}=\\Phi x^-_{t-n},\n\\end{align*}\nwith the observability matrix $\\Phi$.\nThis leads to the corresponding output at time $t+k$\n\\begin{align*}\ny^-_{t+k}=CA^{n+k}\\Phi_\\dagger y^-_{[t-n,t-1]},\n\\end{align*}\nwhere $\\Phi_\\dagger$ is a left-inverse of $\\Phi$.\nUsing this fact, the expression for $y^-_{[t-n,t-1]}$ in~\\eqref{eq:lem_2_yminus}, and the inequality~\\eqref{eq:infty_bound}, $\\lVert y^-_{t+k}\\rVert_\\infty$ can be bounded as\n\\begin{align*}\n\\lVert y^-_{t+k}\\rVert_\\infty\\leq\\rho_{\\infty,n+k}\\left(\\bar{\\varepsilon}\\left(\\lVert\\alpha^*(t)\\rVert_1+1\\right)+\\left\\lVert\\sigma_{[-n,-1]}^*(t)\\right\\rVert_\\infty\\right).\n\\end{align*}\nNote that\n\\begin{align*}\n\\left\\lVert \\hat{y}_{t+k}-\\bar{y}_k^*(t)\\right\\rVert_\\infty\n\\leq \\lVert y^-_{t+k}\\rVert_\\infty+\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_1+\\lVert\\sigma^*_k(t)\\rVert_\\infty,\n\\end{align*}\nwhich concludes the proof.\n\\end{proof}\n\nEssentially, Lemma~\\ref{lem:prediction_error} gives a bound on the mismatch between the predicted output and the actual output resulting from the open-loop application of $\\bar{u}^*(t)$, depending on the optimal solutions $\\alpha^*,\\sigma^*$, and on system parameters.\nIn model-based robust MPC schemes, similar bounds are typically used to propagate 
uncertainty, where the role of the weighting vector $\\alpha$ to account for multiplicative uncertainty is replaced by the state $x$ and a model-based uncertainty description (compare~\\cite{Kouvaritakis16} for details).\nThe main difference in the proposed MPC scheme is that the predicted trajectory $\\bar{y}^*(t)$ is in general not a trajectory of the system in the sense of Definition~\\ref{def:trajectory_of}, corresponding to the input $\\bar{u}^*(t)$.\nIn contrast, in model-based robust MPC, the predicted trajectory usually satisfies the dynamics of a (nominal) model of the system.\n\n\\subsection{Recursive feasibility}\\label{sec:robust_feasibility}\nThe following result shows that, if the proposed robust MPC scheme is feasible at time $t$, then it is also feasible at time $t+n$,\nassuming that the noise level is sufficiently small.\n\n\\begin{proposition}\\label{prop:robust_rec_feas}\nSuppose Assumptions~\\ref{ass:pe} and~\\ref{ass:length_2n} hold.\nThen, for any $V_{ROA}>0$, there exists an $\\bar{\\varepsilon}_0>0$ such that for all $\\bar{\\varepsilon}\\leq\\bar{\\varepsilon}_0$,\nif $V_t\\leq V_{ROA}$ for some $t\\geq 0$, then the optimization problem~\\eqref{eq:robust_MPC} is feasible at time $t+n$.\n\\begin{comment}\n\\begin{itemize}\n\\item[(i)] the $n$-step MPC scheme is feasible at any $t\\in\\mathbb{N}$,\n\\item[(ii)] the closed-loop output with the $n$-step MPC scheme satisfies $y_t\\in\\mathbb{Y}$ for all $t\\in\\mathbb{N}$.\n\\end{itemize}\n\\end{comment}\n\\end{proposition}\n\\begin{proof}\nSuppose the robust MPC problem~\\eqref{eq:robust_MPC} is feasible at time $t$ with $V_t\\leq V_{ROA}$ and denote the optimizers by $\\alpha^*(t),\\sigma^*(t),\\bar{u}^*(t),\\bar{y}^*(t)$.\nAs in Lemma~\\ref{lem:prediction_error}, the trajectory resulting from an open-loop application of $\\bar{u}^*(t)$ and with initial conditions specified by $\\left(u_{[t-n,t-1]},y_{[t-n,t-1]}\\right)$ is denoted by $\\hat{y}$.\nFor $k\\in\\mathbb{I}_{[-n,L-2n-1]}$, we 
choose for the candidate input the shifted previously optimal solution, i.e., $\\bar{u}_k'(t+n)=\\bar{u}_{k+n}^*(t)$.\nOver the first $n$ steps, the candidate output must satisfy $\\bar{y}_{[-n,-1]}'(t+n)=\\tilde{y}_{[t,t+n-1]}$ due to~\\eqref{eq:robust_MPC2}.\nFurther, for $k\\in\\mathbb{I}_{[0,L-2n-1]}$, the output is chosen as $\\bar{y}_k'(t+n)=\\hat{y}_{t+n+k}$.\nSince $\\bar{y}_{[L-n,L-1]}^*(t)=0$ by~\\eqref{eq:robust_MPC3}, the prediction error bound of Lemma~\\ref{lem:prediction_error} implies that, for any $k\\in\\mathbb{I}_{[L-n,L-1]}$, it holds that\n\\begin{align*}\n\\lVert\\hat{y}_{t+k}&\\rVert_\\infty\\leq\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_1+\\lVert\\sigma^*(t)\\rVert_\\infty\\\\\n&+\\rho_{\\infty,n+k}\\left(\\bar{\\varepsilon}\\left(\\lVert\\alpha^*(t)\\rVert_1+1\\right)+\\lVert\\sigma_{[-n,-1]}^*(t)\\rVert_\\infty\\right).\n\\end{align*}\nFor $\\bar{\\varepsilon}_0$ sufficiently small, $\\lVert\\sigma^*(t)\\rVert_\\infty$ becomes arbitrarily small due to~\\eqref{eq:robust_MPC5}.\nFurther, using that $\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_2^2\\leq J_L^*(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]})\\leq V_{ROA}$, we can bound $\\alpha^*(t)$ as\n\\begin{align*}\n\\lVert\\alpha^*(t)\\rVert_1&\\leq\\sqrt{N-L-n+1}\\lVert\\alpha^*(t)\\rVert_2\\\\\n&\\leq\\sqrt{N-L-n+1}\\sqrt{\\frac{V_{ROA}}{\\lambda_\\alpha\\bar{\\varepsilon}}}.\n\\end{align*} \nHence, if $\\bar{\\varepsilon}_0$ is sufficiently small, then $\\hat{y}_{t+k}$ becomes arbitrarily small at the above time instants.\nThis implies that the internal state in some minimal realization corresponding to the trajectory $(\\bar{u}^*(t),\\hat{y})$ at time $t+L-n$, i.e., $\\hat{x}_{t+L-n}=\\Phi_\\dagger\\hat{y}_{[t+L-n,t+L-1]}$, approaches zero for $\\bar{\\varepsilon}\\to0$.\nThus, similar to the proof of Lemma~\\ref{lem:value_fcn_upper_bound}, there exists an input trajectory $\\bar{u}_{[L-2n,L-n-1]}'(t+n)$, which brings the state and the corresponding output 
$\\bar{y}_{[L-2n,L-n-1]}'(t+n)$ to zero in $n$ steps, while satisfying\n\\begin{align}\\label{eq:prop_proof_rec_feas_ctrb}\n\\left\\lVert\\begin{bmatrix}\\bar{u}_{[L-2n,L-n-1]}'(t+n)\n\\\\\\bar{y}_{[L-2n,L-n-1]}'(t+n)\\end{bmatrix}\n\\right\\rVert_2^2&\\leq \\Gamma_{uy}\\lVert \\hat{x}_{t+L-n}\\rVert_2^2.\n\\end{align}\nMoreover, in the interval $\\mathbb{I}_{[L-n,L-1]}$, we choose $\\bar{u}_{[L-n,L-1]}'(t+n)=0$, $\\bar{y}_{[L-n,L-1]}'(t+n)=0$, i.e.,~\\eqref{eq:robust_MPC3} is satisfied.\nThe above arguments imply that\n\\begin{align*}\n\\left(\\bar{u}'(t+n),\\begin{bmatrix}\\hat{y}_{[t,t+n-1]}\\\\\\bar{y}_{[0,L-1]}'(t+n)\n\\end{bmatrix}\\right)\n\\end{align*}\nis a trajectory of the unknown LTI system in the sense of Definition~\\ref{def:trajectory_of}.\nDenote the corresponding internal state in some minimal realization by $\\bar{x}'(t+n)$.\nWe choose $\\alpha'(t+n)$ as a corresponding solution to~\\eqref{eq:thm_hankel}, i.e., as\n\\begin{align}\\label{eq:prop_proof_rec_feas_1}\n\\alpha'(t+n)=H_{ux}^\\dagger\\begin{bmatrix}\\bar{u}'_{[-n,L-1]}(t+n)\\\\x_t\\end{bmatrix}\n\\end{align}\nwith $H_{ux}$ from~\\eqref{eq:ass_pe_matrix}.\nFinally, we fix\n\\begin{align}\\label{eq:prop_1_sigma}\n\\sigma'(t+n)=H_{L+n}\\left(\\tilde{y}^d\\right)\\alpha'(t+n)-\\bar{y}'(t+n),\n\\end{align}\nwhich implies that~\\eqref{eq:robust_MPC1} holds.\nIt remains to show that the constraint~\\eqref{eq:robust_MPC5} is satisfied.\nOver the first $n$ time steps,~\\eqref{eq:robust_MPC5} holds since\n\\begin{align}\\nonumber\n&\\sigma_{[-n,-1]}'(t+n)=H_n\\left(\\tilde{y}^d_{[0,N-L-1]}\\right)\\alpha'(t+n)-\\tilde{y}_{[t,t+n-1]}\\\\\\nonumber\n&\\stackrel{\\eqref{eq:prop_proof_rec_feas_1}}{=}H_n\\left(\\varepsilon^d_{[0,N-L-1]}\\right)\\alpha'(t+n)+y_{[t,t+n-1]}-\\tilde{y}_{[t,t+n-1]}\\\\\\label{eq:prop_proof_rec_feas_sigma1}\n&=H_n\\left(\\varepsilon^d_{[0,N-L-1]}\\right)\\alpha'(t+n)-\\varepsilon_{[t,t+n-1]}.\n\\end{align}\nFurther, using the definition of $\\sigma'(t+n)$ 
in~\\eqref{eq:prop_1_sigma} and the bound~\\eqref{eq:infty_bound}, we obtain\n\\begin{align}\\label{eq:prop_proof_rec_feas_sigma2}\n\\lVert&\\sigma_{[0,L-1]}'(t+n)\\rVert_\\infty\\leq\\bar{\\varepsilon}\\lVert\\alpha'(t+n)\\rVert_1\\\\\\nonumber\n&+\\underbrace{\\left\\lVert H_{L}\\left(y^d_{[n,N-1]}\\right)\\alpha'(t+n)-\\bar{y}_{[0,L-1]}'(t+n)\\right\\rVert_\\infty}_{=0},\n\\end{align}\nand thus,~\\eqref{eq:robust_MPC5} holds.\n\\end{proof}\n\nProposition~\\ref{prop:robust_rec_feas} shows that, for any sublevel set of the Lyapunov function $V$, there exists a sufficiently small noise bound $\\bar{\\varepsilon}_0$ such that, for any $\\bar{\\varepsilon}\\leq\\bar{\\varepsilon}_0$ and any state starting in the sublevel set at time $t$, the $n$-step MPC scheme is feasible at time $t+n$.\nIn particular, the required noise bound decreases if the size of the sublevel set, i.e., $V_{ROA}$, increases and vice versa.\nThis can be explained by noting that the noise in~\\eqref{eq:robust_MPC1} corresponds to a multiplicative uncertainty, which affects the prediction accuracy more strongly if the current state is further away from the origin and hence the Lyapunov function $V_t$ is larger.\nWe note that this does not imply recursive feasibility of the $n$-step MPC scheme in the standard sense since it remains to be shown that the sublevel set $V_t\\leq V_{ROA}$ is invariant, which will be proven in Section~\\ref{sec:robust_stability}.\nIn our main result, the set of initial states for which $V_0\\leq V_{ROA}$ will play the role of the guaranteed region of attraction of the closed-loop system.\n\nThe input candidate solution used to prove recursive feasibility in Proposition~\\ref{prop:robust_rec_feas} is analogous to a candidate solution one would use to show robust recursive feasibility in model-based robust MPC with terminal equality constraints.\nThe output candidate solution is sketched in Figure~\\ref{fig:sketch_candidate}.\nUp to time $L-2n-1$, $\\bar{y}'(t+n)$ is equal 
to $\\hat{y}$ (shifted by $n$ time steps), which is the output resulting from an open-loop application of $\\bar{u}^*(t)$.\nThis choice, together with the prediction error bound of Lemma~\\ref{lem:prediction_error}, implies that the internal state corresponding to $\\bar{y}'(t+n)$ at time $L-2n$ is close to zero.\nThus, by controllability, there exists an input trajectory satisfying the input constraints, which brings the state and the output to zero in $n$ steps.\nIn the interval $\\mathbb{I}_{[L-2n,L-n-1]}$, the candidate output is chosen as this trajectory.\nThis also implies that the choice $\\bar{y}_{[L-n,L-1]}'(t+n)=0$ makes the candidate solution between $0$ and $L-1$, i.e., $\\left(\\bar{u}_{[0,L-1]}'(t+n),\\bar{y}_{[0,L-1]}'(t+n)\\right)$, a trajectory\\footnote{\nIn most practical cases, $(\\bar{u}^*(t),\\bar{y}^*(t))$ are not trajectories of the system due to the slack variable $\\sigma$ and the noise.} of the unknown system $G$ in the sense of Definition~\\ref{def:trajectory_of}.\nFinally, the suggested candidate input is also similar to~\\cite{Yu14}, where inherent robustness of quasi-infinite horizon (model-based) MPC is shown.\n\n\\begin{figure}\n\t\t\\includegraphics[width=0.49\\textwidth]{sketch_candidate_tikz}\n\t\t\\caption{Sketch of the candidate output for recursive feasibility.\n\t\tDue to the terminal equality constraints~\\eqref{eq:robust_MPC3}, the last $n$ steps of the optimal predicted output $\\bar{y}^*(t)$ are equal to zero.\n\t\tAccording to the prediction error bound derived in Lemma~\\ref{lem:prediction_error}, this implies that the state resulting from an open-loop application of the optimal input $\\bar{u}^*(t)$ is small at time $L-2n$, provided that $\\bar{\\varepsilon}$ is sufficiently small.\n\t\tTherefore, a candidate solution $\\bar{y}'(t+n)$ can be constructed by extending the open-loop output $\\hat{y}$ with a local deadbeat controller, which steers the state to the origin in $n$ 
steps.\n\t\t}\t\\label{fig:sketch_candidate}\n\\end{figure}\n\n\n\\begin{remark}\\label{rk:one_step}\nFor a $1$-step MPC scheme, a similar argument to prove recursive feasibility can be applied, given that $\\bar{u}^*_{[L-2n,L-n-1]}(t)$ and $\\bar{y}^*_{[L-2n,L-n-1]}(t)$ (and hence $\\hat{y}_{[t+L-2n,t+L-n-1]}$) are close to zero.\nThis is required to construct a feasible input which steers the state and the corresponding output to zero, similar to the proof of Proposition~\\ref{prop:robust_rec_feas}, and it is, e.g., the case if the initial state $x_t$ is close to zero.\nThat is, the result of Proposition~\\ref{prop:robust_rec_feas} holds locally for a $1$-step MPC scheme, as is to be expected from the inherent robustness properties of model-based MPC with terminal equality constraints under disturbances.\n\\end{remark}\n\n\\begin{remark}\n\\label{rk:wlog_zero}\nAs mentioned in Section~\\ref{sec:scheme}, all of our theoretical guarantees for the presented robust MPC scheme can be straightforwardly extended to the case $(u^s,y^s)\\neq0$, with the corresponding steady-state $\\xi^s\\neq0$.\nThe main difference lies in the bound~\\eqref{eq:lem:value_fcn_bound}, which becomes $V_t\\leq \\tilde{c}_3\\lVert \\xi_t-\\xi^s\\rVert_2^2+\\tilde{c}_4$ for constants $\\tilde{c}_3\\neq c_3,\\tilde{c}_4\\neq c_4$, where $\\tilde{c}_3$ can be made arbitrarily close to $c_3$.\nOn the other hand, $\\tilde{c}_4$ changes depending on $\\xi^s$, since the right-hand side of~\\eqref{eq:lem_value_fcn_bound_3} would need to be proportional to $\\lVert \\xi_t-\\xi^s\\rVert_2^2+\\lVert \\xi^s\\rVert_2^2$.\nThe same phenomenon can be observed in a bound of $\\alpha'(t+n)$ based on~\\eqref{eq:prop_proof_rec_feas_1}, which will be used in the stability proof.\nAs will become clear later in this section, such changes in the bound of $\\alpha'(t+n)$ as well as in the constant $\\tilde{c}_4$ do not affect our qualitative theoretical results, but they may potentially (quantitatively) deteriorate the 
robustness w.r.t. the noise level $\\bar{\\varepsilon}$.\nIntuitively, this can be explained by noting that~\\eqref{eq:robust_MPC1} corresponds to a multiplicative uncertainty and thus, stabilization of the origin is simpler than stabilization of any other equilibrium.\nSince equilibria with $(u^s,y^s)\\neq0$ require a significantly more involved notation, we omit this extension.\n\\end{remark}\n\n\\subsection{Practical exponential stability}\\label{sec:robust_stability}\n\nThe following is our main stability result.\nIt shows that, under Assumptions~\\ref{ass:pe} and~\\ref{ass:length_2n}, for a low noise amplitude and large persistence of excitation, and for suitable regularization parameters, the application of the scheme~\\eqref{eq:robust_MPC} as described in Algorithm~\\ref{alg:MPC_n_step} leads to a practically exponentially stable closed loop.\n\\begin{theorem}\\label{thm:robust}\nSuppose Assumptions~\\ref{ass:pe} and~\\ref{ass:length_2n} hold.\nThen, for any $V_{ROA}>0$, there exist constants $\\underline{\\lambda}_\\alpha,\\overline{\\lambda}_\\alpha,\\underline{\\lambda}_\\sigma,\\overline{\\lambda}_\\sigma>0$ such that, for all $\\lambda_\\alpha,\\lambda_\\sigma$ satisfying\n\\begin{align}\\label{eq:thm_bounds1}\n\\begin{split}\n\\underline{\\lambda}_\\alpha\\leq\\lambda_\\alpha\\leq\\overline{\\lambda}_\\alpha,\\quad\\underline{\\lambda}_\\sigma\\leq\\lambda_\\sigma\\leq\\overline{\\lambda}_\\sigma,\n\\end{split}\n\\end{align}\nthere exist constants $\\bar{\\varepsilon}_0,\\bar{c}_{pe}>0$, as well as a continuous, strictly increasing $\\beta:[0,\\bar{\\varepsilon}_0]\\to[0,V_{ROA}]$ with $\\beta(0)=0$, such that, for all $\\bar{\\varepsilon},c_{pe}$ satisfying\n\\begin{align}\\label{eq:thm_bounds2}\n\\bar{\\varepsilon}\\leq\\bar{\\varepsilon}_0,\\quad c_{pe}\\bar{\\varepsilon}\\leq\\bar{c}_{pe},\n\\end{align}\nthe sublevel set $V_t\\leq V_{ROA}$ is invariant and $V_t$ converges exponentially to $V_t\\leq\\beta(\\bar{\\varepsilon})$ in closed loop 
with the $n$-step MPC scheme for all initial conditions for which $V_0\\leq V_{ROA}$.\n\\end{theorem}\n\\begin{proof}\nThe proof consists of three parts:\nFirst, we bound the increase in the Lyapunov function $V$.\nThereafter, we prove that, for suitably chosen bounds on the parameters, there exists a function $\\beta$, which satisfies the above requirements.\nFinally, we show invariance of the sublevel set $V_t\\leq V_{ROA}$ and exponential convergence of $V_t$ to $V_t\\leq\\beta(\\bar{\\varepsilon})$.\\\\\n\\textbf{(i). Practical Stability}\\\\\nSuppose Problem~\\eqref{eq:robust_MPC} is feasible at time $t$ and let $V_{ROA}>0$ be arbitrary.\nFurther, let $\\bar{\\varepsilon}_0$ be sufficiently small such that Proposition~\\ref{prop:robust_rec_feas} is applicable.\nThe cost of the candidate solution derived in Proposition~\\ref{prop:robust_rec_feas} at time $t+n$ is\n\\begin{align*}\nJ_L&\\left(u_{[t,t+n-1]},\\tilde{y}_{[t,t+n-1]},\\alpha'(t+n),\\sigma'(t+n)\\right)\\\\\n=&\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)\\right)+\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha'(t+n)\\rVert_2^2\\\\\n&+\\lambda_\\sigma\\lVert\\sigma'(t+n)\\rVert_2^2.\n\\end{align*}\nThus, we obtain for the optimal cost\n\\begin{align}\\nonumber\n&J_L^*(u_{[t,t+n-1]},\\tilde{y}_{[t,t+n-1]})\\\\\n\\nonumber\n&\\leq J_L\\left(u_{[t,t+n-1]},\\tilde{y}_{[t,t+n-1]},\\alpha'(t+n),\\sigma'(t+n)\\right)\\\\\n\\label{eq:thm_proof_value_fcn_diff}\n&= J_L^*(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]})-\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\n\\nonumber\n&-\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_2^2-\\lambda_\\sigma\\lVert\\sigma^*(t)\\rVert_2^2+\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha'(t+n)\\rVert_2^2\\\\\n\\nonumber\n&+\\lambda_\\sigma\\lVert\\sigma'(t+n)\\rVert_2^2+\\sum_{k=0}^{L-1}\\ell(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)).\n\\end{align}\nIn the following key technical part of the proof (Parts (i.i)-(i.iv)), we derive 
useful bounds for most terms on the right-hand side of~\\eqref{eq:thm_proof_value_fcn_diff}.\nThis will lead to a decay bound of the optimal cost which is then used to prove practical exponential stability of the closed loop.\\\\\n\\textbf{(i.i) Stage Cost Bounds}\\\\\nWe first bound those terms in~\\eqref{eq:thm_proof_value_fcn_diff}, which involve the stage cost.\nThe above difference can be decomposed as\n\\begin{align}\\label{eq:thm_proof_stage_cost_diff}\n&\\sum_{k=0}^{L-1}\\ell(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n))-\\sum_{k=0}^{L-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\\nonumber\n&=\\sum_{k=L-2n}^{L-n-1}\\ell\\left(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)\\right)-\\sum_{k=0}^{n-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\\nonumber\n&+\\sum_{k=0}^{L-2n-1}\\ell\\left(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)\\right)-\\ell\\left(\\bar{u}_{k+n}^*(t),\\bar{y}_{k+n}^*(t)\\right),\n\\end{align}\nwhere we use that $\\bar{u}_k'(t+n),\\bar{y}_k'(t+n),\\bar{u}_k^*(t),\\bar{y}_k^*(t)$ are all zero for $k\\in\\mathbb{I}_{[L-n,L-1]}$ due to~\\eqref{eq:robust_MPC3}.\nTo bound the first term on the right-hand side of~\\eqref{eq:thm_proof_stage_cost_diff}, note that\n\\begin{align*}\n\\lVert \\hat{x}_{t+L-n}\\rVert_2^2\\leq\\lVert\\Phi_\\dagger\\rVert_2^2\\lVert\\hat{y}_{[t+L-n,t+L-1]}\\rVert_2^2,\n\\end{align*}\nwith $\\hat{x}_{t+L-n}$ as in the proof of Proposition~\\ref{prop:robust_rec_feas}.\nFurther, since $\\bar{y}_{[L-n,L-1]}^*(t)=0$, $\\hat{y}$ can be bounded in the considered time interval as in~\\eqref{eq:lem_prediction_error1}, i.e.,\n\\begin{align*}\n&\\lVert\\hat{y}_{[t+L-n,t+L-1]}\\rVert_2^2\\leq8c_5n\\bar{\\varepsilon}^2\\lVert\\alpha^*(t)\\rVert_2^2+2\\lVert\\sigma^*(t)\\rVert_2^2\\\\\n&+\\sum_{k=L-n}^{L-1}\\rho_{2,n+k}\\\\\n&\\cdot\\left(16n\\bar{\\varepsilon}^2\\left(c_5\\lVert\\alpha^*(t)\\rVert_2^2+p\\right)+4\\lVert\\sigma^*_{[-n,-1]}(t)\\rVert_2^2\\right).\n\\end{align*}\nHence, it holds 
that\n\\begin{align}\\label{eq:thm_proof_stage_cost_bound_1}\n&\\sum_{k=L-2n}^{L-n-1}\\ell\\left(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)\\right)\\\\\n\\nonumber\n&\\stackrel{\\eqref{eq:prop_proof_rec_feas_ctrb}}{\\leq}\\lambda_{\\max}(Q,R)\\Gamma_{uy}\\lVert\\hat{x}_{t+L-n}\\rVert_2^2\\\\\n\\nonumber\n&\\leq\\lambda_{\\max}(Q,R)\\Gamma_{uy}\\lVert\\Phi_\\dagger\\rVert_2^2\\Big(8c_5n\\bar{\\varepsilon}^2\\lVert\\alpha^*(t)\\rVert_2^2+2\\lVert\\sigma^*(t)\\rVert_2^2\\\\\n\\nonumber\n&+\\sum_{k=L-n}^{L-1}\\rho_{2,n+k}\\\\\n\\nonumber\n&\\cdot\\left(16n\\bar{\\varepsilon}^2\\left(c_5\\lVert\\alpha^*(t)\\rVert_2^2+p\\right)+4\\lVert\\sigma^*_{[-n,-1]}(t)\\rVert_2^2\\right)\\Big).\n\\end{align}\nNext, we bound the difference between the third and the fourth term on the right-hand side of~\\eqref{eq:thm_proof_stage_cost_diff}.\nThe following relations are readily derived:\n\\begin{align}\\nonumber\n&\\lVert\\bar{y}_k'(t+n)\\rVert_Q^2-\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\\\\\nonumber\n=&\\lVert\\bar{y}_k'(t+n)-\\bar{y}^*_{k+n}(t)+\\bar{y}_{k+n}^*(t)\\rVert_Q^2-\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\\\\\nonumber\n=&\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\\\\\label{eq:robust_thm_proof_bound5}\n&+2\\left(\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\right)^\\top Q\\bar{y}_{k+n}^*(t)\\\\\\nonumber\n\\leq&\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\\\\\nonumber\n&+2\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q.\n\\end{align}\nBy using $2\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q\\leq1+\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q^2$ as well as $\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\leq V_{ROA}$, we arrive at\n\\begin{align}\\label{eq:thm_proof_auxiliary_bound1}\n2\\lVert\\bar{y}_k'&(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q\\lVert\\bar{y}_{k+n}^*(t)\\rVert_Q\\\\\n\\nonumber\n\\leq&\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q\\left(1+V_{ROA}\\right).\n\\end{align}\nTherefore, since the inputs coincide over the considered time interval, 
and due to~\\eqref{eq:robust_thm_proof_bound5} as well as~\\eqref{eq:thm_proof_auxiliary_bound1}, it holds that\n\\begin{align}\\nonumber\n&\\sum_{k=0}^{L-2n-1}\\ell\\left(\\bar{u}_k'(t+n),\\bar{y}_k'(t+n)\\right)-\\ell\\left(\\bar{u}_{k+n}^*(t),\\bar{y}_{k+n}^*(t)\\right)\\\\\n\\label{eq:thm_proof_stage_cost_bound_2}\n&\\leq\\sum_{k=0}^{L-2n-1}\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q^2\\\\\\nonumber\n&\\quad+\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q\\left(1+V_{ROA}\\right).\n\\end{align}\nThe difference $\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q$ can be bounded similar to Lemma~\\ref{lem:prediction_error}.\nUsing the constraint~\\eqref{eq:robust_MPC5} to bound $\\lVert\\sigma^*(t)\\rVert_2$, it can be shown that the bound is of the form $\\lVert\\bar{y}_k'(t+n)-\\bar{y}_{k+n}^*(t)\\rVert_Q\\leq \\tilde{C}_1\\lVert\\alpha^*(t)\\rVert_2+\\tilde{C}_2\\leq\\tilde{C}_1\\left(1+\\lVert\\alpha^*(t)\\rVert_2^2\\right)+\\tilde{C}_2$, where both $\\tilde{C}_1$ and $\\tilde{C}_2$ are proportional to $\\bar{\\varepsilon}$.\nHence, applying Lemma~\\ref{lem:prediction_error} to~\\eqref{eq:thm_proof_stage_cost_bound_2}, the sum of~\\eqref{eq:thm_proof_stage_cost_bound_1} and~\\eqref{eq:thm_proof_stage_cost_bound_2} can be bounded as $C_1\\lVert\\alpha^*(t)\\rVert_2^2+C_2\\lVert\\sigma^*(t)\\rVert_2^2+C_3$ for suitable $C_i>0$, where $C_1$ and $C_3$ are quadratic in $\\bar{\\varepsilon}$ and vanish for $\\bar{\\varepsilon}=0$.\nTherefore, if $\\underline{\\lambda}_\\alpha$ and $\\underline{\\lambda}_\\sigma$ are sufficiently large, then~\\eqref{eq:thm_proof_value_fcn_diff} implies\n\\begin{align}\n\\nonumber\n&J_L^*(u_{[t,t+n-1]},\\tilde{y}_{[t,t+n-1]})\\\\\n\\label{eq:thm_proof_value_fcn_diff2}\n&\\leq 
J_L^*(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]})-\\sum_{k=0}^{n-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\n\\nonumber\n&+\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha'(t+n)\\rVert_2^2+\\lambda_\\sigma\\lVert\\sigma'(t+n)\\rVert_2^2\n+c_6,\n\\end{align}\nfor a suitable constant $c_6>0$, which is quadratic in $\\bar{\\varepsilon}$ and vanishes for $\\bar{\\varepsilon}=0$.\n\\\\\n\\textbf{(i.ii) Bound of $\\mathbf{\\lVert\\sigma'(t+n)\\rVert_2^2}$}\\\\\nBy applying standard norm bounds to the slack variable candidate $\\sigma'(t+n)$ as defined in~\\eqref{eq:prop_1_sigma} (compare also~\\eqref{eq:prop_proof_rec_feas_sigma1} and~\\eqref{eq:prop_proof_rec_feas_sigma2}), we obtain\n\\begin{align}\\label{eq:thm_proof_sigma_bound}\n\\lVert\\sigma'(t+n)\\rVert_2^2&\\leq 2np\\bar{\\varepsilon}^2+c_5(L+2n)\\bar{\\varepsilon}^2\\lVert\\alpha'(t+n)\\rVert_2^2,\n\\end{align}\nwith $c_5=p(N-L-n+1)$ as in~\\eqref{eq:lem_constant}.\\\\\n\\textbf{(i.iii) Bound of $\\mathbf{\\lVert\\alpha'(t+n)\\rVert_2^2}$}\\\\\nFor the weighting vector $\\alpha'(t+n)$, it holds that\n\\begin{align*}\n&\\lVert\\alpha'(t+n)\\rVert_2^2\\stackrel{\\eqref{eq:prop_proof_rec_feas_1}}{\\leq} c_{pe}\\left\\lVert\\begin{bmatrix}\\bar{u}'_{[-n,L-1]}(t+n)\\\\\\bar{x}_{-n}'(t+n)\\end{bmatrix}\\right\\rVert_2^2\\\\\n&= c_{pe}\\left( \\lVert x_t\\rVert_2^2+\\lVert\\bar{u}_{[-n,L-1]}'(t+n)\\rVert_2^2\\right)=c_{pe}\\lVert x_t\\rVert_2^2\\\\\n&+c_{pe}\\left(\\lVert\\bar{u}_{[0,L-n-1]}^*(t)\\rVert_2^2+\\lVert\\bar{u}'_{[L-2n,L-n-1]}(t+n)\\rVert_2^2\\right).\n\\end{align*}\nSimilar to~\\eqref{eq:thm_proof_stage_cost_bound_1}, we can use~\\eqref{eq:prop_proof_rec_feas_ctrb} to bound the last term 
as\n\\begin{align}\\label{eq:thm_proof_alpha_bound}\n&\\lVert\\bar{u}'_{[L-2n,L-n-1]}(t+n)\\rVert_2^2\\\\\\nonumber\n&\\leq\\Gamma_{uy}\\lVert\\Phi_\\dagger\\rVert_2^2\\Big(8c_5n\\bar{\\varepsilon}^2\\lVert\\alpha^*(t)\\rVert_2^2+2\\lVert\\sigma^*(t)\\rVert_2^2+\\sum_{k=L-n}^{L-1}\\rho_{2,n+k}\\\\\n\\nonumber\n&\\cdot\\left(16n\\bar{\\varepsilon}^2\\left(c_5\\lVert\\alpha^*(t)\\rVert_2^2+p\\right)+4\\lVert\\sigma^*_{[-n,-1]}(t)\\rVert_2^2\\right)\\Big).\n\\end{align}\nThe bound~\\eqref{eq:thm_proof_alpha_bound} is of the same form as~\\eqref{eq:thm_proof_stage_cost_bound_1} and~\\eqref{eq:thm_proof_stage_cost_bound_2}.\nDue to this fact, using the bound~\\eqref{eq:thm_proof_sigma_bound} for $\\sigma'(t+n)$, and by potentially choosing $\\underline{\\lambda}_\\alpha$ and $\\underline{\\lambda}_\\sigma$ larger,~\\eqref{eq:thm_proof_value_fcn_diff} implies\n\\begin{align}\n\\nonumber\n&J_L^*(u_{[t,t+n-1]},\\tilde{y}_{[t,t+n-1]})\\\\\n\\label{eq:thm_proof_value_fcn_diff3}\n&\\leq J_L^*(u_{[t-n,t-1]},\\tilde{y}_{[t-n,t-1]})-\\sum_{k=0}^{n-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\n\\nonumber\n&+\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)\\left(\\lVert x_t\\rVert_2^2+\\lVert\\bar{u}_{[0,L-n-1]}^*(t)\\rVert_2^2\\right)c_{pe}\\bar{\\varepsilon}+c_8,\n\\end{align}\nfor suitable constants $c_7,c_8>0$, which vanish for $\\bar{\\varepsilon}=0$.\\\\\n\\textbf{(i.iv) IOSS Bound}\\\\\nAs in the proof of Theorem~\\ref{thm:mpc_tec}, we consider now $V_t=J_L^*(\\tilde{\\xi}_t)+\\gamma W(\\xi_t)$ with the IOSS Lyapunov function $W$ for some $\\gamma>0$.\nIt follows directly from~\\eqref{eq:thm_eq0_proof2},~\\eqref{eq:thm_proof_value_fcn_diff3}, and from $\\lVert x_t\\rVert_2^2\\leq\\Gamma_x\\lVert \\xi_t\\rVert_2^2$ that\n\\begin{align}\\label{eq:thm_proof_value_fcn_diff4}\n&V_{t+n}-V_t\\leq-\\sum_{k=0}^{n-1}\\ell\\left(\\bar{u}_k^*(t),\\bar{y}_k^*(t)\\right)\\\\\n\\nonumber\n&+\\gamma(-\\frac{1}{2}\\lVert \\xi_{[t,t+n-1]}\\rVert_2^2+c_1\\lVert 
u_{[t,t+n-1]}\\rVert_2^2+c_2\\lVert y_{[t,t+n-1]}\\rVert_2^2)\\\\\n\\nonumber\n&+\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)\\left(\\Gamma_x\\lVert \\xi_t\\rVert_2^2+\\lVert\\bar{u}_{[0,L-n-1]}^*(t)\\rVert_2^2\\right)c_{pe}\\bar{\\varepsilon}+c_8.\n\\end{align}\nThe inequality $(a+b)^2\\leq2(a^2+b^2)$ yields\n\\begin{align*}\n\\lVert y_{[t,t+n-1]}\\rVert_2^2&\\leq2\\lVert\\bar{y}_{[0,n-1]}^*(t)\\rVert_2^2\\\\\n&+2\\lVert y_{[t,t+n-1]}-\\bar{y}_{[0,n-1]}^*(t)\\rVert_2^2,\n\\end{align*}\nwhere the latter term can again be bounded using Lemma~\\ref{lem:prediction_error}.\nSimilar to the earlier steps of this proof, the components of the bound $\\lVert y_{[t,t+n-1]}-\\bar{y}_{[0,n-1]}^*(t)\\rVert_2^2$ vanish in~\\eqref{eq:thm_proof_value_fcn_diff4} if $\\underline{\\lambda}_\\sigma,\\underline{\\lambda}_\\alpha$ are chosen sufficiently large, except for an additive constant, which depends solely on the noise.\nMoreover, choosing $\\gamma=\\frac{\\lambda_{\\min}(Q,R)}{\\max\\{c_1,2c_2\\}}$, it holds that\n\\begin{align*}\n\\gamma(c_1\\lVert u_{[t,t+n-1]}\\rVert_2^2+2c_2\\lVert \\bar{y}_{[0,n-1]}^*&(t)\\rVert_2^2)\\\\\n\\leq&\\sum_{k=0}^{n-1}\\ell(\\bar{u}_k^*(t),\\bar{y}_k^*(t)).\n\\end{align*}\nCombining these facts, we arrive at\n\\begin{align*}\nV_{t+n}-V_t&\\leq\\left(\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)\\Gamma_x c_{pe}\\bar{\\varepsilon}-\\frac{\\gamma}{2}\\right)\\lVert \\xi_t\\rVert_2^2\\\\\n&+\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)c_{pe}\\bar{\\varepsilon}\\lVert\\bar{u}_{[0,L-n-1]}^*(t)\\rVert_2^2+c_9\n\\end{align*}\nfor a suitable constant $c_9$, which vanishes for $\\bar{\\varepsilon}=0$.\nFinally, note that $\\lambda_{\\min}(R)\\lVert\\bar{u}_{[0,L-n-1]}^*(t)\\rVert_2^2\\leq V_t$, which leads to\n\\begin{align}\\label{eq:thm_proof_value_fcn_diff5}\nV_{t+n}-V_t&\\leq\\left(\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)\\Gamma_x c_{pe}\\bar{\\varepsilon}-\\frac{\\gamma}{2}\\right)\\lVert 
\\xi_t\\rVert_2^2\\\\\\nonumber\n&\\quad+\\frac{\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)c_{pe}\\bar{\\varepsilon}}{\\lambda_{\\min}(R)}V_t+c_9\\\\\\nonumber\n&\\eqqcolon \\left(c_{10}-\\frac{\\gamma}{2}\\right)\\lVert \\xi_t\\rVert_2^2+c_{11}V_t+c_9.\n\\end{align}\n\\textbf{(ii). Construction of $\\mathbf{\\beta}$}\\\\\nThe local upper bound in Lemma~\\ref{lem:value_fcn_upper_bound}, which holds for any $\\xi_t\\in \\mathbb{B}_\\delta$, implies that the following holds for any $V_{ROA}>0$, and any $\\xi_t$ with $V_t\\leq V_{ROA}$:\n %\n\\begin{align}\n\\label{eq:robust_thm_proof_bound4}\nV_t\\leq\\underbrace{\\max\\left\\{ c_{3},\\frac{V_{ROA}-c_{4}}{\\delta^2}\\right\\}}_{c_{3,V_{ROA}}\\coloneqq}\\lVert \\xi_t\\rVert_2^2+c_{4}.\n\\end{align}\nWe first consider $V_{ROA}=\\delta^2c_3+c_4$, which implies $c_{3,V_{ROA}}=c_3$.\nFurther, we define $c_{12}\\coloneqq\\frac{\\gamma}{2}-c_{10}-c_{3}c_{11}$ as well as\n\\begin{align*}\n\\beta(\\bar{\\varepsilon})=\\frac{\\frac{\\gamma}{2}c_{4}+c_{3}c_9}{c_{12}}\n\\end{align*}\nfor any $\\bar{\\varepsilon}$ for which $c_{12}>0$.\nRecall that $c_3=a_1\\bar{\\varepsilon}^2+a_2\\bar{\\varepsilon}+a_3,c_4=a_4\\bar{\\varepsilon}^2,c_9=a_5\\bar{\\varepsilon}^2+a_6\\bar{\\varepsilon},c_{10}=a_7\\bar{\\varepsilon}^2+a_8\\bar{\\varepsilon},c_{11}=a_9\\bar{\\varepsilon}^2+a_{10}\\bar{\\varepsilon}$, for suitable constants $a_i>0$.\nThis implies $\\beta(0)=0$.\nNext, we show the existence of a constant $\\bar{\\varepsilon}_0$ such that $\\beta$ is strictly increasing on $[0,\\bar{\\varepsilon}_0]$.\nIf $c_{12}>0$, then $\\beta$ is strictly increasing since its numerator increases with $\\bar{\\varepsilon}$ whereas its denominator decreases with $\\bar{\\varepsilon}$. 
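\nTo make this explicit, substituting the above expansions of $c_3$, $c_4$ and $c_9$ into the definition of $\\beta$ yields\n\\begin{align*}\n\\beta(\\bar{\\varepsilon})=\\frac{\\frac{\\gamma}{2}a_4\\bar{\\varepsilon}^2+\\left(a_1\\bar{\\varepsilon}^2+a_2\\bar{\\varepsilon}+a_3\\right)\\left(a_5\\bar{\\varepsilon}^2+a_6\\bar{\\varepsilon}\\right)}{c_{12}},\n\\end{align*}\nwhose numerator is a polynomial in $\\bar{\\varepsilon}$ with positive coefficients and without constant term, confirming both $\\beta(0)=0$ and the stated monotonicity. 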
\nIn the following, we show that $c_{12}>0$.\nBy definition, we have\n\\begin{align*}\nc_{12}&=\\frac{\\gamma}{2}-\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)\\Gamma_xc_{pe}\\bar{\\varepsilon}\\\\\n&-\\frac{\\left(\\lambda_\\alpha+\\lambda_\\sigma c_7\\right)c_{pe}\\bar{\\varepsilon}}{\\lambda_{\\min}(R)}\\Big(\\lambda_{\\max}(Q,R)\\Gamma_{uy}\\Gamma_x+\\gamma\\lambda_{\\max}(P)\\\\\n&+\\left(\\lambda_\\alpha+c_5(L+2n)\\lambda_\\sigma\\bar{\\varepsilon}\\right)c_{pe}\\bar{\\varepsilon}\\left(\\Gamma_{uy}\\Gamma_x+\\lVert M\\rVert_2^2\\right)\\Big).\n\\end{align*}\nIt can be seen directly from this expression that, if $\\lambda_\\alpha\\leq\\overline{\\lambda}_{\\alpha},\\lambda_\\sigma\\leq\\overline{\\lambda}_\\sigma$, with arbitrary but fixed upper bounds $\\overline{\\lambda}_{\\alpha},\\overline{\\lambda}_\\sigma$, and $c_{pe}\\bar{\\varepsilon}$ is sufficiently small, then $c_{12}>0$.\nIt remains to show that $\\beta(\\bar{\\varepsilon}_0)\\leq V_{ROA}$, or, equivalently,\n\\begin{align*}\n\\frac{\\frac{\\gamma}{2}c_4+c_3c_9}{\\frac{\\gamma}{2}-c_{10}-c_3c_{11}}\\leq \\delta^2c_3+c_4,\n\\end{align*}\nwhich can be ensured by choosing $\\bar{\\varepsilon}_0$ sufficiently small.\\\\\n\\textbf{(iii). 
Invariance and Exponential Convergence}\\\\\nTake an arbitrary $\\xi_t$ with $V_t\\leq V_{ROA}$ and note that this implies that~\\eqref{eq:robust_MPC} is feasible and thus,~\\eqref{eq:thm_proof_value_fcn_diff5} and~\\eqref{eq:robust_thm_proof_bound4} hold.\nMoreover, $c_{12}>0$ implies $c_{10}<\\frac{\\gamma}{2}$.\nDefining $V_{\\beta,t}\\coloneqq V_t-\\beta(\\bar{\\varepsilon})$, we thus obtain\n\\begin{align*}\n&V_{t+n}\\stackrel{\\eqref{eq:thm_proof_value_fcn_diff5}}{\\leq} \\left(1+c_{11}\\right)V_t+\\left(c_{10}-\\frac{\\gamma}{2}\\right)\\lVert \\xi_t\\rVert_2^2+c_9\\\\\n&\\stackrel{\\eqref{eq:robust_thm_proof_bound4}}{\\leq} \\left(1+c_{11}+\\frac{c_{10}-\\frac{\\gamma}{2}}{c_3}\\right) V_t+\\frac{c_4}{c_3}\\left(\\frac{\\gamma}{2}-c_{10}\\right)+c_9\\\\\n&\\leq\\left(1+c_{11}+\\frac{c_{10}-\\frac{\\gamma}{2}}{c_3}\\right)V_{\\beta,t}+\\beta(\\bar{\\varepsilon}),\n\\end{align*}\nwhere the last inequality follows from elementary computations.\nThis in turn implies the following contraction property\n\\begin{align}\\label{eq:thm_proof_contraction_property}\nV_{\\beta,t+n}\\leq\\underbrace{\\left(1+c_{11}+\\frac{c_{10}-\\frac{\\gamma}{2}}{c_3}\\right)}_{<1}V_{\\beta,t}.\n\\end{align}\nIf the noise bound $\\bar{\\varepsilon}_0$ is sufficiently small, then this implies invariance of the sublevel set $V_t\\leq V_{ROA}$ and hence, by Proposition~\\ref{prop:robust_rec_feas}, recursive feasibility of the $n$-step MPC scheme.\nApplying the contraction property~\\eqref{eq:thm_proof_contraction_property} recursively, we can thus conclude that $V_t$ converges exponentially to $V_t\\leq\\beta(\\bar{\\varepsilon})$.\n\nSo far, we have only considered the case $V_{ROA}=\\delta^2c_3+c_4$.\nIt remains to show that, \\emph{for any} $V_{ROA}>0$, there exist suitable parameter bounds such that\n\\begin{align*}\nc_{12,V_{ROA}}\\coloneqq \\frac{\\gamma}{2}-c_{10}-c_{3,V_{ROA}}c_{11}>0\n\\end{align*}\nwith $c_{3,V_{ROA}}$ from~\\eqref{eq:robust_thm_proof_bound4}.\nIt is easily seen 
from the above discussion that, for any \\emph{fixed} $V_{ROA}>0$ and for \\emph{fixed} bounds $\\overline{\\lambda}_\\alpha,\\overline{\\lambda}_\\sigma$, $c_{12,V_{ROA}}>0$ can always be ensured if $c_{pe}\\bar{\\varepsilon}$ is sufficiently small, i.e., if the bound $\\bar{c}_{pe}$ is sufficiently small.\n\\end{proof}\n\nTheorem~\\ref{thm:robust} shows that the closed loop of the proposed data-driven MPC scheme admits a (practical) Lyapunov function, which converges robustly and exponentially to a set, whose size shrinks with the noise level.\nSince $\\lVert\\xi_t\\rVert_2^2\\leq\\frac{1}{\\gamma\\lambda_{\\min}(P)}V_t$ due to~\\eqref{eq:lem:value_fcn_bound}, this implies practical exponential stability of the equilibrium $\\xi=0$.\nThe result requires that the noise level $\\bar{\\varepsilon}$ is small, the amount of persistence of excitation is large compared to the noise level (i.e., $c_{pe}\\bar{\\varepsilon}$ is small), and the regularization parameters are chosen suitably.\nConcerning the latter requirement, $\\lambda_\\alpha$ cannot be chosen arbitrarily large, which can be explained by noting that the optimal $\\alpha$ is usually not zero, even in the noise-free case.\nOn the other hand, $\\lambda_\\alpha$ cannot be too close to zero since solutions $\\alpha(t)$ of~\\eqref{eq:robust_MPC1} are not unique and large choices of $\\alpha(t)$ amplify the influence of the noise in $\\tilde{y}^d$ on the prediction accuracy.\nFurther, $\\lambda_\\sigma$ has to be chosen sufficiently large to ensure stability, but not arbitrarily large for a fixed noise level.\nTo be 
more precise, $\\lambda_\\alpha c_{pe}\\bar{\\varepsilon}$ and $\\lambda_\\sigma c_{pe}\\bar{\\varepsilon}^2$ have to be small, i.e., for a fixed $c_{pe}$, choosing the regularization parameters too large deteriorates the robustness of the scheme w.r.t. the noise level.\nOne can show that the theoretical properties in Theorem~\\ref{thm:robust} are also valid without imposing the lower bound in~\\eqref{eq:thm_bounds1} on $\\lambda_{\\sigma}$, by using the more conservative constraint~\\eqref{eq:robust_MPC5} in the proof.\nHowever,~\\eqref{eq:robust_MPC5} is non-convex (cf. Remark~\\ref{rk:sigma_bound}), but can typically be enforced implicitly if $\\lambda_\\sigma$ is chosen large enough.\n\nIn the proof of Theorem~\\ref{thm:robust}, a close connection between the region of attraction, i.e., the set of initial conditions with $V_0\\leq V_{ROA}$, and various parameters becomes apparent.\nFirst of all, the noise bound $\\bar{\\varepsilon}$ needs to be sufficiently small depending on $V_{ROA}$ to allow for an application of Proposition~\\ref{prop:robust_rec_feas}.\nMoreover, if $V_{ROA}$ increases, then also $c_{3,V_{ROA}}$ increases and hence, $c_{11}$ must decrease to ensure $c_{12,V_{ROA}}>0$ and thereby exponential stability.\nTo render $c_{11}$ small, $c_{pe}\\bar{\\varepsilon}$ must decrease, i.e., the amount of persistence of excitation compared to the noise level must increase.\nThus, for $c_{pe}\\bar{\\varepsilon}\\to0$ (and a sufficiently small noise bound $\\bar{\\varepsilon}$ due to Proposition~\\ref{prop:robust_rec_feas}), the region of attraction approaches the set of all initially feasible points.\nFor a fixed $c_{pe}$, the size of the region of attraction increases if the noise level decreases and vice versa.\nA similar connection between the maximal disturbance and the region of attraction can be found in~\\cite{Yu14}, which studies inherent robustness properties of quasi-infinite horizon MPC (but the result applies similarly to model-based $n$-step 
MPC with terminal equality constraints).\nFurther, if $c_{pe}$ decreases then so do $c_{10}$ as well as $c_{11}$ and hence also $\\beta(\\bar{\\varepsilon})$.\nThis implies that larger persistence of excitation (i.e., a lower $c_{pe}\\bar{\\varepsilon}$) not only increases the region of attraction but also reduces the tracking error.\n\n\\begin{remark}\\label{rk:RMPC_practical_application}\nTo apply the proposed data-driven MPC scheme in practice, the following ingredients are required.\nFirst of all, the design parameters in the cost, i.e., $Q,R,\\lambda_\\alpha,\\lambda_\\sigma$, have to be selected suitably.\nThe proof and discussion of Theorem~\\ref{thm:robust} give a qualitative guideline for choosing the regularization parameters.\nFurther, as in the nominal case (Section~\\ref{sec:tec}), measured data with a persistently exciting input as well as a (potentially rough) upper bound on the system's order need to be available.\nFinally, an upper bound on the noise level $\\bar{\\varepsilon}$ is required.\n\nWhile these ingredients suffice to apply the proposed scheme, computing bounds as in~\\eqref{eq:thm_bounds1} and~\\eqref{eq:thm_bounds2} is a difficult task in practice.\nTheorem~\\ref{thm:robust} should be interpreted as a qualitative result which illustrates a) the influence of the regularization parameters on stability and robustness of the presented MPC scheme and b) that large persistence of excitation (compared to the noise level) increases the region of attraction and reduces the tracking error.\nFurther, many of the employed bounds rely on conservative estimates such as $(a+b)^2\\leq2a^2+2b^2$.\nIn principle, it is possible to improve some of the quantitative estimates at the price of a more involved notation.\nSuch improved estimates may lead to meaningful, non-conservative, verifiable conditions on the noise level $\\bar{\\varepsilon}$ for closed-loop stability, and are therefore an interesting issue for future 
research.\n\\end{remark}\n\n\\begin{remark}\nIn the nominal MPC scheme~\\eqref{eq:term_eq_MPC} as well as in its robust modification~\\eqref{eq:robust_MPC}, the data $(u^d,y^d)$ used for prediction is fixed.\nAlternatively, one may update the data using online measurements, given that the closed loop is persistently exciting.\nIndeed, we believe that one of the main advantages of the proposed scheme is its ability to cope (locally) with nonlinear components of the unknown system.\nNonlinear dynamical systems are in general difficult to identify and thus, the proposed approach may be simpler than a model-based MPC scheme with prior system identification.\nAs illustrated in~\\cite{Coulson19} with an application of a similar MPC scheme to a nonlinear stochastic quadcopter system, the approach is already applicable in practice to time-varying or nonlinear dynamics without updating the data online.\nProviding theoretical guarantees for the application of the proposed scheme to a nonlinear system is an interesting and relevant problem for future research.\n\\end{remark}\n\nSimilar to the nominal MPC scheme, it is easy to see that the only free decision variables of Problem~\\eqref{eq:robust_MPC} are $\\alpha(t)$ and $\\sigma(t)$ with at least $m(L+2n)+n$ and $p(L+n)$ free parameters, respectively (cf. Remark~\\ref{rk:complexity_nominal}).\nOn the contrary, to implement a model-based MPC scheme (with state measurements), $mL$ parameters are required.\nWhen neglecting the constraint~\\eqref{eq:robust_MPC5} (cf. 
Remark~\\ref{rk:sigma_bound}), the slack variable $\\sigma(t)$ can be eliminated from~\\eqref{eq:robust_MPC} by directly penalizing the norm of the model mismatch $\\bar{y}(t)-H_{L+n}(\\tilde{y}^d)\\alpha(t)$ in the cost.\nHence, considering the minimal amount of data required for persistence of excitation, Problem~\\eqref{eq:robust_MPC} has roughly the same number of decision variables as a model-based MPC problem.\nIn contrast to the nominal case, however, Theorem~\\ref{thm:robust} implies that larger data horizons $N$ are beneficial for the theoretical properties of the proposed scheme as they typically decrease the constant $c_{pe}$.\nOn the other hand, increasing values for $N$ also lead to an increasing online complexity of~\\eqref{eq:robust_MPC} since $\\alpha(t)\\in\\mathbb{R}^{N-L+1}$, i.e., the presented MPC approach allows for a tradeoff between computational complexity and desired closed-loop performance by appropriately selecting $N$.\n\nOn the contrary, the performance of identification-based MPC typically improves if larger amounts of data are employed, whereas the online complexity is independent of $N$.\nHowever, while the scheme presented in this paper provides end-to-end guarantees for the closed loop using noisy data of finite length, the derivation of non-conservative estimation bounds on system parameters from such data, which would be required for guarantees in model-based MPC, is difficult in general and an active field of research~\\cite{Matni19,Matni19b}.\nAn extensive \\emph{quantitative} comparison of model-based MPC and the proposed data-driven MPC in theory and for practical examples is an interesting issue for future research.\n\n\\section{Example}\\label{sec:example}\nIn this section, we apply the robust data-driven MPC scheme of Section~\\ref{sec:robust} to a four tank system, which has been considered in~\\cite{raff2006nonlinear}.\nThis system is well-known as a real-world example, which is open-loop stable, but can be destabilized 
by an MPC without terminal constraints if the prediction horizon is too short.\nSimilarly, we show in this section that our proposed scheme is able to track a specified setpoint, whereas a scheme without terminal constraints as suggested in~\\cite{Yang15,Coulson19,Coulson19b} leads to an unstable closed loop, unless it is suitably modified.\n\nWe consider a linearized version of the system from~\\cite{raff2006nonlinear}, which takes the form\n\\begin{align*}\nx_{k+1}&=\\begin{bmatrix}0.921&0&0.041&0\\\\\n0&0.918&0&0.033\\\\\n0&0&0.924&0\\\\0&0&0&0.937\\end{bmatrix}x_k\\\\\n&+\\begin{bmatrix}\n0.017&0.001\\\\0.001&0.023\\\\0&0.061\\\\0.072&0\n\\end{bmatrix}u_k,\n\\end{align*}\n\\begin{align*}\ny_k&=\\begin{bmatrix}1&0&0&0\\\\0&1&0&0\n\\end{bmatrix}x_k.\n\\end{align*}\nFor the following application of the robust data-driven MPC scheme, the system matrices are \\emph{unknown} and only measured input-output data is available.\nThe control goal is tracking of the setpoint of the linearized system\n\\begin{align*}\n(u^s,y^s)=\\left(\\begin{bmatrix}1\\\\1\\end{bmatrix},\\begin{bmatrix}\n0.65\\\\0.77\\end{bmatrix}\\right),\n\\end{align*}\nwhich is readily shown to satisfy the dynamics.\nWe consider no constraints on the input or the output.\nIn an open-loop experiment, an input-output trajectory of length $N=400$ is measured, where each input component is sampled uniformly from the interval $[-1,1]$, i.e., $u^d_k\\in[-1,1]^2$, and the output is subject to \nuniformly distributed additive measurement noise with bound $\\bar{\\varepsilon}=0.002$.\nThe online measurements used to update the initial conditions~\\eqref{eq:robust_MPC2} in the MPC scheme are subject to the same type of noise.\n\nWe choose $L=30$ for the prediction horizon as well as the following design parameters\n\\begin{align*}\nQ=3\\cdot I_p,\\>R=10^{-4} I_m,\\>\\lambda_\\sigma=1000,\\>\\lambda_\\alpha\\bar{\\varepsilon}=0.1.\n\\end{align*}\nThe closed-loop output resulting from the application of 
Problem~\\eqref{eq:robust_MPC} in a $1$-step MPC scheme is displayed in Figure~\\ref{fig:four_tank}.\nIt can be seen that the control goal is fulfilled, with only slight deviations from the desired equilibrium.\nOn the other hand, if the same scheme without terminal constraints is applied to the system, then the closed loop is unstable and diverges with the chosen parameters for both a $1$-step and an $n$-step MPC scheme (cf. again Figure~\\ref{fig:four_tank}).\nThis confirms our initial motivation that rigorous guarantees are indeed desirable for data-driven MPC methods, in particular when they are applied to practical systems.\nFurthermore, it can also be observed in Figure~\\ref{fig:four_tank} that an $n$-step version of the proposed MPC scheme with terminal equality constraints yields slightly better tracking accuracy, compared to the $1$-step scheme.\nWe note that, with the above choice of parameters, the non-convex constraint~\\eqref{eq:robust_MPC5} is automatically satisfied without enforcing it explicitly (cf. 
Remark~\\ref{rk:sigma_bound}).\n\n\\begin{figure}\n\t\t\\begin{center}\n\t\t\\subfigure[Closed-loop output $y_1$]\n\t\t{\\includegraphics[width=0.49\\textwidth]{example_y1}}\n\t\t\\subfigure[Closed-loop output $y_2$]\n\t\t{\\includegraphics[width=0.49\\textwidth]{example_y2}}\n\t\t\\end{center}\n\t\t\\caption{Closed-loop output, resulting from the application of the robust data-driven MPC scheme with terminal equality constraints in a $1$-step fashion (TEC), in an $n$-step fashion (TEC, $n$-step), and without terminal equality constraints in a $1$-step fashion (UCON).}\t\\label{fig:four_tank}\n\\end{figure}\n\nTheorem~\\ref{thm:robust} gives qualitative guidelines for the tuning of the design parameters to guarantee robust stability.\nIn the following, we analyze the influence of various parameters on the closed-loop behavior.\nTheorem~\\ref{thm:robust} requires that the regularization parameters lie within specific bounds.\nThis is confirmed for the present example, where the MPC scheme achieves desirable closed-loop performance similar to Figure~\\ref{fig:four_tank} as long as $0.05\\leq\\lambda_\\alpha \\bar{\\varepsilon}\\leq0.5$.\nIf $\\lambda_\\alpha$ is chosen too low, then the closed loop is unstable since the norm of $\\alpha^*(t)$ and hence the amplification of the measurement noise in~\\eqref{eq:robust_MPC1} is too large.\nOn the contrary, if $\\lambda_\\alpha$ is chosen too large, then the asymptotic tracking error increases since the cost term $\\lambda_\\alpha\\bar{\\varepsilon}\\lVert\\alpha^*(t)\\rVert_2^2$ dominates over the tracking cost.\nSimilarly, if $\\lambda_\\sigma<500$, then the closed loop may be unstable since we did not consider the constraint~\\eqref{eq:robust_MPC5} and therefore the slack variable is too large, which has a negative impact on the prediction accuracy.\nAn upper bound on $\\lambda_\\sigma$ beyond which the closed-loop behavior is undesirable could not be observed for the present example.\nFurther, if the 
input weighting $R$ is chosen too low, then the robustness with respect to the noise deteriorates, which can be explained via the bound~\\eqref{eq:thm_proof_value_fcn_diff5}, which grows with $1\/\\lambda_{\\min}(R)$.\nIf the input weighting is chosen large enough, then an MPC scheme without terminal constraints also stabilizes the desired equilibrium.\n\nRegarding the knowledge of the system order $n=4$, it suffices if an upper bound on $n$ is available, i.e., if for instance $n=10$ is used in~\\eqref{eq:robust_MPC}.\nIf the system order is assumed lower than $n=4$, then the closed loop can be unstable.\nThe prediction horizon $L$ can be chosen (roughly) in the range $7\\leq L\\leq 70$.\nThe upper bound can be explained by noting that a larger $L$ implies that the constant $c_{pe}$ increases (compare the discussion after~\\eqref{eq:ass_pe_quantitative}) and therefore the asymptotic tracking error increases.\nOn the other hand, the lower bound is due to the terminal equality constraints which require local controllability.\nMoreover, the steady-state tracking error, which can be seen e.g. 
in Figure~\\ref{fig:four_tank} (b), may increase or decrease, depending on the particular noise instance, and generally increases with the noise level $\\bar{\\varepsilon}$.\nThis again confirms the analysis of Section~\\ref{sec:robust}, which showed exponential stability of a set which grows with the noise level.\nFinally, if the norm of the data input $u^d$ increases (i.e., $c_{pe}$ decreases), then the tracking error decreases.\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn the present paper, we proposed and analyzed a novel MPC scheme with terminal equality constraints, which uses only past measured data for the prediction, without any prior system identification step.\nWe showed that, for a low noise amplitude, for a large ratio between persistence of excitation and the noise level, and for suitably tuned parameters, the closed loop in an $n$-step MPC scheme is recursively feasible and practically exponentially stable w.r.t. the noise level.\nTo the best of our knowledge, we have provided the first analysis regarding recursive feasibility and stability for a purely data-driven (model-free) MPC scheme.\nFurther, the analysis provides qualitative guidelines to choose the design parameters, and it illustrates the influence of other parameters, such as a persistence of excitation bound, on the region of attraction.\nWhile the MPC scheme is simple to implement, its analysis is challenging since we consider two sorts of noise: a) additive output noise and b) noise in the prediction model, similar to a multiplicative, parametric error in model-based MPC.\nIn an application to a practical example, we showed that the proposed MPC scheme guarantees stability, whereas an existing data-driven MPC scheme without terminal constraints leads to an unstable closed loop.\n\nSeveral topics for future research are left open.\nExtensions of the presented data-driven MPC approach to online optimization over artificial equilibria and robust output constraint satisfaction are provided 
in the recent works~\\cite{berberich2020tracking} and~\\cite{berberich2020constraints}, respectively.\nAnother extension, which would be highly interesting but also challenging, is the development of data-driven MPC schemes for \\emph{nonlinear} systems with meaningful closed-loop guarantees.\nFinally, many of the bounds employed in our proofs are conservative, and improving them may lead to less conservative, verifiable conditions on the admissible noise level for closed-loop stability.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nMany modern applications encounter large data sets of very high dimension $d$, which requires new approaches for modeling as well as statistical inference, since classical methods usually fail. A particularly prominent example is the covariance matrix $\\Sigma$ of a random vector of $d$ variables, which has $d^2$ unknown parameters.\nIt is well-known that the sample covariance matrix $\\hat{\\Sigma}_n$ of sample size $n$ is singular if $n<d$. In the sequel, we consider a $d$-variate time series of the form $X_t=G_t(\\boldsymbol{\\epsilon}_t)$ and suppose that, for some $\\tilde{\\Theta}>0$, $q>1$ and $\\beta>1$,\n\\begin{align}\n\t\\begin{split}\n\t\t\\left(\\mathbb{E} |G_{t,l}(\\boldsymbol{\\epsilon}_t) - G_{t,l}(\\tilde{\\boldsymbol{\\epsilon}}_{t,t-j})|^q\\right)^\\frac{1}{q} &\\leq \\tilde{\\Theta} j^{-\\beta},\\qquad j\\geq 0,\\quad l=1,\\ldots,d,\\\\\n\t\t\\left(\\mathbb{E} |G_{t,l}(\\boldsymbol{\\epsilon}_0)|^q\\right)^\\frac{1}{q} &\\leq \\tilde{\\Theta},\\qquad l=1,\\ldots,d. \n\t\\end{split}\\label{eqn:ass-ergodic-proj} \\tag{P.1}\n\\end{align}\n\nThe time series $X_t$ may be nonstationary, and the nonstationarity is made explicit by making the kernel $G_t$ depend on $t$. \nNevertheless, to obtain stronger asymptotic results, we require some regularity in time. 
\nHere, we do not choose classical smoothness conditions, but rather formulate the regularity in terms of the total variation norm of the mapping $t\\mapsto G_t$, measured in $L_2(P)$.\nIn particular, we suppose that for some $\\tilde{\\Gamma}>0$,\n\\begin{align}\n\t\\sum_{t=2}^{n}\\left( \\mathbb{E} |G_{t,l}(\\boldsymbol{\\epsilon}_0) - G_{t-1,l}(\\boldsymbol{\\epsilon}_0)|^2 \\right)^\\frac{1}{2} \n\t&\\leq \\tilde{\\Gamma}\\cdot\\tilde{\\Theta},\\quad l=1,\\ldots,d. \\label{eqn:ass-BV-proj} \\tag{P.2}\n\\end{align}\n\nA few results require us to assume that\n\\[\n \\text{$ X_t $ has finite and stationary eighth moments and $ \\beta > 2 $.} \\tag{P.3}\n\\]\n\nWe highlight that \\eqref{eqn:ass-ergodic-proj} and \\eqref{eqn:ass-BV-proj} only impose conditions on the individual components $l=1,\\ldots,d$, and are thus rather easy to verify even for high-dimensional models. To emphasize, we impose no assumption on the dependency among the $d$ components.\n\nNow consider the projection of the $d$-variate time series $X_t$ to an $m$-dimensional time series $Z_t$, with $m\\ll d$.\nThat is, for a matrix $V\\in\\mathbb{R}^{m\\times d}$, we consider the time series $Z_t = V X_t$.\nWe measure the size of the matrix $V$ by the operator norm, $\\|V\\|_\\infty$, with respect to the maximum vector norm, i.e.\\\n\\begin{align*}\n\t\\|V\\|_\\infty = \\sup_{w\\in\\mathbb{R}^d} \\frac{\\|Vw\\|_\\infty}{\\|w\\|_\\infty} = \\max_{l=1,\\ldots,m} \\|V_{l,\\cdot}\\|_1,\n\\end{align*}\nwhere $V_{l,\\cdot}$ denotes the $l$-th row of the matrix $V$. This choice of norm uniformly controls the $ \\ell_1 $-norms of the projection vectors and is motivated by high-dimensional statistical problems. Indeed, as is well known, if $d$ is large compared to $n$, classical statistical methods usually fail, because even strong signals are overlaid by too many random noise sources (noise accumulation) and spurious correlations occur. 
To overcome the curse of high dimensionality, sparse methods typically use $ \\| \\cdot \\|_r$-norms with $ 1 \\le r < 2$, often leading to $s$-sparse solutions, i.e.\\ solutions with only $s$ active (non-zero) coordinates. For example, sparse PCA constructs $s$-sparse directions $ u $ onto which the data vectors are projected by calculating $ u^T X_t$. The variance of such a coefficient $ u^T X_t $, given by the quadratic form $ u^T \\Sigma_n u $ and coinciding with the associated eigenvalue when $ u $ is an eigenvector, provides information about the importance of that direction and is thus of interest. Similarly, when predicting a future response $\\tilde{Y}_t$ by a lasso regression on $ X_t $, one calculates $ \\hat{\\beta}_n^T \\tilde{X}_t $ for future regressors $ \\tilde{X}_t $, where the lasso estimate $ \\hat{\\beta}_n $ is $s$-sparse and thus selects $s \\ll d$ variables, so that it behaves like a low-dimensional statistic circumventing the issues arising for large dimension. The variance $ \\hat{\\beta}_n^T \\Sigma_n \\hat{\\beta}_n $ of the prediction $ \\hat{\\beta}_n^T \\tilde{X}_t$ is a natural measure of the prediction accuracy. For slightly more general vectors $ v $ with $ \\| v \\|_1 $-norm bounded uniformly in $d$ and $n$, we have the bound $ | v^T X_t | \\le \\| v \\|_1 \\max_{1 \\le j \\le d} | X_{tj} |$ at our disposal. Indeed, such projections have bounded moments under the weak assumptions guaranteed by (P.1), as summarized in the following lemma, whose assertions no longer hold if $ \\| v \\|_2 $ is uniformly bounded but $ \\| v \\|_1 \\to \\infty $.\n\n\\begin{lemma}\\label{lem:jensen} Let $ q \\ge 1 $ and suppose that $ \\max_{1 \\le j \\le d} ( \\mathbb{E} |X_{tj}|^q )^{\\frac{1}{q}} \\le \\tilde{\\Theta} $ for all $d, n$, cf.\\ (P.1). 
Then for any $ v \\in \\mathbb{R}^d $ with $ \\| v \\|_1 \\le M $ for some constant $M$, \n\\[\n \\left(\\mathbb{E} | v^T X_t |^q\\right)^{\\frac{1}{q}} \\le \\| v \\|_1 \\max_{1 \\le j \\le d} ( \\mathbb{E} |X_{tj}|^q )^{\\frac{1}{q}} \\le \\tilde{\\Theta} M.\n\\]\n\\end{lemma}\n\n\n\n\nThe formulation of our first main result requires the two rates\n\\begin{align*}\n\t\\chi(q,\\beta) = \\begin{cases}\n\t\t\\frac{q-2}{6q-4}, & \\beta \\geq \\frac{3}{2}, \\\\\n\t\t\\frac{(\\beta-1)(q-2)}{q(4\\beta-3)-2}, &\\beta\\in(1,\\frac{3}{2}),\n\t\\end{cases}\n\t\\quad\n\t\\xi(q,\\beta) = \\begin{cases}\n\t\t\\frac{q-2}{6q-4}, & \\beta \\geq 3, \\\\\n\t\t\\frac{(\\beta-2)(q-2)}{(4\\beta-6)q-4},& \\frac{3+\\frac{2}{q}}{1+\\frac{2}{q}} < \\beta < 3, \\\\\n\t\t\\frac{1}{2}-\\frac{1}{\\beta}, & 2< \\beta \\leq \\frac{3+\\frac{2}{q}}{1+\\frac{2}{q}}.\\\\\n\t\\end{cases}\n\\end{align*}\n\n\\begin{theorem}\\label{thm:Gauss-ts-proj}\n\tLet $X_t=G_t(\\boldsymbol{\\epsilon}_t)$ with $\\mathbb{E}(X_t)=0$ be such that \\eqref{eqn:ass-ergodic-proj} holds for some $q>2$, $\\beta>1$, and let $Z_t = V X_t$ for some $V\\in\\mathbb{R}^{m\\times d}$ such that $m\\leq cn$ for some $c>0$.\n\tThen, on a potentially different probability space, there exist random vectors $(X_t')_{t=1}^n\\overset{d}{=} (X_t)_{t=1}^n$ and independent, mean zero, Gaussian random vectors $Y_t'$ such that\n\t\\begin{align}\n\t\t\\left(\\mathbb{E} \\max_{k\\leq n} \\left\\|\\frac{1}{\\sqrt{n}} \\sum_{t=1}^k (V X_t' - Y_t') \\right\\|_2^2 \\right)^\\frac{1}{2}\n\t\t&\\leq C \\tilde{\\Theta} \\|V\\|_\\infty \\sqrt{m \\,\\log(n)} \\left( \\frac{m}{n} \\right)^{\\chi(q,\\beta)}. 
\\label{eqn:Gauss-ts-1-proj}\n\t\\end{align} \n\tfor some constant $C=C(q,\\beta,c)$ depending only on $q$, $\\beta$ and $c$.\n\t\n\tIf $\\beta>2$, the local long-run variance $\\Xi_t(V) = \\sum_{h=-\\infty}^\\infty V\\operatorname{Cov}(G_t(\\boldsymbol{\\epsilon}_0),G_t(\\boldsymbol{\\epsilon}_h))V^T$ is well defined.\n\tIf \\eqref{eqn:ass-BV-proj} is satisfied as well, then there exist random vectors $(X_t')_{t=1}^n\\overset{d}{=} (X_t)_{t=1}^n$ and independent, mean zero, Gaussian random vectors $Y_t'\\sim\\mathcal{N}(0,\\Xi_t(V))$ such that\n\t\\begin{align}\n\t\t\\begin{split}\n\t\t\t\\left(\\mathbb{E} \\max_{k\\leq n} \\left\\| \\frac{1}{\\sqrt{n}}\\sum_{t=1}^k (V X_t' - Y_t') \\right\\|_2^2 \\right)^\\frac{1}{2} \n\t\t\t&\\leq C \\tilde{\\Theta} \\|V\\|_\\infty \\sqrt{m \\,\\log(n)} \\left( \\frac{m}{n} \\right)^{\\xi(q,\\beta)}.\n\t\t\\end{split}\n\t\t\\label{eqn:Gauss-ts-2-proj}\n\t\\end{align}\n\\end{theorem}\n\n\nThe rate of the approximation \\eqref{eqn:Gauss-ts-1-proj} is faster than that of \\eqref{eqn:Gauss-ts-2-proj}. \nThe difference is that in the first case, the covariance structure of the approximating Gaussian random vectors $Y_t'$ is not explicit.\nIn the second case, the distribution of the approximating random vectors is explicit, at the price of assuming some temporal regularity of the stochastic process $X_t$.\nIn Section \\ref{sec:changepoint}, we will describe a bootstrap scheme to perform inference based on Theorem \\ref{thm:Gauss-ts-proj}.\n\nNeglecting the high-dimensional context for a moment, the projection $Z_t=V X_t$ may be analyzed as a multivariate, nonstationary time series of fixed dimension $m$. \nIn this setting, \\cite{Wu2011} and \\cite{Karmakar2020} study sequential Gaussian approximations, the latter obtaining optimal rates in $n$ which are faster than the rates of Theorem \\ref{thm:Gauss-ts-proj}. \nHowever, they require a lower bound on the covariance matrices of $Z_t$, while our result does not need this assumption. 
\nThis is particularly relevant because we do not impose any conditions on the dependency among the components of $X_t$. \nAs an extreme example, if all $d$ components are independent, then an $\\ell_1$-bounded projection will lead to some concentration due to averaging, so that a lower bound on the variance of the projection cannot be guaranteed in general.\nThis prevents an application of the result of \\cite{Karmakar2020} in the present situation. \nMoreover, the covariance of the approximating random vectors in \\cite{Karmakar2020} is not explicit.\nInstead, we employ a result on Gaussian couplings presented in \\cite{mies2022}, which is in turn enabled by a recent result of \\cite{eldan2020}.\n\n\n\n\n\n\n\\section{Asymptotics of high-dimensional shrinkage covariance matrix estimators and their projections}\\label{sec:coupling-cov}\n\nFor a $d$-variate centered time series $X_t$ as above, denote the sample covariance matrix by $\\hat{\\Sigma}_{n,n} = \\frac{1}{n}\\sum_{t=1}^n X_t X_t^T$, the partial sums by $\\hat{\\Sigma}_{k,n} = \\frac{1}{n}\\sum_{t=1}^k X_t X_t^T$, $k=1,\\ldots, n$, and its mean by\n\\begin{align*}\n \\Sigma_n \\;=\\; \\mathbb{E} \\hat{\\Sigma}_{n,n} \\;=\\; \\frac{1}{n} \\sum_{t=1}^n \\operatorname{Cov}(X_t).\n\\end{align*}\nHence, we account for nonstationarity of the time series $X_t$ by averaging the covariance matrices at all times $t=1,\\ldots, n$.\n\nThe central observation enabling our subsequent analysis is that the matrix-valued time series $X_t X_t^T$ fits within the framework of Section \\ref{sec:projection}.\n\n\\begin{proposition}\\label{prop:matrix}\n\tIf the time series $X_t = G_t(\\boldsymbol{\\epsilon}_t)$ satisfies \\eqref{eqn:ass-ergodic-proj} resp.\\ \\eqref{eqn:ass-BV-proj} with power $2q$ and factor 
$\\tilde{\\Theta}^2$.\n\\end{proposition}\n\n\nIt has been suggested by \\cite{Steland2017} to perform inference on the covariance matrix via the quadratic form $v^T \\hat{\\Sigma}_{n,n} v$ for some vector $v\\in\\mathbb{R}^d$, i.e., the variance of the projection $ v^T X_t$. As already discussed above, such projections are ubiquitous in statistical problems, for example arising in optimal portfolio selection, as linear predictors in regression models, or when reducing dimensionality by principal component analysis. The associated empirical version, $v^T \\hat{\\Sigma}_{n,n} v = \\frac{1}{n}\\sum_{t=1}^n |v^T X_t|^2$, may be interpreted as the estimated variance of the univariate projection $\\tilde{X}_t=v^TX_t$. The results of \\cite{Steland2017} and \\cite{Steland2018} on Gaussian approximations of $ v^T \\hat{\\Sigma}_n v $, related change-point procedures, and shrinkage estimators impose the assumption that the process $X_t$ is a linear time series with a single innovation process, which may be interpreted as a special type of single-factor model; approximate vector autoregressions and spiked covariance models are nevertheless included under regularity conditions, see \\cite{Steland2019}. Extensions to multi-factor models with infinitely many factors and general multivariate linear processes with finite-dimensional innovations are studied in \\cite{Bours2021}. Although such processes, especially factor models, provide a good description of many application scenarios and are widely used, they nevertheless restrict the dependence structure of the $d$ component processes and rule out phenomena such as conditional heteroscedasticity and nonlinearity. 
Here, by relying on the results of the previous section, we allow for nonstationary nonlinear time series and impose much weaker assumptions on the dependence of the components.\n\n\n\nTo estimate $ \\Sigma_n $ and related quadratic forms $ v^T \\Sigma_n v $, we shall shrink the nonparametric estimator $ \\hat{\\Sigma}_{n,n} $ towards structured targets, in order to overcome\nvarious issues of the sample covariance matrix arising if $d\\gg n$. A covariance matrix $\\Sigma$ is called bandable if $\\Sigma_{l,l'} \\leq g(|l-l'|)$ for some function $g(x)\\to 0$ as $x\\to\\infty$.\nIn this situation, optimal rates of convergence can be achieved via thresholding, see \\cite[Thm.\\ 7]{Cai2016} and references therein.\nIn particular, if $g(x) \\propto |x|^{-\\alpha-1}$ for some $\\alpha>0$, then a rate-optimal estimator is given by the tapering estimator\n\\begin{align*}\n\t\\hat{\\Sigma}_{k,n}^{\\dagger} &= ((\\hat{\\Sigma}_{k,n})_{l,l'} \\omega(|l-l'|))_{l,l'=1}^d, \\\\\n\t\\omega(x) &= \\begin{cases}\n\t\t1, & x\\leq \\frac{\\tau}{2}, \\\\\n\t\t2-\\frac{2x}{\\tau}, & \\frac{\\tau}{2}< x \\leq \\tau, \\\\\n\t\t0, & x>\\tau.\n\t\\end{cases}\n\\end{align*}\nThe optimal rate is achieved by the threshold $\\tau=\\min(n^\\frac{1}{2\\alpha+c},d)$ with $ c = 2 $ under the Frobenius norm $\\|\\cdot\\|_F$ and with $c=1$ under the spectral norm $\\|\\cdot\\|$, see \\cite{Cai2010}. \nIn particular, the optimal rates are\n\\begin{align*}\n d^{-1}\\|\\widehat{\\Sigma}_n - \\Sigma\\|_F^2 \n = \\mathcal{O}\\left( n^{-\\frac{2\\alpha+1}{2\\alpha+2}} + \\frac{d}{n} \\right), \\qquad\n \n \\|\\widehat{\\Sigma}_n - \\Sigma\\|^2 \n = \\mathcal{O}\\left( n^{-\\frac{2\\alpha}{2\\alpha+1}} + \\frac{\\log d}{n} \\right).\n\\end{align*}\n\nAnother approach is to impose a Toeplitz structure, i.e.\\ $\\Sigma_{l,l'} = \\sigma_{|l-l'|}$, for some sequence $ \\sigma_0, \\sigma_1, \\ldots $. 
The corresponding tapered Toeplitz estimator of $\\Sigma$ is \n\\begin{align*}\n\t\\hat{\\Sigma}_{k,n}^\\diamond &= \\left( \\hat{\\sigma}_{|l-l'|} \\omega(|l-l'|) \\right)_{l,l'=1}^d, \\\\\n\t\\hat{\\sigma}_m &= \\frac{1}{d-m} \\sum_{l-l'=m} (\\hat{\\Sigma}_{k,n})_{l,l'}, \\quad 0 \\le m < d,\n\\end{align*}\nwith tapering weights $\\omega(x)$ as above.\nIf $|\\sigma_{m}|\\leq C m^{-\\alpha-1}$, then the optimal rate under the spectral norm is achieved by the threshold choice $\\tau = (nd / \\log(nd))^\\frac{1}{2\\alpha+1}$, see \\cite[Thm.\\ 9]{Cai2016} and \\cite{cai2013}.\n\nThe following theorem shows that the full-sample estimators $ \\hat{\\Sigma}_n^\\dagger = \\hat{\\Sigma}_{n,n}^\\dagger $ and $ \\hat{\\Sigma}_n^\\diamond = \\hat{\\Sigma}_{n,n}^\\diamond $ are consistent in the (rescaled) Frobenius norm, under mild regularity conditions. To the best of our knowledge, these estimators have not yet been studied under such a general nonstationary nonlinear time series model as given by (P.1) and (P.2). Moreover, $\\hat{\\Sigma}_n^\\dagger$ is rate-optimal for bandable covariance matrices. Note that the optimal rates and thresholds have been obtained under the assumption that the $ X_t $ are iid random vectors with mean zero and a bandable or Toeplitz covariance matrix $ \\Sigma $ satisfying further regularity conditions. \nDenote by $ \\Sigma_n^\\dagger $ and $ \\Sigma_n^\\diamond $ the theoretical oracle estimators associated with the estimators $ \\hat{\\Sigma}_{n,n}^{\\dagger} $ and $ \\hat{\\Sigma}_{n,n}^\\diamond $, respectively, obtained by replacing in their definitions the estimates $ (\\hat{\\Sigma}_{n,n})_{i,j} $ by the true covariances $ (\\Sigma_n)_{i,j} $.\nThat is, $\\Sigma_n^\\diamond = \\mathbb{E}( \\hat{\\Sigma}_n^\\diamond )$ and $\\Sigma_n^\\dagger = \\mathbb{E}( \\hat{\\Sigma}_n^\\dagger )$. \nRecall also that $ \\Sigma_n = \\mathbb{E}( \\hat{\\Sigma}_{n,n} )$. 
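In code, the tapering weight omega and the two structured estimators defined above read as follows. This is a minimal sketch; function names and the illustrative data are ours, and no attempt is made at efficiency:

```python
import numpy as np

def taper_weight(x, tau):
    # omega(x): 1 up to tau/2, linear decay on (tau/2, tau], 0 beyond tau
    if x <= tau / 2:
        return 1.0
    if x <= tau:
        return 2.0 - 2.0 * x / tau
    return 0.0

def tapered(S, tau):
    # Sigma^dagger: entrywise tapering of the sample covariance S
    d = S.shape[0]
    W = np.array([[taper_weight(abs(l - lp), tau) for lp in range(d)]
                  for l in range(d)])
    return S * W

def tapered_toeplitz(S, tau):
    # Sigma^diamond: average S along each diagonal, then taper
    d = S.shape[0]
    sigma_hat = np.array([np.diagonal(S, offset=m).mean() for m in range(d)])
    return np.array([[sigma_hat[abs(l - lp)] * taper_weight(abs(l - lp), tau)
                      for lp in range(d)] for l in range(d)])

# Illustrative use on a sample covariance matrix (mean-zero data)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))   # n = 200 observations, d = 20
S = X.T @ X / 200
S_dagger = tapered(S, tau=6)
S_diamond = tapered_toeplitz(S, tau=6)
```

Since both estimators only transform the sample covariance entrywise, the sequential versions based on the first k observations are obtained by applying the same transformations to the partial-sum matrices.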
\nDenote for $A, B \\ge 0 $ the scaled inner product by $ \\langle A,B\\rangle_* = A \\cdot B / d $ and let $ \\| A \\|_{F*} = \\sqrt{\\langle A,A\\rangle_*} $ be the associated scaled Frobenius matrix norm. This scaling ensures that the identity matrix has norm $1$ whatever the dimension $d$. The following theorem studies the above estimators under the general time series model given by (P.1) and (P.2) and provides sufficient conditions on the growth of the dimension $ d $ relative to the sample size when measuring the estimation error in terms of the scaled Frobenius risk. \n\n\\begin{theorem} \n\\label{ConsistencyTapperToeplitz}\n\tSuppose that $ \\{ X_t \\} $ satisfies (P.1) with $q\\geq 4$ and $\\beta>1$, and that $ \\operatorname{Cov}(X_t) $ is bandable with exponent $\\alpha>0$ uniformly for all $ t $, i.e.\\ $\\operatorname{Cov}(X_t)_{i,j} \\leq C |i-j|^{-\\alpha-1}$.\n\t\n\t\\noindent(i)\n\tFor any threshold $\\tau$, it holds that\n \\[\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\Sigma_n \\|_{F*}^2 \n \\;\\leq\\; C(\\tilde{\\Theta},\\beta)\\, \\left\\{ \\tau^{-2\\alpha-1} + \\frac{\\min(\\tau,d)}{n} \\right\\}.\n \\]\n\tChoosing $\\tau = \\min(n^{\\frac{1}{2\\alpha+2}},\\, d)$, it holds that\n\t\\[\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\Sigma_n \\|_{F*}^2 \n \\;\\leq\\; C(\\tilde{\\Theta},\\beta)\\, \\min(n^{-\\frac{2\\alpha+1}{2\\alpha+2}},\\; \\tfrac{d}{n}).\n \\]\n\n \\noindent(ii)\n If $\\Sigma_n$ is a Toeplitz matrix, then \n \\[\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\Sigma_n \\|_{F*}^2 \n \\;\\leq\\; C(\\tilde{\\Theta},\\beta)\\, \\left\\{ \\tau^{-2\\alpha-1} + \\frac{\\min(\\tau,d)}{n} \\right\\}.\n \\]\n\tChoosing $\\tau = \\min(n^{\\frac{1}{2\\alpha+2}},\\, d)$, it holds that\n\t\\[\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\Sigma_n \\|_{F*}^2 \n \\;\\leq\\; C(\\tilde{\\Theta},\\beta)\\, \\min(n^{-\\frac{2\\alpha+1}{2\\alpha+2}},\\; \\tfrac{d}{n}).\n \\]\n\\end{theorem}\n\nIn particular, the optimal rate of convergence of the tapering estimator 
$\\hat{\\Sigma}_n^\\dagger$ carries over to the dependent, nonstationary case, with identical threshold values.\n\nA shrinkage estimator may be defined for any weight $w\\in\\Delta = \\{(w_1, w_2,w_3)\\in[0,1]^3 : w_1+w_2+w_3=1\\}$, as \n\\begin{align*}\n\t\\hat{\\Sigma}_{k,n}^w &= w_1 \\hat{\\Sigma}_{k,n} + w_2 \\hat{\\Sigma}_{k,n}^\\dagger + w_3\\hat{\\Sigma}_{k,n}^\\diamond.\n\\end{align*}\nIt turns out that the theory presented in Section \\ref{sec:projection} may be applied to the quadratic form $v^T\\hat{\\Sigma}_n^w v$ for any shrinkage weight $w$, and hence also for the individual statistics $v^T\\hat{\\Sigma}_n v$, $v^T\\hat{\\Sigma}_n^\\dagger v$, and $v^T\\hat{\\Sigma}_n^\\diamond v$.\nThe crucial observation is that these quadratic forms can be regarded as linear projections of the $(d\\times d)$-variate time series $X_t X_t^T$ with respect to the Frobenius inner product $ A \\cdot B = \\sum_{i,j} a_{ij} b_{ij} $ defined for $ d \\times d $ matrices $ A = (a_{ij})_{i,j} $ and $ B = (b_{ij})_{i,j} $, which coincides with the usual inner product on $ \\mathbb{R}^{d^2} $ of the vectorized matrices. 
Indeed, we have the representations\n\\[\n v^T \\hat{\\Sigma}_{k,n}^\\dagger v = \\nu^2 \\cdot \\hat{\\Sigma}_{k,n}, \\qquad v^T \\hat{\\Sigma}_{k,n}^\\diamond v = \\nu^3 \\cdot \\hat{\\Sigma}_{k,n},\n\\]\nfor the projection weighting matrices $\\nu^1,\\nu^2, \\nu^3\\in\\mathbb{R}^{d\\times d}\\equiv \\mathbb{R}^{d^2}$ given by\n\\begin{align*}\n\t\\nu^1 = (v_l v_{l'})_{l,l'=1}^d, &\\qquad \\|\\nu^1\\|_1 = \\sum_{l,l'=1}^d |v_l v_{l'}| = \\|v\\|_1^2, \\\\\n\t\\nu^2 = (v_l v_{l'} \\omega(|l-l'|))_{l,l'=1}^d, & \\qquad \\|\\nu^2\\|_1 \\leq \\|\\nu^1\\|_1 = \\|v\\|_1^2, \\\\\n\t\\nu^3 = \\left( \\frac{\\omega(|l-l'|)}{d-|l-l'|}\\sum_{i-j=l-l'} v_iv_j\\right)_{l,l'=1}^d, \n\t& \\qquad \\|\\nu^3\\|_1 \\leq \\|\\nu^1\\|_1 = \\|v\\|_1^2\n\\end{align*}\nIn particular, we may write $v^T X_tX_t^T v = \\nu^1 \\cdot (X_t X_t^T)$ as the Frobenius inner product.\n\n\n\n\nWith Theorem \\ref{thm:Gauss-ts-proj}, we obtain the following result.\n\n\\begin{theorem}\\label{thm:Gauss-matrix}\n\tSuppose that the time series $X_t = G_t(\\boldsymbol{\\epsilon}_t)$ satisfies \\eqref{eqn:ass-ergodic-proj} resp.\\ \\eqref{eqn:ass-BV-proj} with power $2q$ and factor $\\tilde{\\Theta}$.\n\tThen, the time series $X_t$ may be defined on a different probability space, such that there exist independent, mean zero, Gaussian random vectors $\\eta_t\\in\\mathbb{R}^3$, such that\n\t\\begin{align*}\n\t\t\\left( \\mathbb{E} \\max_{k=1,\\ldots, n} \\max_{w\\in\\Delta} \\left|n v^T \\left[n\\hat{\\Sigma}_{k,n}^w - \\mathbb{E} \\hat{\\Sigma}_{k,n}^w\\right] v - w^T \\sum_{t=1}^k \\eta_t\\right| \\right)^\\frac{1}{2} \n\t\t\\leq C \\tilde{\\Theta}^2 \\|v\\|_1^2 \\sqrt{\\log(n)} n^{-\\xi(q,\\beta)}.\n\t\\end{align*}\n\tThe covariance of the Gaussian vectors $\\eta_t\\sim\\mathcal{N}(0,\\Xi_t)$ is given by\n\t\\begin{align*}\n\t\t(\\Xi_t)_{i,j} = \\sum_{h=-\\infty}^\\infty \\operatorname{Cov}\\left( \\nu^i \\cdot G_t(\\boldsymbol{\\epsilon}_0)G_t(\\boldsymbol{\\epsilon}_0)^T,\\; \\nu^j \\cdot 
G_t(\\boldsymbol{\\epsilon}_h)G_t(\\boldsymbol{\\epsilon}_h)^T \\right),\\quad i,j\\in \\{1,2,3\\}.\n\t\\end{align*}\n\\end{theorem}\n\nIt is worth mentioning that the bound in Theorem~\\ref{thm:Gauss-matrix} allows $ \\| v \\|_1 $ to increase as the dimension, $d$, or the sample size, $n$, grows, as long as $ \\| v \\|_1 = o( (\\log n)^{-1/4}\\, n^{\\xi(q,\\beta)/2} ) $, so that the bound still tends to zero.\n\n\\section{Optimal shrinkage weights}\\label{sec:optimal}\n\nThe question arises how the shrinkage weights relate to the theoretical (oracle) performance of the resulting shrinkage estimator. To pursue such a study, we first consider the problem of combining the nonparametric estimator with oracles of the two targets within a framework proposed by \\cite{Ledoit2004}, and aim at minimizing the squared Frobenius risk. This leads to a convex but box-constrained optimization problem allowing for an explicit interior solution (if it exists), which we briefly discuss. We propose consistent estimators of the unknown quantities determining the optimization problem and its solution(s), leading to data-adaptive bona fide estimates. \n\nFor simplicity of presentation, we elaborate this for the full sample of size $n$; the corresponding sequential estimates and oracles based on the first $k$ observations can then be derived easily. Write $ \\hat{\\Sigma}_n = \\hat{\\Sigma}_{n,n} $ and recall that $ \\Sigma_n^\\dagger $ and $ \\Sigma_n^\\diamond $ are the theoretical oracle estimators associated with the estimators $ \\hat{\\Sigma}_{n,n}^{\\dagger} $ and $ \\hat{\\Sigma}_{n,n}^\\diamond $, respectively, obtained by replacing in their definitions the estimates $ (\\hat{\\Sigma}_{n,n})_{i,j} $ by the true covariances $ (\\Sigma_n)_{i,j} $. 
Equivalently, $\\Sigma_n^\\diamond = \\mathbb{E} \\hat{\\Sigma}_n^\\diamond$ and $\\Sigma_n^\\dagger = \\mathbb{E} \\hat{\\Sigma}_n^\\dagger$.\n\nWe wish to determine the ideal shrinkage weights such that the resulting (scaled) Frobenius risk is minimized when combining the nonparametric estimator $ \\hat{\\Sigma}_n $ with the oracle estimators $ \\Sigma_n^\\dagger $ and $ \\Sigma_n^\\diamond $. The scaled mean squared error of the sample covariance matrix is given by $\\operatorname{MSE}( \\hat{\\Sigma}_n ) = \\operatorname{MSE}( \\hat{\\Sigma}_n; \\Sigma_n ) = \\mathbb{E} \\| \\hat{\\Sigma}_n - \\Sigma_n \\|_{F*}^2$, and we consider the risk minimization problem \n\\begin{align} \n\\label{RMinP}\n \\begin{split}\n \\min_w& \\ \\ \\mathbb{E} \\| \\Sigma_n^w - \\Sigma_n \\|_{F*}^2 \\\\\n \\text{where}\\quad&\\Sigma_n^w = w_1 \\hat{\\Sigma}_n + w_2 \\Sigma_n^\\dagger + w_3 \\Sigma_n^\\diamond,\\qquad \\ w_1+w_2+w_3=1.\n \\end{split}\n\\end{align}\nFurther, let \n\\[\n E_n^{\\dagger} = \\| \\Sigma_n^\\dagger - \\Sigma_n \\|_{F*}^2, \\qquad E_n^{\\diamond} = \\| \\Sigma_n^\\diamond - \\Sigma_n \\|_{F*}^2\n\\]\nbe the squared approximation errors corresponding to the oracles, and \\[ D_n = \\langle \\Sigma_n^\\dagger - \\Sigma_n, \\Sigma_n^\\diamond - \\Sigma_n \\rangle_*. \\] \nSubstituting $ w_1 = 1-w_2-w_3 $ and observing that $ \\mathbb{E} \\langle \\hat{\\Sigma}_n-\\Sigma_n, A \\rangle_* = 0$ for $ A \\in \\{ \\Sigma_n^\\dagger, \\Sigma_n^\\diamond \\} $, it suffices to determine minimizers of\n\\begin{equation}\n\\label{OptSimplified}\n f( w_2, w_3 ) = (1-w_2-w_3)^2 \\mathbb{E} \\| \\hat{\\Sigma}_n - \\Sigma_n \\|_{F*}^2 + w_2^2 E_n^\\dagger + w_3^2 E_n^\\diamond + 2w_2w_3 D_n.\n\\end{equation}\nAdding the constraints $ w_2, w_3 \\ge 0 $ and $ w_2 + w_3 \\le 1 $ leads to a quadratic programming problem under box constraints, which generally requires numerical algorithms for its solution. 
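For instance, the constrained problem can be minimized by a crude, dependency-free search over the weight simplex; the following sketch (our own illustrative stand-in for a proper quadratic programming solver) implements the objective f and the search:

```python
import numpy as np

def risk_objective(w2, w3, mse, e_dag, e_dia, d_n):
    # f(w2, w3) with w1 = 1 - w2 - w3 substituted in
    return ((1 - w2 - w3) ** 2 * mse + w2 ** 2 * e_dag
            + w3 ** 2 * e_dia + 2 * w2 * w3 * d_n)

def solve_weights(mse, e_dag, e_dia, d_n, steps=300):
    # Minimize f over the triangle w2, w3 >= 0, w2 + w3 <= 1 by grid search
    grid = np.linspace(0.0, 1.0, steps + 1)
    best_val, best = np.inf, (0.0, 0.0)
    for w2 in grid:
        for w3 in grid[grid <= 1.0 - w2 + 1e-12]:
            val = risk_objective(w2, w3, mse, e_dag, e_dia, d_n)
            if val < best_val:
                best_val, best = val, (w2, w3)
    w2, w3 = best
    return 1.0 - w2 - w3, w2, w3

# Symmetric example: all three estimators equally good, no interaction,
# so the minimizer puts equal weight 1/3 on each of them.
w1, w2, w3 = solve_weights(mse=1.0, e_dag=1.0, e_dia=1.0, d_n=0.0)
```

In practice the unknown inputs are replaced by the consistent estimators developed below.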
We discuss the special case of an interior solution, where explicit formulas can be derived, at the end of this section. The optimal weights depend on the unknown quantities $\\operatorname{MSE}(\\hat{\\Sigma}_n) = \\mathbb{E} \\|\\hat{\\Sigma}_n-\\Sigma_n\\|_{F*}^2$, $E_n^\\dagger$, $E_n^\\diamond$, and $D_n$. To obtain a data-driven solution for the optimal weights, we need consistent estimators of these quantities, which are also interesting in their own right.\n\nObserve that \n\\[\n \\operatorname{MSE}(\\hat{\\Sigma}_n) = \\frac{1}{n d} \\sum_{i,j=1}^d \\operatorname{Var}( \\sqrt{n} ( \\hat{\\Sigma}_{n} )_{i,j} ). \n\\]\nUnder the assumptions of Proposition~\\ref{prop:matrix}, $ \\sqrt{n} (\\hat{\\Sigma}_{n} )_{i,j} = \\frac{1}{\\sqrt{n}} \\sum_{t=1}^n (X_t)_i (X_t)_j $ can be approximated in $ L_2 $ by $ \\frac{1}{\\sqrt{n}} \\sum_{t=1}^n Y_{t,i,j} $ for independent Gaussian random variables \n\\[ Y_{t,i,j} \\sim N( 0, \\xi_{t,i,j}^2 ) \\quad \\text{with}\\quad \\xi_{t,i,j}^2 = \\sum_{h=-\\infty}^\\infty \\operatorname{Cov}( G_{t,i}(\\boldsymbol{\\epsilon}_0)G_{t,j}(\\boldsymbol{\\epsilon}_0), \\, G_{t,i}(\\boldsymbol{\\epsilon}_h )G_{t,j}(\\boldsymbol{\\epsilon}_h ) ). 
\\]\nThis yields \n\\[\n\\operatorname{MSE}(\\hat{\\Sigma}_n) = \\frac{1}{n d} \\sum_{i,j=1}^d \\frac{1}{n} \\sum_{t=1}^n \\xi_{t,i,j}^2 + o(1).\n\\]\n\n\n\\cite{Sancetta2008} proposed an estimator of $ \\operatorname{MSE}(\\hat{\\Sigma}_n) $ by estimating the long-run variance parameters \n$\n\\operatorname{Var}( \\sqrt{n} ( \\hat{\\Sigma}_{n} )_{i,j} ) \n$ by the long-run variance estimators \n\\[\n \\widehat{\\operatorname{Var}}( \\sqrt{n} ( \\hat{\\Sigma}_{n} )_{i,j} ) = \\hat{\\Gamma}_{n,i,j}(0) + 2 \\sum_{s=1}^{n-1} K(s/b) \\hat{\\Gamma}_{n,i,j}(s), \n\\]\nwhere $ K $ is a positive weighting function, $b>0$ is a bandwidth, and\n\\[\n \\hat{\\Gamma}_{n,i,j}(s) = \\frac{1}{n} \\sum_{t=1}^{n-s} [(X_t)_i (X_t)_j - (\\hat{\\Sigma}_n)_{i,j} ] \\cdot [ (X_{t+s})_i (X_{t+s})_j - (\\hat{\\Sigma}_n)_{i,j} ].\n\\]\nThe mean squared error may then be estimated as \n\\[\n\\widehat{MSE}(\\hat{\\Sigma}_n) = \\frac{1}{nd} \\sum_{i,j=1}^d \\widehat{\\operatorname{Var}}(\\sqrt{n} (\\hat{\\Sigma}_n)_{i,j}).\n\\] \nThe result is as follows, see \\cite[Lemma~1]{Sancetta2008}.\n\n\\begin{lemma}\\label{lem:MSE} If (P.1)-(P.3) hold, $ K : \\mathbb{R} \\to \\mathbb{R} $ is a decreasing, positive c\\`adl\\`ag function with $\\lim_{x\\to0+} K(x) = 1$ and $ \\int_0^\\infty K^2(x) dx < \\infty$, and $ b \\to \\infty $ with $ b = o( \\sqrt{n} ) $, then\n\\[\n \\left| \\widehat{MSE}(\\hat{\\Sigma}_n) - MSE(\\hat{\\Sigma}_n) \\right| = o\\left( \\frac{d}{n} \\right).\n\\]\n\\end{lemma}\nIn practice, the bandwidth $b$ needs to be chosen in a data-driven way, e.g.\\ by the method described in \\cite{NeweyWest1994}.\n\n\nLet us now derive consistent estimators for the approximation errors $ E_n^\\dagger $ and $ E_n^\\diamond $ and the inner product $D_n$. The quantity $ E_n^\\diamond = \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*}^2$ can be estimated by plugging in the estimators $ \\hat{\\Sigma}_n^\\diamond $ and $\\hat{\\Sigma}_n^\\dagger $. 
Hence, define\n\\[\n \\hat{E}_n^\\diamond = \\| \\hat{\\Sigma}_n^\\diamond - \\hat{\\Sigma}_n^\\dagger \\|_{F*}^2.\n\\]\nTo estimate $ E_n^\\dagger $ observe that $ \\Delta_n^\\dagger = ( \\Delta_{n,ij}^\\dagger )_{i,j} = \\Sigma_n - \\Sigma_n^\\dagger $ is given by\n\\[\n \\Delta_n^\\dagger = \\left\\{\n \\begin{array}{ll}\n 0, & | i - j | \\le \\tau/2, \\\\\n (1-\\omega(|i-j|)) \\Sigma_{n,ij}, & \\tau/2 < |i-j| \\le \\tau, \\\\\n \\Sigma_{n,ij}, & |i-j| > \\tau.\n \\end{array}\n \\right.\n\\]\nEstimation of $\\Delta_n^\\dagger $ is delicate, since this involves estimating $\\sum_{m>\\tau} \\sum_{|i-j|=m} 1 = (d-\\tau)(d-\\tau+1)/2 = O(d^2)$ off-diagonal terms, which would lead to the unsatisfactory condition $ d^2 = o(n) $. For the class of bandable covariance matrices under consideration, we can proceed by introducing a second threshold and estimating only the covariances on the diagonals $ \\tau/2+1, \\ldots, \\sigma $. That is, we estimate $ \\Delta_n^\\dagger $ by $ \\hat{\\Delta}_n^\\dagger = ( \\hat{\\Delta}_{n,ij}^\\dagger )_{i,j} $ with\n\\[\n \\hat{\\Delta}_{n,ij}^\\dagger = \\left\\{\n \\begin{array}{ll}\n 0, & | i - j | \\le \\tau/2, \\\\\n (1-\\omega(|i-j|)) \\hat{\\Sigma}_{n,ij}, & \\tau/2 < |i-j| \\le \\tau, \\\\\n \\hat{\\Sigma}_{n,ij}, & \\tau < |i-j| \\le \\sigma, \\\\\n 0, & |i-j| > \\sigma,\n \\end{array}\n \\right.\n\\]\nfor some threshold $ \\sigma $ with $ \\tau \\ll \\sigma \\ll d $, and put\n\\[\n \\hat{E}_n^\\dagger = \\| \\hat{\\Delta}_n^\\dagger \\|_{F*}^2.\n\\]\nLastly, the estimation of $ D_n = \\langle \\Sigma_n - \\Sigma_n^\\dagger, \\Sigma_n - \\Sigma_n^\\diamond \\rangle_* $ can be based on the above estimators of $ \\Delta_n^\\dagger = \\Sigma_n - \\Sigma_n^\\dagger $ and $ \\Sigma_n - \\Sigma_n^\\diamond $,\n\\[\n \\hat{D}_n = \\langle \\hat{\\Delta}_n^\\dagger, \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\rangle_*.\n\\]\nConsistency of $ \\hat{D}_n $, however, requires an extra condition compared to the 
estimators of the approximation errors. The reason is that $ \\Sigma_n - \\Sigma_n^\\dagger $ tends to zero in the $\\| \\cdot \\|_{F*} $-norm within the class of bandable matrices, whereas the error $ \\Sigma_n - \\Sigma_n^\\diamond $ may diverge. But within the subclass of bandable matrices which are located in a neighborhood of their Toeplitz approximation, in the sense that \\[ \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*} = O(1), \\] consistency can be established, where $ \\Sigma_n^\\diamond $ is determined using the threshold $ n^{\\frac{1}{2\\alpha+c}} $; the thresholds for the estimators are given below. We formulate the results for the high-dimensional case $ d \\ge n^{\\frac{1}{2\\alpha+c}} $.\n\n\\begin{theorem}\n\\label{ConsistencyApproxErrors}\n\tSuppose that $ \\{ X_t \\} $ satisfies (P.1) and (P.2) and $ \\operatorname{Var}(X_t) $ is bandable for all $ t $.\nFurther, assume that $ d \\ge n^{\\frac{1}{2\\alpha+c}} $, and that the estimators $ \\hat{\\Sigma}_n^\\dagger $, $ \\hat{\\Delta}_n^\\dagger $ and $ \\hat{\\Sigma}_n^\\diamond $ are calculated with thresholds $\\tau_n^\\dagger = n^{\\frac{1}{2\\alpha+c}} $, $\\sigma_n^\\dagger = n^{\\frac{1+s}{2\\alpha +c}} $ for some $ 0 < s < 2\\alpha+c-1$ and $ \\tau_n^\\diamond = \\left( \\frac{nd}{\\log nd} \\right)^{\\frac{1}{2\\alpha +1} }$, respectively. 
\n\\begin{itemize}\n\\item[(i)] If $ d = o( n^{\\frac{2\\alpha+2}{2\\alpha + 1}} ) $, then\n\\[\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\hat{\\Sigma}_n^\\dagger - (\\Sigma_n^\\diamond - \\Sigma_n ) \\|_{F*} = o(1)\n\\]\nand\n\\[\n \\mathbb{E} \\left| \n \\| \\hat{\\Sigma}_n^\\diamond - \\hat{\\Sigma}_n^\\dagger \\|_{F*} - \\| \\Sigma_n^\\diamond - \\Sigma_n \\|_{F*} \\right| = o(1),\n\\]\nwhich implies $ | \\hat{E}_n^\\diamond - E_n^\\diamond | \\to 0 $ in probability.\n\n\\item[(ii)] We have\n\\[\n \\mathbb{E} \\| \\hat{\\Delta}_n^\\dagger - \\Delta_n^\\dagger \\|_{F*}^2 = \n \\left\\{\n \\begin{array}{ll}\n O\\left( n^{-\\frac{2\\alpha-1+c-s}{2\\alpha+c}} \\right), & \\qquad s > \\frac{c}{2\\alpha}, \\\\\n O\\left( n^{-\\frac{(s+1)(2\\alpha-1)}{2\\alpha+c}} \\right), & \\qquad s \\le \\frac{c}{2\\alpha}, \\\\ \n \\end{array}\n \\right.\n\\]\nand\n\\[\n \\mathbb{E} \\left| \\| \\hat{\\Delta}_n^\\dagger \\|_{F*} - \\| \\Delta_n^\\dagger \\|_{F*} \n \\right| = o(1),\n\\]\nsuch that $ | \\hat{E}_n^\\dagger - E_n^\\dagger | \\to 0 $ in probability.\n\\item[(iii)] If $ \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*} = O(1) $ and $ d = o( n^{\\frac{2\\alpha+2}{2\\alpha + 1}} ) $, then\n\\[\n | \\hat{D}_n - D_n | \\to 0\n\\]\nin probability.\n\\end{itemize}\n\\end{theorem}\n\n\n\nIf, for example, $ \\alpha = 1 $, then the dimension needs to satisfy $ d^{3/4}=o(n)$, i.e.\\ 
it may grow almost as fast as $ n^{4/3} $, and for $ \\alpha = 2 $ almost as fast as $ n^{6/5} $.\n\nHaving consistent estimators at our disposal, we can propose the following bona fide estimators of the optimal weights, \n\\[ \n \\left( \\hat{w}_1^*, \\hat{w}_2^*, \\hat{w}_3^* \\right) = \\arg\\min_{w\\in \\Delta} w_1^2 \\widehat{MSE}(\\hat{\\Sigma}_n) + w_2^2 \\hat{E}_n^\\dagger + w_3^2 \\hat{E}_n^\\diamond + 2 w_2 w_3 \\hat{D}_n.\n\\]\nThis yields the feasible shrinkage estimator\n\\begin{align*}\n \\hat{\\Sigma}_n^{\\hat{w}^*} = \\hat{w}_1^* \\hat{\\Sigma}_n + \\hat{w}_2^* \\hat{\\Sigma}_n^\\dagger + \\hat{w}_3^* \\hat{\\Sigma}_n^\\diamond.\n\\end{align*}\nThe performance of this shrinkage estimator $\\hat{\\Sigma}_n^{\\hat{w}^*}$ is assessed via simulations in Section \\ref{sec:simulation}.\n\nLet us now briefly discuss the case of an interior solution of the minimization problem \\eqref{OptSimplified}. \nWe have\n\\begin{align*}\n f( w_2, w_3 ) &= ( w_2,w_3) Q_n \\begin{pmatrix} w_2 \\\\ w_3 \\end{pmatrix} - a_n^\\top \\begin{pmatrix} w_2 \\\\ w_3 \\end{pmatrix} + \\operatorname{MSE}(\\hat{\\Sigma}_n),\n \\end{align*}\nwhere\n\\[\nQ_n = \\begin{pmatrix} \\operatorname{MSE}(\\hat{\\Sigma}_n) + E_n^\\dagger & \\operatorname{MSE}(\\hat{\\Sigma}_n) + D_n \\\\ \\operatorname{MSE}(\\hat{\\Sigma}_n) + D_n & \\operatorname{MSE}(\\hat{\\Sigma}_n) + E_n^\\diamond \\end{pmatrix},\n\\quad a_n = 2 \\operatorname{MSE}(\\hat{\\Sigma}_n) \\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}.\n\\]\nHence, we are given a quadratic minimization problem under box constraints, and a natural condition is to assume that $ \\operatorname{det}(Q_n) > 0$. 
If an interior solution $ w^* \\in (0,1)^3$ exists (or the box constraint is omitted), the explicit solution is easily seen to be given by \n\\[\n w_2^* = \\frac{ \\operatorname{MSE}(\\hat{\\Sigma}_n)( E_n^\\diamond - D_n) }{ F_n }, \\qquad w_3^* = \\frac{\\operatorname{MSE}(\\hat{\\Sigma}_n)( E_n^\\dagger - D_n )}{F_n}\n\\]\nwith\n\\[\n F_n = \\operatorname{det}(Q_n) = E_n^\\dagger E_n^\\diamond + \\operatorname{MSE}( \\hat{\\Sigma}_n )( E_n^\\dagger + E_n^\\diamond - 2D_n ) - D_n^2.\n\\]\nThe resulting optimal weight for the sample covariance matrix $ \\hat{\\Sigma}_n $ is\n\\[\n w_1^* = \\frac{E_n^\\dagger E_n^\\diamond - D_n^2}{F_n}. \n\\]\nWe see from the formula for $ w_1^* $ that the optimal solution prefers the nonparametric estimator $ \\hat{\\Sigma}_n $ if the truth $ \\Sigma_n $ cannot be well approximated by a banded or Toeplitz matrix. The banded estimator receives a large weight if the distance between the Toeplitz oracle and $ \\Sigma_n $ is large. Analogously, the optimal solution assigns a large weight to the Toeplitz estimator if $ \\Sigma_n $ is not well approximated by the banded oracle, so that $ \\| \\Sigma_n - \\Sigma_n^\\dagger \\|_F^2 $ is large. As a result, it is optimal to distribute the weights across the estimators $ \\hat{\\Sigma}_n, \\hat{\\Sigma}_n^\\dagger $ and $ \\hat{\\Sigma}_n^\\diamond $ according to how well they approximate $ \\Sigma_n $, where the approximation accuracy is measured by the squared distances of the oracles to the truth and, in the case of the nonparametric estimator, by the MSE. \n\nSince $ D_n^2 \\le E_n^\\dagger E_n^\\diamond $, we have $ w_1^* \\ge 0 $. If $D_n \\le \\min( E_n^\\dagger, E_n^\\diamond) $, then $ w_2^*, w_3^* \\ge 0 $, so that $ w^* \\in \\Delta $. \nIt is interesting to note that $ \\frac{w_2^*}{w_3^*} = \\frac{E_n^\\diamond - D_n}{E_n^\\dagger - D_n} $, i.e., the ratio of the oracle shrinkage weights no longer depends on $ \\operatorname{MSE}(\\hat{\\Sigma}_n) $. 
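The closed-form interior solution can be sanity-checked numerically; the following sketch (illustrative input values, our own function names) verifies that the weights sum to one, that the ratio of the second and third weight is free of the MSE, and that the solution is indeed stationary for the risk:

```python
def interior_weights(mse, e_dag, e_dia, d_n):
    # Explicit interior solution of the weight optimization problem
    f_n = e_dag * e_dia + mse * (e_dag + e_dia - 2 * d_n) - d_n ** 2
    w2 = mse * (e_dia - d_n) / f_n
    w3 = mse * (e_dag - d_n) / f_n
    w1 = (e_dag * e_dia - d_n ** 2) / f_n
    return w1, w2, w3

# Illustrative values with D_n <= min(E_dagger, E_diamond)
mse, e_dag, e_dia, d_n = 0.8, 2.0, 3.0, 0.5
w1, w2, w3 = interior_weights(mse, e_dag, e_dia, d_n)
assert abs(w1 + w2 + w3 - 1.0) < 1e-12        # weights lie on the simplex

# The ratio w2*/w3* does not depend on the MSE of the sample covariance:
_, v2, v3 = interior_weights(2.5, e_dag, e_dia, d_n)
assert abs(w2 / w3 - v2 / v3) < 1e-12

# Stationarity: small perturbations of (w2, w3) do not decrease the risk f
def f(w2_, w3_):
    return ((1 - w2_ - w3_) ** 2 * mse + w2_ ** 2 * e_dag
            + w3_ ** 2 * e_dia + 2 * w2_ * w3_ * d_n)
for dw in (1e-4, -1e-4):
    assert f(w2 + dw, w3) >= f(w2, w3)
    assert f(w2, w3 + dw) >= f(w2, w3)
```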
Further, $ w_{2}^* > w_3^* $ (banding preferable to Toeplitz) if and only if \n\\[ E_n^\\dagger < E_n^\\diamond \\Leftrightarrow \\| \\Sigma_n^\\dagger - \\Sigma_n \\|_{F*} < \\| \\Sigma_n^\\diamond - \\Sigma_n \\|_{F*}. \\]\nEven more is true: The above derivations remain formally true without any changes, if we use an arbitrary unbiased estimator $ \\tilde{\\Sigma}_n $ of $ \\Sigma_n $, i.e., an arbitrary measurable function $ \\tilde{\\Sigma}_n $ of the data with $ \\mathbb{E}( \\tilde{\\Sigma}_n ) = \\Sigma_n $. Therefore, we have the following interesting result: The ratio $\\frac{w_2^*}{w_3^*} $ is a universal characteristic of the risk minimization problem (\\ref{RMinP}) in the sense that it is independent of the nonparametric estimator and completely determined by $ \\Sigma_n $.\n\n\n\\section{Testing for structural breaks}\\label{sec:changepoint}\n\nThe sequential approximation result of Theorem \\ref{thm:Gauss-matrix} may be applied to test for changes in the covariance matrix $\\Sigma_t = \\operatorname{Cov}(X_t, X_t)\\in\\mathbb{R}^{d\\times d}$ of a time series $X_t$.\nTo be precise, we want to test the hypothesis\n\\begin{align*}\n\tH_0: \\Sigma_t \\equiv \\Sigma_0 \\quad\\leftrightarrow\\quad H_1: \\Sigma_t \\not\\equiv \\Sigma_0.\n\\end{align*}\nWe propose the following CUSUM-type statistics\n\\begin{align*}\n\tT_n(w) &= \\sqrt{n} \\max_{k\\leq n} \\left|v^T \\left(\\hat{\\Sigma}^w_{k,n} - \\tfrac{k}{n} \\hat{\\Sigma}^w_{n,n}\\right) v \\right|,\\quad w\\in\\Delta, \\\\\n\tT_n^*(D) &= \\sup_{w\\in D}\\; T_n(w), \\qquad D \\subset \\Delta.\n\\end{align*}\n\nTo determine critical values for the statistic $T_n(w)$ resp.\\ $T_n^*$, we employ a bootstrap scheme suggested in \\cite{mies2022}.\nIn particular, let $\\eta_t\\sim \\mathcal{N}(0,A_t)$ be independent Gaussian random vectors in $\\mathbb{R}^3$, where\n\\begin{align*}\n\tA_t \t&= \\frac{1}{b} \\left(\\sum_{s=t-b+1}^t \\chi_s\\right)^{\\otimes 2} , \\qquad t=b,\\ldots, n, 
\\\\\n\t\\chi_s\t&= \\left[\\nu^j \\cdot (X_s X_s^T - \\hat{\\Sigma}_{n,n}) \\right]_{j=1}^3.\n\\end{align*}\nand $A_t=0$ for $t<b$. Based on the corresponding bootstrap statistics $R_n(w)$ and $R_n^*(D)$, the critical values are defined as\n\\begin{align*}\n\ta_n(\\alpha,w) &= \\inf \\{a>0\\; :\\; P(R_n(w) > a \\,|\\, X_1,\\ldots, X_n) \\leq \\alpha \\}, \\\\\n\ta_n^*(\\alpha,D) &= \\inf \\{a>0\\; :\\; P(R_n^*(D) > a \\,|\\, X_1,\\ldots, X_n) \\leq \\alpha \\}.\n\\end{align*}\nWe suggest rejecting the null hypothesis if\n\\begin{align*}\n\tT_n(w) > a_n(\\alpha,w)+\\delta_n, \\qquad \\text{resp.} \\qquad T_n^*(D) > a_n^*(\\alpha,D)+\\delta_n,\n\\end{align*}\nfor $\\delta_n = 1\/(\\log n)^p$ for some $p\\geq 1$.\n\n\\begin{theorem}\\label{thm:bootstrap}\n\tLet $X_{t} = X_{t,n} = G_{t}^n(\\boldsymbol{\\epsilon}_t)$ be an array of $d_n$-variate time series, such that each kernel $G_{t}^n$ satisfies (P.1) and (P.2), for some $q>8$, $\\beta>2$, and with factors $\\tilde{\\Theta}$ and $\\tilde{\\Gamma}$ not depending on $n$.\n\tChoose the block length $b$ such that $b=b_n \\asymp n^\\zeta$ for some $\\zeta\\in(0,1)$.\n\tIf $\\operatorname{Cov}(X_{t,n}) = \\Sigma_0$ for all $t=1,\\ldots, n$ and some covariance matrix $\\Sigma_0$, then\n\t\\begin{align*}\n\t\t\\limsup_{n\\to\\infty} P\\left( T_n(w) > a_n(\\alpha,w) + \\delta_n \\right) &\\leq \\alpha, \\qquad w\\in\\Delta,\\\\\n\t\t\\limsup_{n\\to\\infty} P\\left( T_n^*(D) > a_n^*(\\alpha,D) + \\delta_n \\right) &\\leq \\alpha, \\qquad D\\subset\\Delta.\n\t\\end{align*}\n\\end{theorem}\n\n\nHence, the proposed bootstrap scheme indeed maintains the specified size when testing for a change in covariance. \nA particular feature of this test is that it is robust against nuisance changes: even under the null hypothesis the time series $X_t$ may be nonstationary, as long as its marginal covariance matrix is constant. \nThe relevance of this kind of robustness was first highlighted by \\cite{Zhou2013} for changes in mean, see also \\cite{Gorecki2018} and \\cite{Pesta2018}. 
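Given simulated draws of the bootstrap statistic, the critical value $a_n(\alpha,w)$ is simply a conditional quantile. The following generic Python sketch is our own illustration (the draws of $R_n(w)$ are replaced by a placeholder, since the full bootstrap construction follows \cite{mies2022}):

```python
import numpy as np

def critical_value(boot_stats, alpha):
    """Smallest a such that the empirical probability of {R > a} is <= alpha,
    mimicking a_n(alpha, w) with the conditional law replaced by B draws."""
    boot_stats = np.sort(np.asarray(boot_stats, dtype=float))
    k = int(np.ceil((1.0 - alpha) * len(boot_stats))) - 1
    return boot_stats[k]

rng = np.random.default_rng(1)
R = rng.standard_normal(1000) ** 2       # placeholder draws of R_n(w)
a = critical_value(R, alpha=0.10)
assert np.mean(R > a) <= 0.10            # rejection rate under the bootstrap law
```

In practice one would additionally add the offset $\delta_n = 1/(\log n)^p$ before comparing $T_n(w)$ with the critical value, as in the rejection rule above.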
Changes in the second moment structure in the presence of a non-constant mean have been studied by \\cite{Dette2018} and \\cite{schmidt2021}, and a robust CUSUM test for a broad class of parameters is introduced in \\cite{mies2021}. Nevertheless, all these references focus on the low-dimensional case, whereas our new test is also applicable in high dimensions.\n\nThe power of the proposed test and bootstrap scheme is hard to investigate analytically, because the model framework is rather broad. Instead, we analyze the behavior of the proposed changepoint test via simulations in the subsequent section.\n\n\n\\section{Simulation}\\label{sec:simulation}\n\nTo demonstrate the implementation of our proposed changepoint test and the benefits of shrinkage, we assess our methodology for three examples of high-dimensional time series.\n\n\\subsection{Model A}\nFirst, we simulate a high-dimensional time series according to the vector ARMA(1,1) model\n\\begin{align*}\n\tX_t = a \\Pi X_{t-1} + b\\epsilon_t + \\epsilon_{t-1}.\n\\end{align*}\nThe matrix $\\Pi\\in\\mathbb{R}^{d\\times d}$ is a permutation matrix, such that the model is non-explosive.\nFurthermore, for any $t$, the $(\\epsilon_t)_i$, $i=1,\\ldots, d$, are chosen as independent, zero-mean random variables having a symmetrized Gamma distribution with shape parameter $\\alpha(t\/n)$, standardized to unit variance. \nWe set $\\alpha(u) = 2+\\sin(2\\pi u)$, $u\\in[0,1]$. \nThus, the autocovariance structure is stationary, but the time series is nonstationary. 
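A minimal simulation of Model A might look as follows (our own sketch: we read ``symmetrized Gamma, standardized to unit variance'' as a random sign times a Gamma variable divided by $\sqrt{\alpha(\alpha+1)}$, and take $\Pi$ to be a random permutation; neither choice is fixed by the text):

```python
import numpy as np

def simulate_model_a(n, d, a=0.5, b=0.5, seed=0):
    """X_t = a * Pi * X_{t-1} + b * eps_t + eps_{t-1} with time-varying
    innovation shape alpha(u) = 2 + sin(2*pi*u)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(d)                     # Pi applied as index permutation

    def eps(t):
        shape = 2.0 + np.sin(2.0 * np.pi * t / n)  # alpha(t/n)
        g = rng.gamma(shape, size=d)
        sign = rng.choice([-1.0, 1.0], size=d)
        return sign * g / np.sqrt(shape * (shape + 1.0))  # unit variance

    X = np.zeros((n, d))
    x_prev, e_prev = np.zeros(d), eps(0)
    for t in range(1, n + 1):
        e = eps(t)
        X[t - 1] = a * x_prev[perm] + b * e + e_prev
        x_prev, e_prev = X[t - 1], e
    return X

X = simulate_model_a(2000, 4)
assert X.shape == (2000, 4)
assert abs(X.mean()) < 0.2                        # innovations are centered
```

The second moment of a sign-symmetrized Gamma variable with shape $\alpha$ and unit scale is $\alpha(\alpha+1)$, which motivates the standardization used here.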
\nIn particular, the long run covariance matrices $\\Xi_t$ which occur in Theorem \\ref{thm:Gauss-matrix} are non-constant.\n\nWe assess our changepoint procedure under the null hypothesis of no change, and under the alternative where $b$ changes at time $t= \\lceil\\frac{n}{2}\\rceil$.\nThe projection vector $v$ is chosen randomly as $N\/\\|N\\|_1$ for a standard Gaussian random vector $N\\sim\\mathcal{N}(0,I_{d\\times d})$.\nThe threshold for the tapering estimator is chosen as $\\tau^{\\dagger}_n = n^{\\frac{1}{2\\alpha+1}} $, and $\\tau_n^{\\diamond} = (nd\/\\log(nd))^\\frac{1}{2\\alpha+1}$. \nNote that these thresholds are optimal for estimation under the spectral norm, in the class of bandable matrices with decay rate $\\alpha$. \nIn practice, $\\alpha$ is unknown, and we use the threshold for $\\alpha=2$ in our simulations.\nTo determine the shrinkage weights, we also use the threshold $\\sigma=\\sigma_n^\\dagger = n^{\\frac{1}{2\\alpha+1}} \\gg \\tau_n^{\\dagger}$.\nFor the bootstrap scheme, the additional offset is chosen as $\\delta_n = \\log(n)^{-4}$, and the block-length is $b_n=\\lceil 5n^{0.2} \\rceil$.\n\nTable \\ref{tab:power} reports the simulated size and power of the bootstrap test for different combinations of $n$ and $d=d_n$, and shrinkage weights $w^{(1)} = (1,0,0)$ and $w^{(2)}=(0.3, 0.3, 0.4)$.\nWe also consider the data-driven shrinkage weight $w^*$ as described in Section \\ref{sec:optimal}, which is not covered by our Gaussian approximation results.\nRegarding the choice of the bandwidth for the estimator $\\widehat{MSE}(\\hat{\\Sigma}_n)$, we employ the method of \\cite{NeweyWest1994} with Bartlett kernel as implemented in the R package \\textsc{sandwich} (see \\cite{sandwich2020}), and we determine a separate bandwidth for each component $\\widehat{\\operatorname{Var}}(\\sqrt{n}(\\hat{\\Sigma}_n)_{i,j})$.\n\nWhen simulating the process, we set $a=0.5$ and $b=0.5$ (resp.\\ $b=0.75$ post-change).\nBased on our simulation results, we find 
that the test maintains the specified size of $10\\%$ and is indeed slightly conservative, as established theoretically in Section \\ref{sec:changepoint}. \nMoreover, the power of the changepoint test increases with sample size, as expected, but also with increasing dimension. \nThis may be explained by the fact that here, the change affects all coordinates, such that the additional information can be used to improve the detection performance. \nIt is also interesting to observe that the additional usage of the tapered and the Toeplitz estimator further improves the power of the test.\nThis is interesting because the shrinkage estimator was originally proposed as a method to improve the performance of a point estimate. \nOur simulation results demonstrate that shrinkage can also be useful for testing.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{ll|cccc}\n & & $n=100$ & $n=500$ & $n=1000$ & $n=2000$ \\\\ \\hline\\hline\n && \\multicolumn{4}{c}{Model A} \\\\ \\hline\\hline\n $w^{(1)}$ & $d=4$ & 0.102 (0.09) & 0.220 (0.08) & 0.433 (0.09) & 0.727 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.101 (0.09) & 0.289 (0.08) & 0.520 (0.08) & 0.824 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.098 (0.07) & 0.282 (0.08) & 0.535 (0.08) & 0.822 (0.09) \\\\ \\hline\n $w^{(2)}$ & $d=4$ & 0.107 (0.08) & 0.334 (0.08) & 0.594 (0.09) & 0.878 (0.10) \\\\ \n & $d=4n^{0.3}$ & 0.155 (0.07) & 0.823 (0.08) & 0.990 (0.09) & 1.000 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.201 (0.05) & 0.967 (0.07) & 1.000 (0.07) & 1.000 (0.08) \\\\ \\hline\n $w^{*}$ & $d=4$ & 0.102 (0.08) & 0.252 (0.08) & 0.458 (0.09) & 0.751 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.119 (0.08) & 0.530 (0.08) & 0.826 (0.08) & 0.985 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.130 (0.06) & 0.697 (0.07) & 0.956 (0.07) & 1.000 (0.09) \\\\ \\hline\\hline\n \n && \\multicolumn{4}{c}{Model B} \\\\ \\hline\\hline\n $w^{(1)}$ & $d=4$ & 0.108 (0.08) & 0.277 (0.08) & 0.503 (0.09) & 0.803 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.102 (0.08) & 0.275 (0.07) & 0.502 (0.08) & 0.804 
(0.09) \\\\ \n & $d=4n^{0.5}$ & 0.101 (0.08) & 0.276 (0.07) & 0.505 (0.08) & 0.798 (0.09) \\\\ \\hline\n $w^{(2)}$ & $d=4$ & 0.137 (0.08) & 0.438 (0.08) & 0.712 (0.09) & 0.938 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.198 (0.08) & 0.861 (0.07) & 0.993 (0.08) & 1.000 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.270 (0.07) & 0.966 (0.08) & 1.000 (0.08) & 1.000 (0.09) \\\\ \\hline\n $w^{*}$ & $d=4$ & 0.115 (0.08) & 0.302 (0.08) & 0.531 (0.09) & 0.829 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.133 (0.08) & 0.528 (0.07) & 0.837 (0.08) & 0.989 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.157 (0.07) & 0.671 (0.07) & 0.941 (0.08) & 0.999 (0.09) \\\\\n \\hline\\hline\n \n && \\multicolumn{4}{c}{Model C} \\\\ \\hline\\hline\n $w^{(1)}$ & $d=4$ & 0.098 (0.08) & 0.273 (0.08) & 0.502 (0.08) & 0.806 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.104 (0.08) & 0.277 (0.08) & 0.510 (0.08) & 0.806 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.094 (0.08) & 0.269 (0.07) & 0.506 (0.08) & 0.799 (0.09) \\\\ \\hline\n $w^{(2)}$ & $d=4$ & 0.124 (0.08) & 0.415 (0.08) & 0.692 (0.09) & 0.929 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.170 (0.08) & 0.774 (0.08) & 0.971 (0.08) & 1.000 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.225 (0.07) & 0.908 (0.08) & 0.997 (0.08) & 1.000 (0.09) \\\\ \n \\hline\n $w^{*}$ & $d=4$ & 0.108 (0.08) & 0.321 (0.08) & 0.573 (0.08) & 0.856 (0.09) \\\\ \n & $d=4n^{0.3}$ & 0.132 (0.08) & 0.460 (0.08) & 0.733 (0.08) & 0.947 (0.09) \\\\ \n & $d=4n^{0.5}$ & 0.143 (0.08) & 0.518 (0.07) & 0.786 (0.08) & 0.952 (0.09) \\\\ \\hline\n \\end{tabular}\n\t\\caption{Power (and size), nominal level $10\\%$. 
p-values based on $10^3$ bootstrap samples, reported values based on $10^4$ Monte Carlo simulations.}\n \\label{tab:power}\n\\end{table}\n\n\n\n\\subsection{Models B \\& C}\nAs a second example, we consider the vector ARMA(1,1) process $X_t$ given by\n\\begin{align*}\n X_t = a X_{t-1} + b\\epsilon_t + \\epsilon_{t-1},\n\\end{align*}\nwith $a$ and $b$ as above and iid random vectors $\\epsilon_t\\sim \\mathcal{N}(0, A)$, and we initialize $X_0 \\sim \\mathcal{N}(0,A\\,(1+2ab+b^2)\/(1-a^2))$ to ensure weak stationarity. \nThe symmetric positive semidefinite matrix $A\\in\\mathbb{R}^{d\\times d}$ is chosen as $A_{i,j}=\\gamma(t_i-t_j)$, for $\\gamma(h) = |h+1|^{2H} + |h-1|^{2H} - 2|h|^{2H}$, which is the autocovariance function of fractional Gaussian noise. \nFor the values $t_i$, we consider the scenarios (B) $t_i=i$, such that $A$ is a Toeplitz matrix, and (C) $t_i = \\sqrt{i}$, such that $A$ is not a Toeplitz matrix.\nFor the bootstrap scheme and for the determination of shrinkage weights, we use the same settings as in Model A.\n\nThe size and power of the changepoint test are presented in Table \\ref{tab:power}.\nAs for Model A, the test turns out to be conservative, and gains power with increasing sample size and dimension. \nIn our simulations, we also find that the deterministic weight $w^{(2)}$ leads to higher power than the data-driven weight $w^*$.\nThus, future work could explore the optimal choice of shrinkage weights from a testing perspective.\n\nUnder the null hypothesis of no change, the marginal covariance matrix of $X_t$ is given by $\\operatorname{Cov}(X_t) = A \\frac{1+2ab + b^2}{1-a^2}$. \nThanks to this explicit formula, we are able to analyze the estimation error of the proposed shrinkage estimator with data-driven shrinkage weights, see Table \\ref{tab:error-ARMA}. \nAs benchmarks, we evaluate the performance of the estimators $\\hat{\\Sigma}_n$, $\\hat{\\Sigma}_n^\\dagger$, and $\\hat{\\Sigma}_n^\\diamond$ individually. 
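The construction of the innovation covariance $A$ in scenarios (B) and (C) can be sketched as follows (our own illustration; the Hurst index $H=0.7$ is an arbitrary choice, since the text does not specify it):

```python
import numpy as np

def gamma_fgn(h, H=0.7):
    """gamma(h) = |h+1|^{2H} + |h-1|^{2H} - 2|h|^{2H}, as in the text."""
    h = np.abs(np.asarray(h, dtype=float))
    return (h + 1.0) ** (2 * H) + np.abs(h - 1.0) ** (2 * H) - 2.0 * h ** (2 * H)

def build_A(t, H=0.7):
    """A_{ij} = gamma(t_i - t_j) for a vector of time points t."""
    t = np.asarray(t, dtype=float)
    return gamma_fgn(t[:, None] - t[None, :], H=H)

d = 20
A_B = build_A(np.arange(d))             # scenario (B): t_i = i
A_C = build_A(np.sqrt(np.arange(d)))    # scenario (C): t_i = sqrt(i)
assert np.allclose(A_B[0, 1], A_B[5, 6])        # (B) is Toeplitz
assert not np.isclose(A_C[0, 1], A_C[5, 6])     # (C) is not
assert np.allclose(A_B, A_B.T)                   # symmetry
assert np.linalg.eigvalsh(A_B).min() > -1e-8     # positive semidefinite
```

Up to a constant factor, $\gamma$ is the autocovariance of unit-lag increments of fractional Brownian motion, so $A$ is positive semidefinite for any real time points $t_i$.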
\nFor Model B, the Toeplitz estimator $\\hat{\\Sigma}_n^\\diamond$ is found to perform best, which could be expected because the true covariance matrix is of Toeplitz type. \nOn the other hand, the shrinkage estimator always improves upon the sample covariance matrix $\\hat{\\Sigma}_n$, and often comes close to the performance of the tapered estimator $\\hat{\\Sigma}_n^\\dagger$.\nFor Model C, the decay of the off-diagonal entries of the true covariance matrix is slower, hence the performance of $\\hat{\\Sigma}_n^\\dagger$ is also worse. \nIndeed, its estimation error is larger than the error of the sample covariance $\\hat{\\Sigma}_n$.\nYet, for $n\\geq 500$, the error of the shrinkage estimator is lower than that of all other estimators. \nThis demonstrates that our proposed shrinkage estimator not only chooses among the three estimators, but may indeed improve the performance even further due to the convex combination.\n\n\n\n\n\\begin{table}\n \\centering\n \\small\n \\begin{tabular}{l|rrrr}\n & \\multicolumn{1}{c}{$n=100$} & \\multicolumn{1}{c}{$n=500$} & \\multicolumn{1}{c}{$n=1000$} & \\multicolumn{1}{c}{$n=2000$} \\\\ \n \n \\hline\\hline\n & \\multicolumn{4}{c}{Model B} \\\\ \\hline\\hline\n \n $d=4$ & 2.50$|$2.32$|$0.87$|$2.34 & 0.52$|$0.48$|$0.18$|$0.48 & 0.26$|$0.24$|$0.09$|$0.24 & 0.13$|$0.12$|$0.05$|$0.12 \\\\ \n $d=4n^{0.3}$ & 6.38$|$3.84$|$0.78$|$4.01 & 2.55$|$1.29$|$0.16$|$1.42 & 1.49$|$0.76$|$0.08$|$0.82 & 0.95$|$0.49$|$0.04$|$0.53 \\\\ \n $d=4n^{0.5}$ & 19.96$|$4.32$|$0.60$|$7.71 & 9.06$|$1.43$|$0.15$|$3.17 & 6.39$|$0.86$|$0.08$|$2.13 & 4.54$|$0.56$|$0.04$|$1.48 \\\\ \n \n \\hline\\hline\n & \\multicolumn{4}{c}{Model C} \\\\ \\hline\\hline\n \n $d=4$ & 2.63$|$2.61$|$1.49$|$2.45 & 0.55$|$0.67$|$0.61$|$0.52 & 0.28$|$0.42$|$0.50$|$0.27 & 0.14$|$0.29$|$0.44$|$0.14 \\\\ \n $d=4n^{0.3}$ & 6.85$|$5.17$|$3.91$|$4.80 & 2.73$|$2.55$|$5.21$|$1.98 & 1.58$|$1.79$|$5.69$|$1.23 & 1.01$|$1.26$|$6.80$|$0.82 \\\\ \n $d=4n^{0.5}$ & 21.25$|$12.47$|$9.05$|$10.64 & 
9.48$|$11.13$|$12.98$|$5.75 & 6.66$|$12.37$|$15.69$|$4.55 & 4.70$|$12.27$|$19.12$|$3.53 \\\\ \\hline\n \\end{tabular}\n \\caption{Mean square estimation error of the marginal covariance $\\Sigma=\\operatorname{Cov}(X_t)$ under stationarity, measured in the scaled Frobenius norm $\\|\\cdot\\|_{F*}$. Errors are reported, in this order, for (i) sample covariance $\\hat{\\Sigma}_n$, (ii) tapered estimator $\\hat{\\Sigma}_n^\\dagger$, (iii) Toeplitz-type estimator $\\hat{\\Sigma}_n^\\diamond$, (iv) the shrinkage estimator $\\hat{\\Sigma}_n^{\\hat{w}^*}$. Reported values based on $10^4$ Monte Carlo simulations.}\n \\label{tab:error-ARMA}\n\\end{table}\n\n\\section{Proofs}\\label{sec:proofs}\n\n\n\\begin{proof}[Proof of Lemma \\ref{lem:jensen}] The result follows from Jensen's inequality:\n\\begin{align*}\n\t\\mathbb{E} \\left| \\sum_{j=1}^d v_j X_{tj} \\right|^{q} \n\t\\le \\| v \\|_{1}^{q} \\mathbb{E} \\left( \\sum_{j=1}^d \\frac{ |v_j| }{\\| v \\|_{1}} | X_{tj} | \\right)^{q} \n\t\\le \\| v \\|_{1}^{q} \\sum_{j=1}^d \\frac{ |v_j| }{\\| v \\|_{1}} \\mathbb{E} | X_{tj} |^{q} \n\t \\le \\| v \\|_{1}^{q} \\max_{1 \\le j \\le d} \\mathbb{E} | X_{tj} |^{q}.\n\\end{align*}\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:Gauss-ts-proj}]\n\tThe time series $Z_t$ may be written as $Z_t = \\tilde{G}_t(\\boldsymbol{\\epsilon}_t)$, with kernel $\\tilde{G}_t = V G_t$, such that we may apply Theorem 3.1 of \\cite{mies2022}.\n\tIt can be verified that $\\tilde{G}_t$ satisfies conditions (G.1) and (G.2) therein, with $\\Gamma=\\tilde{\\Gamma}$ and $\\Theta = \\sqrt{m}\\tilde{\\Theta} \\|V\\|_\\infty$.\n\tTo see this, note that for any $m$-variate random variable $X$, we have\n\t\\begin{align*}\n\t\t(\\mathbb{E} \\|X\\|_2^q)^\\frac{1}{q} \n\t\t\\leq (\\mathbb{E} \\|X\\|_q^q)^\\frac{1}{q} m^{\\frac{1}{2}-\\frac{1}{q}} \n\t\t\\leq m^\\frac{1}{2} \\max_{l=1,\\ldots,m} (\\mathbb{E} |X_l|^q)^\\frac{1}{q},\n\t\\end{align*}\n\tbecause $\\|x\\|_2 \\leq m^{\\frac{1}{2}-\\frac{1}{q}} \\|x\\|_q$ 
for any vector $x\\in\\mathbb{R}^m$.\n\tMoreover, for any $l=1,\\ldots,m$,\n\t\\begin{align*}\n\t\t\\left( \\mathbb{E} |V_{l,\\cdot} G_t(\\boldsymbol{\\epsilon}_0)|^q\\right)^\\frac{1}{q} \n\t\t&= \\left( \\mathbb{E} \\left|\\sum_{r=1}^d V_{l,r} G_{t,r}(\\boldsymbol{\\epsilon}_0)\\right|^q\\right)^\\frac{1}{q} \\\\\n\t\t&\\leq \\sum_{r=1}^d |V_{l,r}| \\, \\left(\\mathbb{E} |G_{t,r}(\\boldsymbol{\\epsilon}_0)|^q\\right)^\\frac{1}{q} \n\t\t\\leq \\|V\\|_\\infty \\tilde{\\Theta},\n\t\\end{align*}\n\tsuch that\n\t\\begin{align*}\n\t\t\\left( \\mathbb{E} \\|V G_t(\\boldsymbol{\\epsilon}_0)\\|^q\\right)^\\frac{1}{q} \\leq \\sqrt{m} \\|V\\|_\\infty \\tilde{\\Theta}.\n\t\\end{align*}\n\tThe same argument can be used to establish (G.1) and (G.2).\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{prop:matrix}]\n\tObserve that for four random variables $Y_1, Y_2, Z_1, Z_2 \\in L_{2q}$, it holds that\n\t\\begin{align*}\n\t\t(\\mathbb{E} |Y_1Z_1 - Y_2 Z_2|^{q})^\\frac{1}{q} \n\t\t&\\leq (\\mathbb{E} |(Y_1-Y_2)Z_1|^{q})^\\frac{1}{q} + (\\mathbb{E} |Y_2(Z_1- Z_2)|^{q})^\\frac{1}{q} \\\\\n\t\t&\\leq (\\mathbb{E} |Y_1-Y_2|^{2q})^\\frac{1}{2q} (\\mathbb{E} |Z_1|^{2q})^\\frac{1}{2q} + (\\mathbb{E} |Y_2|^{2q})^\\frac{1}{2q}(\\mathbb{E} |Z_1-Z_2|^{2q})^\\frac{1}{2q}.\n\t\\end{align*}\n\tInequality \\eqref{eqn:ass-ergodic-proj} may be obtained by setting $Y_1 = G_{t,l}(\\boldsymbol{\\epsilon}_t), Y_2 = G_{t,l}(\\tilde{\\boldsymbol{\\epsilon}}_{t,t-j})$ and $Z_1 = G_{t,l'}(\\boldsymbol{\\epsilon}_t), Z_2 = G_{t,l'}(\\tilde{\\boldsymbol{\\epsilon}}_{t,t-j})$.\n\tInequality \\eqref{eqn:ass-BV-proj} may be obtained by setting $Y_1 = G_{t,l}(\\boldsymbol{\\epsilon}_t), Y_2 = G_{t-1,l}(\\boldsymbol{\\epsilon}_{t})$ and $Z_1 = G_{t,l'}(\\boldsymbol{\\epsilon}_t)$, $Z_2 = G_{t-1,l'}(\\boldsymbol{\\epsilon}_{t})$. 
\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{ConsistencyTapperToeplitz}]\n\t By \\cite[Th.~3.2]{mies2022} we have under Assumption (P.1) for all $d, n$\n\t \\[\n\t \\left( \\mathbb{E} \\max_{k \\le n} \\left| \\sum_{t=1}^k \\left( X_{ti} X_{tj} - ( \\operatorname{Var}(X_t) )_{ij} \\right) \\right|^2 \\right)^{\\frac{1}{2}} \n\t \\leq C(\\Theta,\\beta) n^\\frac{1}{2},\\quad 1\\leq i,j\\leq d,\n\t \\]\n\t such that $ \\hat{\\Sigma}_{n,ij} = ( \\hat{\\Sigma}_n)_{ij} $ satisfies \n\t \\begin{equation}\n\t \\max_{1 \\le i, j \\le d} \\mathbb{E}( \\hat{\\Sigma}_{n,ij} - \\ \\Sigma_{n,ij} )^2 \\leq C(\\Theta,\\beta) n^{-1}.\n\t \\end{equation}\n\t \n\t \\noindent\\underline{To show the bound for $\\hat{\\Sigma}_n^\\dagger$}, observe that\n\t \\[\n\t \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\ \\Sigma_n \\|_{F*}^2 \\leq 4\\, ( A_n + B_n + C_n + D_n )\n\t \\]\n\t where\n\t \\begin{align*}\n\t \t A_n & = d^{-1} \\sum_{m \\le \\tau\/2} \\sum_{|i-j|=m} \\mathbb{E}( \\hat{\\Sigma}_{n,ij} - \\ \\Sigma_{n,ij} )^2, \\\\\n\t \t B_n &= d^{-1} \\sum_{\\tau\/2 < m \\le \\tau} \\sum_{|i-j|=m} \\omega(|i-j|)^2 \\mathbb{E}\\left( \\hat{\\Sigma}_{n,ij} - \\Sigma_{n,ij} \\right)^2, \\\\\n\t \t C_n &= d^{-1} \\sum_{\\tau\/2 < m \\le \\tau} \\sum_{|i-j|=m} (1-\\omega(|i-j|))^2 \\Sigma_{n,ij}^2 , \\\\\n\t \t D_n & = d^{-1} \\sum_{m > \\tau} \\sum_{|i-j|=m} \\Sigma_{n,ij}^2.\n \t\\end{align*}\n Clearly, $ |C_n + D_n| = O( \\tau^{-2\\alpha-1} ) $. \n Furthermore, \n \\begin{align*}\n |A_n + B_n| \n \\leq \\frac{C}{n d} \\sum_{m\\leq \\tau} \\sum_{|i-j|=m} 1 \n \\leq \\frac{C}{n} \\min(\\tau,d).\n \\end{align*}\n Thus, \n \\begin{align*}\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\ \\Sigma_n \\|_{F*}^2\n \\leq C \\left\\{ \\tau^{-2\\alpha-1} + \\frac{\\min(\\tau,d)}{n} \\right\\}.\n \\end{align*}\n The optimal rate is attained for $\\tau = \\min( n^{\\frac{1}{2\\alpha+2}},\\, d)$. 
\n \n \\noindent\\underline{To show the bound for $\\hat{\\Sigma}_n^\\diamond$} observe that by Jensen's inequality\n \\begin{align*}\n \\mathbb{E}( \\hat{\\sigma}_m - \\ \\sigma_m )^2 \n & =\\mathbb{E} \\left( \\frac{1}{d-m} \\sum_{k-l=m} \\left( \\hat{\\Sigma}_{n,kl} - \\ \\Sigma_{n,kl} \\right) \\right)^2 \\\\\n & \\le \\frac{1}{d-m} \\sum_{k-l=m} \\mathbb{E}( \\hat{\\Sigma}_{n,kl} - \\ \\Sigma_{n,kl} )^2\n \\\\\n & \\leq C(\\Theta,\\beta) n^{-1},\n \\end{align*}\nwhere $ \\ \\sigma_m = \\frac{1}{d-m} \\sum_{k-l=m} \\ \\Sigma_{n,kl} $, $ 0 \\le m \\le \\tau$. We have\n\\begin{align*}\n\t\\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\ \\Sigma_n \\|_{F*}^2 \n\t& \\le \\frac{2}{d} \\sum_{m \\le \\tau} \\sum_{i-j=m} \\mathbb{E}( \\hat{\\Sigma}_{n,ij}^\\diamond - \\ {\\Sigma}_{n,ij} )^2 + \\frac{2}{d} \\sum_{m>\\tau} \\sum_{i-j=m} \\ ({\\Sigma}_{n,ij})^2 \\\\\n\t& = \\frac{2}{d} \\sum_{m \\le \\tau} \\sum_{i-j=m} \\mathbb{E}( \\omega(i-j) ( \\hat{\\sigma}_m - \\ \\sigma_m ) )^2 + \\frac{2}{d} \\sum_{m>\\tau} \\sum_{i-j=m} \\ ({\\Sigma}_{n,ij})^2 \\\\\n\t& = G_n + H_n.\n\\end{align*}\nSince $ | {\\Sigma}_{n,ij} | \\leq C \\, m^{-\\alpha-1}$ for $ i-j = m$, we have $|\\sigma_m| \\leq C m^{-\\alpha-1}$.\nThus, $H_n = O(\\tau^{-2\\alpha-1})$.\nMoreover, $G_n = O(\\min(d,\\tau) \/ n)$.\nWe obtain the same bound as in case (i), i.e.\n \\begin{align*}\n \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\ \\Sigma_n \\|_{F*}^2\n \\leq C \\left\\{ \\tau^{-2\\alpha-1} + \\frac{\\min(\\tau,d)}{n} \\right\\}.\n \\end{align*}\n Optimal choice of $\\tau$ yields the same rates as in case (i).\n \\end{proof}\n \n\\begin{proof}[Proof of Lemma \\ref{lem:MSE}] One only needs to verify Condition 2 of \\cite{Sancetta2008}. Essentially, this is a specific Doukhan-Louhichi type weak dependence condition. It is known that Bernoulli shifts satisfy such weak dependence conditions, but applying the results from the literature would require imposing stronger assumptions on $G_t$. 
Therefore, we give a direct proof working under (P.1). Let $ \\bar{\\epsilon}_{i,j} = (\\epsilon_i, \\ldots, \\epsilon_{j+1}, \\epsilon_j', \\epsilon_{j-1}', \\ldots) $ where $ \\{ \\epsilon_t' \\} $ is an independent copy of $\\{ \\epsilon_t \\} $. Fix $u, v \\in \\{1, \\ldots, 4\\} $, $ (i_1,\\ldots, i_u), (s_1,\\ldots, s_u) \\in \\mathbb{N}^u $ and $ (j_1,\\ldots, j_v), (t_1, \\ldots, t_v) \\in \\mathbb{N}^v$ such that $ s_1 \\le \\cdots \\le s_u + r \\le t_1 \\le \\cdots \\le t_v $ for some $ r \\in \\mathbb{N}$. Put $ U_{t_v}(\\boldsymbol{\\epsilon}_{t_v} ) = X_{t_1,j_1} \\cdots X_{t_v,j_v}$ and $ V_{s_u}(\\boldsymbol{\\epsilon}_{s_u} ) = Y_{s_1,i_1} \\cdots Y_{s_u,i_u} $. Then\n\\begin{align*}\n \\operatorname{Cov}( U_{t_v}(\\boldsymbol{\\epsilon}_{t_v} ), V_{s_u}(\\boldsymbol{\\epsilon}_{s_u} ) ) &\n = \\operatorname{Cov}( U_{t_v}( \\boldsymbol{\\epsilon}_{t_v-s_u} ), V_{s_u}(\\boldsymbol{\\epsilon}_0) ) \\\\\n &= \\operatorname{Cov}( U_{t_v}( \\bar{\\boldsymbol{\\epsilon}}_{t_v-s_u,0} ), V_{s_u}(\\boldsymbol{\\epsilon}_0) ) + \\operatorname{Cov}( U_{t_v}( \\boldsymbol{\\epsilon}_{t_v-s_u} ) - U_{t_v}( \\bar{\\boldsymbol{\\epsilon}}_{t_v-s_u,0} ), V_{s_u}(\\boldsymbol{\\epsilon}_0) ) \\\\\n &= \\operatorname{Cov}( U_{t_v}( \\boldsymbol{\\epsilon}_{t_v-s_u} ) - U_{t_v}( \\bar{\\boldsymbol{\\epsilon}}_{t_v-s_u,0} ), V_{s_u}(\\boldsymbol{\\epsilon}_0) ).\n\\end{align*}\nBy telescoping over the random variables $ U_{t_v}(l) = U_{t_v}( \\epsilon_{t_v-s_u}, \\ldots, \\epsilon_{t_v-s_u-l+1}, \\bar{\\boldsymbol{\\epsilon}}_{t_v-s_u-l} ) $, so that $ \\sum_{l>t_v-s_u} U_{t_v}(l) - U_{t_v}(l-1) = U_{t_v}( \\boldsymbol{\\epsilon}_{t_v-s_u} ) - U_{t_v}( \\bar{\\boldsymbol{\\epsilon}}_{t_v-s_u,0} ) $, and noting that $ \\mathbb{E} | (U_{t_v}(l) - U_{t_v}(l-1) ) V_{s_u}(\\boldsymbol{\\epsilon}_0) | \\le \\Theta' l^{-\\beta} $ for some constant $ \\Theta' $ by (P.1), we obtain\n\\[\n\\operatorname{Cov}( U_{t_v}(\\boldsymbol{\\epsilon}_{t_v} ), V_{s_u}(\\boldsymbol{\\epsilon}_{s_u} ) ) = O( 
r^{-\\beta+1} ),\n\\]\nwhich establishes the Doukhan-Louhichi type weak dependence condition required in Condition 2 of \\cite{Sancetta2008}, since $ \\beta > 2$.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{ConsistencyApproxErrors}] The first assertion follows from rearranging terms,\n\t\\[\n\t \\hat{\\Sigma}_n^\\diamond - \\hat{\\Sigma}_n^\\dagger - (\\Sigma_n^\\diamond - \\Sigma_n ) = (\\hat{\\Sigma}_n^\\diamond - \\Sigma_n^\\diamond ) - (\\hat{\\Sigma}_n^\\dagger - \\Sigma_n)\n\t\\]\n\tand the triangle inequality, so that\n\t\\begin{align*}\n\t\\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\hat{\\Sigma}_n^\\dagger - (\\Sigma_n^\\diamond - \\Sigma_n ) \\|_F &\\le \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\Sigma_n^\\diamond \\|_F + \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\Sigma_n \\|_F \\\\\n\t& \\le \\left( \\mathbb{E} \\| \\hat{\\Sigma}_n^\\diamond - \\Sigma_n^\\diamond \\|_F^2 \\right)^\\frac{1}{2} + \\left( \\mathbb{E} \\| \\hat{\\Sigma}_n^\\dagger - \\Sigma_n \\|_F^2 \\right)^\\frac{1}{2}. \n\t\\end{align*}\n\tHence, the result follows from Theorem~\\ref{ConsistencyTapperToeplitz}.\n\t\n\tFor the second assertion, use the fact that if $ A_n - B_n \\le C_n $ on $ A_n \\ge B_n $ and $ B_n-A_n \\le D_n $ on $ A_n < B_n $, for random variables $A_n, B_n $ and bounds $ C_n, D_n $, then\n\t\\begin{align*}\n\t \\mathbb{E} | A_n - B_n | &\\le \\mathbb{E} (A_n-B_n)1_{A_n\\ge B_n} + \\mathbb{E} (B_n-A_n)1_{A_n< B_n} \\le \\mathbb{E} C_n + \\mathbb{E} D_n \\\\\n & \\le C \\, \\sigma^\\dagger n^{-1} + d^{-1} \\sum_{m>\\sigma^\\dagger} \\sum_{|i-j|=m} \\Sigma_{n,ij}^2 \\\\\n & = O( \\sigma^\\dagger n^{-1} ) + O( (\\sigma^\\dagger)^{-2\\alpha+1} ) \\\\\n & =O\\left( n^{-\\frac{2\\alpha-1+c-s}{2\\alpha+c}} \\right) + O\\left( n^{-\\frac{(s+1)(2\\alpha-1)}{2\\alpha+c}} \\right).\n\\end{align*}\nBoth terms are $o(1) $ and the first term dominates if and only if $s>c\/2\\alpha$. 
\n\n(iii) The claim follows directly from the fact that\n\\[\n \\langle \\hat{\\Delta}_n^\\dagger, \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\rangle_* \n - \\langle \\Delta_n^\\dagger, \\Sigma_n^\\dagger - \\Sigma_n^\\diamond \\rangle_* \n = \\langle \\hat{\\Delta}_n^\\dagger - \\Delta_n^\\dagger, \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\rangle_*\n - \\langle \\Delta_n^\\dagger, \\Sigma_n^\\dagger - \\Sigma_n^\\diamond - (\\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond) \\rangle_* \n\\]\nso that\n\\begin{align*}\n\\left| \\langle \\hat{\\Delta}_n^\\dagger, \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\rangle_* \n - \\langle \\Delta_n^\\dagger, \\Sigma_n^\\dagger - \\Sigma_n^\\diamond \\rangle_* \\right| \n & \\quad \\le \\| \\hat{\\Delta}_n^\\dagger - \\Delta_n^\\dagger \\|_{F*} \\| \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\|_{F*} \\\\\n & \\quad + \\| \\Sigma_n - \\Sigma_n^\\dagger \\|_{F*} \\| \\Sigma_n^\\dagger - \\Sigma_n^\\diamond - (\\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond) \\|_{F*} \\\\\n & = O\\left( \\| \\hat{\\Delta}_n^\\dagger - \\Delta_n^\\dagger \\|_{F*} ( \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*} + o_P(1) ) \\right) + o_P(1) \\\\\n & = o_P(1),\n\\end{align*}\nsince\n\\[\n \\| \\hat{\\Sigma}_n^\\dagger - \\hat{\\Sigma}_n^\\diamond \\|_{F*} \n = \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*} + o_P(1)\n\\]\nby (i),\n\\[\n \\| \\hat{\\Delta}_n^\\dagger - \\Delta_n^\\dagger \\|_{F*} \\to 0, \\quad \\text{in probability},\n\\]\nby (ii), \n\\[\n \\| \\Sigma_n - \\Sigma_n^\\dagger \\|_{F*}^2 \\le d^{-1} \\sum_{m>\\tau\/2} \\sum_{|i-j|=m} \\Sigma_{n,ij}^2 = O\\left( (\\tau\/2)^{-2\\alpha+1} \\right)\n = O\\left( n^{-\\frac{2\\alpha-1}{2\\alpha+c}} \\right)\n\\]\nand $ \\| \\Sigma_n - \\Sigma_n^\\diamond \\|_{F*} = O(1) $ by assumption. This completes the proof. 
\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:bootstrap}]\n Note that the statistics $T_n$ and the critical values $a_n(\\alpha,w)$ are linear in $\\|v\\|_1$, such that we may suppose without loss of generality that $\\|v\\|_1=1$.\n If we replace the $\\chi_s$ by the centered random vectors $\\bar{\\chi}_s = [\\nu^j \\cdot (X_sX_s^T - \\Sigma_0)]_{j=1}^3$, then Theorem \\ref{thm:bootstrap} is a special case of \\cite[Prop.\\ 4.3]{mies2022}.\n We briefly show that the estimation error of $\\hat{\\Sigma}_n$ versus the true covariance matrix is negligible.\n To this end, denote the sequential estimators of the asymptotic covariance, for $k=b,\\ldots, n$, by \n \\begin{align*}\n \\hat{Q}(k) = \\sum_{t=b}^k \\frac{1}{b} \\left( \\sum_{s=t-b+1}^t \\chi_s \\right)^{\\otimes 2}, \\qquad\n \\overline{Q}(k) = \\sum_{t=b}^k \\frac{1}{b} \\left( \\sum_{s=t-b+1}^t \\bar{\\chi}_s \\right)^{\\otimes 2} .\n \\end{align*}\n It suffices to show that\n \\begin{align}\n \\mathbb{E} \\max_{k=1,\\ldots, n} \\|\\hat{Q}(k) - \\overline{Q}(k)\\|_\\text{tr} = \\mathcal{O}(\\sqrt{nb}), \\label{eqn:boostrap-estimation}\n \\end{align}\n where $\\|A\\|_\\text{tr} = \\text{tr}((A A^T)^\\frac{1}{2})$ denotes the trace norm of a symmetric matrix.\n Since the matrices $\\hat{Q}(k)$ and $\\overline{Q}(k)$ are always of fixed dimension $3\\times 3$, the choice of norm is in fact irrelevant as all norms are equivalent, and we may use an arbitrary matrix norm $\\|\\cdot\\|$.\n If \\eqref{eqn:boostrap-estimation} holds, then both $\\hat{Q}(k)$ and $\\overline{Q}(k)$ satisfy Theorem 5.1 in \\cite{mies2022}, such that the bootstrap scheme based on $\\hat{Q}(k)$ is valid.\n \n To establish \\eqref{eqn:boostrap-estimation}, denote $\\Delta_n = \\left[\\nu^j \\cdot (\\Sigma_0 - \\hat{\\Sigma}_{n,n})\\right]_{j=1}^3 = \\chi_s - \\bar{\\chi}_s \\in \\mathbb{R}^3$, for all $s$. 
\n Via quadratic expansion, we find that, for some universal $C$ which may change from line to line,\n \\begin{align*}\n &\\mathbb{E} \\max_{k}\\left\\| \\hat{Q}(k) - \\overline{Q}(k) \\right\\| \\\\\n &= \\mathbb{E} \\max_k \\left\\| \\sum_{t=b}^k \\frac{1}{b} \\left[ \\left(\\sum_{s=t-b+1}^t (\\bar{\\chi}_s + \\Delta_n) \\right)^{\\otimes 2} - \\left(\\sum_{s=t-b+1}^t \\bar{\\chi}_s \\right)^{\\otimes 2} \\right] \\right\\| \\\\\n &= \\mathbb{E} \\max_k \\left\\| \\sum_{t=b}^k \\frac{1}{b} \\left[ \\sum_{s=t-b+1}^t \\left(\\bar{\\chi}_s (b\\Delta_n)^T + (b\\Delta_n) \\bar{\\chi}_s^T\\right) + (b\\Delta_n)^{\\otimes 2} \\right] \\right\\| \\\\\n &\\leq C \\, \\mathbb{E} \\sum_{t=b}^n \\left[ 2\\left\\|\\sum_{s=t-b+1}^t \\bar{\\chi}_s \\Delta_n^T \\right\\| + b\\left\\|\\Delta_n\\right\\|^2 \\right] \\\\\n &\\leq C\\sum_{t=b}^n \\sqrt{\\mathbb{E} \\left\\| \\sum_{s=t-b+1}^t \\bar{\\chi}_s \\right\\|^2} \\sqrt{\\mathbb{E} \\|\\Delta_n\\|^2} + nb \\mathbb{E} \\|\\Delta_n\\|^2.\n \\end{align*}\n By virtue of Proposition \\ref{prop:matrix} and Lemma \\ref{lem:jensen}, the centered time series $\\bar{\\chi}_s$ satisfies condition (G.1) of \\cite{mies2022}, such that the Rosenthal-type inequality, Theorem 3.2 therein, is applicable. \n This yields\n \\begin{align*}\n \\sqrt{\\mathbb{E} \\left\\| \\sum_{s=t-b+1}^t \\bar{\\chi}_s \\right\\|^2}\n &\\leq C \\sqrt{b} \\sum_{j=1}^\\infty \\tilde{\\Theta} j^{-\\beta} = \\mathcal{O}(\\sqrt{b}), \n \\end{align*}\n because $\\tilde{\\Theta}$ is fixed in the asymptotic regime of Theorem \\ref{thm:bootstrap}.\n Analogously, \n \\begin{align*}\n \\sqrt{\\mathbb{E} \\|\\Delta_n\\|^2} \n &= \\sqrt{ \\mathbb{E} \\left\\| \\tfrac{1}{n} \\sum_{t=1}^n \\bar{\\chi}_t \\right\\|^2 } = \\mathcal{O}(1\/\\sqrt{n}).\n \\end{align*}\n Hence, $\\mathbb{E} \\max_{k}\\left\\| \\hat{Q}(k) - \\overline{Q}(k) \\right\\| = \\mathcal{O}(\\sqrt{nb} + b) = \\mathcal{O}(\\sqrt{nb})$. 
\n The last inequality holds because $b\\leq n$, and establishes \\eqref{eqn:boostrap-estimation}.\n\\end{proof}\n\n\\section*{Acknowledgements}\nThe authors acknowledge support from Deutsche Forschungsgemeinschaft (DFG, grant STE 1034\/11-2).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe interaction of light with vibrational modes close to the ground state leads to an asymmetry in the generated Stokes and anti-Stokes scattering. \nPhysically, this asymmetry originates from the fact that a mechanical oscillator in the ground state can only absorb energy.\nThe anti-Stokes scattering (resulting in blue-shifted radiation) rate scales with $\\bar n$, while the Stokes scattering rate scales with $\\bar n+1$, where $\\bar n$ denotes the average thermal occupation of the vibrational mode~\\cite{clerk_introduction_2010}.\nThe ratio of the two scattering rates is given by the Boltzmann factor $\\exp(-\\hbar\\Omega_m\/k_BT)$, where $\\Omega_m$ is the frequency of the oscillator and $T$ its temperature,\nwhich therefore allows for absolute and self-calibrated quantum noise thermometry.\nSuch motional sideband asymmetry has been observed in the quantized motion of laser-cooled trapped ions~\\cite{diedrich_laser_1989} and cold atoms in optical lattices~\\cite{jessen_observation_1992}.\nIt has been exploited for Raman noise thermometry in solids~\\cite{hart_temperature_1970, nemanich_raman_1984, cui_noncontact_1998} and molecules~\\cite{eckbreth_laser_2000}, and it is commercially used in fiber optical distributed temperature measurements~\\cite{ukil_distributed_2012}.\n\nCavity optomechanical techniques have enabled to prepare mechanical oscillators close to the ground state via optomechanical sideband cooling~\\cite{schliesser_resolved-sideband_2008,chan_laser_2011,verhagen_quantum-coherent_2012}, and therefore reach a regime where the interaction of mechanical oscillators and optical (or microwave) fields 
has to be treated quantum mechanically.\nIn this context, significant attention has been devoted to sideband asymmetry~\\cite{safavi-naeini_observation_2012,purdy_optomechanical_2015,underwood_measurement_2015,weinstein_observation_2014}, as a signature of the quantum-mechanical nature of the interaction between light and engineered mechanical oscillators.\n\n\nTo exploit sideband asymmetry for an absolute thermometer without the need for calibration~\\cite{purdy_quantum_2017}, understanding all noise contributions is central.\nFor example, if the laser exhibits excess classical noise in the phase or amplitude quadrature, sideband asymmetry is not a faithful measure of the mechanical thermal occupation, but is artificially increased~\\cite{jayich_cryogenic_2012, sudhir_appearance_2017, kippenberg_phase_2013}, resulting in incorrect temperature measurements.\n\nHere we observe experimentally and describe theoretically a novel mechanism that can lead to an artificially modified sideband asymmetry,\nwhich occurs in the presence of multiple laser drives due to the cavity nonlinear response, such as the Kerr effect or the photothermal cavity frequency shift.\nThermal effects are common for both optical~\\cite{verhagen_quantum-coherent_2012,barclay_nonlinear_2005} and microwave~\\cite{suh_thermally_2012} cavities.\nIn a cavity driven by multiple lasers, the cavity nonlinearity leads to a coupling of originally independent thermomechanical sidebands, modifying the sideband asymmetry measurements.\nMore generally, any optomechanical measurement protocol that utilizes multiple drive tones on a single cavity mode can be affected.\nThis therefore includes backaction-evading measurements~\\cite{suh_mechanically_2014}, mechanical squeezing~\\cite{kronwald_arbitrarily_2013, lecocq_quantum_2015,wollman_quantum_2015,pirkkalainen_squeezing_2015}, dissipative squeezing of the cavity field~\\cite{kronwald_dissipative_2014}, entanglement of two mechanical oscillators via reservoir 
engineering~\\cite{woolley_two-mode_2014,ockeloen-korppi_stabilized_2018}, and recently demonstrated non-reciprocal devices~\\cite{bernier_nonreciprocal_2017, peterson_demonstration_2017}.\n\nThis unwanted artificial asymmetry can be eliminated by sufficiently separating the driving tones in frequency, thus operating beyond the bandwidth of the Kerr-type nonlinearity, as verified by independent measurements.\nThe revealed intrinsic quantum motional sideband asymmetry then enables self-calibrated thermometry.\nWe proceed to optomechanically cool a $5.3\\unit{GHz}$ breathing mode of a nanobeam optomechanical crystal, thermalized in a $^3$He buffer gas cryostat at $\\sim 4.5\\unit{K}$,\nto a self-calibrated occupancy of $1.5\\pm 0.2$ quanta (40\\% ground state probability).\n\n\\section{Experimental Results}\n\\label{sec:ExprResults}\n\n\\begin{figure*}\n \\includegraphics[scale=1]{fig_experimental_combined_5.pdf}\n \\caption{\\textbf{Optomechanical crystal and experimental setup.}\n (a)~False-color SEM image of the silicon optomechanical crystal cavity with a waveguide for laser input coupling.\n The path of the tapered fiber is indicated.\n The inset shows the simulated mechanical breathing mode and optical mode.\n (b)~SEM image of the cavity.\n (c)~Scheme for motional sideband asymmetry measurement using two probe tones and a cooling tone.\n (d)~Experimental setup. 
PM, phase modulator; VOA, variable optical attenuator;\nAOM, acousto-optical modulator; BHD, balanced heterodyne detector; SA, spectrum analyzer; NA, network analyzer; PLL, phase-locked loop.}\n\\label{fig:experimental}\n\\end{figure*}\n\nOur experimental system is a nanobeam optomechanical crystal~\\cite{chan_optimized_2012}, shown in Fig.~\\ref{fig:experimental}.\nOptically, the device functions as a single-sided cavity with a partially transmitting input mirror.\nLight is evanescently coupled from a tapered optical fiber into a waveguide that forms part of the nanobeam with efficiency exceeding 50\\%.\nThe optical resonance is at $1540\\unit{nm}$ with a linewidth of $\\kappa\/2\\pi=1.6\\unit{GHz}$, of which $\\kappa\\s{ex} = 0.3\\kappa$ are extrinsic losses to the input mirror.\nThe optical mode is optomechanically coupled to a mechanical breathing mode of frequency $\\Omega_m\/2\\pi=5.3\\unit{GHz}$, strongly confined due to a phononic bandgap, with an intrinsic linewidth of $\\Gamma\\s{int}\/2\\pi=84\\unit{kHz}$.\nThis places the system in the resolved sideband regime~\\cite{schliesser_resolved-sideband_2008}, $\\Omega_m>\\kappa$, as required for ground state cooling~\\cite{wilson-rae_theory_2007,marquardt_quantum_2007}.\nThe optomechanical coupling parameter is $g_0\/2\\pi = 780\\unit{kHz}$.\nFull details of the device design, the setup and the system are given in Appendices~\\ref{sec:fabrication} and~\\ref{sec:expr_details}.\n\nThough optomechanical crystals, owing to their GHz-scale frequencies, have been put in their ground state by employing passive cooling in dilution refrigerators at $\\sim 10\\unit{mK}$, \nsevere heating due to optical absorption precluded continuous measurements with high optomechanical cooperativities~\\cite{meenehan_silicon_2014,meenehan_pulsed_2015,riedinger_non-classical_2016,hong_hanbury_2017,riedinger_remote_2018}.\nOur system is passively cooled to low initial thermal occupancy at $4.5\\unit{K}$ (thermal occupancy at bath 
temperature $\\bar n\\s{th}\\simeq 17$) using a $^3$He buffer gas cryostat.\nThe buffer gas provides additional mechanical damping, increasing $\\Gamma\\s{int}$ by several tens of kHz, to the actual mechanical linewidth $\\Gamma_m$.\nHowever,\nas detailed in Appendix~\\ref{sec:sideband_cooling}\nand shown in the measurements described in this section, the buffer gas environment enables greatly enhanced thermalization of the oscillator and is necessary for cooling close to the ground state.\n\nTo measure motional sideband asymmetry, we employ two probe tones, around the upper and lower motional sidebands, and in addition apply a strong tone near the lower sideband for sideband cooling (Fig.~\\ref{fig:experimental}c).\nSuch multi-tone probing has been applied in previous experiments in the microwave domain~\\cite{weinstein_observation_2014,lecocq_quantum_2015}.\nThe two weak equal-power probes (about $n_c=100$ mean intracavity photons each) are applied at frequencies $\\omega\\s{cav}\\pm(\\Omega_m+\\delta)$.\nThe cooling tone is blue-detuned from the lower probe by the frequency $\\Omega\\s{mod}$, where $\\Gamma_m\\ll\\Omega\\s{mod}\\ll\\kappa$.\nThe red-detuned probe will generate a resonantly enhanced anti-Stokes Raman process, where a probe photon is upconverted in frequency from $-\\Omega_m-\\delta$ to $-\\delta$, while destroying a phonon in the mechanical oscillator. 
\nThe converse occurs for the blue-detuned probe, where a probe photon is downconverted from $\\Omega_m+\\delta$ to $\\delta$ while creating a phonon, thus forming the resonantly enhanced Stokes sideband.\nThe experimental setup is shown in Fig.~\\ref{fig:experimental}d.\nWe use balanced heterodyne detection for quantum-limited measurement of the scattered thermomechanical sidebands in the output spectrum.\nThe overall detection efficiency is $\\eta\\simeq 4\\%$.\nWe measure the symmetrized autocorrelator of the photocurrent \\cite{weinstein_observation_2014,BowenMilburn2015}\n$\\bar S_I(\\omega)=\\frac12\\int_{-\\infty}^\\infty\\langle\\{\\overline{\\hat{I}_{\\text{out}}(t+t'),\\hat{I}_{\\text{out}}(t')}\\}\\rangle e^{i\\omega t}dt$, where the average over $t'$ is denoted by an overbar.\nThe one-sided heterodyne spectrum takes the form (in the resolved sideband limit)\n\\begin{equation}\n \\begin{aligned}\n\t {}&\\bar S_I(\\omega+\\Delta_{\\text{LO}})=1\n\t+\\eta\\Gamma_{\\text{tot}}\\left\\{\\frac{\\Gamma_m\\C_{\\text{red}}\\bar n}{\\Gamma_{\\text{tot}}^2\/4+(\\omega+\\delta)^2}\\right.\\\\\n\t&+\\left.\\frac{\\Gamma_m\\C_{\\text{cool}}\\bar n}{\\Gamma_{\\text{tot}}^2\/4+(\\omega+\\delta-\\Omega_{\\text{mod}})^2}\n\t+\\frac{\\Gamma_m\\C_{\\text{blue}}(\\bar n+1)}{\\Gamma_{\\text{tot}}^2\/4+(\\omega-\\delta)^2}\\right\\},\n \\end{aligned}\n \\label{eq:ideal_theory}\n\\end{equation}\nwhere we have introduced cooperativities $\\C_{\\text{red,blue,cool}}\\equiv 4g_{r,b,c}^2\/(\\kappa\\Gamma_m)$, the light-enhanced optomechanical coupling $g_{r,b,c}\\equiv g_0\\sqrt{n_{r,b,c}}$, reduced occupancy $\\bar n=\\Gamma_m\\bar n\\s{th}\/\\Gamma_{\\text{tot}}$, local oscillator detuning $\\Delta_{\\text{LO}}=\\omega_{\\text{LO}}-\\omega_{\\text{cav}}$, and incorporated the effect of sideband cooling into a broadened mechanical linewidth $\\Gamma_{\\text{tot}} \\simeq \\Gamma_m (1+\\C_{\\text{cool}})$.\nIn the last expression we assume weak probe tones of equal strength, such 
that they do not contribute to the total mechanical damping, and take the good cavity limit $\\kappa\/\\Omega_m\\to\\infty$, thus neglecting the quantum backaction limit~\\cite{wilson-rae_theory_2007,marquardt_quantum_2007}.\nWe further assume that the cavity linewidth is much larger than the detuning and effective mechanical linewidth $\\kappa\\gg \\delta,\\Gamma_{\\text{tot}}$, such that the optical susceptibility can be evaluated on resonance, and neglect classical laser noise. \nEquation~\\eqref{eq:ideal_theory} is normalized to the shot noise floor, includes the overall detection efficiency $\\eta$, \nand we have chosen to show only sidebands close to resonance; the others are heavily suppressed by the cavity response.\n\nFor our measurement, the red and blue cooperativities are chosen to be equal, $\\C_{\\text{red}}=\\C_{\\text{blue}}\\equiv\\C$,\nsuch that the Lorentzian probe tone sidebands centered around $\\Delta_{\\text{LO}}\\pm\\delta$\nhave weights proportional to $\\C \\bar n$ and $\\C(\\bar n+1)$.\nThe asymmetry can be interpreted as a consequence of the commutation relations of either mechanical or optical creation and annihilation operators.\nWhich one of the two is responsible depends on the detection scheme used.\nFor heterodyne detection the sideband asymmetry arises from quantum backaction (i.e., the optical input vacuum fluctuations), while for photon counting of the sidebands (as done in Raman scattering experiments of solids) the asymmetry arises from the mechanical commutation relations \\cite{khalili_quantum_2012,weinstein_observation_2014,borkje_heterodyne_2016}.\nFrom the sidebands we extract $\\bar n$ either directly from the red-detuned probe sideband weight (with suitable calibration as outlined in Appendix~\\ref{sec:sideband_cooling})\nor from the ratio of the weights of the two sidebands, as in previous optomechanical experiments~\\cite{purdy_optomechanical_2015, underwood_measurement_2015}.\nRaman thermometry can be done using 
the cooling tone sideband instead of the red-detuned probe sideband, as long as one accounts for the difference in optomechanical coupling.\nClassical laser noise, which can be a source of artificially enhanced asymmetry~\\cite{jayich_cryogenic_2012,safavi-naeini_laser_2013,sudhir_appearance_2017}, does not affect our measurements, as shown in Appendix~\\ref{sec:excess_noise}.\n\n\\begin{figure*}[ht]\n \\includegraphics[scale=1]{fig_cooling_curve_true_fake_asymmetry.pdf}\n \\caption{\\textbf{Artificial and quantum sideband asymmetry in optomechanical sideband cooling.} In (a), (b), the cooling tone is detuned $80\\unit{MHz}$ from the red-detuned probe.\n (a)~Inferred $\\bar n$ ($\\bar n+1$) from anti-Stokes (Stokes) mechanical sidebands, based on total thermomechanical noise.\n Overall cooling is observed; however, a clear discrepancy exists, since the cooling tone and the red-detuned probe should yield the same value, equal to the occupancy $\\bar n$.\n Inset: Example of an actual observed noise spectrum, colored for different tones as in the main panel.\n Note that the choice of local oscillator frequency (Fig.~\\ref{fig:experimental}c) leads to the noise spectrum `folded over' and to the cooling tone sideband located close to the probe sidebands.\n The $\\omega$-axis does not strictly correspond to Eq.~\\eqref{eq:ideal_theory} in this case.\n (b)~The occupancy $\\bar n$ obtained from motional sideband asymmetry, computed both from the ratio of red- and blue-detuned probes and from the ratio of cooling tone and blue-detuned probe, which show strong disagreement.\n Error bars represent $\\pm 20\\unit{MHz}$ tuning accuracy; see text for discussion.\n (c), (d) show the data corresponding to (a), (b), respectively, only with the cooling tone detuned $220\\unit{MHz}$ from the red-detuned probe, where the effect of the thermal Kerr-type nonlinearity is strongly diminished.\n In~(c), the lower curve is a fit according to the model of Appendix~\\ref{sec:sideband_cooling} with the 
average asymmetry of the last two points, shown in (d), used for calibration; one quantum is added to result in the upper curve, coinciding with the Stokes sideband data.\n Inset (c)~shows spectra of the last data point. \n}\n\\label{fig:fake_asymmetry}\n\\end{figure*}\n\nThroughout the experiment, the power of the cooling tone is increased, lowering the mechanical occupancy $\\bar{n}$ through sideband cooling, while the probe tones are held constant.\nIn Fig.~\\ref{fig:fake_asymmetry}a,b we show thermometry results for a detuning of the red-detuned probe from the cooling tone of $\\Omega\\s{mod}\/2\\pi=80\\unit{MHz}$.\nIn Fig.~\\ref{fig:fake_asymmetry}a we use the total sideband power to infer $\\bar n$ ($\\bar n+1$) from the anti-Stokes (Stokes) mechanical sidebands, plotted against the cooling tone intracavity photons.\nStrikingly, the curves of the cooling tone and red-detuned probe \\emph{do not coincide}, making it impossible to associate $\\bar n$ with either.\nThe discrepancy is also reflected in Fig.~\\ref{fig:fake_asymmetry}b, where \\emph{quantum sideband asymmetries} of either of the two anti-Stokes sidebands compared to the Stokes sideband yield \\emph{different} $\\bar n$.\nWe next repeat the measurement with larger separation between the red-detuned probe and the cooling tone, with\n$\\Omega\\s{mod}\/2\\pi=220\\unit{MHz}$ (Fig.~\\ref{fig:fake_asymmetry}c,d).\nThe values of $\\bar n$ inferred from both anti-Stokes sidebands \\emph{now show excellent agreement} (Fig.~\\ref{fig:fake_asymmetry}c), and $\\bar n$ inferred from motional sideband asymmetry also agrees within experimental errors\n(Fig.~\\ref{fig:fake_asymmetry}d). 
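\nFor reference, the ratio-based inference used here follows directly from Eq.~\\eqref{eq:ideal_theory}: writing (for illustration) $W\\s{aS}$ and $W\\s{S}$ for the anti-Stokes and Stokes sideband weights, equal probe cooperativities imply\n\\begin{equation*}\n r\\equiv\\frac{W\\s{aS}}{W\\s{S}}=\\frac{\\bar n}{\\bar n+1}\n \\quad\\Longrightarrow\\quad\n \\bar n=\\frac{r}{1-r},\n\\end{equation*}\nso that, e.g., a measured ratio $r=0.6$ corresponds to $\\bar n=1.5$. As a consistency check of the quoted bath occupancy, $\\bar n\\s{th}=[\\exp(\\hbar\\Omega_m\/k_BT)-1]^{-1}\\approx[\\exp(0.057)-1]^{-1}\\approx 17$ at $T=4.5\\unit{K}$ and $\\Omega_m\/2\\pi=5.3\\unit{GHz}$.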
Our measurements thus show that the presence of the cooling tone, when tuned closely to the red-detuned probe, modifies the quantum-mechanical motional sideband asymmetry.\n\nTo investigate this novel phenomenon further, we perform an auxiliary experiment, shown in Fig.~\\ref{fig:dual_tone_all}a.\nOmitting the blue-detuned probe, we apply two \\emph{equal power} tones near the lower mechanical sideband (still referred to as red-detuned probe and cooling tone).\nAs illustrated in Fig.~\\ref{fig:dual_tone_all}a, the observed anti-Stokes sidebands are not equal even for $\\Omega\\s{mod}\\ll\\kappa$, with the higher-frequency sideband stronger, in disagreement with standard optomechanical theory.\nKeeping the red-detuned probe fixed at $\\Delta=-\\Omega_m$ and scanning the cooling tone, Fig.~\\ref{fig:dual_tone_all}b shows the ratio of the two sidebands, normalized to the expected bare optomechanical response, vs.~$\\Omega\\s{mod}$.\nThe normalized ratio decreases with $\\Omega\\s{mod}$, only reaching the expected behavior at $\\Omega\\s{mod}\\gtrsim 200\\unit{MHz}$.\nNote that, depending on the power used, the ratio may be very large for small $\\Omega\\s{mod}$, e.g.~exceeding 2 for $\\Omega\\s{mod}\/2\\pi=20\\unit{MHz}$ in Fig.~\\ref{fig:dual_tone_all}b.\nFigure~\\ref{fig:dual_tone_all}c shows the noise spectra at low $\\Omega\\s{mod}\/2\\pi=4\\unit{MHz}$ for increasing tone power, where additional higher-order sidebands, spaced by $\\Omega\\s{mod}$, are observed.\nThese measurements point to the presence of a cavity nonlinearity, which couples the thermomechanical sidebands and modifies the observed asymmetry.\n\nThe existence of cavity nonlinearities due to e.g.~thermal effects is not unusual and has been observed in both bulk optical cavities as well as ultrasmall optical mode volume resonators~\\cite{ilchenko_thermal_1992,fomin_nonstationary_2005,rokhsari_observation_2005,barclay_nonlinear_2005}.\nWhile there are several potential sources for a ``Kerr-type'' 
effect, we consider the photothermorefractive frequency shift (PTRS) as the dominant mechanism.\nPhysically, photons circulating in the cavity are absorbed, leading to heating, and thus shifting of the cavity resonance (e.g., via the temperature-dependent refractive index). The temperature deviation $\\delta T$ is governed in the simplest case by a single timescale and temperature-independent absorption\n\\begin{align}\n\t\\delta\\dot T(t)&=-\\gamma\\s{th}\\delta T(t)+g_{\\text{abs}}|\\bar a(t)|^2.\n\t\\label{eq:temperature_deviation}\n\\end{align}\nHere the absorption rate is given by $g_\\mathrm{abs}$, and the thermalization rate by $\\gamma_\\mathrm{th}$.\nIn the presence of two tones, the cavity field intensity beats $n_c(t)=|\\bar a(t)|^2\\propto\\text{const}+\\cos(\\Omega_{\\text{mod}}t)$, which, via the absorption heating, causes a periodic modulation of the cavity frequency, captured by the detuning\n\\begin{equation}\n\t\\Delta\\s{th}(t)=\\Delta_k \\exp(i\\Omega_{\\text{mod}}t)+\\text{c.c.}\n\t\\label{eq:PTRS}\n\\end{equation}\nwhere \n\\begin{equation}\n\t\\Delta_{k}=\\frac{g\\s{PT}g_{\\text{abs}}\\bar a_{c}\\bar a_{r}}{\\sqrt{\\gamma \\s{th}^2+\\Omega_{\\text{mod}}^2}}e^{-i\\phi_{\\text{th}}},\n\t\\quad \\phi_{\\text{th}}=\\tan^{-1}\\left(\\frac{\\Omega_{\\text{mod}}}{\\gamma_{\\text{th}}}\\right).\n\t \\label{eq:Delta_k}\n\\end{equation}\nHere $\\bar a_{c},\\bar a_{r}$ are the amplitudes of the intracavity fields produced by cooling tone and red-detuned probe, and\n$g\\s{PT}$ relates the cavity frequency shift to the temperature deviation via $\\Delta_{\\text{th}}=g\\s{PT}\\delta T$. 
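\nTwo limits of Eq.~\\eqref{eq:Delta_k} are instructive. For $\\Omega\\s{mod}\\ll\\gamma\\s{th}$ the thermal response follows the intensity beat adiabatically,\n\\begin{equation*}\n \\Delta_k\\to\\frac{g\\s{PT}g_{\\text{abs}}\\bar a_{c}\\bar a_{r}}{\\gamma\\s{th}},\\qquad\\phi_{\\text{th}}\\to 0,\n\\end{equation*}\nrecovering the full quasi-static shift, whereas for $\\Omega\\s{mod}\\gg\\gamma\\s{th}$,\n\\begin{equation*}\n |\\Delta_k|\\approx\\frac{g\\s{PT}g_{\\text{abs}}\\bar a_{c}\\bar a_{r}}{\\Omega\\s{mod}}\\to 0,\\qquad\\phi_{\\text{th}}\\to\\frac{\\pi}{2},\n\\end{equation*}\nso the modulation is progressively suppressed; this is why separating the drive tones well beyond $\\gamma\\s{th}$ removes the artificial asymmetry.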
Consequently the static thermal cavity shift per mean intracavity photon is given by $g\\s{0,th}=g\\s{PT}{g\\s{abs}}\/\\gamma\\s{th}$.\n\n\n\\begin{figure*}\n \\includegraphics[width=\\linewidth]{fig_dual_tone_all_5.pdf}\n \\caption{\\textbf{Observation of asymmetric noise spectra due to Kerr-type nonlinearity.}\n (a)~Pumping scheme with two equal red-detuned pumps.\n (b)~The peak ratio of the observed spectra, relative to the ratio expected from bare optomechanical response, vs.~$\\Omega\\s{mod}$ for constant power of $n\\s{c}=640$ intracavity photons, showing decreased effect for higher modulation frequencies.\n The inset shows the spectra for data points of the same color in the main panel.\n (c)~Increasing the pump power for a constant $\\Omega\\s{mod}\/2\\pi=4\\unit{MHz}$, showing higher-order peaks.\n Also shown are fits to the analytic model of Sec.~\\ref{sec:theory}, see text for details.\n (d)~Calibration of sideband cooling using motional sideband asymmetry, with $\\Omega\\s{mod}\/2\\pi=220\\unit{MHz}$, free from Kerr-type artificial asymmetry. See text for more details. 
The oscillator is cooled down to $\\bar n=1.5$ phonons.\n}\n\\label{fig:dual_tone_all}\n\\end{figure*}\n\nThe cavity frequency modulation mediates processes in which photons are scattered from a frequency $\\omega$ to $\\omega\\pm\\Omega_{\\text{mod}}$, which causes coupling of the sidebands with strength $\\Delta_k$.\nIn Section~\\ref{sec:theory}, we incorporate the PTRS into standard optomechanical theory to model our experiments, and use a Floquet approach, based on writing the cavity and mechanical annihilation operators as sums of Fourier modes, to solve the time-dependent quantum Langevin equations.\n\nIn the first approximation, corresponding to retaining the dominant Fourier modes, the output spectrum takes the same form as above \\eqref{eq:ideal_theory} but with modified cooperativities\n\\begin{subequations}\n \t\\begin{align}\n \t\t\\tilde\\C\\s{red}&=\\C\\s{red}\\left|1-2ig_c\\Delta_k \/(g_r\\kappa)\\right|^2,\\\\\n\t\t\\tilde\\C\\s{cool}&=\\C\\s{cool}\\left|1-2ig_r\\Delta_k ^*\/(g_c\\kappa)\\right|^2,\n \t\\end{align}\n \t\\label{eq:modified_cooperativities}\n\\end{subequations}\nleaving the ideal theory [Eq.~\\eqref{eq:ideal_theory}] otherwise unchanged.\nImportantly, these expressions can lead to an asymmetry when $\\Delta_k$ is complex even when $g_r=g_c$.\nThis explains our observations in Fig.~\\ref{fig:dual_tone_all}b, where the cavity is driven by two equal-strength pumps, and asymmetric sidebands are observed.\nThe asymmetry diminishes as $\\Omega\\s{mod}$ is increased beyond the bandwidth $\\gamma\\s{th}$.\nA fit to this simple model shown in Fig.~\\ref{fig:dual_tone_all}b is in good agreement and captures this general behavior.\nIn this case of $\\Omega\\s{mod}\\gg\\gamma\\s{th}$ we find that the approximation~\\eqref{eq:modified_cooperativities} is sufficient to describe our data, with added Fourier modes producing no change to the fitted curve.\nFrom the fit we obtain the two quantities characterizing $\\Delta_k$~\\eqref{eq:Delta_k}, 
$\\gamma\\s{th}\/2\\pi\\sim 6\\unit{MHz}$ and $g\\s{abs}g\\s{PT}\/4\\pi^2\\sim 10\\unit{MHz^2}$, with other parameters determined independently.\nDeviations from the data at high frequencies may be accounted for by considering additional Kerr-type effects of higher bandwidths or more complicated thermal behavior than that afforded by Eq.~\\eqref{eq:temperature_deviation}; however, we retain the simplicity of our model and emphasize that for the $\\Omega\\s{mod}\/2\\pi=220\\unit{MHz}$ used in Fig.~\\ref{fig:fake_asymmetry}c,d and in all our subsequent motional sideband asymmetry measurements, we consistently observe the same normalized sideband power (i.e.~the same $\\bar n$) for both cooling tone and red-detuned probe (within experimental errors of $\\sim 5\\%$), confirming the weakness of Kerr-type effects and allowing us to operate in a regime where the quantum sideband asymmetry arising from optomechanical quantum correlations can be observed.\nWe also note that the coupling of red and blue tones is negligible as their spacing is $2\\Omega_m\\gg\\gamma\\s{th}$.\n\n\nIn order to capture the physics observed in Fig.~\\ref{fig:dual_tone_all}c, where a series of thermomechanical sidebands is generated, more Fourier modes have to be included (six optical and five mechanical; see Fig.~\\ref{fig:fourier_modes}).\nThe relative weight of the higher-order sidebands is in good agreement with the data.\nStrong thermal effects at $\\Omega\\s{mod}\/2\\pi=4\\unit{MHz}$ lead to distortion of the cavity linewidth measurement, introducing large detuning errors and hence large uncertainties in the cavity parameters.\nThe fits shown in Fig.~\\ref{fig:dual_tone_all}c yield $\\gamma\\s{th}\/2\\pi$ in the range 4--$8\\unit{MHz}$ and $g\\s{abs}g\\s{PT}\/4\\pi^2\\sim 6$--$17\\unit{MHz}^2$, in close agreement with Fig.~\\ref{fig:dual_tone_all}b.\nIn order to further confirm our model, we have carried out pump-probe response measurements of the cavity, detailed in 
Appendix~\\ref{sec:frequency_response}.\nWe found the bandwidth of power-induced cavity frequency shifts (likely thermal in origin) to be $\\sim 10\\unit{MHz}$, in agreement with the results presented here.\n\nNext, we turn back to sideband measurements at $\\Omega\\s{mod}\/2\\pi=220\\unit{MHz}$, where only the quantum asymmetry is prominent.\nFigure~\\ref{fig:dual_tone_all}d shows an extended set of measurements done at a cryostat temperature of $4.4\\unit{K}$ and a buffer gas pressure of $140\\unit{mbar}$, including occupancies $\\bar n$ inferred from both motional sideband asymmetry and power in the cooling sideband.\nInferring $\\bar n$ from motional sideband asymmetry is straightforward; however, the probes must be weak to avoid extraneous heating of the oscillator.\nThe much larger signal-to-noise ratio of the cooling sideband makes it more suitable for determining $\\bar n$ at the highest cooling powers.\nReferring to Eq.~\\eqref{eq:ideal_theory} and neglecting the weak probes, we see that apart from the easily measured optomechanical parameters, accurate knowledge of the quantity $\\eta\\,\\bar n\\s{th}$ is required.\nMoreover, extraneous heating due to optical absorption modifies the actual bath occupancy from that measured, $\\bar n\\s{th}\\rightarrow\\bar n\\s{th}+\\heating n_c$, where $\\heating$ is the number of added bath phonons per intracavity photon (see also Appendix~\\ref{sec:sideband_cooling}).\nThus, at least one of the parameters $\\eta,\\bar n\\s{th},\\heating$ needs to be independently determined.\nIt is often difficult to obtain accurate measurements of these parameters.\nHere we use the sideband asymmetry at an intermediate data point to compute $\\bar n$ and, unequivocally, $\\eta$, providing calibration for the entire cooling curve of Fig.~\\ref{fig:dual_tone_all}d.\n\nThe two main sources of measurement error are tuning accuracy, estimated at\n$\\pm 2\\pi\\times 20\\unit{MHz} = \\pm 0.0125\\kappa$, which leads to slightly different cavity responses seen by the 
two probes (error bars in Figs.~\\ref{fig:fake_asymmetry} and~\\ref{fig:dual_tone_all}d), and errors in the estimates of the sidebands' Lorentzian peaks, which yield different $\\bar n$ values for the two flavors of asymmetry used in Fig.~\\ref{fig:fake_asymmetry}b,d (see inset of Fig.~\\ref{fig:fake_asymmetry}c).\nThe former error is dominant for large $\\bar n$ (small asymmetry), while the latter is dominant at strong cooling powers, hence low signal-to-noise ratios (for example, the last point in Fig.~\\ref{fig:fake_asymmetry}d).\nFor every data point in Fig.~\\ref{fig:dual_tone_all}d, we add these errors in quadrature.\nWe then compute $\\eta$ and its uncertainty using a weighted average.\nThe final result, $\\eta = 0.044\\pm 0.005$, remains essentially the same if we take only the last few, minimum-error points.\nThis calibration gives $\\bar{n}=1.5\\pm 0.2$ (40\\% ground state occupation) for the minimum occupation in Fig.~\\ref{fig:dual_tone_all}d.\nIn addition, fitting the entire data set yields bath thermal occupation $\\bar n\\s{th}=17.5$ and extraneous heating of $\\heating=1.3\\,\\C_0$, in excellent agreement with the independently calibrated measurements of Appendix~\\ref{sec:sideband_cooling}.\n\n\\section{Theory}\n\\label{sec:theory}\n\nTo describe our experiment, we take the standard optomechanical model in a rotating frame\n\\begin{equation}\n \t\\hat{H}\\s{OM}\/\\hbar = [\\Delta_{\\text{th}}(t)-\\Delta]\\hat a\\dagg\\hat a\n + \\Omega_m\\hat b\\dagg\\hat b\n - g_0\\hat a\\dagg\\hat a(\\hat b+\\hat b\\dagg),\n \\label{eq:HOM}\n\\end{equation}\nbut include the PTRS (via $\\Delta_{\\text{th}}$) as well as optical and mechanical baths. 
\nA standard calculation~\\cite{gardiner2004quantum,aspelmeyer_cavity_2014} gives quantum Langevin equations,\nwhich we linearize around the mean intracavity field $\\hat a(t)=\\bar a(t)+\\delta\\hat a(t)$ (and the same for the mechanical mode) to obtain the equations for linear optomechanics~\\cite{aspelmeyer_cavity_2014}\n\\begin{equation}\n \\begin{aligned}\n\t\\delta \\dot{\\hat a} &= \\left\\{i\\left[\\Delta-\\Delta\\s{th}(t)\\right]-\\frac{\\kappa}{2}\\right\\}\\delta\\hat a\n\t+ig(t)(\\delta\\hat b+\\delta\\hat b\\dagg)+\\sqrt{\\kappa}\\delta\\hat a\\s{in},\\\\\n\t\\delta\\dot{\\hat b} &= \\left(-i\\Omega_m-\\frac{\\Gamma_m}{2} \\right)\\delta\\hat b\n\t+i[g(t)\\delta\\hat a\\dagg+g^*(t)\\delta\\hat a]+\\sqrt{\\Gamma_m}\\hat b\\s{in},\n \\end{aligned}\n \\label{eq:langevin_eoms}\n\\end{equation}\nwhere $\\Delta\\approx-\\Omega_m$ is the average detuning of the pumps from the cavity,\n$g(t)=g_0\\bar a(t)\\exp[i(\\omega\\s{cav}+\\Delta)t]$ is the modulated light-enhanced optomechanical coupling strength, \nand the input noises obey $\\langle\\delta\\hat a\\s{in}(t)\\delta\\hat a\\s{in}\\dagg(t')\\rangle=\\delta(t-t')$ and $\\langle\\hat b\\s{in}(t)\\hat b\\s{in}\\dagg(t')\\rangle=(\\bar n\\s{th}+1)\\delta(t-t')$ (as well as standard commutation relations).\nLangevin equations with periodic time-dependence can be analyzed with a recently developed method~\\cite{malz_floquet_2016}.\nNote that closely related models have been studied in the context of levitated optomechanics~\\cite{Aranas2016,Aranas2017}, where instead of the cavity, the mechanical resonator frequency is modulated.\n\nIn the experiment, we apply up to three tones to the cavity: a strong cooling tone, as well as weak red- and blue-detuned probes. 
Since the blue-detuned probe is very far detuned from the other two tones, it remains unaffected and we will not consider it in this section.\nCooling tone and red-detuned probe are close to the red sideband, separated in frequency by $\\Omega_{\\text{mod}}$, as is shown in Fig.~\\ref{fig:dual_tone_all}a.\nNeglecting all other effects, the resulting intracavity field $\\bar a(t)=\\bar a_{r}e^{-i(\\omega_{\\text{cav}}+\\Delta)t}+\\bar a_{c}e^{-i(\\omega_{\\text{cav}}+\\Delta+\\Omega_{\\text{mod}})t}$.\n\nWe model the PTRS~\\cite{verhagen_quantum-coherent_2012,Li2014} through Eq.~\\eqref{eq:temperature_deviation} in conjuction with $\\Delta_{\\text{th}}(t)=g_{\\text{th}}\\delta T(t)$ as discussed above.\nIn our level of approximation we neglect the effect of the PTRS on the mean intracavity field, such that we can solve directly for the temperature deviation.\nSince a constant temperature shift leads to a static frequency shift which can be absorbed into the overall detuning $\\Delta$,\nwe only consider the time-dependent part\n\\begin{equation}\n \\begin{aligned}\n\t\\delta T(t)&=g_{\\text{abs}}\\int_{-\\infty}^t2\\bar a_{c}\\bar a_{r}e^{-\\gamma\\s{th}(t-t')}\\cos(\\Omega_{\\text{mod}}t')dt'\\\\\n\t&=\\frac{2g_{\\text{abs}}\\bar a_{c}\\bar a_{r}}{\\sqrt{\\gamma\\s{th}^2+\\Omega_{\\text{mod}}^2}}\\cos(\\Omega_{\\text{mod}}t-\\phi\\s{th}),\n \\end{aligned}\n \\label{eq:thermal_response}\n\\end{equation}\nwhere the phase lag \n\\begin{equation}\n \\phi\\s{th}=\\tan^{-1}\\left(\\Omega\\s{mod}\/\\gamma\\s{th}\\right)\n \\label{eq:phase_lag}\n\\end{equation}\narises due to the finite thermalization time.\nEquation \\eqref{eq:thermal_response} yields the PTRS displayed in Eqs.~\\eqref{eq:PTRS} and \\eqref{eq:Delta_k}.\nThe phase $\\phi\\s{th}$ plays a crucial role in the observed asymmetric response. 
Note that if $\\Delta\\s{th}$ is positive, the cavity resonance is blue-shifted.\n\nIf the two pumps lie close to the red sideband ($\\Delta\\approx-\\Omega_m$), and the modulation frequency is much smaller than $\\Omega_m$, as is the case here, we can neglect terms rotating at $2\\Omega_m$ in a rotating-wave approximation.\nHowever, the resulting equations still have an explicit time-dependence, which can be removed by splitting the fields $\\delta\\hat a$ and $\\delta\\hat b$ into Fourier components~\\cite{malz_floquet_2016}\n\\begin{equation}\n \\begin{aligned}\n \\delta\\hat a(t)&=\\sum_n\\exp(in\\Omega\\s{mod}t)\\delta\\hat a^{(n)}(t),\\\\\n \\delta\\hat b(t)&=\\sum_n\\exp(in\\Omega\\s{mod}t)\\delta\\hat b^{(n)}(t),\n \\end{aligned}\n \\label{eq:fourier_cpts}\n\\end{equation}\nat the cost of introducing an infinite set of coupled equations of motion\n\\begin{equation}\n \\begin{aligned}\n\t\\delta \\dot{\\hat a}^{(n)}&=\\left( i\\tilde\\Delta-in\\Omega_{\\text{mod}}-\\frac{\\kappa}{2} \\right)\\delta\\hat a^{(n)}+\\delta_{n,0}\\sqrt{\\kappa}\\delta\\hat a_{\\text{in}}\\\\\n\t&+i\\left(g_{c} \\delta\\hat b^{(n+1)}+g_{r}\\delta\\hat b^{(n)}-\\Delta_k ^*\\delta\\hat a^{(n+1)}-\\Delta_k \\delta\\hat a^{(n-1)} \\right),\\\\\n\t\\delta\\dot{\\hat b}^{(n)}&=\\left(-in\\Omega_{\\text{mod}}-\\frac{\\Gamma_m}{2}\\right)\\delta\\hat b^{(n)}\\\\\n\t&+i\\left( g_{c}\\delta\\hat a^{(n-1)}+g_{r}\\delta\\hat a^{(n)}\\right)+\\delta_{n,0}\\sqrt{\\Gamma_m}\\hat b_{\\text{in}},\n \\end{aligned}\n \\label{eq:eom}\n\\end{equation}\nwhich are depicted as a lattice in Fig.~\\ref{fig:fourier_modes}.\nHere, $g_{r,c}=g_0\\bar a_{r,c}$ and we have written $\\Delta\\s{th}(t)=\\Delta_k \\exp(i\\Omega_{\\text{mod}}t)+\\text{c.c.}$, and defined the residual detuning $\\tilde\\Delta=\\Omega_m+\\Delta\\ll\\kappa$, which represents only a small correction and will thus be neglected in the following.\nIn principle, the cavity frequency shift could have a number of origins, which can be included in 
$\\Delta_k $. Here, for simplicity, we consider only the PTRS \\eqref{eq:PTRS}, such that $\\Delta_k $ is given by Eq.~\\eqref{eq:Delta_k}.\n\n\nThe Fourier components we have introduced are sometimes called auxiliary modes~\\cite{Peterson2017}, frequency-shifted operators \\cite{Aranas2016}, or sidebands. \nThe explicitly time-dependent terms in the linearized equations of motion~\\eqref{eq:langevin_eoms} couple the Fourier modes.\nThis off-diagonal coupling (in Fourier space) is suppressed by the response of the modes (especially the narrow mechanical mode), such that good approximations are obtained with only a few Fourier modes.\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{unequal_sketch.pdf}\n \\caption{\\textbf{An illustration of the infinite array of coupled Fourier modes.}\n The map to Fourier components \\eqref{eq:fourier_cpts} introduces an infinite set of coupled Langevin equations, which has to be truncated at some order to obtain a solution.\n Due to the two pumps (cooling and red-detuned probe), thermomechanical noise incident on mode $\\delta\\hat b^{(0)}$ is distributed onto $\\delta\\hat a_r$ and $\\delta\\hat a_{c}$, generating two sidebands.\n The cavity Fourier modes are coupled due to a cavity Kerr-type nonlinearity [$\\Delta_k $~\\eqref{eq:Delta_k}].\n This effect modifies the sideband weight, which to lowest order (considering only the bold three-mode system)\n can be accounted for with new effective optomechanical couplings~\\eqref{eq:modified_cooperativities},\n but in general leads to more sidebands (Fig.~\\ref{fig:dual_tone_all}c), modeled by including more Fourier modes in the description.\n Notably, due to the finite response time, the coupling between the Fourier modes is \\emph{complex}, with the phase $\\phi_{\\text{th}}$ given in Eq.~\\eqref{eq:phase_lag}.}\n \\label{fig:fourier_modes}\n\\end{figure}\n\nFor now, we assume that only the central modes $\\delta\\hat a_r\\equiv\\delta\\hat a^{(0)}$, $\\delta\\hat 
a_{c}\\equiv\\delta\\hat a^{(-1)},\\delta\\hat b^{(0)}$ are nonzero.\nThese modes are shown in bold colors in Fig.~\\ref{fig:fourier_modes} and are most relevant as they contain the most thermomechanical noise.\nThis reduces the equations of motion~\\eqref{eq:eom} to the matrix equation\n\\begin{equation}\n \\begin{aligned}\n\t&\\mat{\\chi\\s{opt}^{-1}(\\omega)&-ig_{r}&i\\Delta_k \\\\\n\t-ig_{r}&\\chi_m^{-1}(\\omega)&-ig_{c}\\\\\n\ti\\Delta_k ^*&-ig_{c}&\\chi\\s{opt}^{-1}(\\omega+\\Omega\\s{mod})}\n\t\\mat{\\delta\\hat a_r\\\\\\delta\\hat b^{(0)}\\\\ \\delta\\hat a_{c}}\\\\\n\t&=\\mat{\\sqrt{\\kappa}\\delta\\hat a\\s{in}\\\\ \\sqrt{\\Gamma_m}\\hat b\\s{in}\\\\ 0},\n \\end{aligned}\n \\label{eq:matrix_equation}\n\\end{equation}\nwhere we have defined the susceptibilities\n\\begin{equation}\n \\chi\\s{opt}^{-1}(\\omega)=\\kappa\/2-i(\\omega+\\tilde\\Delta), \\quad\\chi_m^{-1}(\\omega)=\\Gamma_m\/2-i\\omega.\n \\label{eq:susceptibilities}\n\\end{equation}\nNote that the Kerr-type effect thus leads to a coupling of the Fourier modes $\\delta\\hat a_{c}$ and $\\delta\\hat a_r$, captured by $\\Delta_k $.\nThe noise spectrum contains mostly noise from the mechanical oscillators, such that we can neglect $\\delta\\hat a_{\\text{in}}$.\nApplying the red-detuned probe and the cooling tone introduces optical damping, yielding an effective susceptibility $\\chi\\s{m,eff}^{-1}(\\omega)=\\Gamma_m(1+\\C_{\\text{red}}+\\C_{\\text{cool}})\/2-i\\omega$.\nGiven the approximation \\eqref{eq:matrix_equation}, the spectrum reads\n\n\\begin{equation}\n \\begin{aligned}\n \t \\bar S_I&(\\omega+\\Delta_{\\text{LO}})=1+\\eta\\kappa\\bar n\\Gamma_m\n \t \\left|\\frac{g_r-ig_c\\Delta_k\\chi\\s{opt}(\\Omega\\s{mod}+\\omega)}{\\chi\\s{opt}^{-1}(\\omega)\\chi\\s{m,eff}^{-1}(\\omega)}\\right|^2\\\\\n \t &+\\eta\\kappa\\bar n\\Gamma_m\\left|\\frac{g_c-ig_r\\Delta_k^*\\chi\\s{opt}(\\omega-\\Omega\\s{mod})}{\\chi\\s{opt}^{-1}(\\omega)\\chi\\s{m,eff}^{-1}(\\omega-\\Omega\\s{mod})}\\right|^2\n 
\\end{aligned}\n \\label{eq:three_mode_spectrum}\n\\end{equation}\nwhere we have neglected the frequency dependence of the cavity response, \nand introduced the overall detection efficiency $\\eta$~\\cite{gardiner2004quantum,BowenMilburn2015}.\nThis spectrum can be understood as follows: \nthermomechanical noise is filtered by the mechanical response $\\chi_{\\text{m,eff}}$ and is scattered to the optical modes, where it interferes with itself. In the Fourier mode corresponding to the red-detuned probe sideband, the amplitudes $g_r$ and $ig_c\\Delta_k \\chi\\s{opt}(\\Omega\\s{mod})$ add.\nThe rate $g_{r}$ comes from scattering directly into that Fourier mode, whereas $ig_{c}\\Delta_k \\chi\\s{opt}(\\Omega\\s{mod})$ has the clear interpretation of noise scattering first into cooling mode $\\delta\\hat a_{c}$ with amplitude $g_c$, where it picks up the optical susceptibility $\\chi\\s{opt}(\\Omega\\s{mod})$, and then hopping from there onto the red-detuned probe mode with amplitude $\\Delta_k$.\n\nWe find that even for equal pump strength ($g_{r}=g_{c}$) the thermomechanical sideband weights can differ, \\emph{as long as $\\Delta_k $ has a nonzero phase}.\nThis phase occurs only if $\\Omega\\s{mod}$ and $\\gamma\\s{th}$ are comparable, such that the phase lag of the thermal response behind the intracavity field [Eq.~\\eqref{eq:PTRS}] is non-trivial.\nThis phase is conjugated between the clockwise and counterclockwise process, allowing us to interpret it as a synthetic gauge flux $\\phi_{\\text{th}}$ threading the triangles in Fig.~\\ref{fig:fourier_modes}.\nFrom this point of view, the interpretation is similar to the nonreciprocal noise scattering observed in the optomechanical isolator~\\cite{Bernier2017}, except that here the phase arises dynamically, among the virtual Fourier modes, whereas in optomechanical nonreciprocal systems the phase is a direct consequence of the phase relation of drives. 
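To illustrate this interference mechanism, one can evaluate the two numerators of Eq.~\eqref{eq:three_mode_spectrum} at the respective sideband centres, approximating the cavity response by $\chi_{\rm opt}\approx 2/\kappa$ as done in the text. The following sketch (with made-up parameter values, and a helper function named by us) shows that for equal pump strengths $g_r=g_c$ the two sideband weights coincide when $\Delta_k$ is real and differ once $\Delta_k$ acquires a phase.

```python
import numpy as np

def sideband_weights(g_r, g_c, delta_k, kappa):
    """Relative weights of the two thermomechanical sidebands in the
    three-mode approximation, neglecting the frequency dependence of the
    cavity response (chi_opt ~ 2/kappa)."""
    chi_opt = 2.0 / kappa
    w_r = abs(g_r - 1j * g_c * delta_k * chi_opt) ** 2           # red-probe sideband
    w_c = abs(g_c - 1j * g_r * np.conj(delta_k) * chi_opt) ** 2  # cooling-tone sideband
    return w_r, w_c

kappa, g = 10.0, 1.0                                     # illustrative values
w1 = sideband_weights(g, g, 0.8, kappa)                  # real Delta_k: equal weights
w2 = sideband_weights(g, g, 0.8 * np.exp(0.6j), kappa)   # complex Delta_k: asymmetry

print(w1, w2)
```

The conjugated phase between the two paths is what makes the weights unequal, mirroring the synthetic-gauge-flux picture above.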
\nIn the more relevant case for thermometry, $g_{r}\\ll g_{c}$, we can neglect the backaction of the probe tones and Eq.~\\eqref{eq:three_mode_spectrum} reverts to the ideal theory \\eqref{eq:ideal_theory}, but now with the modified optomechanical cooperativities displayed above [Eq.~\\eqref{eq:modified_cooperativities}].\nThe three-mode approximation is sufficient for fitting our data in Fig.~\\ref{fig:dual_tone_all}b, where mostly $\\Omega\\s{mod}\\gg\\gamma\\s{th}$. \n\nFor strong driving and small $\\Omega\\s{mod}\\sim\\gamma\\s{th}$, the interaction $\\Delta_k $ between the optical Fourier modes is enhanced [Eq.~\\eqref{eq:Delta_k}],\nand thus higher-order Fourier modes are populated, and more sidebands appear in the output spectrum.\nThis is precisely what we observe in Fig.~\\ref{fig:dual_tone_all}c. \nThe higher-order sidebands can be modeled by including more Fourier components (faint color in Fig.~\\ref{fig:fourier_modes}).\nThe matrix in Eq.~\\eqref{eq:matrix_equation} is straightforwardly generalized to larger systems.\nIt can then be shown~\\cite{malz_floquet_2016} that the normal-ordered time-averaged noise spectrum is given by\n\\begin{equation}\n S^N(\\omega)=\\sum_n\\int\\frac{d\\omega'}{2\\pi}\\langle \\delta\\hat a_{\\text{out}}^{(n) \\dagger}\n (\\omega+n\\Omega\\s{mod})\\delta\\hat a_{\\text{out}}^{(-n)}(\\omega')\\rangle.\n \\label{eq:measured_spectrum}\n\\end{equation}\nThe heterodyne spectrum we quote above is related through $\\bar S_I(\\omega+\\Delta_{\\text{LO}})=1+\\eta S^N(\\omega)$, which is shown explicitly in Appendix~\\ref{app:heterodyne_spectrum}.\nIn Fig.~\\ref{fig:dual_tone_all}c, we fit the data to this model including 6~optical Fourier modes and 5~mechanical Fourier modes\n[i.e.~$d^{(2)}\\ldots d^{(-3)}$, $b^{(2)}\\ldots b^{(-2)}$ in Fig.~\\ref{fig:fourier_modes}].\n\nFinally, we note that the instantaneous Kerr effect also leads to a cavity frequency shift $\\Delta\\s{Kerr}(t) = g\\s{Kerr}|\\bar a(t)|^2$ without a phase 
lag.\nThe coupling strength $g\\s{Kerr}$ can be estimated through \\cite{Matsko2005}\n\\begin{equation}\n g\\s{Kerr} = -\\omega\\s{cav}\\frac{n_2}{n_0}\\frac{\\hbar\\omega\\s{cav}c}{V\\!\\s{mode}n_0},\n\\end{equation}\nwhere $n_0$ is the linear refractive index, $n_2$ the Kerr coefficient, $V\\!\\s{mode}$ the mode volume, and $c$ the speed of light. \nFor our system, we find $g\\s{Kerr}\/2\\pi\\sim 13\\unit{kHz}$, which is considerably weaker than the PTRS, which has a static, single intracavity photon coupling $g\\s{0,th}\/2\\pi=1.7\\unit{MHz}$.\nNevertheless, in precision thermometry the intrinsic Kerr effect might be important, especially since it affects pumps with much larger spacing.\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nRapid advances in cavity optomechanics over the last decade now enable quantum control of mechanical oscillators using electromagnetic radiation.\nHere we have shown that, similar to previously studied classical noise phenomena, quantum effects can be masked by classical nonlinearities that lead to a modification of the effective scattering rates between thermomechanical sidebands of closely-tuned applied drives.\nAlthough we have focused on motional sideband asymmetry measurements, our results apply to \nany experiment utilizing continuous monitoring of the mechanical oscillator with multiple tones~\\cite{wollman_quantum_2015,kronwald_dissipative_2014}, \nsuch as backaction-evading measurement~\\cite{suh_mechanically_2014}, \nmechanical quantum squeezing~\\cite{kronwald_arbitrarily_2013} and dissipative optical squeezing~\\cite{kronwald_dissipative_2014}, entanglement of two mechanical oscillators using reservoir engineering~\\cite{woolley_two-mode_2014,ockeloen-korppi_stabilized_2018}, \nand recently demonstrated non-reciprocal devices~\\cite{bernier_nonreciprocal_2017, peterson_demonstration_2017}.\nMoreover, quantum optomechanics encompasses a wide range of electromagnetic cavities susceptible to thermal Kerr-type 
effects, such as photonic crystal~\\cite{barclay_nonlinear_2005}, whispering-gallery mode~\\cite{verhagen_quantum-coherent_2012}, as well as microwave cavities~\\cite{suh_thermally_2012}.\nIt is therefore necessary, in addition to the characterization of noise properties, to rule out such effects by performing additional measurements, such as equal-tone red-detuned pump (Fig.~\\ref{fig:dual_tone_all}) and cavity frequency response measurements (Fig.~\\ref{fig:response}). \nBy comparing sideband asymmetry measurements with calibrated thermometry, we have shown that quantum mechanical effects are once again dominant when the system is operated outside the bandwidth of cavity Kerr-type effects.\n\n\n\\begin{acknowledgments}\n LQ acknowledges support by the Swiss National Science Foundation under grant No.~163387.\n IS acknowledges support by the European Union's Horizon 2020 research and innovation programme under Marie Sk\\l{}odowska-Curie IF grant agreement No.~709147 (GeNoSOS).\n DM acknowledges support by the UK Engineering and Physical Sciences Research Council (EPSRC) under Grant No.~EP\/M506485\/1.\n AN acknowledges a University Research Fellowship from the Royal Society and support from the Winton Programme for the Physics of Sustainability.\n TJK acknowledges financial support from an ERC AdG (QuREM).\n This work was supported by the SNF, the NCCR Quantum Science and Technology (QSIT), and the European Union's Horizon 2020 research and innovation programme under grant agreement No.~732894 (FET Proactive HOT).\n All samples were fabricated in the Center of MicroNanoTechnology (CMi) at EPFL.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nSpontaneous emergence of coordination between people or animals, without\nexternal control, is a remarkable phenomenon that can be crucial for\noptimal functioning or even survival of the 
population\n\\cite{king2009origins,conradt2009group,courchamp2002small}.\nIn some circumstances individuals face a choice between two\nor more possible actions, called strategies, that they\ncan take. It often happens that the best outcome\nfor everyone can be obtained only if we choose the same strategy\nas our neighbours. In game theory such a situation is referred to\nas a coordination game\n\\cite{weidenholzer2010coordination,antonioni2013coordination,mazzoli2017equilibria,antonioni2014global}.\nAdditionally, it might matter under which strategy\nthe population coordinates.\nOne action can lead to higher prosperity than the other, which\nis modelled by different strategies having different payoffs.\nConditions required to coordinate have been scrutinised under various\nassumptions and for numerous environments, yet there are still unanswered\nquestions. Here, we study coordination and equilibrium selection in\ngames on multilayer networks.\n\nPeople interact in multiple contexts and through different media.\nOne of the natural ways to represent this in a rigorous manner is by using a\nmultilayer network \\cite{boccaletti2014structure,kivela2014multilayer,battiston2014structural,battiston2017new,aleta2019multilayer}.\nEach layer is a separate network of interactions\nin a given context. For example, we interact with each other in the workplace,\nat home, online, etc. In principle, the pattern of interactions can be\ndifferent in every layer, resulting in a different network topology.\nAdditionally, some layers can be hidden \\cite{gajewski2021detecting}.\nNot every person must be active in every environment,\ntherefore some nodes are present only on a specific layer. If a node exists\nin many layers, however, it represents the same person, who often acts\nsimilarly in every context. 
It is therefore\nconnected between layers to itself\nvia inter-layer links, which provide coupling between the layers.\nIt is important to note that, if a system has a multilayer structure,\nit cannot simply be reduced to a single-layer graph without\nchanging the dynamics \\cite{diakonova2016irreducibility}.\nHence, the scrutiny of layered systems is highly relevant.\n\nWe use the approach from evolutionary\ngame theory \\cite{sigmund1999evolutionary,axelrod1981evolution,nowak1992evolutionary,nowak2006five}\nto analyse synchronisation \nbetween the layers and equilibrium selection in coordination games.\nCoordination games have been studied in depth on single-layer networks;\na comprehensive literature review can be found in\nRef.~\\cite{raducha2022coordination}. From previous results it is\nworth mentioning the KMR model, which\nexplored the equilibrium selection in populations equivalent to\ncomplete graphs with the best response update rule \\cite{kandori1993learning}.\nThe risk-dominant equilibrium was always evolutionarily\nfavoured in the model and several extensions did not find any deviation from\nthis behaviour \\cite{young1993evolution,youngindividual,ellison2000basins,peski2010generalized}. That outcome is also preserved on a circular network\n\\cite{ellison1993learning}, unless the unconditional imitation\nis used to update strategies \\cite{alos2006imitation}. 
In general,\nimitative update rules can favour Pareto-efficiency over risk dominance\n\\cite{raducha2022coordination,ohtsuki2006replicator,roca2009evolutionary}.\n\n\nEvolutionary games were also extended to multilayer networks\n\\cite{wang2015evolutionary}.\nThe prisoner's dilemma was studied on many layers with a possibility\nof using different strategies on different layers.\nThe strategy was updated according to replicator dynamics,\nbut using the collective payoff from all\nlayers \\cite{gomez2012evolution,matamalas2015strategical}.\nIt was also studied together with the stag hunt, the harmony game,\nand the snowdrift game on two-layer networks with the game being\nplayed on one layer and strategy imitation on the other\n\\cite{wang2014degree}. Additionally, the same games\non one layer were mixed with opinion dynamics and\nsocial influence on the second layer \\cite{amato2017interplay}.\nThe idea of separating the group in which we play the game\nfrom the one where we learn or imitate the strategy had\nalready been studied within a single network\n\\cite{alos2014imitation,cui2016collaboration,khan2014coordination,alos2021efficient}.\nThe public goods game\n\\cite{tomassini2021computational,giardini2021gossip,maciel2021framing}\nwas considered on two \\cite{wang2013interdependent}\nand more layers \\cite{battiston2017determinants} with the game\nbeing played on each layer.\nInterestingly, in some of the mentioned research the multilayer structure\nwas said to enhance cooperation\n\\cite{gomez2012evolution,amato2017interplay,wang2013interdependent}.\nFinally, coordination games were also investigated on multilayer\nnetworks. 
The pure coordination game on one layer was\ncoupled with social dynamics and coevolution on the other, leading\nto possible segregation \\cite{lipari2019investigating}.\nA particular version of the general coordination game\nwas studied on two interconnected layers, with the strategy being imitated on the layers\nand the game played between the layers \\cite{lugo2015learning,lugo2020local,gonzalez2019coordination}.\nSimilarly to single-layer networks,\nthe unconditional imitation and smaller degree favoured\nthe Pareto-optimal equilibrium.\nHowever, the body of work on coordination games on multilayer\nnetworks is still very limited and consists of particular cases\nof more complex models mixed with opinion dynamics.\nMoreover, different works consider different update rules and it is\ndifficult to judge to what extent results are determined by\nthe multilayer structure, the particular payoff matrix,\nor the chosen update rule. A comparison between different\nupdate rules is necessary.\nFor these reasons, we provide a broader analysis\nof different payoff matrices lying within\nthe scope of coordination games together with\nthree different update rules.\n\nWe focus on the two-player general coordination game~\\cite{raducha2022coordination} described by a $2 \\times 2$ payoff matrix:\n\\begin{equation}\n\\begin{blockarray}{ccc}\n & $A$ & $B$ \\\\\n\\begin{block}{c(cc)}\n $A$ & 1 & S \\\\\n $B$ & T & 0 \\\\\n\\end{block}\n\\end{blockarray}~,\n\\label{eqn:matrix_most_general}\n\\end{equation}\nwhere A and B are available strategies, while $T$ and $S$ are\nparameters defining payoffs. By definition, coordination games\nmust fulfil the conditions $T<1$ and $S<0$.\nNote that the payoff matrix (\\ref{eqn:matrix_most_general})\nis equivalent to the most general one with all four payoffs\ndefined by a parameter, if only coordination on the strategy A is\nmore lucrative than on the strategy B.\nWithout loss of generality we can reduce it\nto the presented form. 
A game described\nby such a payoff matrix contains a social\ndilemma. Obviously, the most rewarding outcome is obtained if both\nplayers choose the same strategy, but there is a hidden trade-off\nbetween security and profit. Clearly, the highest possible profit is made\nwhen both play the strategy A, hence it is called the payoff-dominant\nor Pareto-optimal strategy. On the other hand, the risk-dominant strategy\nis the best choice in the absence of knowledge, i.e. it is the strategy that\nresults in the highest average payoff assuming that the opponent\nwill play either way with the same probability\n\\cite{harsanyi1988general}.\nIt is easy to check that for $T<S+1$ the strategy A is risk-dominant,\nwhile for $T>S+1$ the strategy B is risk-dominant. This calculation\nprovides a theoretical line $T=S+1$ at which risk dominance changes.\nWhen all players coordinate on one of these strategies we \nrefer to such a state as a payoff-dominant or risk-dominant equilibrium.\n\nIn evolutionary game theory the game evolves because the players\nupdate their strategies after interacting and observing their peers.\nIt is well known that the update rule is as important as the payoff\nmatrix in defining the end result of the game\n\\cite{raducha2022coordination,ohtsuki2006replicator,roca2009evolutionary,xia2012role,szolnoki2018dynamic,danku2018imitate,poncela2016humans}.\nMultiple update\nrules have been proposed in the literature \n\\cite{szabo2007evolutionary,pangallo2021towards,blume1993statistical,traulsen2007pairwise}. We focus on three\nwell-established ones: the replicator dynamics (RD)\n\\cite{schuster1983replicator,hammerstein1994game,nowak2004evolutionary}, the best response\n(BR)\n\\cite{kandori1993learning,young1993evolution,blume1995statistical,ellison1993learning,sandholm1998simple,buskens2008consent}, and the unconditional imitation (UI)\n\\cite{nowak1992evolutionary,vilone2012social,vilone2014social,lugo2015learning,gonzalez2019coordination,lugo2020local}. 
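The risk-dominance criterion above amounts to a one-line check: a strategy is risk-dominant when it maximises the expected payoff against an opponent who mixes both strategies with probability $1/2$. A minimal sketch (the function name is ours, for illustration only):

```python
def risk_dominant(S, T):
    """Return the risk-dominant strategy of the coordination game
    with payoff matrix [[1, S], [T, 0]] (coordination game: T < 1, S < 0).

    Against an opponent playing A or B with probability 1/2 each,
    strategy A earns (1 + S) / 2 and strategy B earns (T + 0) / 2,
    so A is risk-dominant iff 1 + S > T, i.e. T < S + 1.
    """
    payoff_A = (1 + S) / 2
    payoff_B = (T + 0) / 2
    if payoff_A > payoff_B:
        return "A"
    if payoff_B > payoff_A:
        return "B"
    return "tie"  # on the transition line T = S + 1

print(risk_dominant(S=-0.5, T=0.2))   # T < S + 1, prints "A"
print(risk_dominant(S=-0.5, T=0.8))   # T > S + 1, prints "B"
```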
It is important to note\nthat RD and UI are imitative in nature, as players adopt the strategy\nof one of the neighbours. BR, on the other hand, is a strategic update rule\nwhich requires the player to know the payoff matrix.\nAnother distinction between the update rules is their determinism\n-- BR and UI are deterministic, meaning that the same configuration\nwill always lead to the same strategy being chosen, while RD is a\nprobabilistic update rule. See the Methods section for more details.\n\n\\begin{figure}[ht]\n\\centerline{\n\\includegraphics[scale=1,align=c]{figures\/multilayer_diagram.pdf}\n}\n\\caption{\n(a) Schematic representation of a miniature of the multilayer network used in our\nsimulations. Both layers have the same topology of a random regular\ngraph with $N=8$ nodes of degree $k=3$ each\nand a fraction $q=5\/8$ of nodes\nis shared between the layers. Shared nodes are connected by inter-layer\nconnections (dashed lines). The node overlap $q$ is the number of\nshared nodes divided by $N$. White nodes play the strategy A\nand black ones play the strategy B. Shared nodes always have the same\nstate on both layers. Each layer has a specific payoff matrix\ngiven by $(S^I, T^I)$ and $(S^{II}, T^{II})$.\n(b) Diagram of the $S$-$T$ parameter space showing the parametrisation\nof the layers. Each circle on the diagonal lines represents a game\nplayed on one of the layers. On layer I the strategy A is always\nrisk-dominant (yellow area),\nand on layer II the strategy B is always risk-dominant (purple area).\nRisk-dominance changes at the line $T=S+1$. Exemplary values\nof $(S^I, T^I)$ and $(S^{II}, T^{II})$ are highlighted in green\nwith $\\Delta S$ and $\\Delta T$ illustrated.}\n\\label{fig:multilayer_diagram}\n\\end{figure}\n\nIt was shown that on a single-layer network the risk aversion\nis usually stronger than the drive to profit. Therefore, on complete\ngraphs the risk-dominant equilibrium is always obtained. 
For sparse\nnetworks under unconditional imitation the system\ncan favour the Pareto-optimal\nequilibrium over the risk-dominant one, but only for a limited\nrange of parameters \\cite{raducha2022coordination}.\nFor RD and BR, however, the risk-dominant equilibrium is always selected.\nIn general, local effects were shown to be more important\nfor update rules\nwhich have an imitative nature, such as unconditional imitation\n\\cite{alos2006imitation,ohtsuki2006replicator,roca2009evolutionary}.\nA natural question is which equilibrium, if any, will be chosen\nwhen the population is placed on a multilayer network\nwith two layers on opposite sides of the $T=S+1$ risk-dominance transition line.\nIn other words,\non layer I agents play a game where the strategy A is risk-dominant\nand on layer II a game where the strategy B is risk-dominant.\nWe investigate it by means of numerical simulations.\n\nWe study a population of players participating in two games on\na multilayer network with\ntwo inter-connected layers, as depicted in\nFigure~\\ref{fig:multilayer_diagram}.\nBoth layers have the same number of nodes $N$.\nIf a node is connected\nto itself between the layers via an inter-link, it plays the same\nstrategy everywhere. The fraction of nodes connected (or shared) between the layers\nis controlled by a parameter $q \\in [0,1]$, called node overlap\nor degree of multiplexity \\cite{diakonova2014,diakonova2016irreducibility}. There are $Nq$ inter-layer connections. For $q=0$ the two layers\nare effectively two independent networks, for $q=1$ the layers\nare equivalent to one network (every node is the same on each layer all the time)\nplaying each game half of the time. The edge overlap is kept constant\nwith both layers having the same topology, since we did not observe\nany change under varying edge overlap. We use random regular graphs\n\\cite{newmannetworks}. 
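A minimal construction of such a two-layer multiplex can be sketched as follows; this is an illustrative sketch assuming the networkx library (the function name and parameter values are ours), not the actual simulation code.

```python
import random
import networkx as nx

def make_multiplex(N, k, q, seed=None):
    """Two-layer multiplex: both layers share one random-regular topology
    (identical edge sets, as described in the text) and a fraction q of
    the nodes is connected to itself across the layers."""
    layer_I = nx.random_regular_graph(k, N, seed=seed)
    layer_II = layer_I.copy()                       # same topology on both layers
    rng = random.Random(seed)
    shared = set(rng.sample(range(N), int(q * N)))  # nodes with inter-layer links
    return layer_I, layer_II, shared

layer_I, layer_II, shared = make_multiplex(N=1000, k=8, q=0.5, seed=42)
print(len(shared))  # N*q = 500 inter-layer connections
```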
See Methods for more details on our simulations.\n\nPlayers on each layer are engaged in different games, i.e. parameters\n$S^\\beta$ and $T^\\beta$, $\\beta \\in \\{ \\textrm{I, II}\\}$, \ndefining the payoff matrix have different values\non each layer. In order to give the same relevance to both layers,\ntheir preferences towards one of the equilibria are set to be\nequally strong. This is achieved by choosing the points $(S^I, T^I)$\nand $(S^{II}, T^{II})$ equally distant from the $T=S+1$ line,\nas visible in Figure~\\ref{fig:multilayer_diagram}. Another choice to make\nis the angle between the $T=S+1$ line and the line created by\npoints $(S^I, T^I)$ and $(S^{II}, T^{II})$. We focus on cases\nwhere all points lie on a line $T^\\beta=-S^\\beta +C$,\nwhere $C$ is a constant\n(see Supplementary Material for other cases).\nThis is because only then the average payoffs $\\langle \\Pi^I \\rangle$\nand $\\langle \\Pi^{II} \\rangle$ of both layers are equal, and therefore the\ngames are truly symmetrical. We analyse the case of $T^\\beta=-S^\\beta -3$,\nwhich we call \\textit{diagonal}, and $T^\\beta=-S^\\beta$\nwhere all games are variants of\nthe well-known \\textit{stag hunt} \\cite{skyrms2001stag,skyrms2004stag}.\nNote that the stag hunt game can be obtained for different values of $C$ and that both cases\nare ``diagonal'' in the sense that they have the same slope.\nNevertheless, we call the case of $C=-3$ \\textit{diagonal} and\n$C=0$ \\textit{stag hunt} to easily distinguish them in the discussion of results that\nfollows in the manuscript.\nIn both cases the parameters $S$ and $T$ cover the whole width of the general\ncoordination game area (see Figure~\\ref{fig:multilayer_diagram}).\n\nSince the layers are placed symmetrically around the $T=S+1$ line,\nor more precisely around a point $(S_0, T_0)$ at this line,\nthe parameter $\\Delta S = S^I - S^{II}$ is sufficient to determine\nvalues of all four parameters $S^I, T^I, S^{II}, T^{II}$. 
Namely:\n\\begin{equation}\n\\begin{split}\n& S^I = S_0 + \\frac{\\Delta S}{2} ,\\\\\n& T^I = T_0 - \\frac{\\Delta T}{2} ,\\\\\n& S^{II} = S_0 - \\frac{\\Delta S}{2} ,\\\\\n& T^{II} = T_0 + \\frac{\\Delta T}{2} ,\n\\end{split}\n\\label{eqn:layers_params}\n\\end{equation}\nwhere $(S_0, T_0) = (-2,-1)$ for the diagonal case and\n$(S_0, T_0) = (-0.5,0.5)$ for the stag hunt case.\nNote also that for $T^\\beta=-S^\\beta+C$, that is both cases,\nwe have $\\Delta S = \\Delta T$.\nWe use the coordination rate $\\alpha \\in [0,1]$ to describe the state\nof the population. When $\\alpha^\\beta = 1$ every player on the\nlayer $\\beta$ chooses the strategy A, therefore layer $\\beta$\nis in the Pareto-optimal equilibrium.\nWhen $\\alpha^\\beta = 0$ the layer also coordinates,\nbut on the strategy B. For $\\alpha^\\beta = 0.5$ both strategies\nare mixed in equal amounts in the layer $\\beta$. We say that the\nlayers are synchronised when $\\alpha^I = \\alpha^{II}$ and\nthen we use just $\\alpha$ to describe both of them.\nNote that synchronisation does not require coordination within\nthe layers and vice versa,\nalthough they usually come together in our results.\n\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_long_res_repl_k8_.pdf}\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_long_res_best_k8_.pdf}\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_res_imit_k499_.pdf}\n}\n\\centerline{\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_long_res_repl_k8_SH.pdf}\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_long_res_best_k8_SH.pdf}\n\\includegraphics[scale=0.65]{figures\/fig2_alpha_long_res_imit_k499_SH.pdf}\n}\n\\caption{\nCoordination rate $\\alpha=\\alpha^I=\\alpha^{II}$\nvs gap size $\\Delta S$ for full node overlap $q=1$ (the multiplex case).\nThe upper row (a, b, c) presents the diagonal case\nand the bottom row (d, e, f) the stag hunt.\nFor RD and BR each layer\nhas $N=1000$ nodes with an intra-layer degree $k=8$, 
for\nUI it is a complete graph with $N=500$. Each circle represents\nthe value of $\\alpha$ (for both layers) in one of 400 realisations\nand solid lines show the average values.\n}\n\\label{fig:alpha}\n\\end{figure}\n\n\\section*{Results}\n\nWe study the synchronisation between the layers, coordination, and\nequilibrium selection under varying conditions.\nFor RD and BR update rules we set the connectivity\nat $k=8$, since it was shown that the degree does not\nchange the equilibrium selection in their case\n\\cite{raducha2022coordination}. However, for UI the line\n$T=S+1$, at which risk-dominance of strategies changes,\noverlaps with the actual transition in equilibrium selection\nonly for a complete graph\\cite{raducha2022coordination}.\nHence, we analyse the case of unconditional imitation\nalways with full connectivity in order to obtain true\nsymmetry between the layers.\n\nThe two main parameters whose influence we investigate\nare the node overlap $q$ and the distance between the games\n$\\Delta S$ or $\\Delta T$. For simplicity, we start with an analysis\nof the multiplex case, i.e. full node overlap, $q=1$.\nIn Figure~\\ref{fig:alpha} we present the coordination rate\n$\\alpha$ for synchronised layers at $q=1$ (layers are always synchronised\nat full node overlap, because all nodes have to be the same on both layers by definition).\nThe RD update rule clearly favours the payoff-dominant strategy A\nat the maximal level of multiplexity. In the diagonal case the difference\nis moderate with $0.6 \\lesssim \\alpha \\lesssim 0.8$, but in\nthe stag hunt case coordination rarely happens at the strategy B\nand $ \\alpha$ is close to 1.\n\nInterestingly, for $q=1$ the UI\nupdate rule does not favour the strategy A. As we\ncan see in the figure, for small size of the gap $\\Delta S$\nthe outcome is symmetrical with both strategies selected\nhalf of the time. 
But for an increasing distance between the layers\nthe system starts to coordinate more often on the strategy B,\nto finally select exclusively the non-Pareto-optimal equilibrium\nfor the maximal gap size. It has to be noted that\nthe maximal gap size results in payoff matrices that lie on the\nborder between the coordination game area and non-coordination games,\nso this very point technically does not represent\na coordination game. Nonetheless, the decline in the payoff-dominant\nequilibrium selection is visible already before this limit value.\nThis result is especially surprising, since the UI is the only update rule\nthat on a single-layer network can lead to the Pareto-optimal equilibrium\neven though it is not risk-dominant~\cite{raducha2022coordination}. However,\nthe requirement for that was a sparse network.\n\nThe only truly symmetric update rule is the BR, which does not\nreveal any preference towards either of the strategies at\nfull node overlap. Additionally, the diagonal case is\nidentical to the stag hunt case. For gaps $\Delta S < \Delta S_{max} \/ 2$\nand $q=1$ synchronised layers reach\neither equilibrium with equal probability,\nand for $\Delta S > \Delta S_{max} \/ 2$ the system does not coordinate,\nstaying at $\alpha = 0.5$. 
At the transition value of\n$\Delta S = \Delta S_{max} \/ 2$ both states -- coordination on one of\nthe strategies and the non-coordinated fully-mixing state -- are possible\n(see Figure~\ref{fig:alpha}).\n\n\n\begin{figure}[ht]\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig3_repl_k8_gap1.6.pdf}\n\includegraphics[scale=0.65]{figures\/fig3_best_k8_gap0.8.pdf}\n\includegraphics[scale=0.65]{figures\/fig3_imit_k499_gap2.0.pdf}\n}\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig3_repl_k8_gap0.4.pdf}\n\includegraphics[scale=0.65]{figures\/fig3_best_k8_gap0.3.pdf}\n\includegraphics[scale=0.65]{figures\/fig3_imit_k499_gap0.5.pdf}\n}\n\caption{\nCoordination rates on layers $\alpha^{I}$, $\alpha^{II}$,\nand $\Delta \alpha$ vs node overlap $q$ for exemplary\nvalues of $\Delta S$. The upper row (a, b, c) presents the diagonal case\nand the bottom row (d, e, f) the stag hunt.\nFor RD and BR each layer\nhas $N=1000$ nodes with an intra-layer degree $k=8$, for\nUI it is a complete graph with $N=500$. Each circle represents\none of 500 realisations and solid lines show the average values.\nFor each realisation there is one circle for layer I (yellow) and one for layer II (purple).\nNote that when the layers synchronise, $\alpha^{I} = \alpha^{II}$,\n $\Delta \alpha = 0$, and both circles overlap, appearing as a single brownish one,\n and the solid lines for $\alpha^{I}$ and $\alpha^{II}$ merge.\n The dashed line in (a, b, d, e) shows the function fitted to $\Delta \alpha$.\n}\n\label{fig:alpha_vs_q_diag}\n\end{figure}\n\nIn addition to the results shown in Figure~\ref{fig:alpha} for $q=1$, we know that\nfor $q=0$ each layer will obtain full coordination on its preferred strategy --\nA for layer I and B for layer II \cite{raducha2022coordination}.\nThe middle ground between those two extreme values of $q$ must therefore contain\nsome kind of transition. 
We investigate it in Figure~\ref{fig:alpha_vs_q_diag},\nwhere we can see how the coordination rate $\alpha$ changes at both layers\nwith increasing $q$. The first thing to notice is that for any update\nrule and any parameter choice, at $q=0$\neach layer converges to a different limit value of $\alpha$.\nThis means that both layers indeed obtain full coordination\non their preferred strategies. Consequently,\nthe difference between layers is maximal and $\Delta \alpha = 1$.\nSuch an outcome was to be expected, because with no inter-connections\nthe two layers are effectively separate networks, as we\ndescribed. Therefore, each network selects the risk-dominant\nequilibrium. Similarly, for $q=1$ layers must fully overlap\nwith $\Delta \alpha = 0$, because each node is present on all\nlayers and the state of a shared node must be\nthe same across the layers. As visible in\nFigure~\ref{fig:alpha_vs_q_diag} this is exactly what happens.\n\nThe above considerations lead to a conclusion that there must be\na certain point $q_c \in [0,1]$ at which $\Delta \alpha$\nbecomes zero. In Figure~\ref{fig:alpha_vs_q_diag}\nwe see that the value of $q_c$ can vary for the replicator\ndynamics and best response update rules, but is close to\nzero for unconditional imitation in both cases. In fact,\n$q_c \to 0$ for any configuration of the layers when\nplayers update their strategies according to UI\n(see Supplementary Materials for plots of different cases).\nIn other words, synchronisation between the layers is the strongest\nfor the UI update rule. One still has to bear in mind that for\nUI the network has a higher degree than in the other cases.\nIt is a complete graph, whereas for RD and BR the networks are much\nsparser with $k=8$. 
Nevertheless, simulations with a higher degree for BR\nindicate that synchronisation is weakened by increasing\nconnectivity (see Supplementary Materials), which makes the\nupdate rule a natural explanation of the observed differences.\n\nAnother surprising observation is that not all the results are\nsymmetrically placed around $\alpha = 0.5$. Both layers\nhave equally strong preferences towards their natural equilibria\n-- the payoff matrix parameters $(S^I, T^I)$ and $(S^{II}, T^{II})$\nare equally distant from the transition line $T=S+1$ and\nthe average payoffs of the games on both layers are the same.\nThere is no reason, in principle, why the system as a whole\nshould choose one equilibrium over the other. Nevertheless,\nwe can see that in some cases with RD and UI synchronised layers\ncoordinate exclusively on the Pareto-optimal strategy A ($\alpha = 1$),\nwhile this does not happen for the strategy B at any point\n(except for $q=1$ with the maximal gap $\Delta S$ for UI, see Figure~\ref{fig:alpha}).\nThis symmetry breaking is especially interesting, because\nit is driven by the level of multiplexity $q$ in a non-trivial way.\nIn the examples shown in Figure~\ref{fig:alpha_vs_q_diag}, and in\ngeneral, if the Pareto-optimal equilibrium is obtained on both layers,\nit happens as soon as they synchronise, i.e. at $q_c$. When increasing\nthe node overlap further, at some point $q_p$ the synchronised state\nwith coordination on the strategy $B$ starts to appear and\nthe average value of $\alpha$ drops below 1. For $q>q_p$\nsynchronised layers can coordinate on either strategy,\nhowever $\alpha>0.5$ most of the time, meaning that the Pareto-optimal\nequilibrium is still dominant.\n\nThe fully symmetrical outcome that one would expect from the symmetrical\ndesign of the experiment is obtained solely for BR. We can\nsee in Figure~\ref{fig:alpha_vs_q_diag}\nthat there are only two types of behaviour\nthat the system displays with BR update. 
The first one, for $q<q_c$, is a lack of synchronisation, with each layer staying close to its preferred equilibrium; the second one, for $q>q_c$, is coordination of both synchronised layers on one of\nthe strategies with equal probability of choosing either of them.\n\n\begin{figure}[ht]\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig4_sc_diag_repl.pdf}\n\includegraphics[scale=0.65]{figures\/fig4_sc_diag_best.pdf}\n}\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig4_q_c_repl_k8.pdf}\n\includegraphics[scale=0.65]{figures\/fig4_q_c_best_k8.pdf}\n}\n\caption{\n(a, b) Coordination rate difference between the layers $\Delta \alpha$\nvs node overlap $q$ for increasing value of $\Delta S$ (given in the legend).\n(c, d) Critical value of $q_c$ and $q_c^{fit}$ vs gap size $\Delta S$.\nResults for the diagonal case,\n$N=1000$ nodes on each layer with an intra-layer\ndegree $k=8$ averaged over 100 realisations.\n}\n\label{fig:alpha_scaling_diag}\n\end{figure}\n\n\begin{figure}[ht]\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig5_sc_diag_repl.pdf}\n\includegraphics[scale=0.65]{figures\/fig5_sc_diag_best.pdf}\n}\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig5_q_c_repl_k8.pdf}\n\includegraphics[scale=0.65]{figures\/fig5_q_c_best_k8.pdf}\n}\n\caption{\n(a, b) Coordination rate difference between the layers $\Delta \alpha$\nvs node overlap $q$ for increasing value of $\Delta S$ (given in the legend).\n(c, d) Critical value of $q_c$ and $q_c^{fit}$ vs gap size $\Delta S$.\nResults for the stag hunt case,\n$N=1000$ nodes on each layer with an intra-layer\ndegree $k=8$ averaged over 100 realisations.\n}\n\label{fig:alpha_scaling_stag}\n\end{figure}\n\nThe behaviour observed so far leads to a logical question\nabout the change, if any, we would observe when varying\nthe distance between the layers,\ni.e. 
for different values of the gap size $\Delta S$.\nIn Figures~\ref{fig:alpha_scaling_diag} and~\ref{fig:alpha_scaling_stag} (a, b)\nwe present the dependence of $\Delta \alpha$ on the degree\nof multiplexity $q$ for values of $\Delta S$ ranging from\n0.4 to 4 in the diagonal case, and from 0.1 to 1 for the stag hunt.\nThis range essentially covers the whole width of the general coordination game\narea, as presented in Figure~\ref{fig:multilayer_diagram}.\nA bigger gap would result\nin payoff matrices outside the coordination game's scope.\nWhat we can see is that $\Delta \alpha$ drops to zero at a higher\nnode overlap when increasing the gap size. More precisely,\nfor RD it roughly follows the line $\Delta \alpha = -q +1$,\nto diverge from it at some point and eventually reach the lowest\npossible value of 0. The line is followed for much longer in the\ndiagonal case than in the stag hunt case. For BR there is virtually\nno difference between those cases and the dependence on the gap\nsize is slightly different. Values of $\Delta \alpha$ are the same\nfor gap sizes equal to 0.4 and 0.8, then again for 1.2 and 1.6, and from\n$\Delta S = 2$ onwards $\Delta \alpha = -q +1$ (these values are\nfor the diagonal case; for the stag hunt the general picture is the same\nwith values rescaled by a factor of 1\/4).\n\nWe can clearly see that $q_c$\ndepends on the gap size $\Delta S$, and this dependence\nis presented in Figures~\ref{fig:alpha_scaling_diag}\nand~\ref{fig:alpha_scaling_stag} (c, d). We use two approaches in order to\nestimate the value of $q_c$. The first one is simply taking the\nlowest value of $q$ at which $\Delta \alpha$ first equals 0. 
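A minimal sketch of this first estimator (function and array names are illustrative, not from the paper's code; the numerical tolerance is our assumption, since a simulated $\Delta \alpha$ may only approach 0 up to floating-point noise):

```python
import numpy as np

def q_c_naive(q_values, delta_alpha, tol=1e-9):
    """Return the lowest node overlap q at which the layer
    difference delta_alpha first reaches (numerically) zero."""
    q_values = np.asarray(q_values)
    delta_alpha = np.asarray(delta_alpha)
    hits = np.flatnonzero(delta_alpha <= tol)
    return q_values[hits[0]] if hits.size else None

# toy data: delta_alpha decays roughly like -q + 1 and saturates at 0
q = np.linspace(0.0, 1.0, 11)
d_alpha = np.clip(1.0 - 2.0 * q, 0.0, None)
print(q_c_naive(q, d_alpha))  # -> 0.5
```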
This approach, however, is prone to numerical\nnoise, and a tiny divergence from 0 will result in a change of the value.\nTo obtain the second one we fit a parabola with an exponential\ncutoff to the function $\Delta \alpha (q)$ (dashed line in Figure~\ref{fig:alpha_vs_q_diag}) and we take the first\nvalue of $q$ at which $\Delta \alpha < 0.01$ as $q_c^{fit}$.\nAs we can see in the plots, it does not make a real difference\nfor BR, but can give different results for RD for higher values\nof $\Delta S$. Regardless of the approach, $q_c$ changes from approximately\n0.2 up to 1 for RD in the diagonal case (for the stag hunt the values are slightly\nlower), and from 0.5 to 1 for BR with no visible difference between\nthe diagonal and the stag hunt case.\nWe similarly estimate the value of $q_p$, however without fitting a function,\nbecause the behaviour of $\alpha$ for synchronised layers\nis more complex than that of $\Delta \alpha$.\nWe take as an approximation of $q_p$ the first value of $q$\nafter synchronisation for which the coordination rate\n$\alpha$ drops below 0.95 (dashed lines in Figure~\ref{fig:phase_diagram}).\n\nIn summary, for any gap $\Delta S$ (or $\Delta T$) between the layers,\nat $q=0$ there is no synchronisation and each layer gravitates\ntowards its preferred equilibrium. Then, at $q=q_c$ layers start\nto synchronise. For RD and UI synchronised layers coordinate\non the Pareto-optimal strategy for $q_c<q<q_p$, while for $q>q_p$\nthey coordinate on either of the strategies. 
For some values\nof $\Delta S$, however, as well as for BR in general, $q_p$\noverlaps with $q_c$ and the system goes from the unsynchronised state\nstraight into coordination on either strategy, without the phase of\npure Pareto-optimal equilibrium\n(see Figure~\ref{fig:phase_diagram}).\nAdditionally, there are two\nupdate-rule-specific phenomena.\nFor UI at the maximal\ngap between the layers ($\Delta S_{max} = 4$ for the diagonal case and\n$\Delta S_{max} = 1$ for the stag hunt) and for $q=1$, synchronised layers\ncoordinate exclusively on the strategy B at this very point.\nAnd for BR, for $\Delta S > \Delta S_{max} \/ 2$ at full node overlap,\nwhen the layers get synchronised they do not reach coordination.\nInstead they both end up in a fully mixing state with\n$\alpha^I=\alpha^{II}=0.5$.\n\nWe can also see from\nFigure~\ref{fig:phase_diagram}\nthat an increase in the absolute values of the payoffs $S^\beta$ and $T^\beta$\non both layers,\ni.e. a shift from the diagonal to the stag hunt case, significantly\nenlarges the relative area of the Pareto-optimal equilibrium for RD and UI.\nIt does not, however, change the relative size of the no-synchronisation\nphase and it seems not to influence the best response dynamics at all.\nOne explanation of the enlargement of the Pareto-optimal phase,\nat least for RD, could be the fact that in the stag hunt case\nthe layers are closer to each other -- the gap $\Delta S$\n(and $\Delta T$) is 4 times smaller on average. 
Games being more\nsimilar and closer to the transition line could justify\nwhy it is easier for the layer I to shift the layer II into\nits preferred equilibrium on the strategy A.\nNevertheless, for UI in the diagonal case there is a minimal value\n$\Delta S \approx 2$ below which the Pareto-optimal phase\ndoes not exist at all, hence here the proximity of the layers cannot be the\nexplanation of synchronisation in the payoff-dominant equilibrium.\nMoreover, there is an optimal size of the gap $\Delta S$\nfor which the Pareto-optimal phase is the widest.\nFor UI it is approximately the maximal gap $\Delta S_{max}$,\nand for RD it is one of the middle values, but certainly not\nthe smallest gap. These considerations lead us to the conclusion\nthat synchronisation and equilibrium selection in\ncoordination games on multilayer networks are very complex\nphenomena where obtaining the most advantageous outcome\nrequires accurate parameter selection.\n\n\n\begin{figure}[ht]\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig6_diag_repl.pdf}\n\includegraphics[scale=0.65]{figures\/fig6_diag_best.pdf}\n\includegraphics[scale=0.65]{figures\/fig6_diag_imit.pdf}\n}\n\centerline{\n\includegraphics[scale=0.65]{figures\/fig7_stag_repl.pdf}\n\includegraphics[scale=0.65]{figures\/fig7_stag_best_k8.pdf}\n\includegraphics[scale=0.65]{figures\/fig7_stag_imit.pdf}\n}\n\caption{\nPhase diagram of the coordination rate $\alpha=\alpha^I=\alpha^{II}$\nin the $q$-$\Delta S$ space, for synchronised layers\nfor the diagonal (a, b, c) and the stag hunt (d, e, f) case.\nThe pink area represents the range of parameters where\nsynchronisation is not obtained and $\alpha^I \neq \alpha^{II}$\n(for UI it happens only at $q=0$).\nThe solid lines show the critical value $q_c^{fit}$ and\nthe dashed lines $q_p$.\nFor RD and BR each layer\nhas $N=1000$ nodes with an intra-layer degree $k=8$, for\nUI it is a complete graph with $N=500$. 
\nResults are averaged over 100 realisations.\n}\n\label{fig:phase_diagram}\n\end{figure}\n\n\n\n\section*{Discussion}\n\nWe investigated synchronisation between layers\nand equilibrium selection in the general coordination game\non a multilayer network. The game played on each layer\nis described by a different payoff matrix, but both games are\nequally distant from the risk-dominance transition line $T=S+1$.\nThe layers are connected by $Nq$ inter-links,\nwhere the parameter $q$ is the node overlap or degree of multiplexity. We studied the impact of\nthe value of $q$ and the gap $\Delta S$ between the layers\nfor three update rules: the replicator dynamics, the best response,\nand the unconditional imitation.\n\nThe most prominent outcome is the symmetry breaking\nin equilibrium selection. In neither of the cases,\ndiagonal or stag hunt, is there a difference in\nthe average payoffs of the games played on the layers. The strategies\npreferred by each layer are equally risk-dominant, i.e.\nthe distance from the transition line $T=S+1$ is the same.\nThe only difference, of course, is that the strategy A\ngives the highest possible payoff, hence it is the most\nprofitable one. A common-sense approach would lead us to\nbelieve that the payoff-dominant strategy A should be\nnaturally promoted by the population. This is however not\nthe case on single-layer networks, where the risk-dominant\nstrategy is always selected in the range of connectivities\nthat we considered \cite{raducha2022coordination}.\nIn our multilayer model, which strategy is risk-dominant\ndepends on the layer, but coordination on the strategy A prevails\nin most of the parameter space or is at least favoured on average.\nIt is therefore clear that the multilayer structure\nenhances the Pareto-optimal outcome, and it does so in\na complex manner.\n\nWe identified three main phases depending on the node overlap $q$\nand the gap size $\Delta S$. 
The first one, for lower values of $q$,\nis a no-synchronisation phase with $\alpha^I \neq \alpha^{II}$.\nEach layer obtains a certain level of\ncoordination close to its preferred equilibrium. The second phase\nbegins when $\Delta \alpha$ drops to zero, i.e. at $q_c$.\nHere, layers are synchronised and fully coordinate on the\nPareto-optimal strategy A. Finally, the third phase appears\nfor a higher node overlap $q>q_p$. In this phase layers are also\nsynchronised and they also coordinate, but not always on\nthe strategy A -- either equilibrium is possible, although\ndepending on the parameters one of them might be preferred on\naverage. In some cases $q_c = q_p$ and the second phase does not\nappear.\n\nThe Pareto-optimal phase is not a mere effect of a high node overlap\nbetween layers or a low gap size. It has a more complex shape that\ndepends on both parameters and on the update rule. For BR the\nPareto-optimal phase does not exist at all. For RD it is\nplaced, surprisingly, in the middle range of the node overlap $q$,\nbut its position and width depend also on $\Delta S$. Neither too\nlow nor too high a degree of multiplexity helps in achieving the optimal\nequilibrium, and the same is true for the gap size.\nNevertheless, the value of $q_c$ grows with increasing distance $\Delta S$.\nFor UI the Pareto-optimal phase might not even exist\nfor lower values of $\Delta S$,\nand it is usually wider for bigger gaps. If the phase exists, however,\nit appears already for any $q>0$, as the synchronisation is\nmuch faster for UI.\n\nOur work contributes to the understanding of the necessary conditions\nfor the optimal outcome in coordination games. The study of evolutionary\ncoordination games is well established and\nhas had its place in the literature for a long time.\nOur work also fits into more recent general studies of social\nsystems on multilayer networks. 
Since many socio-technical\nsystems have multiple environments where people can interact,\nthe application of layered structures in their modelling is\na natural step forward. As we showed, this approach can be\nhighly relevant in the analysis of coordination dilemmas, because\nit leads to non-trivial new effects that have not been observed\nin single-layer networks.\n\n\n\section*{Methods}\n\nWe run numerical simulations of the general coordination game\ndefined by the payoff matrix~(\ref{eqn:matrix_most_general})\non a multilayer graph.\nAgents are placed on two networks of $N$ nodes forming\ntwo layers of the multilayer network. Each layer is a random regular graph\nwith a degree $k$, generated using the \verb,K_Regular, algorithm from the\n\textit{igraph} python package \cite{csardi2006igraph,igraph}.\nThe coupling between layers can be adjusted using\ntwo parameters: node overlap $q$ and edge overlap. As we did not observe\nany influence of varying edge overlap on the results, we maintain\na perfect edge overlap, i.e. both layers have exactly\nthe same structure of connections.\nThe node overlap $q$ takes values from 0 to 1, defining\nthe fraction of nodes connected (or shared) between both layers.\nIf two nodes are shared, their state has to be the same\non both layers at all times.\nIn other words, it is the same node present on both layers.\nFor $q=0$ there is no connection between\nthe layers and their dynamics are fully separated;\nfor $q=1$ it is effectively a single-layer network\nwith each game played half of the time.\n\nThe game played on each layer\nis described by different values of the $S^\beta$ and\n$T^\beta$ parameters of the payoff matrix, given in\nequation~(\ref{eqn:layers_params}).\nWe use an asynchronous algorithm where at the beginning of each time step\na layer is randomly selected with equal probability for both layers. 
\nThen, the update is performed on the chosen layer\nas for a single-layer network and according\nto the game played on the layer.\nFirst, a random node is chosen with equal probability for all nodes\non the layer. We call it the active or focal node.\nThe active node then plays the game with all its $k$ neighbours\non the layer and receives a given payoff, which is saved.\nFinally, the strategy of the active node is updated according\nto one of the following three update rules:\n\\begin{itemize}\n \\item {the Replicator Dynamics (RD)} (aka replicator rule, or proportional imitation rule) -- the active node compares the payoff with a random neighbour on the layer and copies its strategy with probability $p=(\\mathrm{payoff~diff.})\/\\phi$, if the neighbour's payoff is bigger. Normalisation $\\phi$ is the largest possible payoff difference allowed by the payoff matrix and network structure and it sets the probability $p$ within $[0,1]$ range,\n \\item {the myopic Best Response (BR)} -- the active node chooses the best strategy given the current strategies of the neighbours on the layer, i.e. it compares all payoffs it would obtain playing each possible strategy against the current strategies of the neighbours and chooses the strategy resulting in the largest payoff,\n \\item {the Unconditional Imitation (UI)} -- the active node copies the strategy of the most successful neighbour on the layer, i.e. the one with the highest payoff, if its payoff is larger.\n\\end{itemize}\nAt the end, the state of the focal node is copied onto\nthe other layer, if the updated node is connected (shared) between the layers.\nMore precisely, the new strategy selected by the node and\nthe last payoff are copied. 
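The three update rules just described can be condensed into a short single-layer sketch (names are illustrative, not from the paper's code; the payoff convention in the comments is our assumption, since the general payoff matrix is not reproduced in this excerpt, and payoffs are recomputed instead of reusing saved ones):

```python
import random

# Assumed payoff convention: A vs A -> 1, A vs B -> S, B vs A -> T, B vs B -> 0.
def payoff(own, other, S, T):
    if own == "A":
        return 1.0 if other == "A" else S
    return T if other == "A" else 0.0

def total_payoff(node, strategy, strat, neigh, S, T):
    # Payoff `node` collects playing `strategy` against all its neighbours.
    return sum(payoff(strategy, strat[j], S, T) for j in neigh[node])

def update(i, strat, neigh, S, T, rule, phi=1.0):
    """One asynchronous update of the focal node i on one layer."""
    pi_i = total_payoff(i, strat[i], strat, neigh, S, T)
    if rule == "RD":  # copy a random neighbour with prob. = payoff diff. / phi
        j = random.choice(neigh[i])
        pi_j = total_payoff(j, strat[j], strat, neigh, S, T)
        if pi_j > pi_i and random.random() < (pi_j - pi_i) / phi:
            strat[i] = strat[j]
    elif rule == "BR":  # best reply to the neighbours' current strategies
        strat[i] = max("AB", key=lambda s: total_payoff(i, s, strat, neigh, S, T))
    elif rule == "UI":  # imitate the best-off neighbour, if it is better off
        j = max(neigh[i], key=lambda n: total_payoff(n, strat[n], strat, neigh, S, T))
        if total_payoff(j, strat[j], strat, neigh, S, T) > pi_i:
            strat[i] = strat[j]
```

Coupling to the second layer then amounts to copying the new state of a shared focal node onto the other layer after the update, as described in the text.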
The simulation\nruns until a stationary state is reached,\nor a frozen configuration is obtained on all layers.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{section:introduction}\n\nThe ``pre-train then fine-tune'' paradigm has become the new favorite in software intelligence~\\cite{niu2022deep}. Since pre-training tasks can be done with unlabeled data, a model can be pre-trained on large amounts of, easily accessible data, thus obtaining much common sense and linguistic knowledge~\\cite{de2021javabert,chirkova2021empirical,karmakar2021pre}. With this knowledge, pre-trained models can achieve better generalizability, which means that it is able to perform better than its ``no pre-training'' counterpart on a wide variety of software engineering (SE) tasks after being fine-tuned on the data of the target task~\\cite{feng2020codebert,guo2021graphcodebert,mastropaolo2021t5-learning,ahmad2021plbart,wang2021codet5,niu2022sptcode,guo2022unixcoder}. Therefore, rather than learning models from scratch, the adoption of pre-trained models as backbone for downstream tasks has become a common practice in the field of software intelligence~\\cite{guo2021graphcodebert,gotmare2021cascaded,wang2022bridging}.\n\nHowever, the fine-tuning stage requires updating the weights of a pre-trained model by training on thousands of supervised labels specific to the target task. Therefore, the ``pre-train then fine-tune'' paradigm still relies on the data from the target task. And the effectiveness of transfer learning of the pre-trained model to the target task depends heavily on the size and quality of the fine-tuning data~\\cite{niu2022sptcode}. Unfortunately, in practice, we often encounter situations where we need to apply a pre-trained model to a task with very low available data resources. 
In such cases, we are unable to fine-tune the pre-trained model on sufficient target task data to obtain a fine-tuned model that can be applied to the target task.\n\nFacing the same issue, pioneers in the field of Natural Language Processing (NLP) have proposed many ways to address it. Few-shot learning~\cite{radford2019gpt2,yu2018diverse,gao2021making} skips the fine-tuning phase and applies the pre-trained model directly to the target task, without any weight updates. To bridge the gap between the pre-trained model and the target task, few-shot learning gives the model a task description and a few demonstrations (i.e., supervised examples) of the task at inference time. If there is only one positive example, it is also called one-shot learning~\cite{vinyals2016matching}. Moreover, instead of any examples, zero-shot learning gives the model a natural language description of the task~\cite{brown2020gpt3,raffel2020t5}. In the few\/one\/zero-shot settings, a large-scale pre-trained language model is able to show strong performance on many NLP tasks and benchmarks, in some cases nearly matching the performance of state-of-the-art fine-tuned systems~\cite{radford2019gpt2,brown2020gpt3,raffel2020t5,gao2021making,ye2021crossfit}. In addition, learning from task instructions~\cite{efrat2020turking,weller2020learning,mishra2022cross} adopts task definitions and positive and negative examples, where the task definition can be seen as a specification of solving the target task.\n\nThe SE community has also made some explorations. Mastropaolo et al.~\cite{mastropaolo2021t5-learning} and Wang et al.~\cite{wang2021codet5} utilize multi-task learning~\cite{raffel2020t5,aghajanyan2021muppet} to achieve a better performance on the target task. By learning multiple related tasks simultaneously, multi-task learning aims to make models exploit both task-generic and task-specific information, thereby improving the model's performance on tasks with low available resources. 
However, multi-task learning favors tasks with significantly larger amounts of data than others, thus requiring sufficient supervised examples of the target task compared to other tasks to guarantee the availability of the model~\cite{gu2018meta,dou2019investigating}. Rather, Prenner and Robbes~\cite{prenner2021making} experiment with several other techniques that promised a possible benefit for small datasets, i.e., active learning, data augmentation, soft labels, self-training and intermediate-task fine-tuning~\cite{phang2018sentence}. They find soft labels to be more useful, while the other methods are relatively more narrowly applicable, less effective, more costly, or inconclusive. Instead of using any data from the target task, Guo et al.~\cite{guo2022unixcoder} directly apply the pre-trained model to the code-to-code retrieval task in order to evaluate the performance of code fragment embeddings. Given that many very large-scale pre-trained models of source code have been proposed (e.g., GitHub Copilot, Codex~\cite{chen2021codex} and AlphaCode~\cite{li2022alphacode}), there is also a lot of work exploring the few\/zero-shot performance of these models on specific tasks and domains, such as program repair~\cite{prenner2021automatic,kolak2022patch,pearce2022examining}, software security~\cite{asare2022github,pearce2022asleep} and program synthesis~\cite{austin2021program}.\n\nAll things considered, in the field of software intelligence, there is no systematic work to evaluate and explore the cross-task generalizability of code models. Therefore, in order to evaluate the cross-task capability of models on code-related tasks in detail and comprehensively, we build a large-scale benchmark called CrossCodeBench. We start by collecting as many and as diverse code-related tasks as possible, and end up with 216 tasks across 28 categories, 7 types and 18 programming languages. 
Then, to make our benchmark available for multiple cross-task learning settings (e.g., few-shot learning, learning from task instructions), we manually label each of the 216 tasks with extensive meta-information such as task description, definition, positive\/negative examples, etc. Next, we create 10 training\/evaluation splits corresponding to different benchmark types.\n\nGiven the data splits, we perform experiments by adopting two types of pre-trained models: (1) off-the-shelf models, which are used directly on the evaluation set, and (2) models further fine-tuned on the training set. All models are applied to all splits by using all or suitable cross-task learning methods, such as few-shot learning, learning from instructions, etc. Last but not least, we carry out a number of scaling experiments, through which we find that (1) when the data of each task reaches a certain level (e.g., 10,000 instances), it is difficult for the model to maintain a high rate of performance improvement as the number of data instances increases, and (2) larger models always lead to better performance. We hope that the benchmark, experimental results and analysis we provide will facilitate future research toward more powerful cross-task approaches in the SE literature. Furthermore, since our benchmark contains massive datasets (and is open to updates), we hope that not only CrossCodeBench itself but also such a large-scale meta-dataset (i.e., dataset of datasets~\cite{triantafillou2019meta,wang2022benchmarking}) will facilitate the construction of more benchmarks\footnote{All datasets, tasks and their summaries are available at \url{https:\/\/doi.org\/10.5281\/zenodo.7321934}. 
Source code is available at \url{https:\/\/github.com\/NougatCA\/CrossCodeBench}.}.\n\n\section{Related Work}\n\label{section:related}\n\n\subsection{Few-Shot, One-Shot, and Zero-Shot Learning}\n\label{section:related_few_shot}\n\nAlbeit defeating humans in many fields~\cite{taigman2014deepface,silver2016alphago,he2016resnet,najberg2018alibaba,berner2019dota,brown2020gpt3}, current artificial intelligence (AI) techniques still rely on learning from large-scale task-specific data, and they are unable to rapidly generalize from a few examples. Rather, humans are able to learn new tasks quickly by using what they are born with, or what they have learned in the past.\n\nFew-shot learning (FSL) is therefore proposed in order to learn from a limited number of examples with supervised information. In the cross-task setup, it is an in-context learning approach where a pre-trained language model does not need any fine-tuning and weight updating~\cite{xie2021explanation}\footnote{Since our work only discusses the cross-task scenario, i.e., where no supervised training is performed on the data of the target task, we only introduce the FSL methods that can be applied in this scenario; for more FSL methods, please refer to Wang et al.~\cite{wang2020generalizing} and Yin~\cite{yin2020meta}.}. The input can be divided into three parts, namely task description, examples, and prompt~\cite{brown2020gpt3}. The task description is a typically short natural language description of the task, e.g. ``translate English to French''. Examples consist of $k$ canonical supervised examples (usually $10$ to $100$). [...] For $r>0$ let $B(x,r)$ be the closed ball of radius $r$ centered at $x$. For every $s\geq 0$ the \emph{$s$-Hausdorff content} of $X$ is defined as\n\begin{equation*} \mathcal{H}^{s}_{\infty}(X)=\inf \left\{ \sum_{i=1}^\infty (\diam\nE_{i})^{s} : X \subset \bigcup_{i=1}^{\infty} E_{i} \right\},\n\end{equation*}\nwhere $\diam E_i$ denotes the diameter of $E_i$. 
The \\emph{Hausdorff dimension} of $X$ is\n\\begin{equation*} \\dim_{H} X = \\inf\\{s \\ge 0: \\mathcal{H}_{\\infty}^{s}(X) =0\\}.\n\\end{equation*}\nLet $N_n(X)$ be the minimal number of closed balls of radius at most $2^{-n}$ needed to cover $X$. The \\emph{lower box dimension} and \\emph{upper box dimension} of $X$ are defined as\n\\begin{equation*} \\underline{\\dim}_{B}\\, X=\\liminf_{n \\to \\infty} \\frac{\\log N_{n}(X)}{n\\log 2} \\quad \\text{and} \\quad \\overline{\\dim}_{B} \\, X=\\limsup_{n\\to \\infty} \\frac{\\log N_{n}(X)}{n\\log 2},\n\\end{equation*} \nrespectively. The \\emph{packing dimension} of $X$ is defined by\n\\begin{equation*} \\dim_P X=\\inf \\left\\{\\sup_{i} \\overline{\\dim}_{B} \\, E_i: X\\subset \\bigcup_{i=1}^{\\infty} E_i\\right\\}.\n\\end{equation*}\nFor the following lemma see \\cite[Lemma~2.8.1]{BP}.\n\\begin{lemma} \\label{l:packing}\nLet $X$ be a separable metric space. \n\\begin{enumerate}[(i)]\n\\item \\label{i:i} If $X$ is complete and $\\overline{\\dim}_B \\, U\\geq \\alpha$ for each open set $U\\subset X$, then $\\dim_P X\\geq \\alpha$.\n\\item \\label{i:ii} If $\\dim_P X>\\alpha$ then there is a closed subset $F\\subset X$ such that $\\dim_P (F\\cap U)>\\alpha$ for each open set $U$ intersecting $F$.\n\\end{enumerate}\n\\end{lemma}\n\nThe following fact follows easily from the finite stability of the dimensions in question. \n\n\\begin{fact} \\label{f:dimK}\nLet $\\dim$ be one of $\\dim_H$, $\\overline{\\dim}_B$, or $\\dim_P$. Let $K$ be a non-empty compact metric space. Then there exists $y\\in K$ such that $\\dim B(y,r)=\\dim K$ for all $r>0$. \n\\end{fact}\n\nFor the next claim see \\cite[Product formulas 7.2 and 7.5]{Fa}.\n\\begin{claim} \\label{c:product}\nLet $A\\subset \\mathbb{R}^d$ and $B\\subset \\mathbb{R}^m$ be compact sets.
Then \n\\begin{equation*} \\dim_H A+\\dim_H B\\leq \\dim_H(A\\times B)\\leq \\overline{\\dim}_B \\, (A\\times B) \\leq \\overline{\\dim}_B \\, A+\\overline{\\dim}_B \\, B.\n\\end{equation*} \n\\end{claim}\n\\noindent Consult~\\cite{Fa} or \\cite{Ma} for more on these concepts.\n\n\\bigskip\n\nFor any $x\\in 2^{\\omega}$ define a compact set $K(x)\\subset [0,1]$ as\n\\begin{equation*} \nK(x)=\\left\\{\\sum_{i=0}^{\\infty} a_i 2^{-i-1}: a_i=0 \\text{ if } x(i)=0 \\text{ and } a_i\\in \\{0,1\\} \\text{ if } x(i)=1\\right\\}.\n\\end{equation*}\nFor $n\\in \\omega$ let $x\\restriction n$ be the restriction of $x$ to its first $n$ coordinates. We endow $2^{\\omega}$ with a metric $d$ compatible with the product topology defined as \n\\begin{equation*} d(x,y)=2^{-\\min\\{i: x(i)\\neq y(i)\\}} \\quad \\text{for all } x,y\\in 2^{\\omega}.\n\\end{equation*}\nThe following fact is straightforward. \n\\begin{fact} \\label{f:cont} \nThe map $K\\colon 2^{\\omega}\\to \\mathcal{K}([0,1])$ mapping $x$ to $K(x)$ is continuous, more precisely, \n\\begin{equation*} d_H(K(x),K(y))\\leq d(x,y) \\quad \\text{for all } x,y\\in 2^{\\omega}.\n\\end{equation*}\n\\end{fact} \nDefine the left shift $T\\colon 2^{\\omega} \\to 2^{\\omega}$ such that \n\\begin{equation*} \nT(x)(n)=x(n+1) \\text{ for all } n\\in \\omega.\n\\end{equation*} \nWe define the \\emph{lower density} and \\emph{upper density of $x$} by\n\\begin{equation*} \n\\underline{\\varrho}(x)=\\liminf_{n \\to \\infty} \\frac{\\sum_{i=0}^{n-1} x(i)}{n} \\quad \\text{and} \\quad \\overline{\\varrho}(x)=\\limsup_{n \\to \\infty} \\frac{\\sum_{i=0}^{n-1} x(i)}{n},\n\\end{equation*} \nrespectively. If $\\underline{\\varrho}(x)=\\overline{\\varrho}(x)$ then the common value $\\varrho(x)$ is called the \\emph{density of $x$}. 
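The counting behind these definitions can be checked numerically. The short Python sketch below is an illustration added for this write-up (the helper names `level_sets` and `running_density` are ours, not the paper's): it enumerates the level-$n$ partial sums approximating $K(x)$ and the running densities $\frac{1}{n}\sum_{i<n}x(i)$. The number of level-$n$ sums is exactly $2^{\#\{i<n:\, x(i)=1\}}$, so $K(x)$ is covered by that many dyadic intervals of length $2^{-n}$; taking $\log_2$ of this count and dividing by $n$ gives precisely the running density, the quantity whose liminf and limsup were just defined.

```python
from fractions import Fraction

def level_sets(x_bits):
    """levels[n] is the set of level-n partial sums of K(x): all values
    sum_{i<n} a_i 2^{-(i+1)} with a_i = 0 where x(i) = 0 and a_i in {0, 1}
    where x(i) = 1.  K(x) lies in the union of [s, s + 2^{-n}], s in levels[n]."""
    sums = [Fraction(0)]
    levels = [sums]
    for i, bit in enumerate(x_bits):
        if bit:
            step = Fraction(1, 2 ** (i + 1))
            sums = [s + a * step for s in sums for a in (0, 1)]
        levels.append(sums)
    return levels

def running_density(x_bits, n):
    """(1/n) * sum_{i<n} x(i); its liminf/limsup are the lower/upper densities."""
    return Fraction(sum(x_bits[:n]), n)

# A periodic example: x = 1, 0, 1, 0, ... has density 1/2.
x = [1, 0] * 16
levels = level_sets(x)
for n in (8, 16, 32):
    ones = sum(x[:n])
    # |levels[n]| = 2^{#ones}: the covering count at scale 2^{-n},
    # so log2(|levels[n]|) / n equals the running density.
    assert len(levels[n]) == 2 ** ones
    assert running_density(x, n) == Fraction(1, 2)
```

For the all-ones sequence the level-$n$ sums are all multiples $k2^{-n}$, reflecting $K(\mathbf{1})=[0,1]$; in general $\log_2$ of the covering count divided by $n$ tracks the density whenever it exists, which is the mechanism behind the dimension formulas for $K(x)$.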
For the following claim see e.g.~\\cite[Example~3.2.3]{BP}.\n\\begin{claim} \\label{cl:1}\nFor any $x\\in 2^{\\omega}$ we have \n\\begin{equation*} \n\\dim_H K(x)=\\underline{\\dim}_B \\, K(x)=\\underline{\\varrho}(x) \\quad \\text{and} \\quad \\dim_P K(x)=\\overline{\\dim}_B \\, K(x)=\\overline{\\varrho}(x).\n\\end{equation*}\n\\end{claim}\nClaims~\\ref{c:product} and \\ref{cl:1} allow us to calculate the dimensions of the Cartesian product $K(x)^d$ as follows. \n\\begin{fact} \\label{f:dim}\nLet $\\dim$ be one of $\\dim_H$, $\\underline{\\dim}_B$, $\\overline{\\dim}_B$ or $\\dim_P$. Assume that $d\\in \\mathbb{N}^+$ and $x\\in 2^{\\omega}$ are given such that $\\varrho(x)$ exists. Then \n\\begin{equation*} \\dim K(x)^d=d\\varrho(x). \n\\end{equation*} \n\\end{fact}\n\nFor $s,t\\in 2^{<\\omega}$ let $s^\\frown t\\in 2^{<\\omega}$ denote the concatenation of $s$ and $t$.\n\n\n\n\n\n\t\n\\section{Range of dimensions of microsets} \\label{s:character}\n\nThe goal of this section is to prove Theorem~\\ref{t:characterization} after some preparation. \n\n\\subsection{A description of the microsets of $K(x)^d$} The goal of this subsection is to prove Theorem~\\ref{t:Cx}, which provides us with a good tool to work with the microsets of $K(x)^d$. For technical reasons we generalize $\\mathcal{M}_K$ as follows. \n\n\\begin{definition}\n\tFor $d\\geq 1$ and $\\mathcal{F}\\subset \\mathcal{K}(\\mathbb{R}^d)$ define $\\mathcal{M}(\\mathcal{F})$ as the set of compact sets $K\\subset [0,1]^d$ for which $K\\cap (0,1)^d\\neq \\emptyset$ and there exist $K_n\\in \\mathcal{F}$, $\\lambda_n\\geq 1$, and $u_n\\in \\mathbb{R}^d$ such that $(\\lambda_n K_n+u_n)\\cap [0,1]^d\\to K$. \n\\end{definition}\t\n\n\\begin{fact} \\label{f:subset} Let $d\\geq 1$ and assume that $C_n\\subset K_n\\subset \\mathbb{R}^d$ are compact sets such that $C_n\\to C$ and $K_n\\to K$. 
Then $C\\subset K$.\n\\end{fact}\n\n\\begin{definition}\n\tFor $x\\in 2^{\\omega}$ and $n\\in \\omega$ define the finite set\n\t\\begin{equation*} \n\tF_n(x)=\\left\\{\\sum_{i=0}^{n-1} a_i 2^{-i-1}: a_i=0 \\text{ if } x(i)=0 \\text{ and } a_i\\in \\{0,1\\} \\text{ if } x(i)=1\\right\\}\n\t\\end{equation*}\n\tand let\n\t\\begin{equation*} \n\t\\mathcal{D}_n(x)=\\{2^{-n}K(T^n(x))+u: u\\in F_n(x)\\}.\n\t\\end{equation*}\n\tClearly, $\\bigcup \\mathcal{D}_n(x)=K(x)$ and elements of $\\mathcal{D}_n(x)$ have pairwise non-overlapping convex hulls.\n\\end{definition}\n\n\\begin{lemma} \\label{l:main} \n\tLet $E\\in \\mathcal{M}(\\{K(x): x\\in 2^{\\omega}\\})$. Assume $(\\lambda_n K(x_n)+u_n)\\cap [0,1]\\to E$ for some $\\lambda_n\\geq 1$, $x_n\\in 2^{\\omega}$, and $u_n\\in \\mathbb{R}$. Then there exist $x\\in 2^{\\omega}$, $c\\in \\mathbb{R}^+$, $m_n\\in \\omega$ for all $n$, a subsequence of positive integers $k_n\\uparrow \\infty$, a similar copy $C(x)$ of $K(x)$, $w_0=0$ and $w_1,w_2,w_3\\in \\mathbb{R}$ such that \n\t\\begin{enumerate} \n\t\t\\item \\label{e:T1} $T^{m_n}(x_{k_n})\\to x$, \n\t\t\\item \\label{e:T2} $\\lambda_{k_n}2^{-m_n}\\to c$, \n\t\t\\item \\label{e:T3} $C(x)\\subset E\\subset \\bigcup_{i=0}^3 (C(x)+w_i)$.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tDefine $C^i_n=2^{-i} K(T^i(x_n))$ for all $i,n\\in \\omega$. Let $\\varepsilon\\in (0,1\/2)$ such that $E\\cap (\\varepsilon,1-\\varepsilon)\\neq \\emptyset$. 
For all $n$ let $\\varphi_n \\colon \\mathbb{R} \\to \\mathbb{R}$ be defined as $\\varphi_n(z)=\\lambda_n z+u_n$, and let $p_n\\in \\omega$ be the minimal number for which there is a $v^0_n\\in F_{p_n}(x_n)$ such that \n\t\\begin{equation} \\label{e:vn} \\varphi_n (C_n^{p_n}+v^0_n) \\subset [0,1].\n\t\\end{equation} \n\tBy the minimality of $p_n$, for all $n$ there are at most $4$ translations $v\\in F_{p_n}(x_n)$ satisfying \n\t\\[\\varphi_n(C_n^{p_n}+v)\\cap [0,1] \\neq \\emptyset;\\] \n\tassume that they are $v^0_n,\\dots,v_n^{\\ell_n}$ for some $\\ell_n\\in\\{0,1,2,3\\}$. \n\t\n\tFirst we show $\\lambda_n 2^{-p_n}\\geq \\varepsilon\/2$ for all $n$ large enough. Indeed, if $n$ is large enough then $\\varphi_n (K(x_n))\\cap (\\varepsilon,1-\\varepsilon)\\neq \\emptyset$. Thus for $k=p_n-1$ there is a $D\\in \\mathcal{D}_{k}(x_n)$ with $\\varphi_n(D)\\cap (\\varepsilon,1-\\varepsilon)\\neq \\emptyset$. Assume to the contrary that $\\lambda_n 2^{-p_n}<\\varepsilon\/2$. Then $\\diam \\varphi_n(D)\\leq \\lambda_n 2^{-k}<\\varepsilon$, so $\\varphi_n(D)\\subset [0,1]$, contradicting the minimality of $p_n$.\n\t\n\tNow let $q_n\\geq p_n$ be the minimal integer for which $\\lambda_n 2^{-q_n}\\leq 2$. We prove $x_n(i)=0$ for all $i\\in \\{p_n,\\dots,q_n-1\\}$. Assume to the contrary that this is not the case, and take the minimal $i$ such that $p_n\\leq i\\ell$. The proof is complete.\n\\end{proof}\n\n\\begin{theorem} \\label{t:Cx}\n\tLet $d\\geq 1$ and let $E\\in \\mathcal{M}(\\{K(x)^d: x\\in 2^{\\omega}\\})$. Assume that $(\\lambda_n K(x_n)^d+u_n)\\cap [0,1]^d \\to E$ for some $\\lambda_n\\geq 1$, $x_n\\in 2^{\\omega}$, and $u_n\\in \\mathbb{R}^d$.
Then there exist $x\\in 2^{\\omega}$, $m_n\\in \\omega$ for all $n$, a subsequence of positive integers $k_n\\uparrow \\infty$, a similar copy $C(x)$ of $K(x)$, and $v_1,\\dots,v_\\ell\\in \\mathbb{R}^d$ such that \n\t\\begin{enumerate}[(i)]\n\t\t\\item \\label{e:d1} $T^{m_n}(x_{k_n})\\to x$,\n\t\t\\item \\label{e:d2} $C(x)^d+v_1\\subset E\\subset \\bigcup_{i=1}^\\ell (C(x)^d+v_i)$. \n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof} Let $u_n=(u_n^1,\\dots, u_n^d)$ for all $n$. It easily follows that $E=E_1\\times\\dots \\times E_d$, where $E_i\\subset [0,1]$ are compact sets such that $E_i\\cap (0,1)\\neq \\emptyset$ and \n\t\\begin{equation*} \n\t(\\lambda_n K(x_n)+u^i_n)\\cap [0,1] \\to E_i \\text{ as } n\\to \\infty\n\t\\end{equation*}\n\tfor all $1\\leq i\\leq d$. Applying Lemma~\\ref{l:main} for all $1\\leq i\\leq d$ successively implies that there exist $z_i\\in 2^{\\omega}$, $c_i\\in \\mathbb{R}^+$, $m_{i,n}\\in \\omega$ for all $n$, a subsequence of positive integers $k_{n}\\uparrow \\infty$ (note that this does not depend on $i$), a similar copy $C(z_i)$ of $K(z_i)$, and $w_{i,j}\\in \\mathbb{R}$ for $0\\leq j\\leq 3$ such that \n\t\\begin{enumerate} \n\t\t\\item \\label{e:dd1} $T^{m_{i,n}}(x_{k_n})\\to z_i$, \n\t\t\\item \\label{e:dd2} $\\lambda_{k_n}2^{-m_{i,n}}\\to c_i$, \n\t\t\\item \\label{e:dd3} $C(z_i)\\subset E_i\\subset \\bigcup_{j=0}^3 (C(z_i)+w_{i,j})$. \n\t\\end{enumerate}\n\tBy \\eqref{e:dd2} for large enough $n$ and for all $1\\leq i\\leq j\\leq d$ we obtain that $m_{i,n}-m_{j,n}$ is independent of $n$. We may assume that $m_{1,n}-m_{i,n}=\\ell_i\\in \\mathbb{N}$ for any $1\\leq i\\leq d$ and any large enough $n$. Let $m_n=m_{1,n}$ and $x=z_1$, then clearly \\eqref{e:d1} holds. By \\eqref{e:dd1} we obtain $x=T^{\\ell_i}(z_i)$ for all $1\\leq i\\leq d$. 
Therefore, $C(z_i)$ is a union of at most $2^{\\ell_i}$ many translates of $C(x)$ for all $1\\leq i\\leq d$, so taking the product of \\eqref{e:dd3} for all $i\\in \\{1,\\dots,d\\}$ implies \\eqref{e:d2} with $\\ell= \\prod_{i=1}^d 2^{\\ell_i+2}$, which finishes the proof.\n\\end{proof}\n\n\n\\subsection{Useful lemmas for analytic sets and balanced sequences} The goal of this subsection is to prove Lemmas~\\ref{l:varphi existence} and \\ref{l:balanced}.\n\n\\begin{fact} \\label{f:Gd}\nLet $A \\subset [0,\\infty)$ be a non-empty analytic set. Then there exist a $G_\\delta$ set $G \\subset 2^\\omega$ and a continuous map $f \\colon G \\to [0,\\infty)$ such that $f(G) = A$.\n\\end{fact} \n\\begin{proof} \nLet $g \\colon 2^\\omega \\to [0,1]$ be a continuous surjection, and let $\\psi \\colon [0, 1) \\to [0, \\infty)$ be a homeomorphism. Since $g^{-1}(\\psi^{-1}(A)) \\subset 2^\\omega$ is analytic, by \\cite[Exercise~14.3]{Ke} we can find a $G_\\delta$ set $H \\subset 2^\\omega \\times 2^\\omega$ such that $\\pi_1(H) = g^{-1}(\\psi^{-1}(A))$, where $\\pi_1$ denotes the projection onto the first coordinate. Let $h\\colon 2^{\\omega } \\to 2^\\omega \\times 2^\\omega$ be a homeomorphism. As $g(g^{-1}(\\psi^{-1}(A))) = \\psi^{-1}(A)$, taking $G=h^{-1}(H)$ and $f=\\psi \\circ g \\circ \\pi_1\\circ h|_{G}$ finishes the proof. \n\\end{proof} \n\n\\begin{definition} \nFor $s\\in 2^{<\\omega}$ we denote by $\\length(s)$ the number of coordinates of $s$, where $\\length(\\emptyset)=0$. Define \n\\begin{equation*} \n[s]=\\{x\\in 2^{\\omega}: x\\restriction \\length(s)=s\\}.\n\\end{equation*} \n\\end{definition} \n\n\\begin{lemma}\n\\label{l:varphi existence}\nLet $A \\subset [0,\\infty)$ be a non-empty analytic set. 
Then there exists a map $\\varphi \\colon 2^{<\\omega} \\to [0, \\infty)$ such that\n\\begin{equation*} \n\\overline{\\varphi}(x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n)\n\\end{equation*} \nexists for each $x \\in 2^\\omega$, and the resulting function $\\overline{\\varphi}$ satisfies $\\overline{\\varphi}(2^\\omega) = A$. \n\\end{lemma}\n\\begin{proof}\nAccording to Fact~\\ref{f:Gd} we can choose a $G_\\delta$ set $G \\subset 2^\\omega$ and a continuous map $f \\colon G \\to [0, \\infty)$ such that $f(G) = A$. The set $F = 2^\\omega \\setminus G$ is $F_{\\sigma}$, so it can be written as $F = \\bigcup_{n=1}^{\\infty} F_n$ where $F_n\\subset 2^{\\omega}$ are closed and $F_n \\subset F_{n+1}$ for all $n\\geq 1$. We define $\\varphi$ on an element $s \\in 2^{<\\omega}$ by induction on the length of $s$. \nFix $a_0\\in A$ arbitrarily and define $\\varphi(\\emptyset) = a_0$. Now suppose that $\\varphi$ is already defined on $s \\in 2^{<\\omega}$, our task is to define it on $s^\\frown c$ where $c \\in \\{0, 1\\}$. \nIf $[s^\\frown c] \\cap F = \\emptyset$ then let $\\varphi(s^\\frown c)$ be an arbitrary element of $f([s^\\frown c])$. If $[s^\\frown c] \\cap G = \\emptyset$ then let $\\varphi(s^\\frown c) = \\varphi(s)$. \n\t\nIt remains to define $\\varphi(s^\\frown c)$ if $[s^\\frown c]$ intersects both $G$ and $F$. Let $m(s^\\frown c)$ and $m(s)$ be the smallest indices with $[s^\\frown c] \\cap F_{m(s^\\frown c)} \\neq \\emptyset$ and $[s] \\cap F_{m(s)} \\neq \\emptyset$, respectively. If $m(s^\\frown c) = m(s)$ then let $\\varphi(s^\\frown c) = \\varphi(s)$. Otherwise, let $\\varphi(s^\\frown c)$ be an arbitrary element of $f([s^\\frown c] \\cap G)$, concluding the definition of $\\varphi$. \n\t\nIt remains to check that $\\varphi$ satisfies the conditions of the lemma. 
First note that \n\\begin{equation} \\label{e:varphi s in A}\n\\varphi(s) \\in A \\text{ for each } s \\in 2^{<\\omega},\n\\end{equation}\na fact that can be quickly checked by induction. Let $x \\in 2^\\omega$ be fixed. It is enough to show that $\\overline{\\varphi} (x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n)$ exists, $\\overline{\\varphi}(x) \\in A$, and if $x \\in G$ then $\\overline{\\varphi}(x) = f(x)$. \n\t\nFirst assume that $[x \\restriction m] \\cap F = \\emptyset$ for some $m$. Then $\\varphi(x \\restriction n)$ is an element of $f([x \\restriction n])$ for each $n \\ge m$. Hence, using the continuity of $f$ and the fact that $x \\in G$, we obtain $\\varphi(x\\restriction n) \\to f(x) \\in A$. \n\t\nNow suppose that $[x \\restriction m] \\cap G = \\emptyset$ for some $m$. The definition of $\\varphi$ and \\eqref{e:varphi s in A} imply that there exists $a \\in A$ such that $\\varphi(x \\restriction n) = a$ for all $n\\geq m$. It follows that $\\varphi(x \\restriction n) \\to a \\in A$. Note that in this case $x \\in F$, so we do not have to check that $\\overline{\\varphi}(x) = f(x)$.\n\t\nFinally, assume that $[x \\restriction n]$ intersects both $G$ and $F$ for each $n$. Clearly $m(x \\restriction n)$ increases as $n \\to \\infty$. Suppose first that $m(x \\restriction n) \\to \\infty$. Then $x \\not \\in F_n$ for each $n$, hence $x \\in G$. Also, for infinitely many $n$, the value of $\\varphi(x \\restriction n)$ is chosen from $f([x \\restriction n])$, and when it is not, then $\\varphi(x \\restriction n) = \\varphi(x \\restriction (n - 1))$. It follows that $\\varphi(x \\restriction n) \\to f(x) \\in A$. Finally, assume that $m(x \\restriction n) $ converges, that is, there exists $m$ such that $m(x \\restriction n)= m$ for all large enough $n$. Then $[x \\restriction n] \\cap F_m \\neq \\emptyset$ for each $n$, hence $x \\in F_m \\subset F$, so we do not need to check $\\overline{\\varphi}(x) = f(x)$. 
Using \\eqref{e:varphi s in A} it also follows that there exists $a \\in A$ such that $\\varphi(x \\restriction n) = a$ if $n$ is large enough. The proof is complete.\n\\end{proof}\n\n\\begin{definition}\n We call $s\\in 2^{<\\omega}$ a \\emph{subsequence} of $x\\in 2^{\\omega}$ if \\begin{equation*} \n s = (x(k), x(k+1), \\dots, x(k+n-1))\n \\end{equation*} \nfor some $k, n\\in \\omega$. Such a relation is denoted by $s \\in x$. We call $I\\subset \\omega$ a \\emph{discrete interval} if $I=\\{k,\\dots,k+n-1\\}$ for some $k,n\\in \\omega$, and then we define $x\\restriction I=(x(k),\\dots,x(k+n-1))$. For two discrete intervals $I,J\\subset \\omega$ we write $Ik$. Then clearly $x_n \\restriction I_1 \\in s_n^0$ and $x_n \\restriction I_2 \\in s_n^0$, or $x_n\\restriction I_3 \\in s_n^1$ and $x_n\\restriction I_4 \\in s_n^1$. Using that both $s_n^0$ and $s_n^1$ are subsequences of $\\alpha$ or $\\beta$, and the fact that $x$ and $x_n$ coincide on these intervals, we obtain a contradiction.\n\\end{proof}\n\t\n\\subsection{The proof of the Main Theorem} Finally, in this subsection we are ready to prove Theorem~\\ref{t:characterization}. We need the following technical lemma, which is implicitly contained in \\cite{microsets}.\n\n\\begin{lemma}\n\\label{l:building K}\nLet $K_n \\subset [0, 1]^d$ be compact sets such that $\\dim_H E=\\overline{\\dim}_B \\, E$ for all $E\\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})$ and let $\\gamma=\\sup\\{ \\dim_H K_n: n\\geq 1\\}$.
Then there exists a compact set $K\\subset [0,1]^d$ such that \n\\begin{equation*}\n\\dim_H E=\\overline{\\dim}_B \\, E \\text{ for all } E\\in \\mathcal{M}_K \n\\end{equation*}\nand \n\\begin{equation*} \\{\\dim_H E : E \\in \\mathcal{M}_K\\} =\\left\\{\\dim_H E : E \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})\\right\\} \\cup \\{\\gamma\\}.\n\\end{equation*} \n\\end{lemma}\n\\begin{proof}\nApply the proof of \\cite[Theorem~1.3]{microsets} to the multiset $\\Omega^0=\\{Q_i\\}_{i\\geq 1}$, where $\\Omega^0$ is an enumeration of $\\{K_n\\}_{n\\geq 1}$ such that each set $K_n$ is repeated infinitely often. \n\\end{proof}\n\t\n\\begin{theorem}[Main Theorem]\n\\label{t:characterization}\nLet $\\dim$ be one of $\\dim_H$, $\\underline{\\dim}_B$, or $\\overline{\\dim}_B$.\nLet $d\\geq 1$ and let $A \\subset [0, d]$ be a non-empty set. Then the following are equivalent:\n\\begin{enumerate}\n\\item \\label{i2} There exists a compact set $K \\subset \\mathbb{R}^d$ such that $\\{\\dim E : E \\in \\mathcal{M}_K\\} = A$;\n\\item \\label{i3} $A$ is an analytic set which contains its infimum and supremum;\n\\item \\label{i1} There exists a compact set $K \\subset \\mathbb{R}^d$ such that $\\dim_H E=\\overline{\\dim}_B \\, E$ for all $E\\in \\mathcal{M}_K$ and $\\{\\dim_H E : E \\in \\mathcal{M}_K\\} = A$.\n\\end{enumerate} \n\\end{theorem}\n\n\\begin{proof} \nThe direction $\\eqref{i1} \\Rightarrow \\eqref{i2}$ is straightforward by Fact~\\ref{f:ineq}.\n\nNow we prove $\\eqref{i2} \\Rightarrow \\eqref{i3}$. Assume that $A = \\{\\dim E : E \\in \\mathcal{M}_K\\}$ for some compact set $K \\in \\mathcal{K}(\\mathbb{R}^d)$. Theorem~\\ref{t:Fu} yields that $A$ contains its infimum and supremum as well. To see that $A$ is analytic, note that the set $\\mathcal{M}_K$ is $F_\\sigma$, since the set $\\{E \\in \\mathcal{M}_K : E \\cap [\\varepsilon, 1 - \\varepsilon]^d \\neq \\emptyset\\}$ is closed for any $\\varepsilon \\in (0,1\/2)$. 
Mattila and Mauldin \\cite{MM} proved that the mapping $\\dim \\colon \\mathcal{K}(\\mathbb{R}^d)\\to [0,d]$ is Borel measurable. Therefore, we obtain that $A$ is the image of a Borel set under a Borel map, hence it is analytic by \\cite[Proposition~14.4]{Ke}. \n\t\nFinally, we show $\\eqref{i3} \\Rightarrow \\eqref{i1}$. Fix an analytic set $A \\subset [0, d]$ which contains its infimum and supremum. Define $B=\\{z\/d: z\\in A\\}$, then $B\\subset [0,1]$ is analytic, and set $a = \\min B$ and $b = \\max B$. If $a = b = 0$, then $K$ can be a singleton. Hence we may assume that $b > 0$. Applying Lemma \\ref{l:varphi existence} for the analytic set $B$ yields a map $\\varphi \\colon 2^{<\\omega} \\to [0,\\infty)$ such that $\\overline{\\varphi}(x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n)$ exists for all $x\\in 2^{\\omega}$ and $\\overline{\\varphi}(2^{\\omega})=B$. We may assume that \n\\begin{equation}\n\\label{e:q3 a <= varphi <= b}\na \\le \\varphi(s) \\le b \\text{ for each $s \\in 2^{<\\omega}$.}\n\\end{equation}\nIndeed, let us replace a value $\\varphi(s)$ by $a$ if $\\varphi(s) < a$, and replace $\\varphi(s)$ by $b$ if $\\varphi(s) > b$. As $\\overline{\\varphi}(x) \\in B \\subset [a, b]$ for each $x \\in 2^\\omega$, the values of $\\overline{\\varphi}$ do not change by modifying $\\varphi$ in this way. \n\t\nWe now construct a continuous map $\\psi \\colon 2^\\omega \\to 2^\\omega$ and then use compact sets of the form $(K(\\psi(x)))^d$ to construct $K$. Let $\\alpha$ and $\\beta$ be the sequences provided by Lemma~\\ref{l:balanced} for $a$ and $b$. To construct $\\psi$, first we specify a mapping $\\phi \\colon 2^{<\\omega} \\to 2^{<\\omega}$ such that $\\phi(s)$ is a subsequence of either $\\alpha$ or $\\beta$ for all $s\\in 2^{<\\omega}$. Let $\\phi(\\emptyset) = \\emptyset$. 
For $s \\in 2^{<\\omega}$ with $\\length(s)=n\\geq 1$ define $\\phi(s) = (\\alpha \\restriction n) ^\\frown (\\beta \\restriction k)$, where $k=k(s)$ is a positive integer such that $ \\sqrt{n}-10$ and $\\beta$ is balanced with $\\varrho(\\beta)=b>0$, we obtain that $\\psi(x)\\neq \\mathbf{0}$ for all $x\\in 2^{\\omega}$, where $\\mathbf{0}\\in 2^{\\omega}$ is the zero sequence. We now claim that $\\psi$ satisfies \n\\begin{equation} \\label{e:dvarphi}\n\\varrho(\\psi(x)) = \\overline{\\varphi}(x)\\text{ for each $x \\in 2^\\omega$.} \n\\end{equation}\nLet us fix $x \\in 2^\\omega$, and let \n\\begin{equation*} \n\\psi_n(x) = \\phi(x \\restriction 0) ^\\frown \\phi(x \\restriction 1) ^\\frown \\dots ^\\frown \\phi(x \\restriction n).\n\\end{equation*} \nAn elementary calculation using $k(s)=o(n^2)$ as $\\length(s)=n\\to \\infty$ shows that \n\\begin{equation*} \n\\frac{\\length(\\phi(x \\restriction n))}{\\length(\\psi_n(x))} \\to 0 \\text{ as } n \\to \\infty,\n\\end{equation*} \nhence it is enough to show that $\\varrho(\\psi_n(x)) \\to \\overline{\\varphi}(x)$. Therefore, it is enough to show that $\\varrho(\\phi(x \\restriction n)) \\to \\overline{\\varphi}(x)$. Since $\\alpha$ and $\\beta$ are balanced, for $k=k(x \\restriction n)$ we obtain \n\\begin{equation} \\label{e:nk}\n|\\varrho(\\alpha \\restriction n) - a| \\le \\frac{1}{n} \\quad \\text{and} \\quad \n|\\varrho(\\beta \\restriction k) - b| \\le \\frac{1}{k}.\n\\end{equation} \nThen \\eqref{e:q3 lambda close to varphi} and \\eqref{e:nk} imply that \n\\begin{align*} \n|\\varrho(\\phi(x \\restriction n)) - \\varphi(x \\restriction n) |&= \\left|\\frac{n \\varrho(\\alpha \\restriction n) +k \\varrho(\\beta \\restriction k)}{n + k} - \\varphi(x \\restriction n)\\right| \\\\\n&\\le \\frac{2}{n + k} + \\frac{2}{\\sqrt{n}},\n\\end{align*}\nwhich tends to $0$. 
This implies that $\\varrho(\\phi(x \\restriction n)) \\to \\overline{\\varphi}(x)$, so the proof of \\eqref{e:dvarphi} is complete.\n\t\nFinally, we can construct $K$. As $\\psi$ is continuous, Fact~\\ref{f:cont} implies that the map\n\\begin{equation} \\label{e:K_x}\nx \\mapsto K(\\psi(x)) \\text{ is also continuous.}\n\\end{equation}\nLet $\\{x_n\\}_{n\\geq 1}$ be a dense subset of $2^\\omega$ with $\\overline{\\varphi}(x_1) = b$. Let $K_n = K(\\psi(x_n))^d$ for all $n\\geq 1$. First we show that \n\\begin{equation} \\label{e:subs} A\\subset \\{\\dim_H E : E \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})\\}.\n\\end{equation} \nLet $z \\in A$ be arbitrary. By the definition of $\\varphi$ there exists $x \\in 2^\\omega$ such that $\\overline{\\varphi}(x) = z\/d\\in B$. Fact~\\ref{f:dim} and \\eqref{e:dvarphi} imply\n\\begin{equation*} \n\\dim_H (K(\\psi(x))^d)=d \\varrho(\\psi(x))=z.\n\\end{equation*} \nAs $\\{x_n\\}_{n\\geq 1}$ is dense in $2^{\\omega}$, we can find a sequence $k_n$ such that $x_{k_n} \\to x$. By \\eqref{e:K_x} we obtain that \n\\begin{equation*} \nK_{k_n}=K(\\psi(x_{k_n}))^d \\to K(\\psi(x))^d \\quad \\text{as} \\quad n\\to \\infty.\n\\end{equation*} \nAs $\\psi(x)\\neq \\mathbf{0}$, we obtain $K(\\psi(x))\\cap (0,1)\\neq \\emptyset$, hence $K(\\psi(x))^d\\cap (0,1)^d\\neq \\emptyset$. Thus $K(\\psi(x))^d \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})$, so $z\\in \\{\\dim_H E : E \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})\\}$ proving \\eqref{e:subs}.\n\nNow it is enough to prove that \n\\begin{equation} \\label{e:EiG} \n\\dim_H E=\\overline{\\dim}_B \\, E\\in A \\text{ for all } \nE \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1}).\n\\end{equation} \nIndeed, note that $\\sup\\{ \\dim_H K_n: n\\geq 1\\}=\\dim_H K_1$ and $K_1\\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})$. Then \\eqref{e:subs}, \\eqref{e:EiG}, and Lemma~\\ref{l:building K} will immediately imply \\eqref{i1}. \n\nFinally, we prove \\eqref{e:EiG}. 
Let $E \\in \\mathcal{M}(\\{K_n\\}_{n\\geq 1})$ and let $\\dim$ denote either $\\dim_H$ or $\\overline{\\dim}_B$; we will calculate $\\dim E$ independently of this choice. Since we are only interested in $\\dim E$, by Theorem~\\ref{t:Cx} we may suppose that $E=K(y)^d$ for some $y\\in 2^{\\omega}$ for which there exists a subsequence of positive integers $k_n\\uparrow \\infty$ and $m_n\\in \\omega$ such that $T^{m_n}(\\psi(x_{k_n}))\\to y$. We may assume by choosing a subsequence that $x_{k_n}\\to x$ for some $x \\in 2^\\omega$. \n\nFirst suppose that $\\{m_n\\}_{n\\geq 1}$ is bounded. By choosing a subsequence we may assume that $m_n=m$ for all $n$. The continuity of $\\psi$ implies $y=T^m(\\psi(x))$. As $\\varrho(T^m(\\psi(x)))=\\varrho(\\psi(x))$, using \\eqref{e:dvarphi} and $\\overline{\\varphi}(x)\\in B$ we obtain \n\\begin{equation*} \\dim E=\\dim (K(y)^d)=\\dim (K(\\psi(x))^d)=d\\varrho(\\psi(x))=d\\overline{\\varphi}(x)\\in A,\n\\end{equation*} \nand we are done. \n\nFinally, assume that $\\{m_n\\}_{n\\geq 1}$ is not bounded. We may suppose by choosing a subsequence that $m_n\\to \\infty$. Applying Lemma \\ref{l:balanced} for $T^{m_n}(\\psi(x_{k_n}))\\in 2^{\\omega}$ implies that $\\varrho(y)=a$ or $\\varrho(y)=b$, so \n\\begin{equation*}\n\\dim E=\\dim (K(y)^d)=d\\varrho(y)\\in \\{\\min A, \\max A\\}, \n\\end{equation*} \nwhich finishes the proof of \\eqref{e:EiG}. The proof of the theorem is complete.\n\\end{proof}\n\n\\section{Compact families of compact sets} \\label{s:K}\n\n\\subsection{The case of Hausdorff dimension} \\label{ss:Haus}\nThe goal of this subsection is to prove Theorem~\\ref{t:compact family} after some preparation. We define parametrized fractal percolations in axis-parallel cubes $Q\\subset \\mathbb{R}^d$ as follows.\n\n\\begin{definition} \\label{d:p}\nLet $Q\\subset \\mathbb{R}^d$ be an axis-parallel cube with side length $r$.
For all $n\\in \\mathbb{N}^+$ we denote by $\\mathcal{D}_n$ the collection of compact dyadic subcubes of $Q$ of side length $r2^{-n}$. Given $\\alpha_n\\in [0,d]$ for all $n\\geq 1$, we construct a random compact set $\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})\\subset Q$ as follows. We keep each of the $2^d$ cubes in $\\mathcal{D}_1$ independently with probability $2^{-\\alpha_1}$. Let $\\Delta_1\\subset \\mathcal{D}_1$ be the collection of kept cubes and let $S_1=\\bigcup \\Delta_1$ be their union. If $\\Delta_{n-1}\\subset \\mathcal{D}_{n-1}$ and $S_{n-1}=\\bigcup \\Delta_{n-1}$ are already defined, then we keep each cube $D\\in \\mathcal{D}_{n}$ for which $D\\subset S_{n-1}$ independently with probability $2^{-\\alpha_{n}}$. Denote by $\\Delta_{n}\\subset \\mathcal{D}_{n}$ the collection of kept cubes and by $S_{n}=\\bigcup \\Delta_{n}$ their union. Define our \\emph{percolation limit set with generation-dependent retention probabilities $2^{-\\alpha_n}$} as\n\\begin{equation*} \n\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})=\\bigcap_{n=1}^{\\infty} S_n.\n\\end{equation*}\nIf $\\alpha_n=\\alpha$ for all $n\\geq 1$ then we simply use the notation $\\Gamma(\\alpha)$ instead of $\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})$.\n\nWe say that a random set $X$ \\emph{stochastically dominates} $Y$ if they can be defined on a common probability space (coupled) such that $Y\\subset X$ almost surely. For two sets $A,B$ we write $A\\subset^{\\star} B$ if $A\\setminus B$ is countable.
\n\\end{definition}\n\nThe following theorem is due to Hawkes \\cite[Theorem~6]{H} in the context of trees; see also \\cite[Theorem~9.5]{MP}.\n\n\\begin{theorem}[Hawkes] \\label{t:H} For every $\\beta\\in [0,d]$ and every compact set $K\\subset Q$ the following properties hold:\n\\begin{enumerate}\n\\item \\label{e:H1} if $\\dim_H K<\\beta$, then almost surely, $K\\cap \\Gamma(\\beta)=\\emptyset$,\n\\item \\label{e:H2} if $\\dim_H K>\\beta$, then $K\\cap \\Gamma(\\beta)\\neq \\emptyset$ with positive probability,\n\\item \\label{e:H3} if $\\dim_H K>\\beta$, then almost surely, $\\dim_H(K\\cap \\Gamma(\\beta))\\leq \\dim_H K-\\beta$.\n\\end{enumerate} \n\\end{theorem}\n\n\\begin{lemma} \\label{l:H}\nLet $K\\subset Q$ be compact with $\\dim_H K=\\gamma$ and let $0<\\beta <\\gamma$. Then there exists a constant $c=c(Q,K,\\beta)>0$ such that the following holds. If $\\alpha_n \\in [0,\\beta]$ for all $n\\geq 1$ and $\\alpha_n\\to \\alpha $, then \n\\begin{equation} \\label{e:Kc}\n\\P(\\dim_H ( K\\cap \\Gamma(\\{ \\alpha_n \\}_{n\\geq 1}))\\geq \\beta-\\alpha)\\geq c.\n\\end{equation}\n\\end{lemma}\n\\begin{proof} Define $c=\\P(K\\cap \\Gamma(\\beta)\\neq \\emptyset)$; then $c>0$ by Theorem~\\ref{t:H}~\\eqref{e:H2}. For all $n\\geq 1$ let $\\delta_n=\\beta-\\alpha_n\\geq 0$ and consider $K\\cap \\Gamma(\\{\\alpha_n\\}_{n\\geq 1})\\cap \\Gamma(\\{\\delta_n\\}_{n\\geq 1})$, where $\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})$ and $\\Gamma(\\{\\delta_n\\}_{n\\geq 1})$ are independent.
Since $\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})\\cap \\Gamma(\\{\\delta_n\\}_{n\\geq 1})$ stochastically dominates $\\Gamma(\\beta)$, we obtain\n\t\\begin{equation} \\label{e:gamma1} \n\t\\P(K\\cap \\Gamma(\\{\\alpha_n\\}_{n\\geq 1})\\cap \\Gamma(\\{\\delta_n\\}_{n\\geq 1})\\neq \\emptyset)\\geq c.\n\t\\end{equation} \nAs $\\Gamma(\\{\\delta_n\\}_{n\\geq 1})$ is stochastically dominated by the union of finitely many affine copies of $\\Gamma(\\delta)$ for any given $\\delta<\\beta-\\alpha$, Theorem~\\ref{t:H}~\\eqref{e:H1}, the independence of $\\Gamma(\\{\\alpha_n\\}_{n\\geq 1})$ and $\\Gamma(\\{\\delta_n\\}_{n\\geq 1})$, and Fubini's theorem imply \n\t\\begin{equation*} \n\t\\P(K\\cap \\Gamma(\\{\\alpha_n\\}_{n\\geq 1})\\cap \\Gamma(\\{\\delta_n\\}_{n\\geq 1})\\neq \\emptyset \\text{ and } \\dim_H(K\\cap \\Gamma(\\{\\alpha_n\\}_{n\\geq 1}))<\\beta-\\alpha)=0.\n\t\\end{equation*}\nThis and \\eqref{e:gamma1} imply \\eqref{e:Kc}, and the proof is complete.\n\\end{proof}\n\n\n\n\\begin{theorem}\n\\label{t:compact family}\nLet $K \\subset \\mathbb{R}^d$ be a non-empty compact set and let $A \\subset [0, \\dim_H K]$. The following statements are equivalent: \n\\begin{enumerate}\n\\item \\label{ic1} There is a compact set $\\mathcal{C} \\subset \\mathcal{K}(K)$ with $\\{\\dim_H C : C \\in \\mathcal{C}\\} = A$;\n\\item \\label{ic2} $A$ is an analytic set. \n\\end{enumerate} \n\\end{theorem}\n\n\\begin{proof} \nFirst we prove $\\eqref{ic1} \\Rightarrow \\eqref{ic2}$. Since $\\dim_H \\colon \\mathcal{K}(\\mathbb{R}^d)\\to [0,d]$ is Borel measurable by \\cite[Theorem 2.1]{MM}, if $\\{\\dim_H C : C \\in \\mathcal{C}\\} = A$ then $A$ must be analytic as the image of a compact set under a Borel map, see e.g.~\\cite[Proposition~14.4]{Ke}. \t\n\t\nNow we show $\\eqref{ic2} \\Rightarrow \\eqref{ic1}$. If $A=\\emptyset$ then $\\mathcal{C}=\\emptyset$ works, so we may suppose that $A\\subset [0,\\dim_H K]$ is non-empty and analytic. Let $\\gamma=\\dim_H K$. 
We may assume that $A \\subset (0, \\gamma]$, since if an appropriate family $\\mathcal{C}$ for $A \\cap (0, \\gamma]$ is constructed, then $\\mathcal{C}\\cup\\{\\{ y \\}\\}$ works for $A$ for any $y\\in K$. By Fact~\\ref{f:dimK} we can fix $y_0 \\in K$ such that $\\dim_H (K \\cap U)= \\dim_H K $ for any open neighborhood $U$ of $y_0$. \n\t\nFix a sequence of positive numbers $\\beta_k \\uparrow \\dim_H K $, and for all $k\\geq 1$ let $Q_k$ be a cube around $y_0$ of side length $1\/k$. Fix $k\\geq 1$ arbitrarily, and let $c_k>0$ be the constant we obtain by applying Lemma~\\ref{l:H} for $K\\cap Q_k\\subset Q_k$ and $\\beta_k$. Choose $i_k\\in \\mathbb{N}^+$ large enough so that \n\\begin{equation}\n\\label{e:choice of i_k}\n(1-c_k)^{i_k} < \\frac 12.\n\\end{equation}\nWe will run $i_k$ independent, parameterized families of percolations inside $Q_k$. For all $n\\geq 0$ let $\\mathcal{D}_n^k$ denote the collection of dyadic subcubes of $Q_k$ with side length $(1\/k)2^{-n}$ and let $\\mathcal{D}^k=\\bigcup_{n=1}^{\\infty} \\mathcal{D}_n^k$. For any $D\\in \\mathcal{D}^k$ let $u^k_i(D)$ be a random variable uniformly distributed in $[0, 1]$ such that the family $\\{u^k_i(D): k \\ge 1, \\, i \\leq i_k, \\, D \\in \\mathcal{D}^k\\}$ is independent. Assume that a sequence $\\{\\alpha_n\\}_{n\\geq 1}$ is given such that $\\alpha_n\\in [0,\\gamma)$ for all $n\\geq 1$ and $\\alpha_n \\to \\alpha \\in [0,\\gamma)$. We define $\\Gamma_i^k(\\{ \\alpha_n \\}_{n\\geq 1})$ as follows.
Let $S_0=Q_k$ and $\\mathcal{D}^k_0=\\{Q_k\\}$, and for each $1\\leq i\\leq i_k$ let \n\\begin{align*} \n&\\Delta_1=\\Delta_{i,1}^k(\\{\\alpha_n\\}_{n\\geq 1})=\\{D\\in \\mathcal{D}_1^k: u_i^k(D)\\leq 2^{-\\alpha_1} \\}, \\\\\n&S_1=S_{i,1}^k(\\{\\alpha_n\\}_{n\\geq 1})=\\bigcup \\Delta_1.\n\\end{align*} \nIf $\\Delta_{m-1}$ and $S_{m-1}$ are already defined, let \n\\begin{align*} \n&\\Delta_{m}=\\Delta_{i,m}^k(\\{\\alpha_n\\}_{n\\geq 1})=\\{D\\in \\mathcal{D}_m^k: D\\subset S_{m-1} \\text{ and } u_i^k(D)\\leq 2^{-\\alpha_m} \\}, \\\\\n&S_m=S_{i,m}^k(\\{\\alpha_n\\}_{n\\geq 1})=\\bigcup \\Delta_m.\n\\end{align*} \nFinally, we define \n\\begin{equation*} \n\\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1}) = \\bigcap_{m=0}^{\\infty} S_m.\n\\end{equation*} \nSince for any $\\delta < \\alpha$ the percolation $\\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1})$ is stochastically dominated by the union of finitely many affine copies of $\\Gamma(\\delta)$, by Theorem~\\ref{t:H}~\\eqref{e:H3} for all $1\\leq i\\leq i_k$ we obtain\n\\begin{equation}\n\\label{e:percolation probability}\n\\P\\left(\\dim_H \\left(K \\cap \\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1})\\right) \n\\le \\gamma- \\alpha\\right) = 1.\n\\end{equation}\nMoreover, if $\\alpha_n \\in [0,\\beta_k]$ for all $n\\geq 1$, then Lemma~\\ref{l:H}, \\eqref{e:choice of i_k}, and the independence of the processes $\\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1})$ for $1\\leq i\\leq i_k$ imply \n\\begin{equation}\n\\label{e:percolation probability 2}\n\\P\\left(\\dim_H \\left(K \\cap \\left(\\bigcup_{i=1 }^{i_k}\\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1})\\right)\\right) \\ge \\beta_k - \\alpha \\right) \\geq \\frac 12.\n\\end{equation}\nFor each $k \\ge 1$ from each cube $D \\in \\mathcal{D}^k$ satisfying $D \\cap K \\neq \\emptyset$ we choose a point $z_D \\in D \\cap K$. 
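As a brief aside (not part of the proof, and using illustrative names only), the percolation $\\Gamma^k_i(\\{\\alpha_n\\}_{n\\geq 1})$ just defined is easy to simulate. The following Python sketch runs the analogous one-dimensional construction on dyadic subintervals of $[0,1]$, retaining each child of a surviving interval with probability $2^{-\\alpha_m}$ at level $m$:

```python
import random

def percolation_counts(alphas, seed=0):
    """Fractal percolation on dyadic subintervals of [0, 1]: at level m, each
    dyadic child of a surviving interval is kept with probability 2**(-alphas[m]).
    Returns the number of surviving intervals after each level."""
    rng = random.Random(seed)
    survivors = [0]        # dyadic indices of the surviving level-m intervals
    counts = []
    for a in alphas:
        p = 2.0 ** (-a)
        survivors = [2 * i + c for i in survivors for c in (0, 1)
                     if rng.random() <= p]
        counts.append(len(survivors))
    return counts

# With alpha_m = 0 every child survives, so the counts double at each level;
# for constant alpha_m = alpha the expected count at level m is 2**((1 - alpha) * m),
# in line with the dimension drop by alpha seen in the limit set.
```

The simulation only illustrates the branching mechanism; the proof itself works with the coupled family $u^k_i(D)$ so that all parameter sequences $\\{\\alpha_n\\}_{n\\geq 1}$ are realized on a common probability space.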
For all \n$n\\geq 0$ let \n\\begin{equation*} \n\\mathcal{C}^k_n=\\{D\\in \\mathcal{D}^k_n: D\\cap K\\neq \\emptyset\\},\n\\end{equation*}\t\nand define the countable random set\n\\begin{equation*}\nF^k_i(\\{\\alpha_n\\}_{n \\geq 1}) = \\bigcup_{n=0}^{\\infty} \\big\\{z_D : D \\in \\mathcal{C}^k_n, \\, D \\subset S_n,\\text{ and } \\nexists C\\in \\mathcal{C}^k_{n+1} \\text{ with } C\\subset S_{n+1} \\cap D \\big\\}.\n\\end{equation*}\nWe now claim that \n\\begin{equation}\\label{e:set is compact}\nF^k_i(\\{\\alpha_n\\}_{n \\geq 1}) \\cup \\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1}) \\text{ is compact for each $k \\ge 1$ and $1\\leq i \\leq i_k$}.\n\\end{equation} \nIndeed, $F^k_i(\\{\\alpha_n\\}_{n \\geq 1}) \\setminus S_m$ is finite for all $m\\geq 1$, hence $S_m^* = S_m \\cup F^k_i(\\{\\alpha_n\\}_{n \\geq 1})$ is compact. Therefore $F^k_i(\\{\\alpha_n\\}_{n \\geq 1}) \\cup \\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1}) = \\bigcap_{m=1}^{\\infty} S_m^*$ is compact as well, which completes the proof of \\eqref{e:set is compact}. \n\t\nFor all $1\\leq i\\leq i_k$ let \n\\begin{equation*} \n\\Gamma^{*}_{k,i}(\\{\\alpha_n\\}_{n \\geq 1})=F^k_i(\\{\\alpha_n\\}_{n \\geq 1}) \\cup \\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1}), \n\\end{equation*}\nand define\n\\begin{equation*} \n\\Gamma^*(\\{\\alpha_n\\}_{n \\geq 1}) = \\{y_0 \\} \\cup \\bigcup_{k=1}^{\\infty} \\bigcup_{i=1}^{i_k} \\Gamma^{*}_{k,i}(\\{\\alpha_n\\}_{n \\geq 1}).\n\\end{equation*} \nUsing \\eqref{e:set is compact} and the fact that $\\Gamma^{*}_{k,i}(\\{\\alpha_n\\}_{n \\geq 1}) \\subset Q_k$ and $Q_k\\to \\{y_0\\}$ as $k\\to \\infty$, it is clear that $\\Gamma^*(\\{\\alpha_n\\}_{n \\geq 1})$ is compact. As $\\alpha<\\gamma$ and $\\alpha_n<\\gamma$ for all $n\\geq 1$, we have $\\sup \\{\\alpha_n: n\\geq 1\\}\\leq \\beta_k$ for all large enough $k$, so we can apply \\eqref{e:percolation probability 2} for large values of $k$. 
Therefore \\eqref{e:percolation probability}, \\eqref{e:percolation probability 2}, and the independence of the processes defining each $\\Gamma^k_i(\\{\\alpha_n\\}_{n \\geq 1})$ yield\n\\begin{equation*}\n\\P\\left(\\dim_H \\left(K \\cap \\Gamma^*(\\{\\alpha_n\\}_{n \\geq 1})\\right) = \\gamma - \\alpha\\right) = 1.\n\\end{equation*}\nOur coupling of percolations clearly implies the following monotonicity: almost surely, for all sequences $\\{\\alpha_n\\}_{n \\geq 1}$ and $\\{ \\alpha'_n\\}_{n\\geq 1}$ we have\n\t\\begin{equation}\n\t\\label{e:monotonicity}\n\t\\Gamma^*(\\{\\alpha_n\\}_{n \\geq 1}) \\subset^{\\star} \\Gamma^*(\\{\\alpha'_n\\}_{n\\geq 1}) \\text{ if $\\alpha_n \\ge \\alpha'_n$ for each $n$}.\n\t\\end{equation}\nLet $Q =\\mathbb{Q} \\cap [0, \\gamma)$, and define the set \n\\begin{equation*} \nQ^* = \\{\\{\\alpha_n\\}_{n \\geq 1} : \\alpha_n\\in Q \\text{ for all $n$ and $\\alpha_n$ is eventually constant}\\}.\n\\end{equation*} \nClearly, $Q^*$ is countable; therefore, with probability $1$ we have \n\\begin{equation}\n\\label{e:dimension prob for Q}\n\\dim_H (K \\cap \\Gamma^*(\\{\\alpha_n\\}_{n \\geq 1})) = \\gamma - \\alpha \\text{ for all $\\{\\alpha_n\\}_{n \\geq 1} \\in Q^*$ with $\\alpha_n \\to \\alpha$}.\n\\end{equation}\n\t\nNow we are ready to define our family of compact sets $\\mathcal{C}$. By Lemma~\\ref{l:varphi existence} there exists a map $\\varphi \\colon 2^{<\\omega} \\to [0,\\infty)$ such that $\\overline{\\varphi}(x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n)$ exists for all $x \\in 2^\\omega$ and $\\overline{\\varphi}(2^\\omega) = A$. 
Since $A \\subset (0, \\gamma]$, we may assume that \n\\begin{equation} \\label{e:0gamma}\n0<\\varphi(s)\\leq \\gamma \\text{ for all } s\\in 2^{<\\omega}.\n\\end{equation} \nIndeed, otherwise for all $s\\in 2^n$ we can replace $\\varphi(s)$ by $\\gamma 2^{-n}$ if $\\varphi(s)\\leq 0$ and by $\\gamma$ if $\\varphi(s)>\\gamma$, and these modifications do not change the limit $\\overline{\\varphi}$. For each $x \\in 2^\\omega$ let $\\alpha(x)=\\{\\alpha(x)_n: n\\geq 1\\}$ such that\n\\begin{equation*} \n\\alpha(x)_n=\\gamma-\\varphi(x\\restriction n) \\text{ for all } n\\geq 1. \n\\end{equation*} \nLet us define $\\mathcal{C}$ as \n\\begin{equation*} \n\\mathcal{C} = \\{K \\cap \\Gamma^*(\\alpha(x)) : x \\in 2^\\omega\\}.\n\\end{equation*}\nIt is clear that $\\mathcal{C}$ is a random family of compact sets. Now we prove that, almost surely, $\\{\\dim_H C : C \\in \\mathcal{C}\\} = A$. Assume that the event in \\eqref{e:dimension prob for Q} holds; it is enough to show that $\\dim_H (K \\cap \\Gamma^*(\\alpha(x))) = \\overline{\\varphi}(x)$ for all $x \\in 2^\\omega$. Let $x \\in 2^\\omega$ be fixed. The definition of $\\varphi$ and \\eqref{e:0gamma} imply that $\\alpha(x)_n\\in [0,\\gamma)$ for all $n$, and $\\alpha(x)_n$ converges to $\\alpha_x=\\gamma-\\overline{\\varphi}(x)\\in [0,\\gamma)$. 
Hence for any $\\varepsilon > 0$, we can find sequences $\\{\\alpha'_n\\}_{n \\geq 1}, \\{\\alpha''_n\\}_{n \\geq 1} \\in Q^*$ such that \n\\begin{equation*} \n\\alpha'_n \\le \\alpha(x)_n \\le \\alpha''_n \\text{ and } \\alpha''_n-\\alpha'_n\\leq \\varepsilon \\text{ for each } n.\n\\end{equation*} \nThen $\\alpha'_n\\to \\alpha'$ and $\\alpha''_n\\to \\alpha''$ such that \n\\begin{equation} \\label{e:alpha'} \n\\alpha'\\leq \\alpha_x\\leq \\alpha'' \\text{ and } \\alpha''-\\alpha'\\leq \\varepsilon.\n\\end{equation} \nBy \\eqref{e:dimension prob for Q} we have \n\\begin{equation} \\label{e:dimg}\n\\dim_H (K \\cap \\Gamma^*(\\{\\alpha'_n\\}_{n \\geq 1})) = \\gamma - \\alpha' \\text{ and } \\dim_H(K \\cap \\Gamma^*(\\{\\alpha''_n\\}_{n \\geq 1})) = \\gamma - \\alpha''.\n\\end{equation}\nMonotonicity \\eqref{e:monotonicity} yields \n\\begin{equation} \\label{e:mon}\n\\Gamma^{*} (\\{\\alpha''_n\\}_{n\\geq 1} )\\subset^{\\star} \\Gamma^{*}(\\alpha(x)) \\subset^{\\star} \n\\Gamma^{*} (\\{\\alpha'_n\\}_{n\\geq 1}).\n\\end{equation}\nAs $\\varepsilon>0$ was arbitrary, \\eqref{e:alpha'}, \\eqref{e:dimg}, and \\eqref{e:mon} imply that \n\\begin{equation*}\n\\dim_H(K \\cap \\Gamma^*(\\alpha(x))) =\\gamma-\\alpha_x=\\overline{\\varphi}(x),\n\\end{equation*}\nwhich proves that $\\{\\dim_H C : C \\in \\mathcal{C}\\} = A$ almost surely.\n\n\nFinally, we check that $\\mathcal{C}$ is compact with probability $1$. Since $2^\\omega$ is compact and $Q_k\\to \\{y_0\\}$ as $k\\to \\infty$, it is enough to prove that for an arbitrarily given $k\\geq 1$ and $1\\leq i\\leq i_k$ the map $x \\mapsto C_x\\stackrel{\\text{def}}{=} K\\cap \\Gamma_{k,i}^*(\\alpha(x))$ is continuous. 
Assume that $x$ and $y$ are two sequences with $x \\restriction n= y \\restriction n$ for some $n\\geq 1$; it is enough to show that \n\\begin{equation} \\label{e:dH} \nd_H(C_x, C_y)\\leq 2^{-n}\\diam Q_k.\n\\end{equation} \nThe construction implies that $S^k_{i,n}(\\alpha(x))=S^k_{i,n}(\\alpha(y))\\stackrel{\\text{def}}{=} S_n$, and also $C_x \\setminus S_n=C_y\\setminus S_n$. Therefore, \n\\begin{equation} \\label{e:dH1}\nd_H(C_x, C_y)\\le d_H(C_x\\cap S_n, C_y\\cap S_n).\n\\end{equation} \nLet $D\\in \\mathcal{D}^k_n$ be arbitrary such that $D\\subset S_n$. By the construction we obtain that $C_x\\cap D\\neq \\emptyset$ iff $D\\in \\mathcal{C}^k_n$ iff $C_y\\cap D\\neq \\emptyset$, so $\\diam D=2^{-n}\\diam Q_k$ yields \n\\begin{equation} \\label{e:dH2}\nd_H(C_x\\cap S_n, C_y\\cap S_n)\\leq 2^{-n}\\diam Q_k.\n\\end{equation}\nSince \\eqref{e:dH1} and \\eqref{e:dH2} imply \\eqref{e:dH}, the proof is complete. \n\\end{proof}\n\n\\subsection{Packing and box dimensions} \\label{ss:box}\nWe prove Theorems~\\ref{t:compbox} and \\ref{t:comppack} in this subsection. For the following equivalent version of the upper box dimension and for further alternative definitions see \\cite[Chapter~3]{Fa}.\n\n\\begin{definition}\nIn a metric space $(X,\\rho)$ we say that $S\\subset X$ is a \\emph{$\\delta$-packing} if $\\rho(x,y)>\\delta$ for all distinct $x,y\\in S$. Let $P_n(X)$ be the cardinality of a maximal $2^{-n}$-packing in $X$. \n\\end{definition}\n\n\\begin{fact} \\label{f:equiv} \n\tFor any metric space $X$ we have $N_n(X)\\leq P_n(X)\\leq N_{n+1}(X)$, so \n\t\\begin{equation*}\n\t\\overline{\\dim}_B \\, X=\\limsup_{n \\to \\infty} \\frac{\\log P_n(X)}{n\\log 2}.\n\t\\end{equation*} \n\\end{fact} \n\n\n\n\\begin{theorem} \\label{t:compbox}\nLet $K$ be a non-empty compact metric space and $A \\subset [0, \\overline{\\dim}_B \\, K]$. 
The following statements are equivalent: \n\\begin{enumerate}\n\\item \\label{i01} There is a compact set $\\mathcal{C} \\subset \\mathcal{K}(K)$ with $\\{\\overline{\\dim}_B \\, C : C \\in \\mathcal{C}\\} = A$;\n\\item \\label{i02} $A$ is an analytic set. \n\\end{enumerate} \n\\end{theorem}\n\\begin{proof} \n\tThe direction $\\eqref{i01} \\Rightarrow \\eqref{i02}$ is analogous to the one in Theorem~\\ref{t:compact family}.\n\t\n\tNow we prove $\\eqref{i02} \\Rightarrow \\eqref{i01}$. We may assume that $A\\neq \\emptyset$, otherwise $\\mathcal{C}=\\emptyset$ works. Let $\\alpha=\\overline{\\dim}_B \\, K$. Choose a sequence $\\alpha_n \\uparrow \\alpha$. By Fact~\\ref{f:dimK} we can fix $y_0\\in K$ such that \n\\begin{equation*}\n\\overline{\\dim}_B \\, B(y_0,r)=\\alpha \\text{ for all } r>0.\n\\end{equation*}\nLet $g(n)=\\max\\{n+1,P_n(K)\\}$ for all $n\\in \\mathbb{N}$. We can choose a sequence $k_n\\uparrow \\infty$ with $k_0=0$ and a positive integer $j=j(n)$ such that $g(k_n)\\leq j\\leq k_{n+1}-3$ and\n\\begin{equation} \\label{e:alpha} P_{j}(B(y_0,2^{-g(k_n)}))\\geq 2^{\\alpha_n j}. \n\\end{equation}\nBy Lemma~\\ref{l:varphi existence} there is a map $\\varphi \\colon 2^{<\\omega} \\to [0, \\infty)$ such that\n\\begin{equation*} \n\\overline{\\varphi}(x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n) \n\\end{equation*} \n\texists for each $x \\in 2^\\omega$, and the resulting function $\\overline{\\varphi}$ satisfies $\\overline{\\varphi}(2^\\omega) = A$. We may assume that $\\varphi(s)\\leq \\alpha_n$ for all $s\\in 2^n$ and $n\\in \\mathbb{N}$, otherwise we can replace $\\varphi(s)$ by $\\alpha_n$ without changing $\\overline{\\varphi}$.\n\n\tLet $m(\\emptyset)=1$, $y_{\\emptyset}=y_0$, and $C(\\emptyset)=B(y_0,1)$. 
Assume that $s\\in 2^n$, a positive integer $m(s)$, and points $y_i(s)\\in K$ are given such that $y_1(s)=y_0$ and their pairwise distance is more than $2^{2-k_n}$, and \n\t\\begin{equation*} \n\tC(s)=\\bigcup_{i=1}^{m(s)} B(y_i(s), 2^{-k_n}),\n\t\\end{equation*} \n\tso the distance between distinct balls of the form $B(y_i(s), 2^{-k_n})$ is bigger than $2^{-k_n}$.\n\t\n\tLet $c\\in \\{0,1\\}$ and $t=s^\\frown c$. Let $\\ell(t)$ be the minimal positive integer such that $g(k_n) \\leq \\ell(t) \\leq k_{n+1}-3$ and \n\t\\begin{equation} \n\tP_{\\ell(t)}(B(y_0,2^{-g(k_n)}))\\geq 2^{\\varphi(t)\\ell(t)},\n\t\\end{equation} \n\tby \\eqref{e:alpha} the number $\\ell(t)$ is well-defined. Then we can choose a $2^{-\\ell(t)}$-packing $S$ of size exactly $\\lfloor 2^{\\varphi(t)\\ell(t)} \\rfloor$ in $B(y_0,2^{-g(k_n)})$, where $\\lfloor \\cdot \\rfloor$ denotes the integer part. By replacing a suitable element of $S$ with $y_0$ we can obtain a $2^{-\\ell(t)-1}$-packing $T$ in $B(y_0,2^{-g(k_n)})$ such that $y_0\\in T$ and $\\#T=\\#S$. Indeed, this is straightforward if $S\\cup\\{y_0\\}$ is a $2^{-\\ell(t)-1}$-packing. Otherwise $y_0\\in B(y,2^{-\\ell(t)-1})$ for some $y\\in S$, and replacing $y$ with $y_0$ provides a suitable $T$. Let $m(t)=m(s)+(\\#T)-1$ and let \n\t\\begin{equation*} \n\ty_i(t)=y_i(s) \\text{ if } 1\\leq i\\leq m(s) \\quad \\text{and} \\quad \\{y_i(t)\\}_{m(s)<i\\leq m(t)}=T\\setminus \\{y_0\\}.\n\t\\end{equation*} \n\tLet \n\t\\begin{equation*} C(t)=\\bigcup_{i=1}^{m(t)} B(y_i(t), 2^{-k_{n+1}}),\n\t\\end{equation*}\n\tso the distance between distinct balls $B(y_i(t), 2^{-k_{n+1}})$ is bigger than $2^{-k_{n+1}}$, and clearly $C(t)\\subset C(s)$. Thus we defined $C(s)$ for all $s\\in 2^{<\\omega}$. For $x\\in 2^{\\omega}$ let \n\t\\begin{equation*}\n\tC(x)=\\bigcap_{n=1}^{\\infty} C(x\\restriction n).\n\t\\end{equation*}\n\tThe construction clearly implies that $C(x)\\subset K$ is compact. If $x\\restriction n=y\\restriction n$, then $C(x)$ and $C(y)$ are covered by the same balls of radius $2^{-k_n}$ which they both intersect, so the map $x\\mapsto C(x)$ is continuous. Therefore, the definition \n\t\\begin{equation*} \\mathcal{C}=\\{C(x): x\\in 2^{\\omega}\\}\n\t\\end{equation*}\n\tyields a compact set $\\mathcal{C}\\subset \\mathcal{K}(K)$. In order to prove $\\{\\overline{\\dim}_B \\, C: C\\in \\mathcal{C}\\}=A$ let $x\\in 2^{\\omega}$ be arbitrarily fixed; it is enough to show that \n\t\\begin{equation} \\label{e:BC}\n\t\\overline{\\dim}_B \\, C(x)=\\overline{\\varphi}(x).\n\t\\end{equation} \n\tFirst we prove \n\t\\begin{equation} \\label{e:lower}\n\t\\overline{\\dim}_B \\, C(x)\\geq \\overline{\\varphi}(x).\n\t\\end{equation}\n\tFor all $n$ let $\\ell(n)=\\ell(x\\restriction (n+1))$. By the construction the ball $B(y_0, 2^{-g(k_n)})$ contains $\\lfloor 2^{\\varphi(x\\restriction (n+1))\\ell(n)} \\rfloor$ balls of radius $2^{-k_{n+1}}$ whose centers form a $2^{-\\ell(n)-1}$-packing and all of which intersect $C(x)$. Choosing a point of $C(x)$ in each of these balls yields \n\t\\begin{equation*} \n\tP_{\\ell(n)+2}(C(x))\\geq \\lfloor 2^{\\varphi(x\\restriction (n+1))\\ell(n)}\\rfloor.\n\t\\end{equation*} \n\tAs $\\ell(n)\\to \\infty$ and $\\varphi(x\\restriction (n+1))\\to \\overline{\\varphi}(x)$, this and Fact~\\ref{f:equiv} imply \\eqref{e:lower}. For the other direction it is enough to prove that\n\t\\begin{equation} \\label{e:upper}\n\t\\overline{\\dim}_B \\, C(x)\\leq \\overline{\\varphi}(x).\n\t\\end{equation}\n\tLet $\\gamma>\\overline{\\varphi}(x)$ be arbitrary; we use that $\\{y_i(x\\restriction n)\\}_{1\\leq i\\leq m(x\\restriction n)}$ forms a $2^{-k_n}$-packing in $K$, so $m(x\\restriction n)\\leq g(k_n)$. 
Then for all large enough $n$ and $g(k_n)\\leq \\ell \\leq k_{n+1}-3$ by Fact~\\ref{f:equiv} we have\n\t\\begin{align*}\n\tN_{\\ell}(C(x))&\\leq m(x\\restriction n)+N_{\\ell}\\left(C(x)\\cap B\\left(y_0, 2^{-g(k_n)}\\right)\\right) \\\\\n\t&\\leq g(k_n)+ P_{\\ell}\\left(C(x)\\cap B\\left(y_0, 2^{-g(k_n)}\\right)\\right) \\\\\n\t&\\leq \\ell+2^{\\varphi(x\\restriction (n+1))\\ell } \\leq 2^{\\gamma \\ell}.\n\t\\end{align*} \n\tFor all large enough $n$ and $k_n-3< \\ell < g(k_n)$ the inequality \n\t$N_{\\ell}(C(x))\\leq 2^{\\gamma \\ell}$ clearly holds. As $\\gamma>\\overline{\\varphi}(x)$ was arbitrary, we obtain \\eqref{e:upper}. Then \\eqref{e:lower} and \\eqref{e:upper} imply \\eqref{e:BC}, so the proof of the theorem is complete.\n\\end{proof} \n\n\nFor the packing dimension we prove the following theorem.\n\n\\begin{theorem} \\label{t:comppack}\n\tLet $K$ be a non-empty compact metric space and let $A \\subset [0, \\dim_P K]$ be analytic. Then there is a compact set $\\mathcal{C} \\subset \\mathcal{K}(K)$ with $\\{\\dim_P C : C \\in \\mathcal{C}\\} = A$.\n\\end{theorem}\n\n\\begin{proof}\n\tWe may assume that $A\\neq \\emptyset$; otherwise $\\mathcal{C}=\\emptyset$ works. Let $\\dim_P K=\\alpha$. First suppose that $A\\subset [0,\\beta]$ for some $\\beta<\\alpha$. By Lemma~\\ref{l:packing}~\\eqref{i:ii} we may assume by shrinking $K$ if necessary that $\\overline{\\dim}_B \\, U>\\beta$ for any non-empty open set $U\\subset K$. By Lemma~\\ref{l:varphi existence} there exists a map $\\varphi \\colon 2^{<\\omega} \\to [0, \\infty)$ such that\n\t\\begin{equation*} \\overline{\\varphi}(x) = \\lim_{n \\to \\infty} \\varphi(x \\restriction n)\n\t\\end{equation*} \n\texists for each $x \\in 2^\\omega$, and the resulting function $\\overline{\\varphi}$ satisfies $\\overline{\\varphi}(2^\\omega) = A$. We may assume that $\\varphi(s)\\leq \\beta$ for all $s\\in 2^{<\\omega}$. Let $g(n)=\\max\\{n+1,P_n(K)\\}$ for all $n\\in \\mathbb{N}$. 
\n\tBy compactness we can choose a sequence $k_n\\uparrow \\infty$ with $k_0=0$ such that for all $y\\in K$ and $n\\in \\mathbb{N}$ there exists a positive integer $j=j(n,y)$ such that $g(k_n)\\leq j\\leq k_{n+1}-2$ and\n\t\\begin{equation} \\label{e:Pj} P_{j}(B(y,2^{-g(k_n)}))\\geq 2^{\\beta j}. \n\t\\end{equation}\n\tLet $m(\\emptyset)=1$, $y_{\\emptyset}\\in K$ be arbitrary, and $C(\\emptyset)=B(y_{\\emptyset},1)$. Assume that for $s\\in 2^n$ a positive integer $m(s)$ and points $y_i(s)\\in K$ with pairwise distance more than $2^{2-k_n}$ are given, and \n\t\\begin{equation*} \n\tC(s)=\\bigcup_{i=1}^{m(s)} B(y_i(s), 2^{-k_n}),\n\t\\end{equation*} \n\tso the distance between distinct balls of the form $B(y_i(s), 2^{-k_n})$ is bigger than $2^{-k_n}$.\n\t\n\tLet $c\\in \\{0,1\\}$ and $t=s^\\frown c$. For all $1\\leq i\\leq m(s)$ let $\\ell_i=\\ell_i(t)$ be the minimal number such that $g(k_n) \\leq \\ell_i \\leq k_{n+1}-2$ and \n\t\\begin{equation} \\label{e:PL}\n\tP_{\\ell_i}(B(y_i(s),2^{-g(k_n)}))\\geq 2^{\\varphi(t)\\ell_i},\n\t\\end{equation} \n\tby \\eqref{e:Pj} the numbers $\\ell_i$ are well-defined.\n\tFor all $1\\leq i\\leq m(s)$ let us choose a $2^{-\\ell_i}$-packing $S_i$ of size exactly $\\lfloor 2^{\\varphi(t)\\ell_i} \\rfloor$ in $B(y_i(s),2^{-g(k_n)})$, where $\\lfloor \\cdot \\rfloor$ denotes the integer part.\n\tLet $S=\\bigcup_{i=1}^{m(s)} S_i$ and $m(t)=\\#S$, and let us define $y_i(t)\\in K$ such that $S=\\{y_i(t)\\}_{1\\leq i\\leq m(t)}$. Let \t\n\t\\begin{equation*} C(t)=\\bigcup_{i=1}^{m(t)} B(y_i(t), 2^{-k_{n+1}}),\n\t\\end{equation*}\n\tso the distance between distinct balls $B(y_i(t), 2^{-k_{n+1}})$ is bigger than $2^{-k_{n+1}}$, and clearly $C(t)\\subset C(s)$. Thus we defined $C(s)$ for all $s\\in 2^{<\\omega}$. For $x\\in 2^{\\omega}$ let \n\t\\begin{equation*}\n\tC(x)=\\bigcap_{n=1}^{\\infty} C(x\\restriction n).\n\t\\end{equation*}\n\tThe construction clearly implies that $C(x)\\subset K$ is compact. 
If $x\\restriction n=y\\restriction n$, then $C(x)$ and $C(y)$ are covered by the same balls of radius $2^{-k_n}$ which they both intersect, so the map $x\\mapsto C(x)$ is continuous. Therefore, the definition \n\t\\begin{equation*} \\mathcal{C}=\\{C(x): x\\in 2^{\\omega}\\}\n\t\\end{equation*}\n\tyields a compact set $\\mathcal{C}\\subset \\mathcal{K}(K)$. In order to prove $\\{\\dim_P C: C\\in \\mathcal{C}\\}=A$ let $x\\in 2^{\\omega}$ be arbitrarily fixed; it is enough to show that \n\t\\begin{equation} \\label{e:PC}\n\t\\dim_P C(x)=\\overline{\\varphi}(x).\n\t\\end{equation} \n\tFirst we prove \n\t\\begin{equation} \\label{e:P>}\n\t\\dim_P C(x)\\geq \\overline{\\varphi}(x).\n\t\\end{equation}\n\tLet $U\\subset K$ be an arbitrary open set intersecting $C(x)$; by Lemma~\\ref{l:packing}~\\eqref{i:i} it is enough to show that $\\overline{\\dim}_B \\, (C(x)\\cap U)\\geq \\overline{\\varphi}(x)$. For all large enough $n$ we can fix $1\\leq i\\leq m(x\\restriction n)$ such that $B=B(y_i(x\\restriction n),2^{-k_n})\\subset U$. Let $\\ell(n)=\\ell_i(x\\restriction (n+1))$; by the construction and \\eqref{e:PL} the ball $B$ contains $\\lfloor 2^{\\varphi(x\\restriction (n+1))\\ell(n)} \\rfloor$ balls with pairwise distance greater than $2^{-\\ell(n)}$ such that all of them intersect $C(x)\\cap U$. Thus \n\t\\begin{equation*} P_{\\ell(n)}(C(x)\\cap U)\\geq \\lfloor 2^{\\varphi(x\\restriction (n+1))\\ell(n)}\\rfloor,\n\t\\end{equation*} \n\timplying \n\t\\begin{equation*}\n\t\\overline{\\dim}_B \\, (C(x)\\cap U)\\geq \\overline{\\varphi}(x).\n\t\\end{equation*}\n\tHence \\eqref{e:P>} holds. For the other direction it is enough to prove that\n\t\\begin{equation} \\label{e:PP}\n\t\\overline{\\dim}_B \\, C(x)\\leq \\overline{\\varphi}(x).\n\t\\end{equation}\n\tLet $\\gamma>\\overline{\\varphi}(x)$ be arbitrary; we use that $\\{y_i(x\\restriction n)\\}_{1\\leq i\\leq m(x\\restriction n)}$ forms a $2^{-k_n}$-packing in $K$, so $m(x\\restriction n)\\leq g(k_n)$. 
Then for all large enough $n$ and $g(k_n)\\leq \\ell \\leq k_{n+1}-2$ by Fact~\\ref{f:equiv} we have\n\t\\begin{align*}\n\tN_{\\ell}(C(x))&\\leq \\sum_{i=1}^{m(x\\restriction n)} N_{\\ell}\\left(C(x)\\cap B\\left(y_i(x\\restriction n), 2^{-g(k_n)}\\right)\\right) \\\\\n\t&\\leq \\sum_{i=1}^{m(x\\restriction n)} P_{\\ell}\\left(C(x)\\cap B\\left(y_i(x\\restriction n), 2^{-g(k_n)}\\right)\\right) \\\\\n\t&\\leq g(k_n) 2^{\\varphi(x\\restriction (n+1))\\ell } \\leq 2^{\\gamma \\ell}.\n\t\\end{align*} \n\tFor all large enough $n$ and $k_n-2< \\ell < g(k_n)$ the inequality \n\t$N_{\\ell}(C(x))\\leq 2^{\\gamma \\ell}$ clearly holds. As $\\gamma>\\overline{\\varphi}(x)$ was arbitrary, we obtain \\eqref{e:PP}. Inequalities \\eqref{e:P>} and \\eqref{e:PP} imply \\eqref{e:PC}, and the proof is complete if $A\\subset [0,\\beta]$ for some $\\beta<\\alpha$. \n\t\nFinally, we prove the theorem in the case of a general $A\\subset [0,\\alpha]$. If $A=\\{\\alpha\\}$ then $\\mathcal{C}=\\{K\\}$ works, so we may assume that there exists $\\beta_0\\in A\\cap [0,\\alpha)$. Take a sequence $\\beta_n\\uparrow \\alpha$, and define $A_n=A\\cap [0,\\beta_n]$ for all $n\\geq 0$. By Fact~\\ref{f:dimK} we can choose $y_0\\in K$ such that $\\dim_P B(y_0,r)=\\alpha$ for all $r>0$. By the first part we can choose a compact set $K_0\\subset K$ with $\\dim_P K_0=\\beta_0$ and we may assume that $y_0\\in K_0$. 
By the first part we can also choose compact sets $\\mathcal{C}_n\\subset \\mathcal{K}(B(y_0,1\/n))$ such that \n\t\\begin{equation*}\n\t\\{\\dim_P C: C\\in \\mathcal{C}_n\\}=A_n \\text{ for all } n\\geq 0.\n\t\\end{equation*} \n\tDefine\n\t\\begin{equation*}\n\t\\mathcal{C}=\\mathcal{C}_0\\cup \\{K\\} \\cup \\{K_0\\cup C: C\\in \\mathcal{C}_n,~n\\geq 1\\}.\n\t\\end{equation*}\t\n\tIt is easy to see that $\\mathcal{C}\\subset \\mathcal{K}(K)$ is compact and satisfies $\\{ \\dim_P C: C\\in \\mathcal{C}\\}=A$, and the proof of the theorem is complete.\t\n\\end{proof}\n\n\n\\section{Open problems} \\label{s:open}\n\nOur first problem is the most ambitious one. For the sake of simplicity we only formalize it in case of the Hausdorff dimension.\n\n\\begin{problem} \\label{p:K} \nLet $K\\subset \\mathbb{R}^d$ be compact. Characterize the sets $A\\subset [0,\\dim_H K]$ for which there exists a compact set $C\\subset K$ such that $\\{\\dim_H E: E\\in \\mathcal{M}_C\\}=A$.\n\\end{problem}\n\n\nJ\\\"arvenp\\\"a\\\"a, J\\\"arvenp\\\"a\\\"a, Koivusalo, Li, Suomala, and Xiao \\cite[Lemma~2.3]{JJKLSX} proved an analogue of Hawkes' theorem in complete metric spaces satisfying a mild doubling condition. Therefore, the proof of Theorem~\\ref{t:compact family} possibly works in compact metric spaces with a suitable doubling condition as well. As the case of arbitrary compact metric spaces can be out of reach with this method, we ask the following.\n\n\\begin{problem}\nLet $K$ be a non-empty compact metric space and let $A \\subset [0, \\dim_H K]$ be an analytic set. 
Is there a compact set $\\mathcal{C} \\subset \\mathcal{K}(K)$ with $\\{\\dim_H C : C \\in \\mathcal{C}\\} = A$?\n\\end{problem}\n\nAs Mattila and Mauldin \\cite[Theorem~7.5]{MM} proved that the packing dimension $\\dim_P \\colon \\mathcal{K}(\\mathbb{R}^d)\\to [0,d]$ is not Borel measurable, we do not know whether analogous versions of Theorems~\\ref{t:characterization} and~\\ref{t:compact family} hold for the packing dimension. We state the measurability problems as follows. \n\n\\begin{problem} \\label{p:meas} If $K\\subset \\mathbb{R}^d$ is a compact set, is the set $\\{\\dim_P E: E\\in \\mathcal{M}_K\\}$ analytic? If $\\mathcal{C} \\subset \\mathcal{K}(\\mathbb{R}^d)$ is compact, is the set $\\{\\dim_P E: E\\in \\mathcal{C}\\}$ analytic?\n\\end{problem}\n\nIf the answer to the above problem is negative, we can ask for a characterization. \n\\begin{problem} \\label{p:1} Let $d\\geq 1$. Characterize the sets $A \\subset [0, d]$ for which there exists a compact set $K \\subset \\mathbb{R}^d$ with $\\{\\dim_P E : E \\in \\mathcal{M}_K\\} = A$.\n\\end{problem}\n\n\\begin{problem} \\label{p:2} Let $K\\subset \\mathbb{R}^d$ be compact. Characterize the sets $A \\subset [0, \\dim_P K]$ for which there exists a compact set $\\mathcal{C}\\subset \\mathcal{K}(K)$ with $\\{\\dim_P C: C\\in \\mathcal{C}\\}=A$.\n\\end{problem}\n\n\\subsection*{Acknowledgments}\nWe are indebted to Jonathan M.~Fraser for some illuminating conversations and for providing us with the reference \\cite{FWW}. We thank Ville Suomala for drawing our attention to the paper \\cite{JJKLSX}. 
We also thank Ignacio Garc\\'ia for pointing out that a result stated in an earlier version of the manuscript was already known.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nAccording to the AdS\/CFT correspondence \\cite{Maldacena:1997re}, $\\mathcal{N}=4$ super Yang-Mills (SYM) theory on $\\mathbb{R} \\times S^3$ is dual to type IIB string theory on $\\mbox{AdS}_5\\times S^5$. This duality should in particular relate the phase transitions, critical behavior and thermal physics of the theories. \n\nOne interesting example of a critical behavior is the Hagedorn temperature. In the planar limit of $\\mathcal{N}=4$ SYM theory on $\\mathbb{R} \\times S^3$, the origin of the Hagedorn temperature $T_{\\text{H}}$ is the confinement of the color degrees of freedom due to the theory being on a three-sphere. This enables the theory to have a phase transition that bears resemblance to the confinement\/deconfinement phase transition in QCD or pure Yang-Mills theory \\cite{Atick:1988si,Aharony:2003sx}.\n\nThe Hagedorn temperature is the lowest temperature for which the planar partition function $\\mathcal{Z}(T)$\ndiverges. 
Via the state\/operator correspondence, the partition function can be re-expressed in terms of the dilatation operator $D$ of $\\mathcal{N}=4$ SYM theory on $\\mathbb{R}^4$:\n\\begin{equation}\n \\mathcal{Z}(T)=\\tr_{\\mathbb{R}\\times S^3}[\\operatorname{e}^{-H\/T}]=\\tr_{\\mathbb{R}^4}[\\operatorname{e}^{-D\/T}]\\, , \n\\end{equation}\nwhere we have set the radius of $S^3$ to $1$.\nStates correspond to gauge-invariant operators consisting of one or more trace factors.\nThe energies correspond to the scaling dimensions of the operators, as measured by the dilatation operator.\nIn the planar limit, the scaling dimensions of multi-trace operators are entirely determined by those of their single-trace factors, and the latter can be enumerated via P\\'{o}lya theory to determine the partition function and thus the Hagedorn temperature in the free theory \\cite{Sundborg:1999ue}.\nThis procedure was later generalized to one-loop order and to the case of non-zero chemical potentials \\cite{Spradlin:2004pp,Yamada:2006rx,Harmark:2006di,Suzuki:2017ipd,GomezReino:2005bq}.\n\nOn the string-theory side, the Hagedorn temperature occurs due to the exponential growth of string states with the energy present in tree-level string theory. For interacting string theory, it is connected to the Hawking-Page phase transition \\cite{Witten:1998zw}. This suggests that the confinement\/deconfinement transition on the gauge-theory side is mapped on the string-theory side to a transition from a gas of gravitons (closed strings) for low temperatures to a black hole for high temperatures.\nIn particular, also the Hagedorn temperature on the gauge-theory and string-theory sides of the AdS\/CFT correspondence should be connected \n \\cite{Sundborg:1999ue,Aharony:2003sx}. \n \nOn the string-theory side, the Hagedorn temperature has been computed in pp-wave limits \\cite{PandoZayas:2002hh,Greene:2002cd,Brower:2002zx,Grignani:2003cs}. 
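As a concrete illustration of the free-theory computation mentioned above (an aside, not part of this letter's derivation): at $\\lambda=0$ the Hagedorn temperature solves $z(x_{\\text{H}})=1$ with $x=\\operatorname{e}^{-1\/T}$, where $z(x)=2x(3-\\sqrt{x})\/(1-\\sqrt{x})^3$ is the single-letter partition function of free $\\mathcal{N}=4$ SYM theory quoted from the literature \\cite{Sundborg:1999ue,Aharony:2003sx}. A short numerical check in Python:

```python
import math

def z_free(x):
    # Single-letter partition function of free N=4 SYM on S^3 (quoted from the
    # literature): the 6x term counts the scalars, the 16x^{3/2} term the fermions.
    s = math.sqrt(x)
    return 2 * x * (3 - s) / (1 - s) ** 3

# Hagedorn value x_H solves z(x_H) = 1, with x = exp(-1/T); bisection suffices
# since z is increasing on the interval considered.
lo, hi = 1e-9, 0.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if z_free(mid) < 1.0:
        lo = mid
    else:
        hi = mid
x_H = 0.5 * (lo + hi)
T_H = -1.0 / math.log(x_H)
```

The bisection reproduces $x_{\\text{H}}=7-4\\sqrt{3}$ and hence $T_{\\text{H}}=1\/(2\\log(2+\\sqrt{3}))\\approx 0.3797$, the tree-level value quoted below.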
In \\cite{Harmark:2006ta}, the first quantitative interpolation of the Hagedorn temperature from the gauge-theory side to the string-theory side was made, exploiting a limit towards a critical point in the grand canonical ensemble \\cite{Harmark:2014mpa}.\nThis limit effectively reduces the gauge-theory side to the $\\mathfrak{su}(2)$ sector with only the one-loop dilatation operator surviving, which enables one to match the Hagedorn temperature of the gauge-theory side to that of string theory on a pp-wave background via the continuum limit of the free energy of the Heisenberg spin chain. \n\nA hitherto unrelated but very powerful property of planar $\\mathcal{N}=4$ SYM theory is integrability, see \\cite{Beisert:2010jr,Bombardelli:2016rwb} for reviews. It amounts to the existence of an underlying two-dimensional exactly solvable model, which reduces to an integrable sigma model at strong coupling and to an integrable spin chain at weak coupling. \nVia integrability, the planar scaling dimensions of all single-trace operators can in principle be calculated at any value of the 't Hooft coupling $\\lambda=g_{\\mathrm{\\scriptscriptstyle YM}}^2N$, allowing for a smooth interpolation between weak and strong coupling results.\nIn practice, however, the calculation for each operator is so involved that summing the results for all operators to obtain the partition function $\\mathcal{Z}(T)$ seems prohibitive.\n\nIn this letter, we show how to use integrability to compute the Hagedorn temperature at any value of the 't Hooft coupling.\nIn the spectral problem, the integrable model is solved \non a cylinder of finite circumference $L$, which accounts for wrapping contributions to the scaling dimension due to the finite length of the spin chain.\nIn order to calculate the partition function $\\mathcal{Z}(T)$, we would need to solve this model on the torus with circumferences $L$ and $1\/T$, an endeavor that has not been successful yet even for the Heisenberg spin 
chain.\nThe Hagedorn singularity, however, is driven by the contributions of spin chains with very high $L$, or rather very high classical scaling dimension, \nwhere the finite-size corrections play no role \\footnote{The fact that finite-size corrections are suppressed in $L$ is manifest. The suppression in the classical scaling dimension can for example be seen for so-called twist operators. Up to corrections in the classical scaling dimension, these operators are dual to Wilson loops, for which no finite-size effects occur \\cite{Korchemsky:1988si,Belitsky:2003ys}.}. \nThus, we can calculate it by solving the integrable model on a cylinder of circumference $1\/T$, a situation that is related to the one in the spectral problem via a double Wick rotation.\nIndeed, we find a direct relation between the continuum limit of the free energy of the spin chain associated with planar $\\mathcal{N}=4$ SYM theory and the Hagedorn temperature.\nUsing the integrability of the model, we derive thermodynamic Bethe ansatz (TBA) equations which determine the Hagedorn temperature at any value of the 't Hooft coupling. We present them in the form of a Y-system in \\eqref{eq: general Y system}--\\eqref{eq: theta n}.\nAs a first application, we solve them in the constant case as well as perturbatively at weak coupling, confirming the known tree-level and one-loop Hagedorn temperature. Moreover, we determine the previously unknown two-loop Hagedorn temperature:\n\\begin{equation}\n\\begin{aligned}\n T_{\\text{H}}&=\\frac{1}{2\\log(2+\\sqrt{3})}+\\frac{1}{\\log(2+\\sqrt{3})}g^2 \\\\\n &\\phaneq+\n \\left( -\\frac{86}{ \\sqrt{3}} + \\frac{24 \\log(12)}{\\log(2+\\sqrt{3})}\\right)\n g^4+\\mathcal{O}(g^6)\\,,\n \\end{aligned}\n\\end{equation}\nwhere $g^2=\\frac{\\lambda}{16\\pi^2}$.\n\n\\section{TBA equations for the Hagedorn temperature}\n\nIn the following, we relate the Hagedorn temperature to the spin-chain free energy and derive TBA equations for the latter. 
\n\n\\paragraph{The Hagedorn temperature from the free energy of the spin chain}\n\nIn the planar limit, the scaling dimensions of multi-trace operators are completely determined by the scaling dimensions of their single-trace factors. The partition function $\\mathcal{Z}(T)$ is then entirely determined by the single-trace partition function $Z(T)$.\nSplitting the dilatation operator into a classical and an anomalous part as $D=D_0+\\delta D$, \nwe can write \n\\begin{equation}\n Z(T)=\\sum_{m=2}^\\infty\\operatorname{e}^{-\\frac{m}{2}\\frac{1}{T}(1 + F_m(T))}\\, , \n\\end{equation}\nwhere \n\\begin{equation}\n\\label{eq: spin_chain_Z}\n F_m(T)=-T\\frac{2}{m}\\log \\left( \\tr_{\\text{spin-chain},D_0=\\frac{m}{2}}[\\operatorname{e}^{-\\delta D\/T}] \\right) \n \\end{equation}\nis the spin-chain free energy per unit classical scaling dimension for fixed $D_0=\\frac{m}{2}$. \nThe multi-trace partition function $\\mathcal{Z}(T)$ is then given by \n\\begin{equation}\n\\label{eq: relation between partition function and free energy}\n \\mathcal{Z}(T)=\\exp\\sum_{n=1}^\\infty\\frac{1}{n}\\sum_{m=2}^\\infty(-1)^{m(n+1)}\\operatorname{e}^{-\\frac{m}{2T} (n + F_m(T\/n))}\\, , \n\\end{equation}\nwhere the alternating sign takes care of the correct statistics.\nThe Hagedorn singularity is the first singularity of $\\mathcal{Z}(T)$ encountered raising the temperature from zero.\nIt arises from the $n=1$ contribution to the sum over $n$, i.e.\\ from the infinite series\n\\begin{equation}\n\\label{n1contr}\n\\sum_{m=2}^\\infty \\operatorname{e}^{-\\frac{m}{2T} (1 + F_m(T))}\\, , \n\\end{equation}\nwhere each term in the series is finite as $F_m(T)$ only includes a finite number of states.\nWe can use Cauchy's root test to assess when this series diverges. 
To this end, we compute the $m$th root of the absolute value of the $m$th term and take the large $m$ limit, giving\n\begin{equation}\nr = \lim_{m\rightarrow \infty} \operatorname{e}^{-\frac{1}{2T}(1+F_m(T))} = \operatorname{e}^{-\frac{1}{2T} ( 1+ F(T))} \, , \n\end{equation}\nwhere\n\begin{equation}\n F(T)= \lim_{m\rightarrow \infty} F_m(T)\n\end{equation}\nis the thermodynamic limit of the free energy.\nThe root test states that the series is convergent for $r<1$ and divergent for $r>1$. Thus, the Hagedorn temperature is determined from $r=1$ or, equivalently, from\n\begin{equation}\n\label{eq: hagedorn temperature via F}\nF(T_{\text{H}})=-1\, . \n\end{equation}\n\n\n\n\n\paragraph{TBA equations}\n\nThe free energy $F$ of the spin chain can be calculated via the thermodynamic Bethe ansatz (TBA). The TBA equations for the Hagedorn temperature of $\mathcal{N}=4$ SYM theory can be derived in analogy to the case of the spectral problem \cite{Arutyunov:2009zu,Bombardelli:2009ns,Gromov:2009bc,Arutyunov:2009ur,Gromov:2009tv,Cavaglia:2010nm}.\nThe starting point is given by the all-loop asymptotic Bethe equations \cite{Beisert:2004hm,Beisert:2006ez} for the \n$\mathfrak{psu}(2,2|4)$ spin chain found in the spectral problem,\nwhich are written in terms of the length $L$ of the spin chain as well as the seven excitation numbers corresponding to the roots in the Dynkin diagram of $\mathfrak{psu}(2,2|4)$. We then rewrite the Bethe equation for the middle, momentum-carrying root in terms of $D_0$ instead of $L$, since it is $D_0$ that we keep fixed when calculating the free energy \eqref{eq: spin_chain_Z}. We proceed by employing the string hypothesis, which enables us to write the Bethe equations for many magnons. The next step is the continuum limit $D_0 \rightarrow \infty$, in which we can write the TBA equations in terms of the Y-functions defined from the densities of the strings. 
In particular, it allows us to write down the free energy. \nThe main difference compared to the TBA equations of the spectral problem is that we do not make a double Wick rotation i.e.\\ we consider the so-called direct theory and not the mirror theory. This means we use the Zhukovsky variable $x(u)$ with a short cut: \n\\begin{equation}\nx(u)=\\frac{u}{2}\\left(1+\\sqrt{1-\\frac{4g^2}{u^2}}\\right)\\, . \n\\end{equation}\nNote that TBA equations for the direct theory were also considered in \\cite{Cavaglia:2010nm,Arutynov:2014ota} but in different thermodynamic limits.\n\n\n\\paragraph{Y-system}\nThe TBA equations can be rephrased in terms of a Y-system consisting of the functions $\\mathcal{Y}_{a,s}$, where $(a,s)\\in M=\\{(a,s)\\in\\mathbb{N}_{\\geq0}\\times\\mathbb{N} \\,|\\,a=1 \\vee |s|\\leq 2 \\vee \\pm s=a=2\\}$.\nWith some exceptions, they satisfy the equations\n\\begin{equation}\n\\label{eq: general Y system}\n \\log\\mathcal{Y}_{a,s}=\\log\\frac{(1+\\mathcal{Y}_{a,s-1})(1+\\mathcal{Y}_{a,s+1})}{(1+\\mathcal{Y}_{a-1,s}^{-1})(1+\\mathcal{Y}_{a+1,s}^{-1})}\\star s\\, , \n\\end{equation}\nwhere $\\star$ denotes the convolution with $s(u)=(2 \\cosh \\pi u)^{-1}$ on $\\mathbb{R}$ and the (inverse) Y-functions with shifted indices are assumed to be zero when the shifted indices are not in $M$.\nThe Y-functions are analytic in the strip with $|\\Im(u)|<\\frac{1}{2}|a-|s||$.\nFor the purpose of this letter, the chemical potentials are set to zero. 
Hence, the Y-system is symmetric, $\\mathcal{Y}_{a,s}=\\mathcal{Y}_{a,-s}$, with boundary conditions \n\\begin{equation}\n\\label{eq: Ybcs}\n\\lim_{a\\rightarrow \\infty} \\frac{\\mathcal{Y}_{a+1,s}}{\\mathcal{Y}_{a,s}} = 1 \\, , \\quad \\lim_{n\\rightarrow \\infty} \\frac{\\mathcal{Y}_{1,n+1}}{\\mathcal{Y}_{1,n}} = 1 \\, , \n\\end{equation}\nfor $s=0,\\pm 1$.\nThe first of the aforementioned exceptions to the equations \\eqref{eq: general Y system} then is \n\\begin{equation}\n\\label{eq: CY_10}\n \\log \\mathcal{Y}_{1,0} = - \\rho\\hatstar s + 2\\log (1+\\mathcal{Y}_{1,1})\\checkstar s-\\log(1+\\mathcal{Y}_{2,0}^{-1})\\star s\\,,\n\\end{equation}\nwhere we have defined $\\hatstar$ and $\\checkstar$ as the convolutions on $(-2g,2g)$ and $\\mathbb{R}\\setminus(-2g,2g)$, respectively.\nSimilarly, the convolution with $\\mathcal{Y}_{1,1}$ and $\\mathcal{Y}_{2,2}$ in \\eqref{eq: general Y system} for $(a,s)=(2,1),(1,2)$ is also understood to be $\\checkstar$.\nThe source term $\\rho(u)$ is defined as \n\\begin{equation}\n \\begin{aligned}\n\\rho &= \\frac{\\epsilon_0}{T} + 2 \\log (1+\\mathcal{Y}_{1,1}) (1+\\mathcal{Y}_{2,2}^{-1}) \\checkstar H_0 \\\\ &\\phaneq + 2 \\sum_{m=1}^\\infty \\log (1+\\mathcal{Y}_{m+1,1})\\star \n\\Big(H_m +H_{-m}\\Big) \\\\&\\phaneq+ \\sum_{m=1}^\\infty \\log (1+\\mathcal{Y}_{m,0} ) \\star \\Sigma^{m} \\,,\n \\end{aligned}\n\\label{eq: rho}\n\\end{equation}\nwhere \n\\begin{equation}\n H_m(v,u) =\\frac{i}{2\\pi}\\partial_v\\log \\frac{x(u-i0)-\\frac{g^2}{x(v+\\frac{i}{2}m)}}{x(u+i0)-\\frac{g^2}{ x(v+\\frac{i}{2}m)}}\\,,\n\\end{equation}\n\\begin{equation}\n\\label{epsilon0}\n\\epsilon_0 (u) = \\begin{cases}\n 0 &\\mbox{for } | u | \\geq 2 g\\,,\\\\\n 2\\sqrt{4g^2-u^2} & \\mbox{for } |u| < 2g\\,,\n \\end{cases}\n\\end{equation}\nand the kernel\n\\begin{equation}\n\\begin{aligned}\n\\Sigma^{m} (v,u) =& \\frac{i}{2\\pi} \\partial_v \\left( \\log \\frac{R^2(x(v+ \\frac{im}{2}),x(u+i0))}{R^2(x(v+ \\frac{im}{2}),x(u-i0))} \\right. \\\\ &+ \\left. 
\\log \\frac{R^2(x(v- \\frac{im}{2}),x(u-i0))}{R^2(x(v- \\frac{im}{2}),x(u+i0))} \\right) \n\\end{aligned}\n\\end{equation}\nis given in terms of the dressing factor \\cite{Beisert:2006ez}\n\\begin{equation}\n\\begin{aligned}\n\\sigma^2 (u,v) =& \\frac{R^2(x^+(u),x^+(v))R^2(x^-(u),x^-(v))}{R^2(x^+(u),x^-(v))R^2(x^-(u),x^+(v))} \\,,\n\\end{aligned}\n\\end{equation}\nwith $x^\\pm(u) = x(u\\pm \\frac{i}{2})$.\nWhen applied to a function of two arguments such as $H_m(v,u)$, $\\star$, $\\hatstar$ and $\\checkstar$ are moreover understood as integrals over the respective intervals.\nThe other exceptions to the equations \\eqref{eq: general Y system}\nare the non-local equations\n\\begin{equation}\n\\label{eq: Y11_times_Y22}\n\\log \\mathcal{Y}_{1, 1}\\mathcal{Y}_{2,2} (u) = \\sum_{m=1}^\\infty \\log (1+\\mathcal{Y}_{m,0} (v) ) \\star \\Theta_{m} (v,u) \n\\end{equation}\nwith\n\\begin{equation}\n\\Theta_{m} (v,u) =\\frac{i}{2\\pi}\\partial_v\\log \\frac{x(u)-\\frac{g^2}{ x(v-\\frac{i m}{2})}}{x(u)-\\frac{g^2}{ x(v+\\frac{i m}{2})}}\\frac{x(u)-x(v+\\frac{i m}{2})}{x(u)-x\\left(v-\\frac{i m}{2}\\right)}\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq: Y11_divide_Y22}\n\\log \\frac{\\mathcal{Y}_{2, 2}}{\\mathcal{Y}_{1,1}} = \\sum_{m=1}^\\infty a_m \\star \\log \\frac{ (1+ \\mathcal{Y}_{m+1, 1} )^2}{( 1+ \\mathcal{Y}_{1,m+1}^{-1} )^2(1+\\mathcal{Y}_{m,0} )} \n\\end{equation}\nwith $a_n(u)=\\frac{n}{2\\pi(u^2+\\frac{n^2}{4})}$.\n\n\nThe free energy per unit scaling dimension is given by \n\\begin{equation}\n\\label{eq: free energy}\nF(T)= - T \\sum_{n=1}^\\infty \\int_{-\\infty}^\\infty \\operatorname{d}\\! u \\, \\theta_n (u) \\log ( 1 + \\mathcal{Y}_{n,0} (u) )\\, , \n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq: theta n}\n\\theta_n(u) = \\frac{i}{2\\pi} \\partial_u \\log \\frac{x(u+\\frac{in}{2})}{x(u-\\frac{in}{2})}\\, . 
\n\\end{equation}\nThus, the TBA equations \\eqref{eq: general Y system}--\\eqref{eq: theta n} determine the Hagedorn temperature at any value of the 't Hooft coupling via \\eqref{eq: hagedorn temperature via F}.\n\n\n\\section{Solving the TBA equations}\n\nLet us now solve the TBA equations in the form of the Y-system.\n\n\\paragraph{Constant solution via T-system}\n\nAt large spectral parameter $u$, the Y-system approaches a constant value. This means we can find a constant Y-system that solves \\eqref{eq: general Y system} for all $(a,s)\\in M\\setminus \\{(1,1),(2,2)\\}$ as well as \\eqref{eq: Y11_divide_Y22}. \nNote that we cannot impose \\eqref{eq: Y11_times_Y22} as it relates the behavior at finite and large $u$. \nThus, we find a one-parameter family of solutions with parameter $z$.\nThis solution is most easily expressed in terms of a T-system consisting of the functions $T_{a,s}$ with $(a,s)\\in \\hat{M}=\\{(a,s)\\in\\mathbb{Z}_{\\geq0}\\times\\mathbb{Z} \\,|\\,\\min(a,|s|)\\leq 2\\}$ and $T_{a,s}=0$ for $(a,s)\\notin \\hat{M}$.\nThe Y-functions are expressed in terms of the T-functions as\n\\begin{equation}\n \\mathcal{Y}_{a,s}=\\frac{T_{a,s+1}T_{a,s-1}}{T_{a+1,s}T_{a-1,s}}\\,.\n\\end{equation}\nIn the constant case, the equations \\eqref{eq: general Y system} imply the following T-system (Hirota) equations for all $(a,s)\\in \\hat{M}$:\n\\begin{equation}\n\\label{eq: T-system equations}\n T_{a,s}^2 = T_{a+1,s} T_{a-1,s} + T_{a,s+1} T_{a,s-1}\\,.\n\\end{equation}\nThe latter are solved by\n\\begin{equation}\n\\label{consTbfgen1}\n\\begin{aligned}\nT_{a,0} &= \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{2a} \\frac{a + 2\\noty}{12 \\noty^4} \\big( a^3 + 6 \\noty a^2 \\\\\n&\\hphantom{= \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{2a} \\frac{a + 2\\noty}{12 \\noty^4} \\big(}+ ( 12\\noty^2-1) a + 6\\noty^3 \\big) \\,,\n\\\\\nT_{a,\\pm 1} &= (-1)^{a} \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{2a} \\frac{a+3\\noty}{6\\noty^4} (a^2+3a\\noty + 3\\noty^2-1) 
\\,,\n\\\\\nT_{a,\\pm 2} &= \\frac{1}{\\noty^4} \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{2a}\\,,\n\\end{aligned}\n\\end{equation}\nfor $a \\geq |s|$, and\n\\begin{equation}\n\\label{consTbfgen2}\n\\begin{aligned}\nT_{0,s}&=1\\,,\\\\\nT_{1,s} &= \\frac{(-1)^s}{\\noty^2} \\left[ |s| + \\frac{1-3\\noty^2}{2\\noty} \\right] \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{|s|} \\,,\\\\\nT_{2,s} &= \\frac{1}{\\noty^4} \\left(\\frac{1-\\noty}{1+\\noty}\\right)^{2|s|}\\,,\n\\end{aligned}\n\\end{equation}\nfor $|s| \\geq a$. \nThis solution is a special case of the most general, $\\mathfrak{psu}(2,2|4)$ character solution of \\eqref{eq: T-system equations} in \\cite{Gromov:2010vb}.\n\n\\paragraph{Solution at zero coupling}\n\nIn the limit of zero coupling, $g^2=0$, the source term $\\rho(u)$ in \\eqref{eq: CY_10} vanishes \\footnote{In particular, $\\text{constant}\\star H_m=\\text{constant}\\star\\Theta_m=0$.}, such that the functions $\\mathcal{Y}_{a,s}$ are constant for all $u$. Hence, the non-local equation \\eqref{eq: Y11_times_Y22} implies $\\mathcal{Y}_{1,1}\\mathcal{Y}_{2,2}=T_{1,0}=1$. We can use this to determine the parameter $z$ in the constant solution for the T-system above \nand thereby find the Y-system at zero coupling. Imposing $T_{1,0}=1$ is equivalent to $z = \\pm 1\/\\sqrt{3}$. The negative solution has to be discarded as it leads to a negative Hagedorn temperature. Thus, we conclude that to zeroth order $z = 1\/\\sqrt{3}$. 
Using \\eqref{eq: hagedorn temperature via F} and \\eqref{eq: free energy}, we find the zeroth-order Hagedorn temperature \n\\begin{equation}\n T_{\\text{H}}^{(0)}=\\frac{1}{2\\log(2+\\sqrt{3})}\\,,\n\\end{equation}\nwhich is in perfect agreement with \\cite{Sundborg:1999ue}.\n\n\\paragraph{Perturbative solution}\n\nWe can also solve the TBA equations in a perturbative expansion at weak coupling, expanding the Y-functions as\n\\begin{equation}\n\\mathcal{Y}_{a,s} (u) = \\mathcal{Y}_{a,s}^{(0)} \\left( 1+ \\sum_{\\ell=1}^\\infty g^{2\\ell} y_{a,s}^{(\\ell)} (u)\\right) \\,.\n\\end{equation}\n\nAt one-loop order, the solution takes the form\n\\begin{equation}\n\\label{eq: one-loop form}\ny_{a,s}^{(1)}(u) = \\tilde{y}_{a,s}^{(1)}+ \\sum_{k=0}^\\infty c^{(1)}_{a,s,k} a_{2k+a+s}(u) \\,,\n\\end{equation}\nwhere $\\tilde{y}_{a,s}^{(1)}$ as well as $c^{(1)}_{a,s,k}$ are constants.\nThis follows from the expansions\n\\begin{equation}\n\\label{eq: expansions of kernels}\n \\begin{aligned}\n \\epsilon_0\\hatstar s(u)&=4\\pi g^2 s(u) + 2\\pi g^4 s''(u)+ \\mathcal{O}(g^6)\\,,\\\\\n s(u)&=\\sum_{m=0}^\\infty (-1)^{m}a_{1+2m}(u)\\,, \\\\\n \\theta_n(u)&=a_n(u)+g^2 a_n''(u)+\\mathcal{O}(g^4)\\,,\\\\\n \\Theta_m(v,u)&=a_m(u-v)-a_m(v)\\\\&\\phaneq+g^2\\left(\\frac{2}{u}a_m'(v)-a_m''(v)\\right)+\\mathcal{O}(g^4)\n \\end{aligned}\n\\end{equation}\nin combination with the convolution identity $a_n\\star a_m=a_{n+m}$ and the structure of the TBA equations. 
Inserting \\eqref{eq: one-loop form} into the expansion of the TBA equations, we can solve for the coefficients $c^{(1)}_{a,s,k}$.\nThe remaining one-loop parameter in the constant solution can be fixed from \n$\\left(\\mathcal{Y}_{1,1}\\mathcal{Y}_{2,2}\\right)^{(1)} (0)=0$, which follows from \\eqref{eq: Y11_times_Y22} and the last expansion in \\eqref{eq: expansions of kernels}.\nWe find for the one-loop Hagedorn temperature\n\\begin{equation}\n T_{\\text{H}}^{(1)}=\\frac{1}{\\log(2+\\sqrt{3})}\\,,\n\\end{equation}\nwhich perfectly agrees with the result of \\cite{Spradlin:2004pp}.\n\nAt two-loop order, $\\rho \\hatstar s$ in \\eqref{eq: CY_10} receives contributions from the one-loop solution\n$y_{a,s}^{(1)}(u)$ from the second and third term in \\eqref{eq: rho}. They can be calculated using \n\\begin{equation}\n (a_n\\star H_{m} \\hatstar s)(u) = g^2 \\frac{4}{(n+|m|)^2} s(u) + \\mathcal{O}(g^4)\\,.\n\\end{equation}\nNote that the dressing kernel in the fourth term of \\eqref{eq: rho} vanishes at this loop order. 
The two-loop solution takes the form\n\\begin{equation}\n\\label{eq: two-loop form}\n\\begin{aligned}\ny_{a,s}^{(2)}(u) &= \\tilde{y}_{a,s}^{(2)} + \\sum_{k=0}^\\infty c^{(2)}_{a,s,k,1} a_{2k+a+s}(u)\\\\ &\\phaneq+ \\sum_{k=0}^\\infty c^{(2)}_{a,s,k,2} a^2_{2k+a+s}(u) \\\\ &\\phaneq+ \\sum_{k=0}^\\infty c^{(2)}_{a,s,k,3} a^3_{2k+a+s}(u) \\,,\n\\end{aligned}\n\\end{equation}\nas follows from simple reasoning paralleling the one at one-loop order.\nSolving for the coefficients $c^{(2)}_{a,s,k,1}$, $c^{(2)}_{a,s,k,2}$ and $c^{(2)}_{a,s,k,3}$ and fixing the two-loop parameter in the constant solution via \\eqref{eq: Y11_times_Y22}, we find the previously unknown two-loop Hagedorn temperature\n\\begin{equation}\n\\begin{aligned}\n T_{\\text{H}}^{(2)}= \n -\\frac{86}{\\sqrt{3}} + \\frac{24\\log(12)}{\\log(2+\\sqrt{3})}\\,.\n\\end{aligned}\n\\end{equation}\n\n\n\\paragraph{Solution at finite coupling}\n\nAt finite coupling, the infinite set of non-linear integral equations \\eqref{eq: general Y system}--\\eqref{eq: theta n} can be solved numerically \nby iterating the equations and truncating to $a,s\\leq n_{\\max}$.\nThe convolutions are calculated for a finite number of sampling points from which the functions are recovered by interpolation and extrapolation at small and large $u$, respectively.\nWe have implemented this procedure in Mathematica following the strategy of \\cite{Bajnok:2013wsa}, where also \n$T_{\\text{H}}$ has to be iterated.\nWe will report on the resulting solution at finite coupling in our future publication \\cite{HW}.\n\n\\section{Outlook}\n\nIn this letter, we have derived integrability-based TBA equations \\eqref{eq: general Y system}--\\eqref{eq: theta n} that determine the Hagedorn temperature of planar $\\mathcal{N}=4$ SYM theory at any value of the 't Hooft coupling.\nAs an application, we have solved these equations perturbatively up to two-loop order.\nOur TBA equation can also be solved numerically at finite coupling, as was briefly 
discussed here but will be presented in detail in a future publication \cite{HW}.\nThus, they open the door to an exact interpolation from weak to strong coupling, which, with the exception of \cite{Harmark:2006ta}, \nwould be a first for thermal physics.\nPotentially, this could allow us to develop a better understanding of the phase structure of gauge theories and their dual gravitational theories in general.\n\nFor the spectral problem, the TBA equations have been recast into the form of the quantum spectral curve \cite{Gromov:2013pga}, which allows one to generate precision data at weak coupling \cite{Marboe:2014gma} as well as at finite coupling \cite{Gromov:2015wca}.\nWe will report on a similar reformulation of our equations in a future publication \cite{HW}.\nMoreover, one can study the case of non-zero chemical potentials. We have generalized our method to this case as well, and we have solved the zeroth-order TBA equations for the case with chemical potentials turned on but still corresponding to a symmetric Y-system. We will report on this in a future publication as well \cite{HW}.\n\nIn this letter, we have used the fact \eqref{eq: hagedorn temperature via F} that the spin-chain free energy determines the Hagedorn temperature $T_{\text{H}}$ at which the partition function diverges. However, the spin-chain free energy should also determine the partition function in the vicinity of $T_{\text{H}}$, which should allow one to extract e.g.\ critical exponents.\n\nThe partition function and Hagedorn temperature have also been studied in integrable deformations of $\mathcal{N}=4$ SYM theory up to one-loop order \cite{Fokken:2014moa}, where it was found that although $\mathcal{Z}(T)$ is different, $T_{\text{H}}$ is unchanged.\nIt would be interesting to see whether this statement continues to hold at higher loop orders. 
\nSimilarly, one might apply our framework to the three-dimensional $\mathcal{N}=6$ superconformal Chern-Simons theory, which is known to be integrable as well.\n\n\n\begin{acknowledgments}\n\paragraph{Acknowledgements.}\nIt is a pleasure to thank Marta Orselli for collaboration in an earlier stage of the project.\nWe thank\nZoltan Bajnok,\nJohannes Br\"{o}del,\nSimon Caron-Huot,\nMarius de Leeuw,\nSergey Frolov,\nNikolay Gromov,\nSebastien Leurent,\nFedor Levkovich-Maslyuk,\nChristian Marboe,\nDavid McGady,\nRyo Suzuki,\nDmytro Volin\nand Konstantin Zarembo\nfor very useful discussions and \nRyo Suzuki for sharing his Mathematica implementation of the TBA equations used in \cite{Bajnok:2013wsa}. \nT.H.\ acknowledges support from FNU grant number DFF-6108-00340 and the Marie-Curie-CIG grant number 618284.\nM.W.\ was supported in part by FNU through grants number DFF-4002-00037 and by the ERC advance grant 291092.\nM.W.\ further acknowledges the kind hospitality of NORDITA during the program ``Holography and Dualities,'' where parts of this work were carried out.\n\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n\nThe non-empty set $G$ together with an $n$-ary operation $f:G^n\to\nG$ is called an {\it $n$-ary groupoid} (or an {\it $n$-ary\noperative} -- in the Gluskin terminology, cf. \cite{glu65}) and\nis denoted by $(G;f)$. We will assume that $n>2$.\n\nAccording to the general convention similar to that introduced in\nthe theory of $n$-ary systems by G. \v{C}upona (cf. \cite{Cup})\nthe sequence of elements $x_i,x_{i+1},\ldots,x_j$ is denoted by\n$x_i^j$. In the case $j<i$ it is the empty symbol.\n\nAn $n$-ary groupoid $(G;f)$ is called an {\it $n$-ary group} if the operation $f$ is {\it associative}, i.e. if the identity\n\begin{equation}\nf(x_1^{i-1},f(x_i^{n+i-1}),x_{n+i}^{2n-1})=f(x_1^{j-1},f(x_j^{n+j-1}),x_{n+j}^{2n-1})\label{assoc}\n\end{equation}\nholds for all $1\leqslant i<j\leqslant n$ and all $x_1,\ldots,x_{2n-1}\in G$, and if for every $i=1,\ldots,n$ and all $x_0,x_1,\ldots,x_n\in G$ the equation\n\begin{equation}\nf(x_1^{i-1},z,x_{i+1}^n)=x_0\label{solv}\n\end{equation}\nhas a unique solution $z\in G$. If $f$ satisfies (\ref{assoc}) only for one fixed pair $i<j$, then it is called {\it $(i,j)$-associative}; an associative $n$-ary groupoid is called an {\it $n$-ary semigroup}. As usual, $f_{(k)}$ denotes the $k$-fold composition of $f$, which acts on $k(n-1)+1$ arguments.\n\nObviously, for any group $(G;\circ )$ the operation $f(x_1^n)=x_1\circ x_2\circ\ldots\circ x_n$ defines an $n$-ary group, but for $n>2$ there are\n$n$-ary groups which are not of this form. $n$-ary groups of the\nfirst form are called {\it reducible} to the group $(G;\circ )$ or\n{\it derived} from the group $(G;\circ )$, the second ones are\ncalled {\it irreducible}. 
Moreover, in some $n$-ary groups there\nexists an element $e$ (called an {\\it $n$-ary identity} or {\\it\nneutral element}) such that\n \\begin{equation} \\label{n-id}\n f(\\stackrel{(i-1)}{e},x,\\stackrel{(n-i)}{e})=x\n \\end{equation}\nholds for all $x\\in G$ and for all $i=1,\\ldots,n$. It is\ninteresting that $n$-ary groups containing a neutral element are\nreducible (cf. \\cite{dor}). Irreducible $n$-ary groups do not\ncontain such elements. On the other hand, there are $n$-ary groups\nwith two, three and more neutral elements. The set\n$\\mathbb{Z}_{n-1}=\\{0,1,\\ldots,n-2\\}$ with the operation\n$f(x_1^{n})=(x_1+x_2+\\ldots +x_{n})({\\rm mod}\\,(n-1))$ is a simple\nexample of an $n$-ary group in which every element is neutral. All\n$n$-ary groups with this property are derived from the commutative\ngroup of the exponent $k|(n-1)$.\n\nIt is worthwhile to note that in the definition of an $n$-ary\ngroup, under the assumption of the associativity of $f$, it\nsuffices only to postulate the existence of a solution of\n(\\ref{solv}) at the places $i=1$ and $i=n$ or at one place $i$\nother than $1$ and $n$. Then one can prove the uniqueness of the\nsolution of (\\ref{solv}) for all $i=1,\\ldots,n$ (cf. \\cite{post},\np. $213^{17}$).\n\nThe above definition of $n$-ary groups is a generalization of the\nWeber's and Huntington formulation of axioms of a group as a\nsemigroup in which the equations $xa=b$, $ya=b$ have solutions.\nMany authors used the notion of $n$-ary groups as a generalization\nof Pierpont's definition of groups as a semigroup with neutral and\ninverse elements. Unfortunately, in this case we obtain only\n$n$-ary groups derived from groups.\n\nE.I. Sokolov proved in \\cite{sok} that in the case of $n$-ary\nquasigroups (i.e. 
in the case of the existence of a unique\nsolution of (\\ref{solv}) at any place $i=1,\\ldots,n$) it is\nsufficient to postulate the $(j,j+1)$-associativity for some fixed\n$j=1,\\ldots,n-1$.\n\nUsing the same method as Sokolov we can prove the following\nproposition (for details see \\cite{DGG}):\n\n\\begin{proposition}\\label{DGG1}\nAn $n$-ary groupoid $(G;f)$ is an $n$-ary group if and only if\n$($at least$)$ one of the following conditions is satisfied:\n\\begin{enumerate}\n\\item [$(a)$] the $(1,2)$-associative law holds and the\nequation $(\\ref{solv})$ is solvable for $\\,i=n\\,$ and uniquely\nsolvable for $\\,i=1$,\n\\item [$(b)$] the $(n-1,n)$-associative law holds and the\nequation $(\\ref{solv})$ is solvable for $\\,i=1\\,$ and uniquely\nsolvable for $\\,i=n$,\n\\item [$(c)$] the $(i,i+1)$-associative law holds for some\n$\\,i\\in \\{2,...,n-2\\}\\,$ and the equation $(\\ref{solv})$ is\nuniquely solvable for $\\,i\\,$ and some $j>i$.\n\\end{enumerate}\n\\end{proposition}\n\n\\medskip\n\nIn \\cite{DD80} (see also \\cite{cel77}) the following\ncharacterization of $n$-ary groups is given:\n\n\\begin{proposition}\nAn $n$-ary semigroup $(G;f)$ is an $n$-ary group if and only if\nfor some $k\\in\\{1,2,\\ldots,n-2\\}$ and all $a_1^k\\in G$ there are\nelements $x_{k+1}^{n-1},\\,y_{k+1}^{n-1}\\in G\\,$ such that\n\\begin{equation}\nf(a_1^k,x_{k+1}^{n-1},b)=f(b,y_{k+1}^{n-1},a_1^k)=b\\label{DG-80}\n\\end{equation}\nfor all $\\,b\\in G$.\n\\end{proposition}\n\n\n\\medskip\n\n\\begin{proposition}\nAn $n$-ary semigroup $(G;f)$ is an $n$-ary group if and only if\nfor some $i,j\\in\\{1,2,\\ldots,n-1\\}$ and all $a,b\\in G$ there are\n$x,y\\in G\\,$ such that\n\\begin{equation}\nf(x,\\stackrel{(i-1)}{b},\\stackrel{(n-i)}{a})=f(\\stackrel{(n-j)}{a},\\stackrel{(j-1)}{b},y)=b.\n\\label{gal-r1}\n\\end{equation}\n\\end{proposition}\n\n\n\\medskip\nPutting in the above proposition $i=j=1$ we obtain the following\nmain result of \\cite{Tyu85}.\n\n\\begin{corollary}\n{\\em An 
$n$-ary semigroup $(G;f)$ is an $n$-ary group if and only\nif for all $a,b\\in G$ there are $x,y\\in G\\,$ such that}\n\\[\nf(x,\\stackrel{(n-1)}{a})=f(\\stackrel{(n-1)}{a},y)=b.\n\\]\n\\end{corollary}\n\n\\bigskip\n\nFrom the definition of an $n$-ary group $(G;f)$ we can directly\nsee that for every $x\\in G$ there exists only one $z\\in G$\nsatisfying the equation\n \\begin{equation} \\label{skew}\nf(\\stackrel{(n-1)}{x},z)=x .\n \\end{equation}\nThis element is called {\\it skew} to $x$ and is denoted by\n$\\overline{x}$. In a ternary group ($n=3$) derived from the binary\ngroup $(G;\\circ)$ the skew element coincides with the inverse\nelement in $(G;\\circ)$. Thus, in some sense, the skew element is a\ngeneralization of the inverse element in binary groups. This\nsuggests that for $n\\geqslant 3$ any $n$-ary group $(G;f)$ can be\nconsidered as an algebra $(G;f,\\bar{\\,}\\;)$ with two operations:\none $n$-ary $\\,f:G^n\\to G$ and one unary $\\;\\bar{\\,} :\nx\\to\\overline{x}$. D\\\"ornte proved (see \\cite{dor}) that in\nternary groups for all $x\\in G$ we have\n$\\overline{\\overline{x}}=x$, but for $n>3$ this is not true. For\n$n>3$ there are $n$-ary groups in which one fixed element is skew\nto all elements (cf. \\cite{D90}) and $n$-ary groups in which any\nelement is skew to itself. Then, in the second case, of course the\n$n$-ary group operation $f$ is idempotent. An $n$-ary group in\nwhich $f(\\stackrel{(n)}{x})=x$ for every $x\\in G$ is called {\\it\nidempotent}.\n\nNevertheless, the concept of skew elements plays a crucial role in\nthe theory of $n$-ary groups. 
Namely, as D\\\"ornte proved, the\nfollowing theorem is true.\n\n\\begin{theorem}\\label{dor-th}\n In any $n$-ary group $(G;f)$ the following identities:\n \\begin{eqnarray}\n f(\\stackrel{(i-2)}{x},\\overline{x},\\stackrel{(n-i)}{x},y)=y, \\label{dor-r}\\\\\n f(y,\\stackrel{(n-j)}{x},\\overline{x},\\stackrel{(j-2)}{x})=y, \\label{dor-l}\\\\\n f(\\stackrel{(k-1)}{x},\\overline{x},\\stackrel{(n-k)}{x})=x\n \\label{skew2}\n \\end{eqnarray}\nhold for all $\\,x,y\\in G$, $\\,2\\leqslant i,j\\leqslant n\\,$ and\n$\\,1\\leqslant k\\leqslant n$.\n\\end{theorem}\n\n\n\\medskip\n\nThe first two identities, called now {\\it D\\\"ornte's identities},\nare used by many authors to describe the class of $n$-ary groups.\nFor example, in 1967 B. Gleichgewicht and K. G{\\l}azek proved in\n\\cite{GG67} (see also \\cite{Sio67}) that for fixed $n\\geqslant 3$\nthe class of all $n$-ary groups, considered as algebras of type\n$(n,1)$, forms a Mal'cev variety and found the system of\nidentities defining this variety. This means that all congruences\nof a given $n$-ary group commute and that the lattice of all\ncongruences of a fixed $n$-ary group is modular. But, as was\nobserved many years later, from the theorem on page 448 in\nGluskin's paper \\cite{glu65} it follows that the system of\nidentities given by B. Gleichgewicht and K. G\\l azek is not\nindependent. For similar axiom considerations, see also\n\\cite{cel77}, \\cite{rus79} and \\cite{rus81} (for other systems of\naxioms, see, e.g., \\cite{monk2}). The first independent system of\nidentities defining this variety was given in our paper\n\\cite{DGG}. 
Now we give the minimal system of such identities.\nThis is the main result of \\cite{Rem}.\n\n\\begin{theorem}\\label{DGG}\nThe class of $n$-ary groups coincides with the variety of $n$-ary\ngroupoids $(G;f,\\bar{}\\;)$ with a unary operation\n$\\,\\bar{}:x\\to\\overline{x}$ satisfying for some fixed\n$i,j\\in\\{2,\\ldots,n\\}$ the D\\\"ornte identities $(\\ref{dor-r})$,\n$(\\ref{dor-l})$ and the identity\n\\[\n f(f(x_1^{n}),x_{n+1}^{2n-1})=f(x_1,f(x_2^{n+1}),x_{n+2}^{2n-1}).\n \\]\n\\end{theorem}\n\n\\medskip\n\nTheorem \\ref{DGG} gives the minimal system of identities defining\n$n$-ary groups. In fact, for $n>3$ the set $Z$ of all integers\nwith the operation $f(x_1^n)=x_{n-1}+x_n$ is an example of a\n$(1,2)$-associative $n$-ary groupoid in which (\\ref{dor-r}) holds\nfor $\\overline{x}=0$ but (\\ref{dor-l}) is not satisfied.\nSimilarly, $(Z;f)$ with $f(x_1^n)=x_1$ is an example of a\n$(1,2)$-associative $n$-ary groupoid satisfying (\\ref{dor-l}) but\nnot (\\ref{dor-r}). It is clear that the $(1,2)$-associativity\ncannot be deleted.\n\nNote by the way that in some papers there are investigated\nso-called {\\it infinitary} semigroups and quasigroups, i.e.\ngroupoids $(G;f)$, where the number of variables in the operation\n$f:G^{\\infty}\\to G$ is infinite, but countable. Infinitary\nsemigroups are the infinitary groupoids $(G;f)$, where for all\nnatural $\\,i, j\\,$ the operation $f$ satisfies the identity\n \\[\nf(x_1^{i-1},f(x_i^{\\infty}),y_1^{\\infty})=\nf(x_1^{j-1},f(x_j^{\\infty}),y_1^{\\infty}).\n \\]\nInfinitary quasigroups are infinitary groupoids $(G;f)$ in which\nthe equation $\\,f(x_1^{k-1},z_k,x_{k+1}^{\\infty})=x_0\\,$ has a\nunique solution $z_k$ at any place $k$.\n\nFrom the general results obtained in \\cite{belz} and \\cite{MTC}\none can deduce that infinitary groups have only one element. 
Below\nwe present a simple proof of this fact.\n\nIf $(G;f)$ is an infinitary group, then, according to the\ndefinition, for any $y,z\\in G$ and $u=f(\\stackrel{(\\infty)}{y})$\nthere exists $x\\in G$ such that\n$z=f(u,y,x,\\stackrel{(\\infty)}{y})$. Thus\n\\[\\arraycolsep.5mm\n \\begin{array}{rl}\nf(z,\\stackrel{(\\infty)}{y})=&f(f(u,y,x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})=\n f(u,y,f(x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})\\\\[6pt]\n=&f(f(\\stackrel{(\\infty)}{y}),y,f(x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})\n=f(y,f(\\stackrel{(\\infty)}{y}),y,f(x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})\\\\[6pt]\n=&f(y,u,y,f(x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})\n=f(y,f(u,y,x,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{y})=f(y,z,\\stackrel{(\\infty)}{y}),\n \\end{array}\n \\]\ni.e. for all $y,z\\in G\\,$ we have\n\\[\nf(z,\\stackrel{(\\infty)}{y})=f(y,z,\\stackrel{(\\infty)}{y}).\n\\]\nUsing this identity and the fact that for all $\\,x,y\\in G$ there\nexists $z\\in G$ such that $\\,x=f(z,\\stackrel{(\\infty)}{y}),\\,$ we\nobtain\n\\[\\arraycolsep.5mm\n \\begin{array}{rl}\nf(\\stackrel{(\\infty)}{x})=&f(x,f(z,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{x})=\n f(x,f(y,z,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{x})\\\\[6pt]\n=&f(x,y,f(z,\\stackrel{(\\infty)}{y}),\\stackrel{(\\infty)}{x})\n=f(x,y,\\stackrel{(\\infty)}{x}),\n \\end{array}\n \\]\nwhich together with the existence of only one solution at the\nsecond place implies $x=y$. Hence $G$ has only one element.\n\n\\medskip\n\nAccording to Theorem~\\ref{DGG}, the class of all $n$-groups (for\n$n>2$) can be considered as a variety of algebras\n$(G;f,\\bar{\\,}\\;)$ with one $n$-ary operation $f$ and one unary\n$\\,\\bar{}:x\\to\\overline{x}$. 
The class of $n$-ary groups can be\nalso considered as a variety of algebras of different types (cf.\n\\cite{filomat} and \\cite{jus'03}).\n\nTheorem~\\ref{DGG} is valid for $n>2$, but, as it was observed in\n\\cite{Rem}, this theorem can be extended to the case $n=2$.\nNamely, let \\ $\\hat{\\,}:x\\to\\hat{x}$ be a unary operation, where\n$\\hat{x}$ is defined as a solution of the equation\n$f_{(2)}(\\stackrel{(2n-2)}{x},\\hat{x})=x$. Then using the same\nmethod as in the proof of Theorem 2 in \\cite{DGG} we can prove:\n\n\\begin{theorem}\nLet $(G;f)$ be an $n$-ary $(n\\geqslant 2)$ semigroup with a unary\noperation $\\hat{}:x\\to\\hat{x}$. Then $(G;f,\\,\\hat{\\,}\\,)$ is an\n$n$-ary group if and only if for some $i,j\\in\\{2,\\ldots,2n-1\\}$\nthe following identities\n\\[\nf_{(2)}(y,\\stackrel{(i-2)}{x},\\hat{x},\\stackrel{(2n-1-i)}{x})=y=\nf_{(2)}(\\stackrel{(2n-1-j)}{x},\\hat{x},\\stackrel{(j-2)}{x},y)\n\\]\nhold.\n\\end{theorem}\n\nFrom this theorem we can deduce other definitions of $n$-ary\n$(n\\geqslant 2)$ groups presented in \\cite{Gal'95}, \\cite{Gal'03},\n\\cite{rus79} and \\cite{Sio67}.\n\n\\medskip\n\nAn $n$-ary group is said to be {\\em semiabelian} if the following\nidentity\n\\begin{equation}\nf(x_1^n)=f(x_n,x_2^{n-1},x_1)\n\\end{equation}\nis satisfied. In this case the operation $\\,\\bar{\\,}:x\\to\n\\overline{x}\\,$ is a homomorphism (cf. \\cite{GG'77}). Note by the\nway that the class of all semiabelian $n$-ary groups coincides\nwith the class of {\\it medial} $n$-ary groups (cf. \\cite{medial},\n[29]). (Some authors used also the name {\\it abelian} instead of\n{\\it semiabelian} (see, e.g., \\cite{Sio67}, [29]).) Such $n$-ary\ngroups are a special case of {\\it $\\sigma$-permutable} $n$-ary\ngroups (cf. \\cite{St-D'86}), i.e. $n$-ary groups in which\n$f(x_1^n)=f(x_{\\sigma(1)},x_{\\sigma(2)},\\ldots,x_{\\sigma(n)})$ for\nfixed $\\sigma\\in S_n$. 
An $n$-ary group which is\n$\sigma$-permutable for every $\sigma\in S_n$ is usually called\n{\it commutative}.\n\nAn {\it $n$-ary power} of $x$ in an $n$-ary group $(G;f)$ is\ndefined in the following way: $x^{<0>}=x$, \ \n$x^{<1>}=f(\stackrel{(n)}{x})\;$ and\n$x^{<k>}=f(\stackrel{(n-1)}{x},x^{<k-1>})\;$ for all $k>0$. In\nthis convention $x^{<-k>}$ means an element $z$ such that\n$f(x^{<k-1>},\stackrel{(n-2)}{x},z)=x^{<0>}=x$. Then\n$\overline{x}=x^{<-1>}$ and\n\[\n\begin{array}{l}\nf(x^{<k_1>},x^{<k_2>},\ldots,x^{<k_n>})=x^{<k_1+k_2+\ldots+k_n+1>}\\[4pt]\n(x^{<k>})^{<t>}=x^{<kt(n-1)+k+t>}\n\end{array}\n\]\n(cf. \cite{post}, \cite{Gl'82} or \cite{auto}).\n\nNow, putting $x^{\!\!\!\!-(0)}=x$ and denoting by\n$x^{\!\!\!\!-(m+1)}$ the skew element to $x^{\!\!\!\!-(m)}$, from\nthe above two identities and results obtained by W. A. Dudek in\n\cite{auto} and \cite{medial} we deduce the following proposition.\n\begin{proposition}\nIn any $n$-ary group \ $x^{\!\!\!\!-(m)}=x^{<S_m>}$, where\n$S_m=\frac{(2-n)^m-1}{n-1}$.\n\end{proposition}\n\nThis means that for every $n>2$ we have\n$\overline{\overline{x}}=x^{<n-3>}$. In particular,\n$\overline{\overline{x}}=x^{<1>}$ in all $4$-ary groups, and\n$\overline{\overline{x}}=x^{<2>}$ in all $5$-ary groups (cf. \cite{Gl'82}).\n\n\section{Hossz\'u-Gluskin algebras}\n\nLet $(G;f)$ be an $n$-ary group. Fixing in $f(x_1^n)$ some $m<n$ elements, we obtain a new $(n-m)$-ary operation. In particular, for fixed $a_1^{n-k}\in G$ the operation\n\[\ng(x_1^k)=f(x_1,a_1^{n-k},x_2^k)\n\]\ndefines a $k$-ary group $(G;g)$ called a {\it retract} of $(G;f)$ and denoted by $ret_{a_1^{n-k}}(G;f)$. In the case $a_1=\ldots=a_{n-2}=a$ the binary retract with the operation $x\cdot y=f(x,\stackrel{(n-2)}{a},y)$ is denoted shortly by $ret_a(G;f)$; all binary retracts of a given $n$-ary group are isomorphic. The fundamental relationship between $n$-ary groups and binary groups was described by M. Hossz\'u in 1963.\n\n\begin{theorem}\label{thGH}\nAn $n$-ary groupoid $(G;f)$, $n>2$, is an $n$-ary group if and\nonly if\n\begin{enumerate}\n\item[$(i)$] on $G$ one can define a binary operation $\cdot$ such\nthat $(G;\cdot)$ is a group,\n\item[$(ii)$] there exist an automorphism $\varphi$ of $(G;\cdot)$ and\n$b\in G$ such that $\varphi(b)=b$,\n\item[$(iii)$] $\varphi^{n-1}(x)=b\cdot x\cdot b^{-1}$ holds for every\n$x\in G$,\n\item[$(iv)$] $f(x_1^n)=x_1\cdot\varphi(x_2)\cdot\varphi^2(x_3)\cdot\varphi^3(x_4)\cdot\ldots\n\cdot\varphi^{n-1}(x_n)\cdot b$ for all $x_1,\ldots,x_n\in G$.\n\end{enumerate}\n\end{theorem}\n\nTwo years later, this theorem was proved by L. M. 
Gluskin (see\n\\cite{glu65}) in a more general form (for so-called positional\noperatives). For a generalization to $n$-ary semigroups, see also\n\\cite{Monk} and \\cite{Zup}. In another version this theorem was\nalso formulated by E. L. Post (cf. \\cite{post}, p. 246). An\nelegant short proof was given by E. I. Sokolov in \\cite{sok}. His\nproof is based on the observation that $(G;\\cdot)=ret_a(G;f)$.\nThen we have:\n \\begin{equation}\n\\varphi(x)=f(\\overline{a},x,\\stackrel{(n-2)}{a})\\label{autom}\n \\end{equation}\nand\n \\begin{equation}\n b=f(\\stackrel{(n)}{\\overline{a}}).\\label{b}\n \\end{equation}\nFrom (14) and (7) or (8), we can deduce that commutative $n$-ary group\noperations have the form $f(x_1^n)=x_1\\cdot x_2\\cdot\\ldots\\cdot x_n\\cdot b$,\nwhere $(G;\\cdot)$ is a commutative group.\n\nNote that the last condition of Theorem~\\ref{thGH} can be\nrewritten in the form\n\\begin{equation}\nf(x_1^n)=x_1\\cdot\\varphi(x_2)\\cdot\\varphi^2(x_3)\\cdot\\varphi^3(x_4)\\cdot\\ldots\n\\cdot\\varphi^{n-2}(x_{n-1})\\cdot b\\cdot x_n. 
\\label{e10}\n\\end{equation}\n\nThe above theorem has the following generalization (cf.\n\\cite{DM1}):\n\\begin{theorem}\\label{genGH}\nAn $n$-ary groupoid $(G;f)$, $n>2$, is an $n$-ary group if and\nonly if\n\\begin{enumerate}\n\\item[$(i)$] on $G$ one can define a $k$-ary operation $g$ such\nthat $(G;g)$ is a $k$-ary group and $k-1$ divides $n-1$,\n\\item[$(ii)$] there exist an automorphism $\\varphi$ of $(G;g)$\nand elements $b_2,\\ldots,b_k\\in G$ such that $\\varphi(b_i)=b_i$\nfor $i=2,\\ldots,k$,\n\\item[$(iii)$] $g(\\varphi^{n-1}(x),b_2^k)=g(b_2^k,x)$ holds for every\n$x\\in G$,\n\\item[$(iv)$]\n$f(x_1^n)=g_{(\\cdot)}(x_1,\\varphi(x_2),\\varphi^2(x_3),\\ldots,\n\\varphi^{n-1}(x_n),b_2^k)$ for all $x_1,\\ldots,x_n\\in G$.\n\\end{enumerate}\n\\end{theorem}\n\nIn this theorem $(G;g)=ret_{a_1^r}(G;f)$, where $a_1=\\ldots=a_r=a$.\nIn this case, we get:\n\\begin{equation}\n\\varphi(x)=f(\\overline{a},\\stackrel{(n-r-2)}{a},x,\\stackrel{(r)}{a}),\n\\label{k-fi}\n\\end{equation}\n\\begin{equation}\nb_2=\nf_{(\\cdot)}(\\stackrel{(n-r-2)n}{a},\\stackrel{(n)}{\\overline{a}},\\stackrel{(k-2)(n-r-2)}{a}),\n \\ \\ b_3=\\ldots=b_k=\\overline{a}.\\label{b_2}\n\\end{equation}\n\nOther important generalizations can be found in \\cite{Hosszu2}\n(for heaps), \\cite{Mar-Jan} (for vector valued groups),\n\\cite{Sokh} (for partially associative $n$-ary quasigroups).\n\n\\medskip\n\nFollowing E. L. Post (see \\cite{post}, cf. [4], p. 
36--40, and\n[28]) a binary group \\ $\\mathfrak{G}^{\\ast }=\\left(G^{\\ast };\\circ\n\\right)$ is said to be a {\\it covering group} for the $n$-ary\ngroup $(G;f)$ if there exists an embedding $\\tau :G\\rightarrow\nG^{\\ast }$ such that $\\tau(G)$ is a generating set of $G^{\\ast }$\nand \\ $\\tau (f(x_1^n)) = \\tau\n(x_1)\\circ\\tau(x_2)\\circ\\ldots\\circ\\tau(x_n)$ for every\n$x_1,\\ldots,x_n\\in G.$ \\ $\\mathfrak{G}^{\\ast }$ is a\n\\textit{universal covering group} (or a \\textit{free covering\ngroup}) if for any covering group $\\mathfrak{G}_{1}^{\\ast }$\\\nthere exists a homomorphism from $G^*$ onto $G^*_1$ such that the\nfollowing diagram is commutative (or compatible -- in another\nterminology):\n\n\\begin{center}\n\\begin{minipage}{6cm}\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $G$\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ { \\ }$\\swarrow ${ \\ }$\n\\circlearrowleft ${ \\ }$\\searrow $\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $G^{\\ast}--\\rightarrow \\ G_{1}^{\\ast}$\n\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ {\\footnotesize onto}\n\\end{minipage}\n\\end{center}\n\nPost proved in \\cite{post} that for every $n$-ary group $(G;f)$\nthere exist a covering group $(G^{\\ast};\\circ)$ and its normal\nsubgroup $G_0$ such that $G^{\\ast}\\diagup G_0$ is a cyclic group\nof order $n-1$ and $f(x_1^n)=x_1\\circ x_2\\circ\\ldots\\circ x_n$ for\nall $x_1,\\ldots,x_n\\in G$. 
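Post's coset construction can be sanity-checked on a small sketch; the covering group $\mathbb{Z}_9$, the normal subgroup $G_0=\{0,3,6\}$ and the coset $G=\{1,4,7\}$ below are our own toy choices, assumed only for illustration.

```python
from itertools import product

# Toy instance of Post's construction (our own example, not from the text):
# covering group G* = Z_9, normal subgroup G0 = {0, 3, 6}, coset G = {1, 4, 7},
# n = 4, so G*/G0 is cyclic of order n - 1 = 3.
G = [1, 4, 7]

def f4(x, y, z, v):          # 4-ary operation inherited from the covering group
    return (x + y + z + v) % 9

# The coset G is closed under f4, so (G; f4) is a 4-ary group.
closed = all(f4(*t) in G for t in product(G, repeat=4))

# Transporting along x -> (x - 1) // 3 turns f4 into
# x + y + z + v + 1 (mod 3) on Z_3.
transported_ok = all(
    (f4(*t) - 1) // 3 == (sum((x - 1) // 3 for x in t) + 1) % 3
    for t in product(G, repeat=4)
)

print(closed, transported_ok)  # -> True True
```

The transported operation already has the $\langle\varphi,b\rangle$-derived form $x+y+z+v+b$ with $b=1$ on $\mathbb{Z}_3$.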
So, the theory of $n$-ary groups is\nclosely related to the theory of {\it cyclic extensions of\ngroups}, but these theories are not equivalent.\n\nIndeed, the above theorems show that for any $n$-ary group $(G;f)$\nwe have the sequence\n$$\nO\rightarrow (G_0;\circ)\rightarrow (G^{\ast};\circ )\n\stackrel{\zeta }{\longrightarrow }\mathcal{C}\left( n\right)\n\rightarrow O,\n $$\nwhere $(G^{\ast };\circ)$ is the free covering group of $(G;f)$\nwith $G=\zeta^{-1}\left( 1\right)$, and $1$ is a generator of the\ncyclic (additively written) group $\mathcal{C}\left( n\right)\n=(C_{n};+_{n}).$\n\nWe have\n\[\n\begin{array}{ccccc}\n& & ( G_{1}^{\ast };\circ) & & \\\n& \nearrow & \uparrow & \searrow & \\\n(G_0;\circ ) & & \circlearrowright \;\;\; {\vert}\n\circlearrowleft\circlearrowright & & C\left( n\right) \\\n& \searrow &\downarrow & \nearrow & \\\n& & (G_2^{\ast};\circ)& &\n\end{array}\n\]\nwhere we use\n\n$\circlearrowright $ \ for the equivalence of extensions,\n\n$\circlearrowleft $ \ for the isomorphism of suitable $n$-ary\ngroups.\n\n\medskip\n\nOf course, two $n$-ary groups determined in the above-mentioned\nsense by two equivalent cyclic extensions are isomorphic. 
However,\ntwo non-equivalent cyclic extensions can determine two isomorphic\n$n$-ary groups.\n\n\begin{example} Consider two cyclic extensions of the cyclic group $%\n\mathcal{C}(3)$ by $\mathcal{C}(3)$:\n$$\n0\rightarrow \mathcal{C}(3)\stackrel{\alpha }{\rightarrow\n}\mathcal{C}(9)\stackrel{\beta _{1}}{\rightarrow\n}\mathcal{C}(3)\rightarrow 0\n$$\nand\n$$\n0\rightarrow \mathcal{C}(3)\stackrel{\alpha }{\rightarrow\n}\mathcal{C}(9)\stackrel{\beta _{2}}{\rightarrow\n}\mathcal{C}(3)\rightarrow 0,\n$$\nwhere the homomorphisms $\alpha $, $\beta _{1}$ and $\beta _{2}$\nare given by:\n\[\n\begin{array}{lcl}\n\;\alpha (x)=3x&{\rm for}& x\in C_{3},\\[4pt]\n\beta_{1}(x)\equiv x({\rm mod}\,3)&{\rm for}&x\in C_{9},\\[4pt]\n\beta_{2}(x)\equiv 2x({\rm mod}\,3)&{\rm for}&x\in C_{9}.\n\end{array}\n\]\n\nIt is easy to verify that if $g(x,y,z,v)=(x+y+z+v)({\rm mod}\,9),$\nthen the $4$-ary groups $(\beta _{1}^{-1}(1);g)$ and $(\beta\n_{2}^{-1}(1);g)$, corresponding to those extensions, are isomorphic\nto the $4$-ary groups $(C_{3};f_{1})$, $(C_{3};f_{2})$,\nrespectively, where\n $$\nf_{1}(x,y,z,v)\equiv (x+y+z+v+1)({\rm mod}\,3)\n $$\nand\n$$\nf_{2}(x,y,z,v)\equiv (x+y+z+v+2)({\rm mod}\,3).\n$$\nThese $4$-ary groups are isomorphic. The isomorphism\n$\varphi:(C_{3};f_{1})\to(C_{3};f_{2})$ has the form\n$\varphi(x)\equiv 2x({\rm mod}\,3)$. Nevertheless, the\nabove-mentioned extensions are not equivalent (because there is no\nautomorphism $\lambda $ of $\mathcal{C}(9)$ such that $\lambda\n\circ \alpha =\alpha $ and \ $\beta _{2}\circ \lambda =\beta\n_{1}$).\n\end{example}\n\nThe algebra $(G;\cdot,\varphi,b)$ of the type $(2,1,0)$, where\n$(G;\cdot)$ is a (binary) group, $b\in G$ is fixed, $\varphi\in\nAut(G;\cdot)$, $\varphi(b)=b$ and $\varphi^{n-1}(x)=b\cdot x\cdot\nb^{-1}$ for every $x\in G$ is called a {\it Hossz\'u-Gluskin\nalgebra} (briefly: an {\it $HG$-algebra}). 
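For a concrete illustration of an $HG$-algebra and of the operation from condition $(iv)$ of the Hossz\'u-Gluskin theorem, the conditions can be verified mechanically on a small nonabelian group; the choices $G=S_3$, $n=3$ and $\varphi$ an inner automorphism below are ours, not the paper's.

```python
from itertools import permutations, product

# Toy HG-algebra on the nonabelian group S_3 with n = 3 (our own choice).
S3 = list(permutations(range(3)))

def mul(p, q):               # composition of permutations: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

g = (1, 2, 0)                          # a 3-cycle
b = mul(g, g)                          # b = g^2, a fixed point of phi
phi = lambda x: mul(mul(g, x), inv(g))  # inner automorphism by g

# HG-algebra conditions for n = 3: phi(b) = b and phi^2(x) = b x b^{-1}.
assert phi(b) == b
assert all(phi(phi(x)) == mul(mul(b, x), inv(b)) for x in S3)

def f(x, y, z):              # ternary operation x . phi(y) . phi^2(z) . b
    return mul(mul(mul(x, phi(y)), phi(phi(z))), b)

# Brute-force check of ternary associativity, as the theorem guarantees.
assoc = all(
    f(f(x, y, z), u, v) == f(x, f(y, z, u), v) == f(x, y, f(z, u, v))
    for x, y, z, u, v in product(S3, repeat=5)
)
print(assoc)  # -> True
```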
We say that an\n$HG$-algebra $(G;\cdot,\varphi,b)$ is {\it associated} with an\n$n$-ary group $(G;f)$ if the identity (\ref{e10}) is satisfied. In\nthis case we say also that an $n$-ary group $(G;f)$ is {\it\n$\langle\varphi,b\rangle$-derived} from the group $(G;\cdot)$. A\n$k$-ary $HG$-algebra $(G;g,\varphi,b_2^k)$ can be defined\nsimilarly. Binary $HG$-algebras are studied in \cite{jus'95a},\n\cite{jus'95b} and \cite{jus'03}.\n\nTheorems~\ref{thGH} and \ref{genGH} state that every $k$-ary\n$HG$-algebra is associated with some $n$-ary group. Any $n$-ary\ngroup is $\langle\varphi,b\rangle$-derived from some binary group\nand $\langle\varphi,b_2^k\rangle$-derived from some $k$-ary group.\n\n\section{Calculation of $n$-ary groups on small sets}\n\nResults presented in the previous section make it possible to\nevaluate the number of non-isomorphic $n$-ary groups. To calculate\nthese groups we must use the following result proved in\n\cite{DM2}.\n\n\begin{theorem}\label{izoth}\nTwo $n$-ary groups $(G_1;f_1)$, $(G_2;f_2)$ are isomorphic if and\nonly if for every $c\in G_1$ there exists an isomorphism\n$h:ret_c(G_1;f_1)\to ret_d(G_2;f_2)$ such that $d=h(c)$, \\\n$h(f_1({\overline{c}},\ldots,{\overline{c}}))=\nf_2(\overline{d},\ldots,\overline{d}) $ and\n$h(f_1(\overline{c},x,\!\stackrel{(n-2)}{c}))=\nf_2(\overline{d},h(x),\!\stackrel{(n-2)}{d}).$\n\end{theorem}\n\n\begin{corollary}\label{izoth2}\n{\it Two commutative $n$-ary groups $(G_1;f_1)$, $(G_2;f_2)$ are\nisomorphic if and only if for every $c\in G_1$ there exists an\nisomorphism $h:ret_c(G_1;f_1)\to ret_d(G_2;f_2)$ such that\n$d=h(c)$ and $h(f_1({\overline{c}},\ldots,{\overline{c}}))=\nf_2(\overline{d},\ldots,\overline{d})$.}\n\end{corollary}\n\nIf $(G,\cdot)$ is an abelian group, then, of course, we can\nconsider the automorphism of the form $\alpha(x)=x^{-1}$. 
Then $G$\nwith the operation\n\begin{equation}\nf(x_1^n)=x_1\cdot x_2^{-1}\cdot x_3\cdot\ldots\cdot\nx^{-1}_{n-1}\cdot x_n\label{-1}\n\end{equation}\nis an $n$-ary group if $n$ is odd. Such $n$-ary groups are\ncharacterized by the following theorem proved in \cite{Gl-Mi'84}.\n\n\begin{theorem} Let $n$ be odd and let $(G;f)$ be an\n$n$-ary group. Then the operation $f$ has the form $(\ref{-1})$,\nwhere $(G;\cdot)$ is an abelian group, if and only if\n\begin{enumerate}\n\item[$(i)$] \ $f\left( x,\ldots,x\right) =x$,\n\item[$(ii)$] \ $f\left(x_1^{i},y,y,x_{i+3}^{n}\right)=\nf\left(x_1^{i},z,z,x_{i+3}^{n}\right)$ for all $\;0\leqslant\ni\leqslant n-2.$\n\end{enumerate}\nIn this case $(G;\cdot)=ret_a(G;f)$ for some $a\in G$.\n\end{theorem}\n\nConsider an abelian group $(G;+)$. Then, as a special case of an $n$-ary\ngroup operation of form (19), one can obtain the ternary term operation\n$$\nf(x,y,z)=x-y+z\n$$\nwhich is a so-called Mal'tsev term in the group $G$. Of course, it is\nidempotent and medial ({\it entropic} -- in another terminology). Such\nternary operations appear in several branches of mathematics. For example,\nthey play a very important role in affine geometry and the theory of modes\n(because of idempotency and mediality), in the theory of congruences\nin general algebras (because the existence of a Mal'tsev term implies\npermutability of congruences and hence modularity of congruence\nlattices), and also in the theory of clones, which is important in\nUniversal Algebra as well as in Multiple-valued Logics.\n\n\medskip\nFrom results obtained in \cite{Gl-Mi'84} (cf. also \cite{Tim'72})\nwe can deduce:\n\n\begin{proposition} Let $(G;\cdot)$ be a group and let $t_1,\ldots,t_n$\nbe fixed integers. 
Then $G$ with the operation\n\[\nf(x_1^{n})=(x_1)^{t_{1}}\cdot(x_{2})^{t_{2}}\cdot\ldots \cdot\n(x_{n-1})^{t_{n-1}}\cdot(x_{n})^{t_{n}},\n\]\nis an $n$-ary group if and only if\n\begin{enumerate}\n\item \ $x^{t_1}=x=x^{t_n}$,\n\item \ $t_j=k^{j}$ \ for some integer $k$ and all $j=2,\ldots,n-1$,\n\item \ $(x\cdot y)^k=x^k\cdot y^k.$\n\end{enumerate}\n\end{proposition}\n\nIn this case we say that $(G;f)$ is {\em derived from the\n$k$-exponential group}.\n\begin{proposition}\nAn $n$-ary group $(G;f)$ is derived from the $k$-exponential\n$(k>0)$ group $(G;\cdot)$ if and only if there exists $a\in G$\nsuch that\n\begin{enumerate}\n\item \ $f(a,\ldots,a)=a$,\n\item \ $f_{(k)}(\stackrel{(n-2)}{a},x,\stackrel{(n-2)}{a},x,\n\ldots,\stackrel{(n-2)}{a},x,a)=x$.\n\end{enumerate}\nMoreover, $(G;\cdot)=ret_a(G;f)$.\n\end{proposition}\n\nUsing the above results we can describe all non-isomorphic $n$-ary\ngroups with small numbers of elements.\n\nFor this let $(\mathbb{Z}_k;+)$ be the cyclic group modulo $k$.\nConsider the following $n$-ary operations:\n \[\arraycolsep=.5mm\n \begin{array}{rl}\nf_{a}( x_1^n) &\equiv (x_{1}+\ldots+x_{n}+a)\,({\rm mod}\,k) ,\\[4pt]\ng_{d}(x_1^n)&\equiv (x_1+dx_2+\ldots+d^{n-2}x_{n-1}+x_n)\,({\rm mod}\,k),\\[4pt]\ng_{d,c}(x_1^n)&\equiv\n(x_1+dx_2+\ldots+d^{n-2}x_{n-1}+x_n+c)\,({\rm mod}\,k),\n\end{array}\n \]\nwhere $a\in\mathbb{Z}_k$, \ $c,d\in\mathbb{Z}_k\setminus\{0,1\},$\n\ $d^{\,n-1}\equiv 1\,({\rm mod}\,k).$ Additionally, for the\noperation $g_{d,c}$ we assume that $dc\equiv c\,({\rm mod}\,k)$\nholds. 
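For small parameters these operations can be verified by exhaustive search; a minimal sketch with the illustrative choices $k=4$, $n=3$, $a=1$ and $d=3$ (so that $d^{\,n-1}=9\equiv 1\,({\rm mod}\,4)$):

```python
from itertools import product

# Direct check, on Z_4 with n = 3, that f_a and g_d define ternary groups.
# The parameters k = 4, a = 1, d = 3 are illustrative choices.
k = 4
Zk = range(k)

def f_a(x, y, z, a=1):
    return (x + y + z + a) % k

def g_d(x, y, z, d=3):
    return (x + d * y + z) % k

def associative(op):         # all three bracketings of a 5-tuple agree
    return all(
        op(op(x, y, z), u, v) == op(x, op(y, z, u), v) == op(x, y, op(z, u, v))
        for x, y, z, u, v in product(Zk, repeat=5)
    )

def solvable(op):            # unique solvability in the last place
    return all(
        len({op(x, y, z) for z in Zk}) == k for x, y in product(Zk, repeat=2)
    )

print(all(associative(op) and solvable(op) for op in (f_a, g_d)))  # -> True
```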
By Theorem~\\ref{thGH}, $(\\mathbb{Z}_{k};f_{a})$,\n$(\\mathbb{Z}_{k};g_{d})$ and $(\\mathbb{Z}_{k};g_{d,c})$ are\n$n$-ary groups.\n\n\\bigskip\n\nIn \\cite{Gl-Mi-S} the following theorem is proved:\n\\begin{theorem} A $k$-element $n$-ary group $(G;f)$ is\n$\\langle\\varphi,b\\rangle$-derived from the cyclic group of order\n$k$ if and only if it is isomorphic to exactly one $n$-ary group\nof the form $(\\mathbb{Z}_{k};f_{a})$, $(\\mathbb{Z}_{k};g_{d})$ or\n$(\\mathbb{Z}_{k};g_{d,c})$, where $d|gcd(k,n-1)$ and $c|k.$\n\\end{theorem}\n\nAn infinite cyclic group can be identified with the group\n$(\\mathbb{Z};+)$. This group has only two automorphisms:\n$\\varphi(x)=x$ and $\\varphi(x)=-x$. So, according to\nTheorem~\\ref{thGH}, $n$-ary groups defined on $\\mathbb{Z}$ have\nthe form $(\\mathbb{Z};f_a)$ or $(\\mathbb{Z};g_{-1})$, where\n$$\ng_{-1}(x_1^n)=x_1-x_2+x_3-x_4+\\ldots+x_n, $$ and $n$ is odd. Since\n$\\varphi_k(x)=x+k$ is an isomorphism of $n$-ary groups\n$(\\mathbb{Z};f_a)$ and $(\\mathbb{Z};f_b)$, where $a=b+(n-1)k$, the\ncalculation of non-isomorphic $n$-ary groups of the form\n$(\\mathbb{Z};f_a)$ can be reduced to the case when\n$a=0,1,\\ldots,n-2$. From Corollary~\\ref{izoth2} it follows that\nthese $n$-ary groups are non-isomorphic.\n\nSo, we have proved\n\\begin{theorem}\nAn $n$-ary group $\\langle\\varphi,b\\rangle$-derived from the\ninfinite cyclic group $(\\mathbb{Z};+)$ is isomorphic to an $n$-ary\ngroup $(\\mathbb{Z};f_a)$, where $0\\leqslant a\\leqslant (n-2)$, or\nto $(\\mathbb{Z};g_{-1})$, where $n$ is odd.\n\\end{theorem}\n\nDenote by $Inn\\left(G;\\cdot\\right)$ the group of all inner\nautomorphisms of $(G;\\cdot)$, by $Out\\left( G;\\cdot \\right)$ the\nfactor group of $Aut\\left(G;\\cdot\\right)$ by\n$Inn\\left(G;\\cdot\\right)$, and by $Out_{n}\\left( G;\\cdot \\right) $\nthe set of all cosets $\\overline{\\gamma}\\in\nOut\\left(G;\\cdot\\right)$ containing $\\gamma$ such that\n$\\gamma^{n-1}\\in Inn\\left(G;\\cdot\\right)$. 
Then, as is proved\nin \cite{Gl-Mi-S}, for centerless groups, i.e. groups for which\n$\,card(Cent\left(G;\cdot\right))=1$, the following theorem is\ntrue.\n\n\begin{theorem} Let $\left( G;\cdot \right)$ be a centerless group such that\n$Out_{n}\left(G;\cdot\right)$ is abelian, and let\n$\left(G;f\right)$ be $\langle\alpha,a\rangle$-derived, and\n$\left(G;g\right)$ be $\langle\beta,b\rangle$-derived from\n$\left(G;\cdot\right)$. Then $\left(G;f\right)$ is isomorphic to\n$\left( G;g\right)$ if and only if $\,\alpha\circ\beta^{-1}\in\nInn\left( G;\cdot\right)$.\n\end{theorem}\n\nThe number of pairwise non-isomorphic $n$-ary groups\n$\langle\varphi,b\rangle$-derived from a centerless group\n$(G;\cdot)$ is less than or equal to $s=card(\nOut_{n}\left(G;\cdot\right))$. It is equal to $s$ if and only if\n$Out\left(G;\cdot\right)$ is abelian.\n\nFor every $n$ and $k\neq 2,6$, there exists exactly one $n$-ary\ngroup which is $\langle\varphi,b\rangle$-derived from $S_{k}$ (for\n$k=2$ and $k=6$ we have one or two such $n$-ary groups, depending\non the parity of $n$).\n\nLet now $(G;\cdot)$ be an arbitrary group, $c\in G,$ $\varphi \in\nAut\left(G;\cdot \right) $. 
Let us put\n\[\n\arraycolsep=.5mm \begin{array}{rl}\n f_{c}^{(\cdot)}(x_1^n)&=x_1\cdot x_2\cdot\ldots\cdot x_n\cdot\n c,\\[4pt]\ng_{\varphi }^{(\cdot)}(x_1^n)& =x_1\cdot\varphi\left(x_{2}\right)\n\cdot\ldots\cdot\varphi^{n-1}( x_n) ,\\[4pt]\ng_{\varphi ,c}^{( \cdot)}(x_1^n)& =x_1\cdot\varphi\left(\nx_{2}\right)\cdot\ldots\cdot\varphi^{n-1}(x_n)\cdot c.\n\end{array}\n\]\n\nFor example (for details see \cite{Gl-Mi'87}), we have the\nfollowing:\n\n\begin{theorem} Let $l=gcd\left(n-1,12\right),$ \ $\left(G_{4};\ast\right)$\nbe the Klein four-group $($with $0$ as the neutral element$)$, let\n$\gamma ,\varepsilon \in Aut\left( G_{4};\ast \right),$ where\n$\gamma $ is of order $2$ and $\varepsilon $ of order $3$, and let\n$c\in G_{4}\backslash \{0\}$ be the fixed point of $\gamma $. Then\nevery $n$-ary group $\langle\varphi,b\rangle$-derived from\n$(G_{4};\ast)$ is isomorphic to exactly one $(G_{4};f)$, where $f$\nis one of the following $n$-ary group operations:\n\begin{enumerate}\n\item[$(a)$] \ $f_{0}^{\left( \ast \right) },$ $f_{1}^{\left( \ast\n\right) },g_{\ \gamma }^{\left( \ast \right) },g_{\ \gamma\n,c}^{\left( \ast \right) } $ or $g_{\ \varepsilon }^{\left( \ast\n\right) }$ \ \ for \ $l=12$,\n\item[$(b)$] \ $f_{0}^{\left( \ast \right) },$ $f_{1}^{\left(\n\ast \right) },g_{\ \gamma }^{\left( \ast \right) }$ or $g_{\ \varepsilon }^{\left( \ast \right) }$ \ \ for \ $l=6$,\n\item[$(c)$] \ $f_{0}^{\left( \ast \right) },$ $f_{1}^{\left( \ast\n\right) },g_{\ \gamma }^{\left( \ast \right) }$ or $g_{\ \gamma\n,c}^{\left( \ast \right) }$ \ \ for \ $l=4$,\n\item[$(d)$] \ $f_{0}^{\left( \ast \right) }$ or $g_{\ \varepsilon\n}^{\left( \ast \right) }$ \ \ for \ $l=3$,\n\item[$(e)$] \ $f_{0}^{\left( \ast \right) },$ $f_{1}^{\left( \ast\n\right) }$ or $g_{\ \gamma }^{\left( \ast \right) }$ \ \ for \\n$l=2$, \item[$(f)$] \ 
$f_{0}^{\\left( \\ast \\right) }$ \\ \\ for \\\n$l=1.$\n\\end{enumerate}\n\\end{theorem}\n\nComparing our results with results obtained in \\cite{Gl-Mi'87},\n\\cite{Gl-Mi'88} and \\cite{Gl-Mi-S} (cf. also \\cite{post} for\n$k=2,3$) we can tabularize the numbers of $n$-ary groups on\n$k$-element sets with $k<8$ in the following way (we use the\nabbreviations: commut. = commutative, idem. = idempotent):\n\n\\bigskip {\\small\\noindent\n\\begin{tabular}{|l|c|c|}\n\\hline $k=2$, \\ $l=gcd\\,(n-1,2)$ & $l=2$& $l=1\\rule{0mm}{3mm}$\n\\\\ \\hline\n$n\\equiv t({\\rm mod}\\,2)$&$t=1$&$t=0\\rule{0mm}{3mm}$\\\\\n\\hline all &2& 1\\\\ \\hline\n commutative&2 & 1 \\\\ \\hline\n commutative, idempotent & 1 & 0 \\\\ \\hline\n\\end{tabular}\n\n\\bigskip\\noindent\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline $k=3$, \\ $l=gcd\\,(n-1,6)$ &$l=6$ &$l=3$&$l=2$&$l=1\\rule{0mm}{3mm}$ \\\\\n\\hline $n\\equiv t\\,({\\rm mod}\\,6)$&$t=1$\n&$t=4$&$t=3,\\,5$&$t=0,\\,2\\rule{0mm}{3mm}$\\\\ \\hline\n {all} &3&2&2&1\\\\ \\hline\n{commutative} &2&2&1&1 \\\\ \\hline\n commutative, idempotent &1&1&0&0\\\\ \\hline\nnon-commut., medial, idempotent &1&0&1&0 \\\\ \\hline\n\\end{tabular}\n\n\\bigskip\\noindent\n\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline $k=4$, \\ $l=gcd\\,(n-1,12)$ &$l=12$&$l=6$&$l=4$&$l=3$&$l=2$&$l=1\\rule{0mm}{3mm}$ \\\\\n\\hline $n\\equiv t\\,({\\rm mod}\\,12)$ & $t=1$\n&$t=7$&$t=5,\\,9$&$t=4,\\,10$&$t=3,11$&$t=t_0\\rule{0mm}{3mm}$\\\\\n\\hline {all} &10&8&9&3&7&2 \\\\ \\hline\n commutative &5&4&5&2&4&2 \\\\ \\hline\n {commutative, idempotent} &2&1&2&0&1&0 \\\\ \\hline\n non-commut., medial, idem. 
&3&2&1&1&1&0 \\\\ \\hline\n {non-commut., medial}, non-idem., &2&2&3&0&2&0 \\\\ \\hline\n\\end{tabular}\n\n\\smallskip $t_0=0,\\,2,\\,6,\\,8$.\n\n\\bigskip\\noindent\n\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline $k=5$, \\ $l=gcd\\,(n-1,20)$ &$l=20$&$l=10$&$l=5$&$l=4$&$l=2$\n&$l=1\\rule{0mm}{3mm}$\n\\\\ \\hline $n\\equiv t\\,({\\rm mod}\\, 20)$&$t=1$ &$t=11$&$t=6,16$&$t=t_1$\n& $t=t_2$ & $t=t_3\\rule{0mm}{3mm}$\\\\ \\hline\n {all}&5&3&2&4&2&1 \\\\ \\hline\n {commutative}&2&2&2&1&1&1 \\\\ \\hline\n {commutative, idempotent} &1&1&1&0&0&0 \\\\ \\hline\n non-commut., idem., medial &3&1&0&3&1&0 \\\\ \\hline\n non-commut., non-idem., medial &0&0&0&0&0&0 \\\\ \\hline\n\\end{tabular}\n\n\\smallskip $t_1=5,9,13,17$, \\ \\\n\n$t_2=3,7,15,19$, \\ \\\n\n$t_3=0,2,4,8,10,12,14,18$.\n\n\\bigskip\\noindent\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline $k=6$, \\ $l=gcd\\,(n-1,6)$\n&$l=6$&$l=3$&$l=2$&$l=1\\rule{0mm}{3mm}$\n\\\\ \\hline\n$n\\equiv t\\,({\\rm mod}\\,6)$&$t=1$\n&$t=4$&$t=3$&$t=0,\\,2\\rule{0mm}{3mm}$\n\\\\ \\hline all&7&3&5&2\\\\ \\hline\ncommutative&4&2&2&1 \\\\\n\\hline {commutative, idempotent}&1&0&0&0\\\\ \\hline {medial,\nidempotent, non-commut.}&1&0&1&0 \\\\ \\hline\n non-commut., medial, non-idem.,&1&0&1&0 \\\\ \\hline\nnon-medial &1&1&1&1 \\\\\n\\hline\n\\end{tabular}\n\n\\bigskip\\noindent\n\\begin{tabular}{|l|c|c|c|c|c|c|c|c|}\\hline\n$k=7$, \\ $l=gcd\\,(n-1,42)$ &$l=42$&$l=21$&$l=14$&$l=7$ &$l=6$ &\n$l=3$&$l=2$ &$l=1\\rule{0mm}{3mm}$ \\\\\n\\hline $n\\equiv t\\,({\\rm mod}\\,42)$&$t=1$&$t=22$&$t=t_4$ &$t=t_5$ & $t=t_6$\n&$t=t_7$&$t=t_8$&$t=t_9\\rule{0mm}{3mm}$ \\\\\n\\hline all&7&4&3&2&6&3&2&1 \\\\\n\\hline {commutative}&2&2&2&2&1&1&1&1\\\\ \\hline\nnon-com., medial, idem.,& 5&2&1&0&5&2&1&0\\\\\n\\hline commutative, idempotent&1&1&1&1&0&0&0&0\\\\\n\\hline\n\\end{tabular}\n\n\\smallskip\n\n$t_4= 15, 29$,\n\n$t_5=8, 36$,\n\n$t_6=7,13,19,25,31,37$,\n\n$t_7=4,10,16,28,34,40$,\n\n$t_8= 
3,5,9,11,17,21,23,33,35,39,41$,\n\n$t_9=0,2,6,12,14,18,20,24,26,30,32,38$.\n}\n\n\section{Term equivalence of $n$-ary groups}\n\nFor any general algebra $\frak{A}=(A;\mathbb{F})$ one can define\nthe set $\mathbb{T}^{(n)}(\frak{A})$ of all {\it $n$-ary term\noperations} as the smallest set of $n$-ary operations on $A$\ncontaining $n$-ary projections (or $n$-ary trivial operations, in\nanother terminology) and closed under compositions with\nfundamental operations. Then the set\n$\mathbb{T}(\frak{A})=\bigcup\limits_{n=1}^{\infty}\mathbb{T}^{(n)}(\frak{A})$\nof all {\it term operations} is the smallest set of operations on\nthe set $A$ containing the set $\mathbb{F}$ of fundamental\noperations and all projections $e_i^{(n)}(x_1^n)=x_i$,\n($i=1,2,\ldots,n$, $n=1,2,\ldots$), and closed under (direct)\ncompositions. Of course, $\mathbb{T}(\frak{A})$ is a {\it clone}\nin the sense of Ph. Hall (see, e.g., \cite{Cohn}). It is worth\nmentioning that the term operations were also called {\it\nalgebraic operations} by several authors (see, e.g.,\n\cite{Mar'61}). Two algebras $\frak{A}_1=(A;\mathbb{F})$ and\n$\frak{A}_2=(A;\mathbb{G})$ are called {\it term equivalent} if\n$\mathbb{T}(\frak{A}_1)=\mathbb{T}(\frak{A}_2)$ (see, e.g.,\n\cite{Gb}, p. 32, 56). If elements from some subsets $A_1$ and\n$A_2$ of $A$ are treated as constant elements of algebras\n$\frak{A}_1=(A;\mathbb{F}\cup A_1)$ and\n$\frak{A}_2=(A;\mathbb{G}\cup A_2)$, respectively, and\n$\mathbb{T}(\frak{A}_1)=\mathbb{T}(\frak{A}_2)$, then $\frak{A}_1$\nand $\frak{A}_2$ are {\it polynomially equivalent}. Two varieties\n$\mathcal{V}_1$ and $\mathcal{V}_2$ of algebras (perhaps of\ndifferent types) are term equivalent (polynomially equivalent,\nrespectively) if for every algebra $\frak{A}_1\in\mathcal{V}_1$\nthere exists an algebra $\frak{A}_2\in\mathcal{V}_2$ term\nequivalent (polynomially equivalent, resp.) 
to $\\frak{A}_1$, and\nvice versa.\n\n\\medskip\n\nUsing Theorem~\\ref{genGH} and taking into account formulas\n(\\ref{k-fi}) and (\\ref{b_2}), we have\n\n\\begin{theorem}\\label{teq}\nLet $\\mathfrak{G}=(G;f,\\bar{}\\;)$ be an $n$-ary group for a fixed\n$n>2$, an element $a$ belong to $G$, and let $k$ be such a natural\nnumber that $(k-1)$ divide $(n-1)$. Then the algebra\n$\\mathfrak{G}_a=(G;f,\\bar{}\\; ,a)$, with the additional constant\n$a\\in G$ is term equivalent to the algebra $(G;g,\\varphi,b_2^k)$,\nwhere $\\varphi$ is an automorphism of a $k$-ary group $(G;g)$,\n$(k-1)$ divides $(n-1)$, and $b_2,\\ldots,b_k$ are constant\nelements in $G$ such that $\\varphi(b_i)=b_i$ for $i=2,\\ldots,k$\nand $g(\\varphi^{n-1}(x),b_2^k)=g(b_2^k,x)$ for all $x\\in G $.\n\\end{theorem}\n\nIndeed, $f$ is determined by $g$, $\\varphi$ and $b_2,\\ldots,b_k$\nby the formula $(iv)$ from Theorem~\\ref{genGH}. The function \\\n$\\bar{}:x\\to\\overline{x}$ can be easily expressed by the operation\n$g$. Namely, if $f=g_{(t)}$, then $\\overline{x}=x^{<-t>}$, where\n$x^{}$ is a $k$-ary power of $x$. According to\nTheorem~\\ref{genGH}, the element $\\overline{x}$ also can be\nexpressed by $g$, $\\varphi$ and $b_2,\\ldots,b_n$ as a solution $z$\nof the equation\n$$\nx=f(\\stackrel{(n-1)}{x},z)=g_{(\\cdot)}(x,\\varphi(x),\\varphi^2(x),\\ldots,\n\\varphi(x)^{n-2},b_2^k,z).\n$$\n\nConversely, the operations of $(G;g,\\varphi,b_2^k)$ are term\nderived from the operations of $(G;f,\\bar{}\\;)$ by (\\ref{k-fi})\nand (\\ref{b_2}). $(G;g)=ret_{a_1^r}(G;f)$, where\n$a_1=\\ldots=a_r=a$, which completes the proof of\nTheorem~\\ref{teq}.\n\n\\medskip\n\nBy Theorem~\\ref{thGH} and formulas (\\ref{autom}) and (\\ref{b}), we\nhave the following corollaries.\n\n\\begin{corollary}\\label{term1}{\\em\nLet $\\mathfrak{G}=(G;f,\\bar{}\\;)$ be an $n$-ary group for a fixed\n$n>2$, and let an element $a$ belong to $G$. 
Then the algebra\n$\mathfrak{G}_a=(G;f,\bar{}\; ,a)$ is term equivalent to the\n$HG$-algebra $(G;\cdot,\varphi,b)$, where $(G;\cdot)$ is a group,\n$\varphi\in Aut(G;\cdot)$, $b\in G$, $\varphi(b)=b$,\n$\varphi^{n-1}(x)=b\cdot x\cdot b^{-1}$ for all $x\in G$.}\n\end{corollary}\n\n\begin{corollary}\label{term2}\n{\em For fixed $n>2$, the variety of $n$-ary groups $($as algebras\nof type $(n,1)\,)$ is polynomially equivalent to the variety of the\ncorresponding $HG$-algebras $($as algebras of type $(2,1,1,0)\,)$.\n}\n\end{corollary}\n\nLet now $\mathfrak{G}=(G;f,\bar{}\;)$ be a semiabelian $n$-ary\ngroup $(n>2)$. \linebreak Then the $HG$-algebra associated with\n$\mathfrak{G}$ has a commutative group operation denoted by $+$.\nLet $\mathfrak{H}=(G;+,\varphi,b)$ be associated with\n$\mathfrak{G}$ and $\mathfrak{G}_a=(G;f,\bar{}\;,a)$. Then\n$\mathfrak{H}$ and $\mathfrak{G}_a$ are term equivalent (see\nTheorems \ref{thGH} and \ref{genGH}, Corollary~\ref{term1}, and\nformulas (\ref{inv}) -- (\ref{b_2})). In this case we have\n\[\n\arraycolsep=.5mm\n\begin{array}{rl}\n-y&=f(\overline{a},\stackrel{(n-3)}{y},\overline{y},\overline{a}\,),\\[4pt]\nx+y&=f(x,\stackrel{(n-3)}{(-y)},\overline{(-y)},\overline{a}\,),\\[4pt]\n\varphi(x)&=f(\overline{a},x,\stackrel{(n-2)}{a}),\\[4pt]\n{\rm and } \ \ \ \ b&=f(\stackrel{(n)}{\overline{a}}).\n\end{array}\n\]\n\nWe can describe all term operations of $\mathfrak{G}_a$ by using\nthe language of $HG$-algebras.\n\nFirst, we consider unary term operations. Denote by $g_i(x)$\nthe following operation\n\begin{equation}\label{g_i}\ng_i(x)=k_{i1}\varphi^{l_{i1}}(x)+k_{i2}\varphi^{l_{i2}}(x)+\ldots\n+k_{it}\varphi^{l_{it}}(x)\n\end{equation}\nfor some $t,l_{i1},\ldots,l_{it}$ non-negative integers and some\n$k_{i1},\ldots,k_{it}\in\mathbb{Z}$. 
Then it is easy to verify:\n\n\begin{lemma}\nLet $\,\mathfrak{H}=(G;+,\varphi,b)$ be the $HG$-algebra\nassociated with a semiabelian $n$-ary group $\mathfrak{G}$. Then\nall unary term operations of $\mathfrak{H}$ $($and of\n$\;\mathfrak{G}_a\,)$ are of the form\n\begin{equation}\label{g}\ng(x)=g_i(x)+k_g b\n\end{equation}\nfor some $g_i$ of the form $(\ref{g_i})$ and $k_g\in\mathbb{Z}$.\n\end{lemma}\n\nIndeed, it is enough to observe that\n$g\in\mathbb{T}^{(1)}(\mathfrak{H})$, $\varphi(g(x))$ is again of\nthe form (\ref{g}), and the set of all such operations is closed\nunder addition.\n\n\begin{theorem}\nLet $\,\mathfrak{H}=(G;+,\varphi,b)$ be the $HG$-algebra\nassociated with a semiabelian $n$-ary group $\mathfrak{G}$. Then\nall $m$-ary term operations of $\mathfrak{H}$ $($and of\n$\;\mathfrak{G}_a\,)$ are of the form\n\begin{equation}\label{sum}\nF(x_1,\ldots,x_m)=\sum\limits_{i=1}^{m}g_i(x_i)+k_F b\n\end{equation}\nfor some $g_i(x)$ of the form $(\ref{g_i})$ and\n$k_F\in\mathbb{Z}$.\n\end{theorem}\n\nA verification of this theorem can be done by induction with\nrespect to the complexity of term operations; we leave it to the\nreader.\n\n\section{$\mathcal{Q}$-independent sets in $HG$-algebras}\n\nE. Marczewski observed at the end of the 1950s that there are\ncommon features of linear independence of vectors and\nset-theoretical independence, and proposed a general scheme of\nindependence called here {\em $\mathcal{M}$-independence}. Recall\nthat the notion of set-theoretical independence (or, more\ngenerally, independence in Boolean algebras, see, e.g.,\n\cite{BalF82}, \cite{BrRu81}, \cite{Gla71}, \cite{Mar60}) was\nintroduced in the mid-1930s by G. Fichtenholz and L. Kantorovich\n\cite{FiKa34} and also, independently, by E. 
Marczewski himself,\nand this notion is very important in Measure Theory (see, e.g.,\n\cite{FiKa34}, \cite{Mar38}, \cite{Mar48a}, \cite {Myc68},\n\linebreak and \cite{Sik64}).\n\nLet $\mathfrak{A}=(A;\mathbb{F})$ be an algebra and let $\emptyset \neq\nX\subseteq A$. The set $X$ is said to be $\mathcal{M}$\textit{-independent}\n(see \cite{Mar'58}, \cite{Mar'61}) $(X\in Ind(\mathfrak{A};\mathcal{M})$,\nfor short$)$ if\n\n\begin{enumerate}\n\item[(a) ] $(\forall n\in \mathbb{N}$, $n\leqslant card(X))$\n$(\forall f,g\in\mathbb{T}^{(n)}(\mathfrak{A}))$\n($\forall\underset{\neq }{\underbrace{a_{1},\ldots ,a_{n}}}\in X$)\n\n\hspace*{3mm}$\big[f(a_{1}^{n})=g(a_{1}^{n})\Longrightarrow f=g \ \n({\rm in }\ A)\big]$.\n\n\bigskip\n\nThis condition is equivalent to each of the following ones:\n\medskip\n\n\item[(b)] $(\forall n\in \mathbb{N}$, $n\leqslant card(X))$\n$(\forall f,g\in\mathbb{T}^{(n)}(\mathfrak{A}))$ $(\forall p:\nX\rightarrow A)$ $(\forall a_{1},\ldots ,a_{n}\in X)$\n$$\n\big[f(a_{1}^{n})=g(a_{1}^{n})\Longrightarrow f(p(a_{1}),\ldots\n,p(a_{n}))=g(p(a_{1}),\ldots ,p(a_{n}))\big],\n$$\n\n\item[(c)] $(\forall p\in A^{X}) \ (\exists\bar{p}\in Hom(\langle\nX\rangle_{\mathfrak{A}},\mathfrak{A})) \ \bar{p}|_{X}=p$, where\n$\langle X\rangle_{\mathfrak{A}}$ is a subalgebra of\n$\mathfrak{A}$ generated by $X$,\n\n\item[(d)] $\langle X\rangle _{\mathfrak{A}}$ is a $\mathbb{K}$-{\em free\nalgebra $\mathbb{K}$-freely generated by} $X$, where\n$\mathbb{K}=\{\mathfrak{A}\}$ (or, by Birkhoff Theorem,\n$\mathbb{K}=\mathcal{H}\mathcal{S}\mathcal{P}\{\mathfrak{A}\}$, a variety gene\-rated\nby $\mathfrak{A}$).\n\end{enumerate}\n\n\medskip\n\nBasic properties of $\mathcal{M}$-independence are the following\nones:\n\begin{itemize}\n\item (``\textit{hereditarity}'') $X\in Ind\;(\mathfrak{A},\mathcal{M})$, \ $Y\subseteq X\Longrightarrow 
Y\\in\nInd\\;(\\mathfrak{A},\\mathcal{M})$,\\vspace{5pt}\n\n\\item $(\\forall X\\subseteq A)$ $(\\forall\\,{\\rm finite} \\ Y\\subseteq\nX)\\;\\big( Y\\in Ind(\\mathfrak{A},\\mathcal{M})\\Longrightarrow X\\in\nInd(\\mathfrak{A},\\mathcal{M})\\big)$\\\\\n(i.e. the family $\\mathbb{J}=Ind(\\mathfrak{A},\\mathcal{M})$ is of finite\ncharacter).\n\\end{itemize}\n\nThe notion of $\\mathcal{M}$-independence is stronger than that of\nindependence with respect to the closure operator of such a kind\n$X\\mapsto\\langle X\\rangle_{\\mathfrak{A}}$ (for $X\\subseteq A$).\n\nThere are some notions of independence which are not special cases\nof $\\mathcal{M}$-independence, such as:\n\n\\noindent\n\\begin{tabular}{rl}\n$\\bullet $ & linear independence in abelian groups, \\\\\n$\\bullet $ & independence with respect to a closure operator\n$\\mathcal{C}$\n(i.e. $\\mathcal{C}$-independence), \\\\\n$\\bullet $ & stochastic independence, \\\\\n$\\bullet $ & ``independence-in-itself'' defined by J.~Schmidt (in 1962), \\\\\n$\\bullet $ & ``weak independence'' used by S.~\\thinspace\n\\'{S}wierczkowski (in 1964).\n\\end{tabular}\n\nFor this reason, a general notion of independence with respect to\na family of mappings was proposed by E.~Marczewski in 1966 (and\nstudied in \\cite{Mar'69} and \\cite{Gla71}). 
This notion is general\nenough to cover the above-mentioned kinds of independences.\n\nLet $\\emptyset\\neq X\\subseteq A$ and\n$$\n\\mathcal{Q}_X\\subseteq A^X=\\mathcal{M}_X=\\{p\\;|\\;p:X\\rightarrow A\\},\n$$\n$$\n\\mathcal{Q}(A)=\\mathcal{Q}=\\bigcup\\{\\mathcal{Q}_X \\;|\\;X\\subseteq A \\},\n$$\n$$\n\\mathcal{M}(A)=\\mathcal{M}=\\bigcup\\{A^X \\;|\\;X\\subseteq A\\}.\n$$\n\nFor an algebra $\\mathfrak{A} = (A, \\mathbb{F})$, a mapping \\ $p:X\n\\rightarrow A$ belongs to $\\mathcal{H}_X (\\mathfrak{A})$ if and only if\nthere exists a homomorphism \\ $\\bar{p}:\\langle X\\rangle\n_{\\mathfrak{A}}\\rightarrow A$ such that $\\bar{p}|_{X}=p$.\n\nThe set $X$ is said to be $\\mathcal{Q}$-{\\it independent} ($X\\in\nInd(\\mathfrak{A},\\mathcal{Q})$, for short) if\n\n\\begin{center}\n$\\mathcal{Q}_X \\subseteq \\mathcal{H}_X (\\mathfrak{A})$\n\\end{center}\n\n\\noindent or, equivalently,\n$$\n(\\forall p\\in \\mathcal{Q}_{X})\\;(\\forall\\;{\\rm finite }\\; n\\leqslant\ncard(X))\\;(\\forall f,g\\in \\mathbb{T}^{(n)}(\\mathfrak{A}))\\;\n(\\forall a_{1},\\ldots ,a_{n}\\in X)\\;$$\n$$\\big[f(a_{1}^{n})=g(a_{1}^{n})\\Longrightarrow f(p(a_{1}),\\ldots\n,p(a_{n}))=g(p(a_{1}),\\ldots ,p(a_{n}))\\big].$$\n\n\\medskip\n\\noindent{\\bf Examples.} (In the following examples we will use a\nterminology which differs from the original one.)\n\n\\begin{enumerate}\n\\item[1)] $\\mathcal{Q}=\\mathcal{M}=\\bigcup \\{A^{X}\\;|\\;X\\subseteq A\\};$ $\\mathcal{M}$-\\textit{independence}\n(E.~Marczewski: \\textit{general algebraic independence},\n\\cite{Mar'58}),\n\n\\item[2)] $\\mathcal{Q}=\\mathcal{G}=\\bigcup \\{p|_{X}\\;|\\;p\\in A^{A}$ is \\ diminishing,\n$X\\subseteq A\\};\\;\\mathcal{G}$-{\\it independence }(G.~Gr\\\"{a}tzer:\n\\textit{weak independence}, \\cite{Grat67}), where a mapping $p$ is\ncalled \\textit{diminishing} if\n$$\n(\\forall f,g\\in \\mathbb{T}^{(1)}(\\mathfrak{A}))\\;(\\forall a\\in A)\n\\;\\big[f(a)=g(a)\\Longrightarrow 
f(p(a))=g(p(a))\\big].$$\n\\end{enumerate}\nFor abelian groups, the notion of $\\mathcal{G}$-independence gives\nus the well-known \\textit{linear independence}.\n\nWe can now obtain some results on $\\mathcal{Q}$-independence (for\nspecial families $\\mathcal{Q}$ of mappings, e.g., for $\\mathcal{Q}=\\mathcal{M}$ and $\\mathcal{G}$) in\n$HG$-algebras of type $\\mathfrak{H}=(G;+,\\varphi,b)$, where $(G;+)$\nis an abelian group.\n\nIn this case, the equality\n\\begin{equation}\\label{F=G}\nF_1(x_1,\\ldots,x_m)=F_2(x_1,\\ldots,x_m)\n\\end{equation}\n(for two term operations of the form (\\ref{sum}) in $\\mathfrak{H}$)\nis equivalent to the equality\n\\begin{equation}\\label{H=0}\nH(x_1,\\ldots,x_m)=0,\n\\end{equation}\nwhere $H\\in\\mathbb{T}^{(m)}(\\mathfrak{H})$, i.e.\n$H(x_1,\\ldots,x_m)=\\sum\\limits_{i=1}^{m}h_i(x_i)+k_{_H} b$, and\n$0$ denotes the zero of the group $(G;+)$.\n\nConsider a subset $X$ of $G$ and suppose that for\n$a_1,\\ldots,a_m\\in X$ the equality\n\\begin{equation}\\label{H_a}\nH(a_1,\\ldots,a_m)=0\n\\end{equation}\nholds. Applying the mapping $p:X\\to\\langle\nX\\rangle_{\\mathfrak{H}}$ defined by $p(a_i)=0$ and $p(x)=x$ for\n$x\\in X\\setminus\\{a_1,\\ldots,a_m\\}$, we get $k_{_H}b=0$. (We\nobserve that such a mapping $p$ belongs to the families $\\mathcal{M}$ and $\\mathcal{G}$.)\nTherefore we have\n$$\n\\sum\\limits_{i=1}^{m} h_i(a_i)=0.\n$$\nConsider the mapping $q_j:X\\to \\langle X\\rangle_{\\mathfrak{H}}$\ndefined for fixed $j\\in\\{1,\\ldots,m\\}$ as follows:\n\\[\nq_j(x)=\\left\\{\\begin{array}{ccl} a_j&{\\rm if }&x=a_j ,\\\\[4pt]\n0&{\\rm if }&x\\ne a_j.\\end{array}\\right.\n\\]\nWe obtain $h_j(a_j)=0$ for all $j=1,2,\\ldots,m$. (In the\nconsidered case all $q_j$ belong to $\\mathcal{M}$ and $\\mathcal{G}$.)\n\nIn particular, we can easily observe, by similar considerations,\nthat the following result holds:\n\n\\begin{theorem}\nLet $X\\subseteq G$ be a subset of the $HG$-algebra\n$\\mathfrak{H}=(G;+,\\varphi,b)$. 
Then $X\\in Ind(\\mathfrak{H},\\mathcal{G})$ if and only if\nfor any $m\\leqslant card(X)$, for all $a_1,\\ldots,a_m\\in X$, and for\nevery term operation\n$H(x_1,\\ldots,x_m)=\\sum\\limits_{i=1}^{m}h_i(x_i)+k_{_H}b$ the\nequality\n\\begin{equation}\n\\sum\\limits_{i=1}^{m}h_i(a_i)+k_{_H}b=0\n\\end{equation}\nis equivalent to\n\\[\n\\left(\\forall i\\in\\{1,\\ldots,m\\}\\right)\\,\\left(h_i(a_i)=0\\;\\&\\;\nk_{_H}b=0\\right).\n\\]\n\nMoreover, $X$ is $\\mathcal{M}$-independent in this $HG$-algebra\nif and only if for all pairwise different elements $a_1,\\ldots,a_m$ from $X$\nequality $(26)$ implies $h_i(a_i)=0$ for all $i=1,2,\\ldots,m$ and\n$k_{_H}b=0$.\n\\end{theorem}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section*{Introduction}\n\nThe evaluation of the exponential of a square matrix $\\e^{\\bf A}$ is a classic problem of computational linear algebra.[1] A large number of methods have\nbeen proposed and used for its evaluation. None of them, however, has proved to be generally applicable, sufficiently accurate, and reasonably fast.\nThus this problem might be considered unsolved. It is clearly an important and pervasive problem which arises in a wide variety of contexts.[2,3] A method capable of producing\na solution to essentially arbitrary precision would thus be of great importance. The present work uses a variant of a method described in Ref. 1, namely the introduction of an\nartificial time parameter\nwhich produces an initial-value problem. Instead of calling a \\lq canned\\rq \\ solver, the present work uses a method introduced by several of the present authors [4] to solve quantum mechanical\ninitial-value problems. In this method, a finite element technique is used to propagate from the initial condition at $t=0$, which is the unit matrix, to the desired result at $t=1$. 
The time axis\nis broken up into an arbitrary number of time elements and the solution is propagated from element to element, using a special basis in time introduced here for the first time. \n\nThe next section presents the analysis of the problem and describes the solution algorithm. Then the algorithm is applied to evaluate the exponential of a number of test matrices. Finally, the\nconclusions are presented.\n\n\\section*{Analysis and Solution Algorithm}\n\nThe problem at hand is the evaluation of the exponential of a square (generally complex) matrix $e^{\\bf A}$. The present method introduces an\nartificial time parameter so as to transform the evaluation into the solution of an initial-value problem. For a given $n\\times n$ square matrix $\\mathbf A$, consider the following parametrized function definition:\n\n\\begin{equation} \n \\mathbf{\\Psi}(t) \\equiv \\rme^{\\mathbf{A}t}, \\\\\n\\end{equation}\n\n\\noindent\nwhere $\\bf \\Psi$ is also an $n\\times n$ square matrix and the desired solution is $\\mathbf{\\Psi}(1)=\\rme^{\\mathbf{A}}$ which evolves\nfrom the initial value given by $\\mathbf{\\Psi}(0)=\\mathbf{1}$ ($\\mathbf{1}$ is the diagonal unit matrix). This is a solution of the following linear ordinary differential equation, written for each matrix element,\n\\begin{equation}\n\\dot{\\Psi}_{ij}(t)=\\sum_{k=1}^{n}A_{ik}\\Psi_{kj}(t),\n\\end{equation}\n\n\\noindent\nwhere the over-dot stands for the time-derivative. The time axis extends over the interval $[0, 1]$. Now break the time axis\nup into elements that extend between nodes $t_{\\rm{i}}$ and $t_{\\rm{i+1}}$, and define a local time $\\tau$ that spans $[-1, 1]$.\nThe local time transformation is defined by the relation, \n\\begin{equation}\n\\tau =qt -p,\n\\end{equation}\n\n\\noindent\nwhere, $q = 2\/(t_{\\rm{i+1}} - t_{\\rm{i}}) $ and $p= (t_{\\rm{i+1}} + t_{\\rm{i}})\/(t_{\\rm{i+1}} - t_{\\rm{i}})$.\nThus, for an arbitrary time element $e$, Eq. 
$(2)$ can be written in terms of local time $\\tau$ as\n\n\\begin{equation}\nq\\dot{\\Psi}^{(e)}_{ij}(\\tau)=\\sum_{k=1}^{n}A^{(e)}_{ik}\\Psi^{(e)}_{kj}(\\tau).\n\\end{equation}\n\nAt this point, we will use the following \\emph{ansatz} for $\\bf \\Psi^{(e)}$ to enforce continuity between two consecutive finite elements\n\n\\begin{equation}\n\\Psi^{(e)}_{ij}(\\tau) = f^{(e)}_{ij}(\\tau)+\\Psi^{(e-1)}_{ij}(+1), \\qquad f^{(e)}_{ij}(-1) = 0\n\\end{equation}\n\n\\noindent\nand expand $f^{(e)}_{ij}(\\tau)$ as\n\n\\begin{equation}\nf^{(e)}_{ij}(\\tau) = \\sum_{\\mu=0}^{m-1} B_{\\mu}^{ij(e)} s_\\mu (\\tau)\n\\end{equation}\n\n\\noindent\nin a basis we define by\n\n\\begin{equation}\ns_{\\mu}(\\tau) = \\int_{-1}^\\tau T_{\\mu}(\\tau) \\, \\rm{d}\\tau\n\\end{equation}\n\n\\noindent\nwhere $T_{\\mu}(\\tau)$ are Chebyshev Polynomials of the first kind.[5] Note that these basis functions enforce the\ninitial condition on the $f$'s given in Eq. (5) since $s_{\\mu}(-1)=0$. The result for the decomposition of $f$ in $m$ basis functions is\n\n\\begin{eqnarray}\n\\Psi^{(e)}_{ij}(\\tau) = \\sum_{\\mu=0}^{m-1} B_{\\mu}^{ij(e)} s_\\mu (\\tau) +\\Psi_{ij}^{(e-1)}(+1) \\\\\n\\dot{\\Psi}^{(e)}_{ij} (\\tau) = \\sum_{\\mu=0}^{m-1} B_{\\mu}^{ij(e)} T_\\mu (\\tau).\n\\end{eqnarray}\n\n\\noindent\nNow, insert (8) and (9) into Eq. (4), and multiply from the left by $w(\\tau) s_{\\mu'}(\\tau)$ and integrate from $-1$ to $+1$\n(note that $w(\\tau) = (1- \\tau^2)^{-1\/2}$ is the weighting function for Chebyshev polynomials). 
Rearranging terms we get,\n\n\\begin{equation}\n\\fl \\eqalign {q\\sum_{\\mu} [\\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) T_{\\mu}(\\tau) \\, \\rm{d} \\tau] B_{\\mu}^{ij(e)} =\n\\sum_{k\\mu} A^{(e)}_{ik} [\\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) s_{\\mu}(\\tau) \\, \\rm{d} \\tau] B_{\\mu}^{kj(e)} \\cr\n+ \\sum_{k} A^{(e)}_{ik} [\\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) T_{0}(\\tau) \\, \\rm{d} \\tau] \\Psi_{kj}^{(e-1)}(+1)}\n\\end{equation}\n\n\\noindent\nwhere $T_{0}(\\tau) = 1$. Defining the integrals in the above equation as\n\n\\begin{eqnarray}\nC_{\\mu' \\mu} &\\equiv \\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) T_{\\mu}(\\tau) \\, \\rm{d} \\tau \\\\\nD_{\\mu' \\mu} &\\equiv \\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) s_{\\mu}(\\tau) \\, \\rm{d} \\tau \\\\\ng_{\\mu'} &\\equiv \\int_{-1}^{1} s_{\\mu'}(\\tau) w(\\tau) T_{0}(\\tau) \\, \\rm{d} \\tau\n\\end{eqnarray}\n\n\\noindent\nand substituting Eqs. (11)--(13) into Eq. (10) gives,\n\n\\begin{equation}\n{q\\sum_{\\mu} C_{\\mu' \\mu} B_{\\mu}^{ij(e)} =\n\\sum_{k\\mu} A^{(e)}_{ik} D_{\\mu' \\mu} B_{\\mu}^{kj(e)}\n+ g_{\\mu'} \\sum_{k} A^{(e)}_{ik} \\Psi_{kj}^{(e-1)}(+1)}\n\\end{equation}\n\n\\noindent\nor, rearranging,\n\n\\begin{equation}\n\\sum_{\\mu k} (q C_{\\mu' \\mu} \\delta_{ik} - A^{(e)}_{ik} D_{\\mu' \\mu} )B_{\\mu}^{kj(e)}\n= g_{\\mu'} \\sum_{k} A^{(e)}_{ik} \\Psi_{kj}^{(e-1)}(+1)\n\\end{equation}\n\n\\noindent\nwhere $\\delta_{ik}$ is the usual Kronecker delta function. Then rewrite Eq. 
(15) as\n\n\\begin{equation}\n\\sum_{\\mu k} \\Omega^{(e)}_{(\\mu' i)(\\mu k)}B_{\\mu}^{kj(e)}\n= \\Gamma_{\\mu'}^{ij(e,e-1)}\n\\end{equation}\n\n\\noindent\nwhere\n\n\\begin{eqnarray}\n\\Omega^{(e)}_{(\\mu' i)(\\mu k)} &\\equiv (q C_{\\mu' \\mu} \\delta_{ik} - A^{(e)}_{ik} D_{\\mu' \\mu} ) \\\\\n\\Gamma_{\\mu'}^{ij(e,e-1)} &\\equiv g_{\\mu'} \\sum_{k} A^{(e)}_{ik} \\Psi_{kj}^{(e-1)}(+1).\n\\end{eqnarray}\n\nEquation (16) is a set of simultaneous equations of size $(n \\times m)$, which can be written in matrix form as, \n\n\\begin{equation}\n\\mathbf{\\Omega^{(e)} B}^{j (e)}= \\mathbf{\\Gamma}^{j(e,e-1)}\\qquad j = 1, 2,..., n.\n\\end{equation}\n\n\\noindent\nHere, $\\mathbf{\\Omega^{(e)}}$ is a (complex) matrix and, for each $j$, $\\mathbf{\\Gamma^{j(e,e-1)}}$ and $\\mathbf{B^{j(e)}}$ are vectors. \nEq. (19) applies for each time element $e$. The solution is propagated from element to element, from $t=0$ to $t=1$. \nThe above equation can be solved numerically in many ways, but we have chosen the method of LU decomposition.[6]\nThe present method is ideally suited to high-performance computers, where the solver of choice would probably\nbe iterative. In the present case, we\napply LU decomposition to \n$\\mathbf{\\Omega^{(e)}}$ and, back-substituting all of the $\\mathbf{\\Gamma}^{j(e,e-1)}$'s, we obtain all the elements of $\\mathbf{B^{j(e)}}$ (which can also be viewed as a three-dimensional array). This LU decomposition only needs to be done once since $\\mathbf{\\Omega^{(e)}}$ is independent of time. Thus, the propagation\njust involves a matrix-vector multiply.\nThen, we employ Eq. (8) to solve for $\\mathbf{\\Psi^{(e)}}(\\tau = 1)$ for element $e$, which, in turn, will be used as $\\mathbf{\\Psi^{(e + 1)}}(\\tau = -1)$ for the next element $e + 1$. 
Starting off with a unit matrix for $\\mathbf{\\Psi^{(1)}}(t = 0)$, we continue this process until we calculate $\\mathbf{\\Psi}(t = 1)$ at the last node, which is the exponential of the given matrix $\\mathbf{A}$.\n\n\\section*{Results}\n\nThe calculations presented below were done on a Macintosh Intel laptop using GNU C++, which has a machine accuracy limit of $2.22045\\times 10^{-16}$.\nAs an illustration, let us borrow a ``pathological'' matrix from [1], which we have modified slightly to make it even worse. Consider a matrix $\\mathbf{M1}$ given by\n\n\n\\begin{eqnarray}\n\\eqalign{\\mathbf{M1} &= \\left[\\begin{array}{cc}-73 & 36 \\\\-96 & 47\\end{array}\\right] \\\\\n&=\\left[\\begin{array}{cc}1 & 3 \\\\2 & 4\\end{array}\\right] \\left[\\begin{array}{cc}-1 & 0 \\\\0 & -25\\end{array}\\right] {\\left[\\begin{array}{cc}1 & 3 \\\\2 & 4\\end{array}\\right]}^{-1}.}\n\\end{eqnarray}\n\n\\noindent\nThe exponential of $\\mathbf{M1}$ can be easily calculated as\n\\begin{eqnarray*}\ne^\\mathbf{M1} &= \\left[\\begin{array}{cc}1 & 3 \\\\2 & 4\\end{array}\\right] \\left[\\begin{array}{cc}e^{-1} & 0 \\\\0 & e^{-25}\\end{array}\\right] \\left[\\begin{array}{cc}-2 & 3\/2 \\\\1 & -1\/2\\end{array}\\right] \\\\\n&= \\left[\\begin{array}{cc}{-2e^{-1}+3e^{-25}} & {(3\/2)(e^{-1}-e^{-25})} \\\\{-4e^{-1}+4e^{-25}} & {3e^{-1}-2e^{-25}}\\end{array}\\right].\n\\end{eqnarray*}\n\n\\noindent\nThe above matrix, exact to $16$ decimal places, is given by\n\n\\begin{equation}\ne^\\mathbf{M1}\\cong \\left[\\begin{array}{cc}-0.7357588823012208 & 0.5518191617363316 \\\\-1.4715177646302175 & 1.1036383234865511\\end{array}\\right].\n\\end{equation}\n\n\\noindent\nThe result of our program, run using just $8$ time steps and $8$ basis functions, is displayed below. 
The result is accurate to $13$ decimal places already.\n\n\\begin{equation}\ne^\\mathbf{M1}\\cong \\left[\\begin{array}{cc}-0.7357588823012(181) & 0.5518191617363(358) \\\\-1.4715177646302(120) & 1.1036383234865(592)\\end{array}\\right].\n\\end{equation}\n\nAs an example of a matrix that is not diagonalizable over the reals, consider the following matrix $\\mathbf{M2}$, with complex eigenvalues\n\n\\begin{equation}\n\\mathbf{M2} = \\left[\\begin{array}{cc}0 & -1 \\\\1 & 0\\end{array}\\right]\n\\end{equation}\n\n\\noindent\nIt can be shown that\n\n\\begin{equation}\ne^\\mathbf{M2} = \\left[\\begin{array}{cc}\\cos(1) & -\\sin(1) \\\\ \\sin(1) & \\cos(1)\\end{array}\\right]\n\\end{equation}\n\n\\noindent\nWe are able to achieve $14$-decimal-digit accuracy with $8$ time steps and $8$ basis functions.\n\n\\begin{equation}\ne^\\mathbf{M2} \\cong \\left[\\begin{array}{cc} 0.54030230586814 & -0.84147098480790 \\\\ 0.84147098480790 & 0.54030230586814\\end{array}\\right]\n\\end{equation}\n\n\\noindent\nTable 1 shows the minimum number of basis functions, for a given number of time steps, required to achieve a precision of $\\pm 1\\times 10^{-14}$ on matrices whose exponential is known exactly. The matrices chosen are: the simplest possible matrix, a $2 \\times 2$ real unit matrix, for which the result is the constant $e$ on the diagonal, and the matrices $\\mathbf{M1}$ and $\\mathbf{M2}$.\n\nLet us now check our program on matrices that we picked randomly and for which we had no \\emph{a priori} knowledge as to the result of their exponentiation. We fixed 8 time steps and\/or 8 basis functions, varied the other parameter from 5 to 40, and checked how the accuracy of the results varied. For the sake of saving space, we display only the result of the last element--the other elements of the matrix exponential behaved similarly. 
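The scheme of Eqs. (10)-(19) is easy to prototype. The sketch below is our own illustrative reimplementation, not the authors' C++ code: it assumes NumPy, evaluates the integrals $C$, $D$, and $g$ of Eqs. (11)-(13) by Chebyshev-Gauss quadrature, and, for simplicity, replaces the LU factorization with a precomputed inverse of $\Omega$, which is likewise reused for every time element.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def expm_fem(A, steps=8, m=8):
    """Matrix exponential via finite elements in artificial time.

    Illustrative sketch of the paper's scheme: uniform elements on [0, 1]
    and the antiderivative basis s_mu of Eq. (7), with s_mu(-1) = 0."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    # Chebyshev-Gauss quadrature is exact for polynomial integrands against
    # the weight w(t) = (1 - t^2)^(-1/2) up to degree 2N - 1.
    N = 4 * m
    tau = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))
    wq = np.pi / N
    T = [Chebyshev.basis(mu) for mu in range(m)]
    s = [t.integ(lbnd=-1) for t in T]            # s_mu(-1) = 0 by construction
    Tv = np.array([t(tau) for t in T])           # T_mu at the quadrature nodes
    sv = np.array([si(tau) for si in s])
    C = wq * sv @ Tv.T                           # Eq. (11)
    D = wq * sv @ sv.T                           # Eq. (12)
    g = wq * sv.sum(axis=1)                      # Eq. (13), since T_0 = 1
    s1 = np.array([si(1.0) for si in s])         # s_mu(+1), used in Eq. (8)
    q = 2.0 * steps                              # q = 2/(t_{i+1} - t_i)
    Omega = q * np.kron(C, np.eye(n)) - np.kron(D, A)   # Eq. (17)
    Omega_inv = np.linalg.inv(Omega)  # stand-in for the paper's LU step; reused
    Psi = np.eye(n, dtype=complex)
    for _ in range(steps):
        Gamma = np.kron(g[:, None], A @ Psi)     # Eq. (18), all columns j at once
        B = (Omega_inv @ Gamma).reshape(m, n, n)
        Psi = Psi + np.einsum('m,mij->ij', s1, B)   # Eq. (8) evaluated at tau = +1
    return Psi
```

In our tests, `expm_fem` with the default 8 elements and 8 basis functions reproduces $e^{\mathbf{M2}}$ to high accuracy, consistent with the behavior reported above.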
\nThe matrices chosen are a $5 \\times 5$ real matrix $\\mathbf{M3}$,\n\n\\begin{equation}\n\\mathbf{M3} = \\left[\\begin{array}{ccccc}-0.1 & -0.2 & -0.3 & -0.4 & -0.5 \\\\-0.6 & -0.7 & -0.8 & -0.9 & -1 \\\\0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\\\0.6 & 0.7 & 0.8 & 0.9 & 1 \\\\1 & 2 & 3 & 4 & 0\\end{array}\\right]\n\\end{equation}\n\n\\noindent\nand a $3 \\times 3$ complex matrix $\\mathbf{M4}$\n\n\\begin{equation}\n\\mathbf{M4} = \\left[\\begin{array}{ccc}1+i & 1-i & i \\\\1 & 2i & 0 \\\\1+2i & -1+i & -1-i\\end{array}\\right].\n\\end{equation}\n\n\nFrom Table 2, one can see that the numbers up to $12$ decimal digits have saturated after $5$ time steps and\/or basis functions. Similarly, Table 3 shows $13$ digits of accuracy as we switch the two parameters from $5$ to $40$, except for the case of $5$ basis functions, which only shows $8$ accurate significant digits. This shows that for complex matrices, there is inherently more work for the program to handle because of the imaginary part of the matrix elements and there is apparently more sensitivity to the number of basis functions used than to the number of time steps.\n\n\n\\Table{\\label{tone}Minimum number of basis functions and time steps required for a precision of $\\pm 1\\times 10^{-14}$ for a $2 \\times 2$ unit matrix, $\\mathbf{M1}$ and $\\mathbf{M2}$.}\n\\br\n\\centre{2}{$2\\times 2$ unit matrix}&\\centre{2}{$\\mathbf{M1}$}&\\centre{2}{$\\mathbf{M2}$}\\\\\n\n\\crule{2}&\\crule{2}&\\crule{2}\\\\\nTime steps & Basis functions& Time steps & Basis functions& Time steps & Basis functions\\\\\n\\mr\n\\01&\t11&\t\\0\\005&\t\t-&\t\t\\01&\t\t11\\\\\n\\02&\t\\09&\t\\0\\008&\t\t7&\t\\02&\t\t\\09\\\\\n\\04&\t\\08&\t\\016&\t\t6&\t\\04&\t\t\\08\\\\\n\\08&\t\\07&\t\\050&\t\t5&\t\\08&\t\t\\07\\\\\n16&\t\\06&\t256&\t\t\t4&\t15&\t\t\\06\\\\\n58&\t\\05&\t\\0\\0-&\t\t-&\t\t40&\t\t\\05\\\\\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\Table{\\label{ttwo}Results of matrix $e^{M3}_{5 5}$ for typical runs of 8 time steps 
and 8 basis functions.}\n\\br\nResult& Time steps & Basis functions\\\\\n\\mr\n\\underline{3.210309305973}118 &\t\t\\05&\t\t\\08\\\\\n\\underline{3.210309305973}288 &\t\t40&\t\t\\08\\\\\n\\underline{3.210309315373}377&\t\t\\08&\t\t\\05\\\\\n\\underline{3.210309305973}281&\t\t\\08&\t\t40\\\\\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\Table{\\label{tthree}Results of matrix $e^{M4}_{3 3}$ for typical runs of 8 time steps and 8 basis functions.}\n\\br\nResult& Time steps & Basis functions\\\\\n\\mr\n\\underline{-0.5119771222980}63 - i \\underline{0.0897728113135}12 &\t\t\\05&\t\t\\08\\\\\n\\underline{-0.5119771222980}81 - i \\underline{0.0897728113135}26 &\t\t40&\t\t\\08\\\\\n\\underline{-0.51197712}1264660 - i \\underline{0.08977281}0979965&\t\t\\08&\t\t\\05\\\\\n\\underline{-0.5119771222980}82 - i \\underline{0.0897728113135}26&\t\t\\08&\t\t40\\\\\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\section*{Conclusion}\n\nWe have presented a robust, easily used, and accurate algorithm for the evaluation of the exponential of a matrix. We did this by introducing an\nartificial time parameter and evaluating the matrix exponential as the solution of an initial-value problem in this artificial time. We solved the initial-value problem\nby using finite elements in time with a new time basis which we defined here so as to enforce the initial conditions on the solution at the beginning of each time finite element. This resulted in set of simultaneous equations for the expansion coefficients. \nThe actual algorithm employed here was an LU decomposition which was very fast and efficient. The relative efficiency of the method should be most\napparent when implemented on high-performance computers since the algorithm is highly parallel.\nThe method was applied to several matrices as a proof of the validity of the algorithm. 
\nThe results of our calculations show that we need only about $8$ basis functions and $8$ time steps, for the matrices considered, to obtain accuracies as great as $13$ significant digits. We trust that this method of numerically calculating the exponential of a matrix will be recognized to be a \\emph{nondubious} one! \\\\\n\n\\section*{Acknowledgements}\nThis work was supported by the NSF CREST Center for Astrophysical Science and Technology under Cooperative Agreement \nHRD-0630370.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section*{Acknowledgment}\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction} \\label{sec:intro}\n\n\\IEEEPARstart{D}{ynamic} systems which exhibit both continuous state evolution and discrete state transitions can typically be modeled as {\\em hybrid automata (HA)} (\\cite{henzinger:96, lynch:03}).\nComputing the reach set of a hybrid automaton from a given set of initial states is a problem of fundamental importance, as it is related to safety verification and automated controller synthesis.\nEven though many systems can be so modeled, it is in general undecidable to compute the exact reach set \\cite{henzinger:95} except for classes of hybrid automata whose continuous dynamics are fairly simple, such as timed automata (TA) \\cite{alur:94} and initialized rectangular hybrid automata (IRHA) \\cite{henzinger:95}.\nNeither of these classes of automata allows the standard linear system dynamics that are widely used for control systems. 
\nTo broaden the class of systems that can be addressed, research in hybrid system verification in the recent years has focused on algorithms computing over-approximations of the reach set for various classes of hybrid automata (\\cite{henzinger:97, frehse:08, chutinan:03, girard:05, asarin:00, kurzh:00, clarke:03, tiwari:02}).\nHowever, even with this relaxation from exact reach set to over-approximations, it is still a challenging problem to compute an over-approximation of the reach set of hybrid automata with linear dynamics with arbitrarily small approximation error and a termination guarantee for the computation.\n\n\n\\subsection{Related Work}\nFor the computation of reach set of hybrid automata with linear dynamics, several tools and approaches have been proposed in the literature.\nAs an example, HyTech \\cite{henzinger:97} computes the reach set of hybrid automata whose continuous dynamics are more general than those of IRHA by translating the original model into an IRHA if the model is {\\em clock translatable}.\nOtherwise, an over-approximate reach set is computed through an approach, called {\\em linear phase-portrait approximation}, which approximates the original hybrid automaton by relaxing the continuous dynamics of the original automaton.\nPHAVer \\cite{frehse:08} can handle a class of systems called linear hybrid automata that have affine dynamics. 
\nIt computes a conservative over-approximation of the reach set of such hybrid automata through on-the-fly over-approximation of the phase portrait, which is a variation of the phase-portrait approximation in \\cite{henzinger:97}.\nRecently, another tool, SpaceEx, has been developed based on the algorithm called LeGuernic-Girard (LGG) algorithm \\cite{guernic:10} which allows the handling of hybrid automata with linear differential equations with a larger number of continuous variables compared to other approaches.\n\nIn \\cite{chutinan:03}, a class of hybrid automata, called {\\em polyhedral-invariant hybrid automata (PIHA)}, is defined and an algorithm is proposed to construct a finite state transition system, which is a conservative approximation of the original PIHA. \nDetermining a polyhedral approximation of each sampled segment of the continuous state evolution between switching planes is the underlying fundamental technique in the algorithm that is used.\nAnother approach proposed in \\cite{asarin:00} is also based on the idea of sampling and polyhedral over-approximation of continuous state evolution of a continuous linear dynamics. \nOn the other hand, in \\cite{kurzh:00} and \\cite{girard:05}, ellipsoids and zonotopes are used respectively for approximating continuous state evolution.\n\nHowever, while these algorithms and tools compute some over-approximation of the reach set of hybrid systems with linear dynamics, computation of an over-approximate reach set which is arbitrarily close to the exact reach set of such hybrid systems with guaranteed termination remains an open issue for further research. 
\n\n\n\n\\subsection{Challenges and Contributions}\nIn general, the key challenges in reach set computation of HA are \n\\begin{inparaenum}[(i)] \n\t\\item to over-approximate the exact continuous flow with arbitrarily small approximation error,\n\t\\item to determine when and where a discrete transition occurs, and\n\t\\item to develop a reach set computation algorithm with termination guarantee. \n\\end{inparaenum}\nIn this paper, we address the problem of computing an over-approximation of the reach set of a special class of hybrid automata, called {\\em Deterministic and Transversal Linear Hybrid Automaton (DTLHA)}, starting from an initial state over a finite time interval.\nWe call such an over-approximate reach set as a {\\em bounded $\\epsilon$-reach set}.\nOur approach can be related to other approaches that use sampling and polyhedral over-approximation as in \\cite{chutinan:03, asarin:00}.\nThe main contributions of our approach are as follows: \n\\begin{inparaenum}[(i)]\n\t\\item We show that an over-approximation of the reach set of a DTLHA can be computed arbitrarily closely to the exact reach set.\n\t\\item We also show that such computation is guaranteed to terminate under a deterministic and transversal restriction on the discrete dynamics. \n\t\\item Furthermore, to facilitate practical computation, we extend these theoretical results to consider the numerical calculation errors caused by finite precision calculation capabilities. 
\n\\end{inparaenum}\nBased on the theoretical results, we propose an algorithm to compute a bounded $\\epsilon$-reach set of a DTLHA, as well as a software architecture that is designed to improve the flexibility and the efficiency in computing such an over-approximation.\n\nThe paper is organized as follows.\nIn Section \\ref{sec:pre}, we introduce definitions and notations that are used throughout this paper.\nIn Section \\ref{sec:theory}, we show that, for arbitrarily small $\\epsilon > 0$, a bounded $\\epsilon$-reach set of a DTLHA starting from an initial state can be computed under the assumption of infinite precision numerical calculation capabilities. \nIn Section \\ref{sec:cond}, we first derive a set of conditions for computation of a bounded $\\epsilon$-reach set, and then extend these conditions to consider errors caused by finite precision numerical calculation capabilities.\nIn Section \\ref{sec:design}, we propose an algorithm for a bounded $\\epsilon$-reach set computation, as well as an architecture for software implementation of the proposed algorithm.\nFinally, we illustrate an example of bounded $\\epsilon$-reach set computation in Section \\ref{sec:imp}, followed by concluding remarks in Section \\ref{sec:con}.\n\n\n\n\n\n\\section{Preliminaries} \\label{sec:pre}\n\n\nLet $\\mathcal{X} \\subset \\mathbb{R}^n$ be a continuous state space over which a hybrid automaton is defined.\nFor a polyhedron $\\C \\subseteq \\mathbb{R}^n$, we denote its interior by $\\C^{\\circ}$, and its boundary by $\\partial \\C$.\nWe will also use the notation $\\B_r(x)$ to denote a closed ball of radius $r$ with center $x$, i.e., $\\B_r(x) := \\lbrace y \\in \\mathbb{R}^n : \\Vert y - x \\Vert \\leq r \\}$. \nThe specific norm that we use in the definition of $\\B_r(x)$ as well as the sequel is the $\\ell_{\\infty}$-norm. 
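Since the $\ell_{\infty}$ ball is an axis-aligned hypercube, it is a polytope with $2^n$ vertices, and the image of a polytope under a linear map is the polytope spanned by the images of its vertices. A minimal sketch of these two facts (our own illustration, assuming NumPy; not code from the paper):

```python
import itertools
import numpy as np

def linf_ball_vertices(center, r):
    """The 2^n vertices of B_r(x) in the l-infinity norm (a hypercube)."""
    center = np.asarray(center, dtype=float)
    signs = itertools.product((-1.0, 1.0), repeat=len(center))
    return np.array([center + r * np.array(s) for s in signs])

def linear_image(M, vertices):
    """Vertices spanning the image polytope: a linear map preserves convex
    hulls, so mapping the vertices suffices to propagate the whole set."""
    return vertices @ np.asarray(M).T
```

For instance, propagating a hypercubic over-approximation one sampling step forward under linear dynamics would apply `linear_image` with $M = e^{A\Delta t}$.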
\nSince we are using the $\\ell_{\\infty}$-norm, $\\B_r(x)$ is a hypercubic neighborhood of $x$.\nOne of the advantages of using the $\\ell_{\\infty}$-norm is that the induced hypercubic neighborhood is easily computed. \nMore generally, a hypercube is a special case of a polyhedron, which is important since it is easy to propagate the image of this set under linear dynamics.\nThis is useful in Section \\ref{sec:theory} when we describe our approach for bounded $\\epsilon$-reach set computation.\n\n\nWe now describe the class of hybrid automata considered.\nWe assume that $\\mathcal{X}$ is a closed and bounded subset of Euclidean space, and is partitioned into a collection of polyhedral regions $\\mathcal{C} := \\{\\mathcal{C}_1, \\cdots, \\mathcal{C}_m \\}$ such that $\\C^{\\circ}_i \\ne \\emptyset$ for each $i \\in \\{1, \\cdots, m\\}$ and \n\n\n\\begin{equation}\\label{eq:pre:tess}\n\t\\bigcup_{i=1}^{m} \\mathcal{C}_{i} = \\mathcal{X}, \\quad \\mathcal{C}_{i}^{\\circ} \\cap \\mathcal{C}_{j}^{\\circ} = \\emptyset \\quad for~i \\ne j,\n\\end{equation}\nwhere $m$ is the size of the partition, and each $\\C_i$ is a polyhedron, called \\emph{cell}. \nTwo cells $\\C_i$ and $\\C_j$ are said to be \\emph{adjacent} if the affine dimension of $\\partial \\mathcal{C}_i \\cap \\partial \\mathcal{C}_j$ is $(n-1)$, or, equivalently, cells $\\C_i$ and $\\C_j$ intersect in an $(n-1)$-dimensional facet. \nTwo cells $\\C_i$ and $\\C_j$ are said to be \\emph{connected} if there exists a sequence of adjacent cells between $\\C_i$ and $\\C_j$.\n\n\n\\begin{defn} \\label{def:lha}\nAn $n$-dimensional \\emph{Linear Hybrid Automaton (LHA)},\\footnote{In the hybrid system literature \\cite{henzinger:97, alur:93} the word ``linear automaton'' has been used to denote a system where the differential equations and inequalities involved have constant right hand sides. This does not conform to the standard notion of linearity where the right hand side is allowed to be a function of state. 
In particular, it does not include the standard class of linear time-invariant systems that is of central interest in control systems design and analysis. We use the term ``linear'' in this latter more mathematically standard way that therefore encompasses a larger class of systems, and, more importantly, encompasses classes of switched linear systems that are of much interest.} \nis a tuple $(\\mathbb{L}, Inv, A, u, \\xrightarrow{G})$ satisfying the following properties: \n\\begin{enumerate}[(a)]\n\t\\item $\\mathbb{L}$ is a finite set of \\emph{locations} or \\emph{discrete states}. The state space is $\\mathbb{L} \\times \\mathbb{R}^n$, and an element $(l, x) \\in \\mathbb{L} \\times \\mathbb{R}^n$ is called a \\emph{state}.\n\t%\n\t\\item[(b)] $Inv: \\mathbb{L} \\rightarrow 2^{\\C}$ is a function that maps each location to a set of cells, called an \\emph{invariant set} of a location, such that \n\t \\begin{inparaenum}[(i)]\n\t\t\\item for each $l \\in \\mathbb{L}$, all the cells in $Inv(l)$ are connected, \n\t\t\\item for any two locations $l, l' \\in \\mathbb{L}$, $Inv(l)^{\\circ} \\cap Inv(l')^{\\circ} = \\emptyset$, and \n\t\t\\item $\\bigcup_{l \\in \\mathbb{L}} Inv(l) = \\X$.\n\t\\end{inparaenum}\n\t%\n\t\\item[(c)] $A: \\mathbb{L} \\rightarrow \\mathbb{R}^{n \\times n}$ is a function that maps each location to an $n \\times n$ real-valued matrix, and \n\t%\n\t\\item[(d)] $u: \\mathbb{L} \\rightarrow \\mathbb{R}^n$ is a function that maps each location to an $n$-dimensional real-valued vector.\n\t%\n\t\\item[(e)] $\\xrightarrow{G}: (\\mathbb{R}^n, \\mathbb{L}) \\times (\\mathbb{R}^n, \\mathbb{L})$ is a binary relation which defines a \\emph{discrete transition} from one state $(x_1, l_1)$ to another state $(x_2, l_2)$ such that $(x_1, l_1) \\xrightarrow{G} (x_2, l_2)$ when $G$ is satisfied and $x_2$ is set to $x_1$ after a discrete transition.\n\\end{enumerate}\n\\end{defn}\nIn the sequel, for each $l_i \\in \\mathbb{L}$, we use $A_i$, $u_i$, $Inv_i$ to 
denote $A(l_i)$, $u(l_i)$, and $Inv(l_i)$, respectively.\n\nAn example of an LHA satisfying Definition \\ref{def:lha} is shown in Section \\ref{sec:imp:example}.\nNext, we define the behavior of an LHA.\n\\begin{defn} \\label{def:traj}\nFor a location $l_i \\in \\mathbb{L}$, a \\emph{trajectory} of duration $t \\in \\mathbb{R}^{+}$ for an $n$-dimensional LHA $\\A$ is a continuous map $\\eta$ from $[0,t]$ to $\\mathbb{R}^n$, such that\n\\begin{enumerate}[(a)]\n \\item $\\eta(\\tau)$ satisfies the differential equation\n \\begin{equation}\\label{eq:pre:lti}\n \\dot{\\eta}(\\tau) = A_i \\eta(\\tau) + u_i ,\n \\end{equation}\n \\item $\\eta(\\tau) \\in Inv_i$ for every $\\tau \\in [0,t]$.\n\\end{enumerate}\n\\end{defn}\n\n\n\\begin{defn} \\label{def:exec}\nAn \\emph{execution} $\\alpha$ of an LHA $\\A$ from a starting state $(l_0, x_0) \\in \\mathbb{L} \\times \\mathbb{R}^n$ is defined to be the concatenation of a finite or infinite sequence of trajectories $\\alpha = \\eta_0 \\eta_1 \\eta_2 \\ldots$, such that\n\\begin{enumerate}[(a)]\n \\item $\\eta_0(0) = x_0$,\n \\item $\\eta_{k}(0) = \\eta_{k-1}(\\eta_{k-1}.dur)$ for $k \\ge 1$, \n\\end{enumerate}\nwhere $\\eta_k$ represents a trajectory defined at some location $l \\in \\mathbb{L}$ and $\\eta_k.dur$ denotes the duration of $\\eta_k$.\nWe also define $\\alpha.dur := \\sum_k \\eta_k.dur$ where $\\alpha.dur$ denotes the duration of an execution $\\alpha$.\n\\end{defn}\n\n\nWe can represent an execution $\\alpha$ of an LHA $\\A$ from an initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\mathbb{R}^n$ for time $[0,t]$ as a continuous map $x:[0,t] \\rightarrow \\mathbb{R}^n$ such that\n\\begin{inparaenum}[(a)]\n \\item $t = \\alpha.dur$,\n \\item $x(0) = x_0 \\in Inv_0$,\n \\item $x(\\tau_k) = \\eta_{k}(0)$, and\n \\item $x(\\tau) = \\eta_{k-1}(\\tau - \\tau_{k-1})$ for $\\tau \\in [\\tau_{k-1}, \\tau_k]$,\n\\end{inparaenum}\nwhere $\\tau_0 = 0$, and $\\tau_k = \\sum_{i=0}^{k-1} \\eta_i.dur$ for $k \\ge 1$.\nNote that 
$\\tau_k$ for $k \\ge 1$ represents the time at the $k$-th discrete transition between locations and the continuous state is not reset during discrete transitions.\n\n\\begin{defn} \\label{def:trans}\nFor an execution $x(t)$ of an LHA, a discrete transition $(x_i, l_i) \\xrightarrow{G} (x_j, l_j)$ occurs if $x_i = x(\\tau')$ for some time $\\tau'$, \n$x(\\tau') \\in Inv_i \\cap Inv_j$ and $x(\\tau') = \\lim_{\\tau \\nearrow \\tau'} x(\\tau)$ where $x(\\tau) \\in (Inv_i)^{\\circ}$ for $\\tau \\in (\\tau'-\\delta,\\tau')$ for some $\\delta > 0$.\n\n\\end{defn}\n\\begin{defn} \\label{def:transversal} \nA discrete transition is called \\emph{deterministic} if there is only one location $l_j \\in \\mathbb{L}$ to which a discrete transition state $x(\\tau_k)$ can make a discrete transition from $l_i$. \nWe call a discrete transition a \\emph{transversal discrete transition} if there exists $\\epsilon > 0$ such that\n\\begin{equation}\\label{eq:pre:trans}\n \\langle \\dot{x}_{i}(\\tau_k), \\vec{n}_i \\rangle \\ge \\epsilon ~~\\land~~ \\langle \\dot{x}_{j}(\\tau_k), \\vec{n}_i \\rangle \\ge \\epsilon ,\n\\end{equation}\nwhere $\\langle x, y \\rangle$ denotes the inner product between $x$ and $y$, $\\vec{n}_i$ is an outward normal vector of $\\partial Inv_i$ at $x(\\tau_k)$, and $\\dot{x}_{i}(\\tau_k) = A_i x(\\tau_k) + u_i$, and $\\dot{x}_{j}(\\tau_k) = A_j x(\\tau_k) + u_j$ are the vector fields at $x(\\tau_k)$ evaluated with respect to the continuous dynamics of location $l_i$ and $l_j$, respectively. \n\\end{defn}\nFig. \\ref{fig:transition} illustrates a case where $x(\\tau_k)$ satisfies such a deterministic and transversal discrete transition condition.\nNote that if $x(\\tau_k)$ satisfies a deterministic and transversal discrete transition condition, then $x(\\tau_k)$ must make a discrete transition from a location $l_i$ to the other location $l_j$, and $l_j$ has to be unique. 
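For concreteness, the transversality condition in (\\ref{eq:pre:trans}) can be checked numerically at a candidate transition state. The Python sketch below is only an illustration, not part of the construction in this paper: the function name, the one-dimensional example dynamics, and the assumption that the outward normal $\\vec{n}_i$ is available are all ours.

```python
import numpy as np

def is_transversal(x, n_i, A_i, u_i, A_j, u_j, eps=1e-6):
    """Check <A_i x + u_i, n_i> >= eps and <A_j x + u_j, n_i> >= eps,
    i.e., both vector fields cross the facet with outward normal n_i
    at a speed bounded away from zero."""
    v_i = A_i @ x + u_i  # vector field of location l_i at x
    v_j = A_j @ x + u_j  # vector field of location l_j at x
    return float(v_i @ n_i) >= eps and float(v_j @ n_i) >= eps

# Hypothetical 1-D example: x' = 1 in both locations, boundary normal +1,
# so the trajectory crosses the facet x = 0 transversally.
A = np.zeros((1, 1))
u = np.ones(1)
print(is_transversal(np.zeros(1), np.ones(1), A, u, A, u))  # True
```

Note that a return value of \\texttt{False} does not rule out a transition; it only means the strict margin $\\epsilon$ in (\\ref{eq:pre:trans}) could not be certified at that state.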
\nFurthermore, \\emph{Zeno behavior}, i.e., an infinite number of discrete transitions within a finite amount of time, cannot occur if every discrete transition is transversal.\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=7cm]{transition.png}\n\\caption{A deterministic and transversal discrete transition from a location $l_i$ to a location $l_j$ occurring at $x(\\tau_k) \\in \\partial Inv(l_i) \\cap \\partial Inv(l_j)$.}\n\\label{fig:transition}\n\\end{center}\n\\end{figure}\n\n\nWe now define a special class of LHA in which every discrete transition satisfies the determinism and transversality conditions of Definition \\ref{def:transversal}:\n\n\\begin{defn} \\label{pre:def:dtlha}\nGiven an LHA $\\mathcal{A}$, a starting state $(l_0, x_0) \\in \\mathbb{L} \\times \\X$, a time bound $T$, and a jump bound $N$, we call $\\mathcal{A}$ a \\emph{Deterministic and Transversal Linear Hybrid Automaton (DTLHA)} if all discrete transitions in the execution starting from $x_0$ up to time $t_f := \\min \\{T, \\tau_N \\}$ are deterministic and transversal, where $\\tau_N$ is the time of the $N$-th discrete transition.\n\\end{defn}\n\n\nNext, we define the bounded reach set of a DTLHA and its over-approximation as follows:\n\n\\begin{defn}\nA continuous state in $\\X$ is \\emph{reachable} if there exists some time $t$ at which it is reached by some execution $x$.\n\\end{defn}\n\\begin{defn} \\label{def:pre:reach}\nGiven a state $x_0$ and a time $t$, the \\emph{bounded reach set} up to time $t$, denoted as $\\R_t(x_0)$, of a DTLHA $\\A$ is defined to be the set of continuous states that are reachable for some time $\\tau \\in [0, t]$ by some execution $x$ starting from $x_0 \\in Inv_0$.\n\\end{defn}\n\\begin{defn} \\label{def:pre:ereach}\nGiven $\\epsilon > 0$, a set of continuous states $S$ is called a \\emph{bounded $\\epsilon$-reach set} of a DTLHA $\\A$ over a time interval $[0,t]$ from an initial state $x_0$ if 
$\\R_t(x_0) \\subseteq S$ and\n\\begin{equation}\\label{eq:pre:ereach}\n\td_H(\\R_t(x_0), S) \\le \\epsilon,\n\\end{equation}\nwhere $d_H(\\P, \\Q)$ denotes the Hausdorff distance between two sets $\\P$ and $\\Q$, defined as $d_H(\\P, \\Q) := \\max \\{ \\sup_{p \\in \\P} \\inf_{q \\in \\Q} d(p, q), \\sup_{q \\in \\Q} \\inf_{p \\in \\P} d(p, q) \\}$ where $d(p,q) := \\Vert p - q \\Vert$.\n\\end{defn}\n\n\nIn the sequel, we use $\\D_t(\\P)$ to denote the set of states reached at time $t$ from a set $\\P$ at time $0$.\nSimilarly, for the set of reached states over a time interval $[t_1, t_2)$ from $\\P$, we use $\\D_{[t_1, t_2)}(\\P)$.\nWe also use $\\D_t(\\P, \\gamma)$ to denote an over-approximation of $\\D_t(\\P)$ with an approximation parameter $\\gamma > 0$, calling it a $\\gamma$-approximation of $\\D_t(\\P)$ if it satisfies \n\\begin{inparaenum}[(i)]\n\\item $\\D_t(\\P) \\subset \\D_t(\\P, \\gamma)$ and\n\\item $d_H(\\D_t(\\P),$ $\\D_t(\\P, \\gamma)) \\le \\gamma$.\n\\end{inparaenum}\nNote that $\\D_0(\\P, \\gamma)$ is simply a $\\gamma$-approximation of the set $\\P$.\n\n\n\n\\section{Bounded $\\epsilon$-Reachability of a DTLHA} \\label{sec:theory}\n\nIn this section, we consider the problem of computing a bounded $\\epsilon$-reach set of a DTLHA starting from an initial state over a finite time interval.\nMore precisely, we show that, for any given $\\epsilon > 0$, a DTLHA $\\A$, an initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\X$, a time upper bound $T \\in \\mathbb{R}^+$, and a discrete transition upper bound $N \\in \\mathbb{N}$, it is possible to compute a bounded $\\epsilon$-reach set of $\\A$ over a finite time interval $[0, t_f]$ under the assumptions that the following computations can be performed exactly:\n\\begin{inparaenum}[(i)]\n\t\\item $x(t) = e^{At} x_0 + \\int_0^t e^{A(t-s)} u ds$,\n\t\\item the convex hull of a finite set of points in $\\mathbb{R}^n$, and\n\t\\item the intersection between a polyhedron and a 
hyperplane,\n\\end{inparaenum}\nwhere $t_f$ is as defined in Definition \\ref{pre:def:dtlha}, $A \\in \\mathbb{R}^{n \\times n}$, and $u \\in \\mathbb{R}^n$.\n\n\n\n\\subsection{Bounded $\\epsilon$-Reach Set of a DTLHA at Initial Location} \\label{sec:theory:l0}\n\nWe first show how a trajectory of a DTLHA can be over-approximated through sampling and polyhedral over-approximation of each sampled state.\nThe basic approach for such over-approximation is shown in Fig. \\ref{fig:traj}. \nFor a given size of the over-approximation of each sampled state, the sampling period $h$ must ensure that the trajectory $x(t)$ is contained in the computed set of polyhedra.\nFor a given value of $\\epsilon > 0$, we now show how to determine a sampling period $h$ which guarantees\n\\begin{equation}\\label{eq:hcond}\n \\max_{\\tau \\in [0, h]} \\Vert x(t+\\tau) - x(t) \\Vert < \\epsilon \\qquad \\forall x(t) \\in \\mathcal{X}.\n\\end{equation}\n\nTo determine a suitable value of $h$ which results in (\\ref{eq:hcond}), we suppose $x(s) \\in (Inv_i)^{\\circ}$ for all $s \\in [t, t+h]$ for some location $l_i \\in \\mathbb{L}$.\nThen, for the dynamics $(A_i, u_i)$ of location $l_i$ and $x(s) \\in \\X$, we have \n\n\\begin{eqnarray} \\nonumber\n\\max_{s \\in [t, t+\\tau]} \\Vert \\dot{x}(s) \\Vert &=& \\max_{s \\in [t, t+ \\tau]} \\Vert A_i x(s) + u_i \\Vert \\\\ \\nonumber\n&\\le& \\max_{s \\in [t, t+\\tau]} \\{ \\Vert A_i \\Vert \\Vert x(s) \\Vert+ \\Vert u_i \\Vert \\} \\\\ \n&\\le& \\Vert A_i \\Vert \\bar{x}+ \\Vert u_i \\Vert,\n\\end{eqnarray}\nwhere $\\bar{x} = \\max_{x \\in \\mathcal{X}} \\Vert x \\Vert$. 
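As a quick illustration of this speed bound, $\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert$ can be evaluated directly from the matrices of each location. The Python sketch below uses hypothetical two-location dynamics and a hypothetical value for $\\bar{x}$ (both are our own assumptions), with the spectral norm standing in for the matrix norm:

```python
import numpy as np

# Hypothetical two-location LHA: each location has LTI dynamics
# x' = A_i x + u_i on a bounded state space X (assumption for illustration).
locations = [
    (np.array([[0.0, 1.0], [-1.0, 0.0]]), np.array([0.0, 0.0])),
    (np.array([[0.0, 1.0], [-2.0, -0.5]]), np.array([0.0, 1.0])),
]
x_bar = 10.0  # assumed bound on max ||x|| over X

# Per-location speed bound ||A_i|| * x_bar + ||u_i||, and the uniform
# bound over all locations, useful when one sampling period must serve
# every location.
speed_bounds = [np.linalg.norm(A, 2) * x_bar + np.linalg.norm(u)
                for A, u in locations]
v_bar = max(speed_bounds)
print(speed_bounds, v_bar)
```

The first location is a pure rotation ($\\Vert A \\Vert = 1$, $u = 0$), so its speed bound is simply $\\bar{x}$.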
\n\nFor a fixed $\\tau \\in [0, h]$, we can compute an upper bound on $\\Vert x(t+\\tau) - x(t) \\Vert$ as follows:\n\\begin{eqnarray} \\label{eq:h:ineq:0} \\nonumber\n\\Vert x(t+\\tau) - x(t) \\Vert &\\le& \\int^{t+\\tau}_{t} \\Vert \\dot{x}(s) \\Vert ds \\nonumber \\\\\n&\\le& \\int^{t+\\tau}_{t} \\max_{s \\in [t, t+ \\tau]} \\Vert \\dot{x}(s) \\Vert ds \\nonumber \\\\\n&\\le& \\int^{t+\\tau}_{t} (\\Vert A_i \\Vert\\bar{x} + \\Vert u_i \\Vert) ds \\nonumber \\\\\n&=& (\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert) \\tau.\n\\end{eqnarray}\n\nMaximization of both sides of (\\ref{eq:h:ineq:0}) over $\\tau \\in [0, h]$ gives us \n\\begin{eqnarray} \\label{eq:h:ineq:1} \\nonumber\n\t\\max_{\\tau \\in [0,h]} \\Vert x(t+\\tau) - x(t) \\Vert &\\le& (\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert) h \\\\\n\t&\\le& \\max_{l_i \\in \\mathbb{L}} (\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert) h.\n\\end{eqnarray}\n\nTo keep the right hand side below a given $\\epsilon > 0$, we can choose \n\\begin{equation} \\label{eq:h:ineq}\n\th < \\frac{\\epsilon}{\\bar{v}},\n\\end{equation}\nwhere $\\bar{v} := \\max_{l_i \\in \\mathbb{L}} (\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert)$.\n\nSo, if we choose $h$ as \n\\begin{equation} \\label{eq:h:eq}\n h = \\frac{\\epsilon\/2}{\\bar{v}} ,\n\\end{equation}\nthen it is clear that we can ensure (\\ref{eq:hcond}). \n\nWe now show that, for a given $\\epsilon > 0$, if a sampling period $h$ satisfies (\\ref{eq:h:eq}), then a set constructed as a union of $\\epsilon$-neighborhoods of the sampled states along a trajectory is indeed a bounded $\\epsilon$-reach set at an initial location. 
\nMoreover, such a bounded $\\epsilon$-reach set contains the bounded reach set not only from the initial state but also from the $(\\epsilon\/2)$-neighborhood of the initial state.\n\n\\begin{figure}\n\\begin{center}\n\t\\includegraphics[width=8cm]{traj.png}\n\t\\caption{An over-approximation of a trajectory $x(t)$ through sampling.}\\label{fig:traj}\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{lem} \\label{lem:theory:l0}\nGiven $\\epsilon > 0$ and a time bound $T > 0$, a bounded $\\epsilon$-reach set $\\R_{t_f}(x_0, \\epsilon)$ of a DTLHA $\\A$ from an initial state $(x_0, l_0)$ can be determined as follows:\n\\begin{equation} \\label{eq:lem:theory:l0}\n \\R_{t_f}(x_0, \\epsilon) := \\bigcup_{k=0}^{m-1} \\B_{\\epsilon}(x(k h)) ,\n\\end{equation}\nwhere $t_f := \\min \\{\\tau_1, T \\}$, $\\tau_1 := \\inf \\{ t \\in (0, T] : x(t) \\not\\in Inv_0 \\land x(0) = x_0 \\}$, $m := \\lceil t_f\/h \\rceil$ and $h = (\\epsilon\/2)\/\\max_{l_i \\in \\mathbb{L}} (\\Vert A_i \\Vert \\bar{x} + \\Vert u_i \\Vert)$.\nMoreover, this set has two additional properties:\n\\begin{enumerate}[(i)]\n \\item $\\lim_{\\epsilon \\rightarrow 0} \\R_{t_f}(x_0, \\epsilon) = \\R_{t_f}(x_0)$, and\n \\item It contains an $\\epsilon\/2$ neighborhood of $\\R_{t_f}(x_0)$, i.e.,\n \\begin{equation}\\nonumber\n \\bigcup_{z \\in \\R_{t_f}(x_0)} \\B_{\\epsilon\/2}(z) \\subseteq \\R_{t_f}(x_0, \\epsilon).\n \\end{equation}\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nSince $h$ satisfies (\\ref{eq:h:ineq}), it is easy to see that $\\R_{t_f}(x_0) \\subset \\R_{t_f}(x_0, \\epsilon)$ from the construction of $\\R_{t_f}(x_0, \\epsilon)$.\nNext, by the relation between $\\epsilon$ and $h$ in (\\ref{eq:h:eq}), it is clear that $h \\rightarrow 0$ as $\\epsilon \\rightarrow 0$. \nThis implies that $\\R_{t_f}(x_0, \\epsilon) \\rightarrow \\R_{t_f}(x_0)$ as $\\epsilon \\rightarrow 0$, establishing (i). 
\nFor (ii), as noted above, (\\ref{eq:h:eq}) actually chooses half the sampling period that would have sufficed to make it a bounded $\\epsilon$-reach set over $[0,t_f]$. \nHence, replacing $\\epsilon$ by $\\epsilon\/2$ in the right hand side of (\\ref{eq:lem:theory:l0}) still yields a bounded $\\epsilon$-reach set. \nThus the set obtained with this overly stringent choice of $h$ contains not just $\\R_{t_f}(x_0)$ but all points within a distance $\\epsilon\/2$ of it.\n\\end{proof}\n\n\n\n\n\\subsection{Continuity Property of DTLHA} \\label{sec:theory:cont}\n\nNow let us consider the problem of computing a bounded $\\epsilon$-reach set of a DTLHA $\\A$ not from an initial state $x_0$ but from a $\\delta$-neighborhood of $x_0$.\nWe first show that there exists a $\\delta > 0$ such that the bounded reach set of a DTLHA $\\A$ from a set $\\B_{\\delta}(x_0)$ at an initial location $l_0$ is contained in a bounded $\\epsilon$-reach set of $\\A$ from $x_0$ defined in (\\ref{eq:lem:theory:l0}).\n\n\n\\begin{lem} \\label{lem:theory:cont:1}\nGiven $\\epsilon > 0$, a time bound $T > 0$, an initial state $x_0$, and a DTLHA $\\A$, there exists a $\\delta > 0$ such that \n\\begin{equation}\n \\R_{t_f}(\\B_{\\delta}(x_0)) \\subseteq \\R_{t_f}(x_0,\\epsilon) ,\n\\end{equation}\nwhere $\\B_{\\delta}(x_0)$ is a $\\delta$-neighborhood around $x_0$, $\\R_{t_f}(\\B_{\\delta}(x_0))$ is the bounded reach set of $\\A$ from $\\B_{\\delta}(x_0)$ up to time $t_f$, and $t_f$ is as defined in Lemma \\ref{lem:theory:l0}. 
\nIn particular, $\\R_{t_f}(\\B_{\\epsilon \/(2C)}(x_0)) \\subseteq \\R_{t_f}(x_0, \\epsilon)$ for an appropriate $C$.\n\\end{lem}\n\\begin{proof}\nNotice that $x(t) = e^{A_0t} x_0 + \\int_0^t e^{A_0(t-s)} u_0 ds$, where $A_0$ and $u_0$ define the linear dynamics in an initial location $l_0$.\nIf we consider two different initial states $x_0$ and $y_0$ in $\\B_{\\delta}(x_0)$, then their trajectories $x(t)$ and $y(t)$ satisfy $x(t) - y(t) = e^{A_0 t} (x_0 - y_0)$.\nHence $\\Vert x(t) - y(t) \\Vert \\leq c e^{\\lambda t} \\Vert x_0 - y_0 \\Vert$ for some positive constant $c$ and some constant $\\lambda$.\n\nLet $C := c \\cdot \\max_{0 \\leq t \\leq {t_f}}\\lbrace e^{\\lambda t}\\rbrace$.\nThen\n\\begin{equation} \\label{eq:cont:1}\n \\Vert x(t) - y(t) \\Vert \\leq C \\Vert x_0 - y_0 \\Vert \\qquad \\mbox{for} \\quad t \\in [0, {t_f}].\n\\end{equation}\nSince $\\Vert x_0 - y_0 \\Vert \\leq \\delta$, $\\Vert x(t) - y(t) \\Vert \\leq C \\delta$ for all $t \\in [0, {t_f}]$.\nThis implies that any initial condition $y_0$ in $\\B_\\delta(x_0)$ results in a $y(t)$ that lies in a $C \\delta$ neighborhood of $\\R_{t_f}(x_0)$ for all $t \\in [0,{t_f}]$. 
\nIn particular, from property (ii) of Lemma \\ref{lem:theory:l0}, it also follows that $\\R_{t_f}(\\B_{\\delta}(x_0)) \\subseteq \\R_{t_f}(x_0, 2 C \\delta)$.\nIf we set $\\delta = \\epsilon \/(2C)$, then it is clear that $\\R_{t_f}(\\B_{\\delta}(x_0)) \\subseteq \\R_{t_f}(x_0,\\epsilon)$.\n\\end{proof}\n\n\nNext we extend the result in Lemma \\ref{lem:theory:cont:1} to show that there exist a $\\delta > 0$ and a $\\gamma > 0$ such that an over-approximation of the bounded reach set $\\R_{t_f}(\\B_{\\delta}(x_0))$, denoted as $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, is also contained in $\\R_{t_f}(x_0,\\epsilon)$ that is defined in (\\ref{eq:lem:theory:l0}).\n\n\n\\begin{lem} \\label{lem:theory:cont:2}\nGiven $\\epsilon > 0$, a time bound $T > 0$, an initial state $x_0$, and a DTLHA $\\A$, there exist $\\delta > 0$ and $\\gamma > 0$ such that \n\\begin{equation}\n\t\\R_{t_f}(\\B_{\\delta}(x_0),\\gamma) \\subseteq \\R_{t_f}(x_0,\\epsilon),\n\\end{equation}\nwhere $\\R_{t_f}(\\B_{\\delta}(x_0),\\gamma)$ is a $\\gamma$-approximation of $\\R_{t_f}(\\B_{\\delta}(x_0))$, and $t_f$ is as defined in Lemma \\ref{lem:theory:l0}.\nIn particular, $\\R_{t_f}(x_0) \\subseteq \\R_{t_f}(\\B_{\\epsilon\/(4C)}(x_0), \\epsilon\/4) \\subseteq \\R_{t_f}(x_0, \\epsilon)$.\n\\end{lem}\n\\begin{proof}\nLet $x(t;z)$ denote the solution at time $t$ of the differential equation $\\dot{x}(t) = Ax(t)+u$ with initial condition $x(0) = z \\in \\B_\\delta(x_0)$.\nNow consider $w \\in \\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$. \nThen, by the definition of $\\R_{t_f}(\\B_{\\delta}(x_0))$ and $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, \n\\[ \\Vert w - x(t;z) \\Vert < \\gamma\\] \nfor some $t \\in [0,t_f]$ and $z \\in \\B_{\\delta}(x_0)$. 
\nHence\n\\begin{eqnarray} \\nonumber\n \\Vert w - x(t;x_0) \\Vert &=& \\Vert w - x(t;z) + x(t;z) - x(t;x_0) \\Vert \\\\ \\nonumber\n &\\le& \\Vert w - x(t;z) \\Vert + \\Vert x(t;z) - x(t;x_0) \\Vert \\\\ \\nonumber\n &\\le& \\gamma + \\Vert x(t;z) - x(t;x_0) \\Vert . \\nonumber\n\\end{eqnarray}\nFrom (\\ref{eq:cont:1}), we know that \n\\[\n\t\\Vert x(t;z) - x(t;x_0) \\Vert \\le C \\Vert z - x_0 \\Vert \\le C \\delta.\n\\]\nHence \n\\[\n\t\\Vert w - x(t;x_0) \\Vert \\le \\gamma + C \\delta\n\\]\nwhich implies that $w$ lies in a $(\\gamma + C \\delta)$-neighborhood of $\\R_{t_f}(x_0)$.\nFrom the property (ii) in Lemma \\ref{lem:theory:l0}, if we replace $\\epsilon\/2$ with $(\\gamma + C \\delta)$, then we have $w \\in \\R_{t_f}(x_0, 2(\\gamma + C \\delta))$ which in turn implies that $\\R_{t_f}(\\B_{\\delta}(x_0),\\gamma) \\subseteq \\R_{t_f}(x_0, 2(\\gamma + C \\delta))$.\nSo, given $\\epsilon > 0$, we can choose $\\gamma = \\epsilon\/4$ and $\\delta = \\epsilon\/(4C)$, and then $\\R_{t_f}(\\B_{\\delta}(x_0),\\gamma) \\subseteq \\R_{t_f}(x_0, \\epsilon)$.\n\\end{proof}\n\n\n\n\n\\subsection{Decidability of Discrete Transition Event} \\label{sec:theory:trans}\n\nRecall that $\\tau_1$ is the time $t$ when a reached state $x(t)$ of a DTLHA starting from an initial state first exits the invariant set of an initial location.\nWe now show that, for a given $T$, even though it is not known to be decidable to determine $\\tau_1$ exactly, we can still determine the event of exit of a reached state $x(t)$ from the invariant set of an initial location if $\\tau_1 < T$.\n\n\n\\begin{lem} \\label{lem:theory:trans:exit}\nGiven a time bound $T > 0$, an initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\mathbb{R}^n$, and a DTLHA $\\A$, if $\\tau_1 < T$, then for all small enough $\\delta > 0$ and for some small enough $h > 0$, $\\B_{\\delta}(x(nh)) \\subset (Inv_0)^{c}$ for some $n \\in \\mathbb{N}$ satisfying $nh \\le T$.\n\\end{lem}\n\\begin{proof}\nLet $\\vec{n}_1$ be an 
outward normal vector of $\\partial Inv_0$ at $x(\\tau_1)$.\nSince $\\langle \\dot{x}(\\tau_1), \\vec{n}_1 \\rangle > 0$ by assumption, by the continuity of the vector field of the linear dynamics in $l_0$ there exists an $r > 0$ such that for all $z \\in \\B_{3 r}(x(\\tau_1)) \\cap \\partial Inv_0$, $\\langle \\dot{z}, \\vec{n}_1 \\rangle > 0$ where $\\dot{z} := A_0 z + u_0$.\nNotice that $\\Vert \\dot{z} \\Vert \\le \\bar{v}$ by the definition of $\\bar{v}$ in (\\ref{eq:h:ineq}).\nLet $x(t;z)$ denote the solution at time $t$ of the differential equation $\\dot{x}(t) = A_0 x(t)+u_0$ with initial condition $x(0) = z$.\nThen for any $z \\in \\B_{r}(x(\\tau_1)) \\cap \\partial Inv_0$, it is guaranteed that $x(t;z) \\in (Inv_0)^C$ for $t \\in (0, 2 h)$ for any $h > 0$ satisfying $h < r \/ \\bar{v}$.\nThis implies that $x(nh) \\in (Inv_0)^C$ for some $n \\in \\mathbb{N}$.\nMoreover, since $Inv_0$ is closed, its complement is open, and hence there exists a $\\delta > 0$ such that $\\B_{\\delta}(x(nh)) \\subset (Inv_0)^C$.\n\\end{proof}\n\n\nNow suppose that $x(t) \\in Inv_0$ for all $0 \\le t \\le T+\\theta$ for some $\\theta > 0$.\nThen this fact can also be determined.\n\n\n\\begin{lem} \\label{lem:theory:trans:noexit}\nSuppose $x(t) \\in Inv_0$ for all $0 \\le t \\le T+\\theta$ for some $\\theta > 0$. 
\nThen for all small enough $\\delta > 0$ and $\\gamma > 0$,\n\\begin{equation}\n \\R_{t_f}(\\B_{\\delta}(x_0), \\gamma) \\subseteq (Inv_0)^{\\circ},\n\\end{equation}\nwhere $t_f := \\min \\{ \\tau_1, T \\} = T$.\n\\end{lem}\n\\begin{proof}\nSince $x(t) \\in (Inv_0)^{\\circ}$ for all $0 \\le t \\le T$, the result immediately follows from Lemma \\ref{lem:theory:cont:2}.\n\\end{proof}\n\n\n\n\\subsection{Over-approximation of Discrete Transition State} \\label{sec:theory:over}\n\nFor a given time bound $T$, suppose that the event $\\tau_1 < T$ is determined for some $\\delta$ and $h$ as shown in Lemma \\ref{lem:theory:trans:exit}.\nThen, to continue to compute a bounded $\\epsilon$-reach set beyond an initial location, we need to determine \n\\begin{inparaenum}[(i)]\n\t\\item a new location to which a discrete transition is made from an initial location, and also\n\t\\item an over-approximation of a discrete transition state from which the bounded $\\epsilon$-reach set computation can be continued.\n\\end{inparaenum}\nWe now show that these can be determined if the discrete transition state $x(\\tau_1)$ is deterministic and, more importantly, transversal, as defined in Definition \\ref{def:transversal}.\n\n\n\\begin{lem} \\label{lem:theory:over}\nGiven $\\tau_1 < T$, if $x(\\tau_1) \\in \\partial Inv_0$ satisfies a deterministic and transversal discrete transition condition, then there exists a $\\delta > 0$ such that $\\B_{2\\delta}(x(\\tau_1)) \\subset (Inv_0 \\cup Inv_1)$ for some location $l_1$. 
Furthermore, there exists a $\\Delta > 0$ such that\n\\begin{enumerate}[(i)]\n \\item $x(t) \\in (Inv_1)^{\\circ}$ for $t \\in (\\tau_1, \\tau_1 + \\Delta)$ , and\n \\item\n \\begin{equation} \\label{eq:lem:lha:6}\n \\bigcup_{y \\in \\mathcal{J}_{0,1}} x(\\tau;y) \\subset (Inv_1)^{\\circ} \\quad \\mbox{for } \\tau \\in (0, \\Delta),\n \\end{equation}\n\\end{enumerate}\nwhere $x(\\tau;y)$ is the solution at time $\\tau$ of an LTI system for the location $l_1$ with an initial state $y$ and $\\mathcal{J}_{0,1} := \\B_{\\delta}(x(\\tau_1)) \\cap Inv_0 \\cap Inv_1$. \n\\end{lem}\n\\begin{proof}\nLet $Inv_1, Inv_2$ be invariant sets for some locations $l_1$ and $l_2$ such that $Inv_0 \\cap Inv_1 \\cap Inv_2 \\ne \\emptyset$.\nSince $x(\\tau_1)$ satisfies a deterministic discrete transition condition, if $x(\\tau_1) \\in Inv_0 \\cap Inv_1$, then $x(\\tau_1) \\notin Inv_0 \\cap Inv_2$.\nThis implies that $x(\\tau_1) \\not\\in Inv_2$.\nThen by compactness of $Inv_2$, we know that there exists a $\\delta' > 0$ such that $\\B_{\\delta'}(x(\\tau_1)) \\cap Inv_2 = \\emptyset$. 
Therefore, we conclude that $\\B_{\\delta'}(x(\\tau_1)) \\subset Inv_0 \\cup Inv_1$.\n\nLet $\\vec{n}_1$ be an outward normal vector of $\\partial Inv_0$ at $x(\\tau_1)$.\nSince $x(\\tau_1)$ satisfies a transversal discrete transition condition from the location $l_0$ to the other location $l_1$, we know that there exists a $\\delta'' > 0$ such that for all $x(t) \\in \\B_{\\delta''}(x(\\tau_1)) \\cap Inv_0 \\cap Inv_1$, $\\langle \\dot{x}(t), \\vec{n}_1 \\rangle > 0$, where $\\dot{x}(t)$ is taken as either $A_0 x(t) + u_0$ or as $A_1 x(t) + u_1$, by the continuity of vector fields of the LTI dynamics for $l_0$ and $l_1$.\n\nLet $\\delta = \\min \\lbrace \\delta'\/2, \\delta''\/2\\rbrace$, and $\\Delta := \\delta\/(2 \\bar{v})$ where $\\bar{v}$ is as defined in (\\ref{eq:h:ineq}).\nThen by the definition of $\\delta$ and $\\bar{v}$, it is clear that (i) and (ii) hold for these choices of $\\delta$ and $\\Delta$.\n\\end{proof}\n\n\n\nIn Lemma \\ref{lem:theory:over}, $\\mathcal{J}_{0,1}$ is an over-approximation of $x(\\tau_1)$ that is determined by taking a $\\delta$-ball around $x(\\tau_1)$ for suitably small $\\delta > 0$, and intersecting it with $Inv_0$ and $Inv_1$. 
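When the cells and invariants are given in halfspace form, this construction of $\\mathcal{J}_{0,1}$ can be sketched numerically. The Python snippet below is purely a hypothetical illustration (the two halfspace invariants sharing the facet $x_1 = 0$ and the sample-based membership test are our own assumptions, not the paper's exact polyhedral computation):

```python
import numpy as np

def in_ball_inf(y, center, delta):
    """l_inf ball membership: the hypercubic neighborhood B_delta(center)."""
    return np.max(np.abs(y - center)) <= delta

def in_polyhedron(y, H, b):
    """Membership in {y : H y <= b}, an invariant in halfspace form."""
    return bool(np.all(H @ y <= b + 1e-12))

# Hypothetical 2-D setup: Inv0 = {x1 <= 0}, Inv1 = {x1 >= 0}; the
# transition state x(tau_1) lies on the shared facet x1 = 0.
H0, b0 = np.array([[1.0, 0.0]]), np.array([0.0])
H1, b1 = np.array([[-1.0, 0.0]]), np.array([0.0])
x_tau1, delta = np.array([0.0, 0.5]), 0.1

# Sample-based sketch of J_{0,1} = B_delta(x(tau_1)) n Inv0 n Inv1:
# keep grid points of the ball that lie in both invariants.
grid = [x_tau1 + np.array([dx, dy])
        for dx in np.linspace(-delta, delta, 5)
        for dy in np.linspace(-delta, delta, 5)]
J01 = [y for y in grid
       if in_ball_inf(y, x_tau1, delta)
       and in_polyhedron(y, H0, b0) and in_polyhedron(y, H1, b1)]
print(len(J01))  # only the grid points on the facet x1 = 0 survive
```

In an exact implementation the intersection would of course be computed symbolically on the polyhedra, not by sampling; the sketch only illustrates that $\\mathcal{J}_{0,1}$ collapses onto the shared facet.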
Once such a suitably small $\\delta$ is known, then the following lemma shows that it is also possible to determine a $\\delta_0$-neighborhood of an initial state $x_0$ such that the reach set at time $\\tau_1$ of a DTLHA $\\A$ from $\\B_{\\delta_0}(x_0)$ is contained in $\\B_{\\delta}(x(\\tau_1))$.\n\n\\begin{lem} \\label{lem:theory:over:cont:1}\nGiven $\\delta$ determined by Lemma \\ref{lem:theory:over}, there exists a $\\delta_0$ such that\n\\begin{equation}\n \\D_{\\tau_1}(\\B_{\\delta_0}(x_0)) \\subseteq \\B_{\\delta}(x(\\tau_1)),\n\\end{equation}\nand $\\D_{\\tau_1}(\\B_{\\delta_0}(x_0)) \\cap Inv_0 \\cap Inv_1$ is an over-approximation of $x(\\tau_1)$ determined by $\\delta_0$.\n\\end{lem}\n\\begin{proof}\nThis follows from the same argument used in the proof of Lemma \\ref{lem:theory:cont:1}, by choosing $\\delta_0 = \\delta\/C$.\n\\end{proof}\n\nThe next lemma shows that $\\delta_0$ for $\\B_{\\delta_0}(x_0)$ can be determined at each discrete transition time $\\tau_k$ for $k \\ge 1$.\n\n\\begin{lem} \\label{lem:theory:over:cont:2}\nLet $\\delta_{k}$ be the radius of a ball centered at $x(\\tau_k)$ intersecting only $Inv_{k-1}$ and $Inv_k$, where $\\tau_k$ is the $k$-th discrete transition time and $l_k$ is the location after the $k$-th discrete transition. 
Then for any $x(\\tau_k)$ satisfying a deterministic and transversal discrete transition condition, there exists a $\\delta_0$ such that\n\\begin{equation}\n \\D_{\\tau_k}(\\B_{\\delta_0}(x_0)) \\subseteq \\B_{\\delta_{k}}(x(\\tau_k)) ,\n\\end{equation}\nwhere $\\D_{\\tau_k}(\\B_{\\delta_0}(x_0))$ is the reached states of a given DTLHA $\\A$ from $\\B_{\\delta_0}(x_0)$ at time $\\tau_k$.\n\\end{lem}\n\\begin{proof}\nFrom the continuity property shown in Lemma \\ref{lem:theory:cont:1}, there is a $\\delta_{k-1} > 0$ such that $\\D_{[0, \\tau_k - \\tau_{k-1}]}(\\B_{\\delta_{k-1}}(x(\\tau_{k-1})))$ $\\subseteq$ $\\D_{[0, \\tau_k - \\tau_{k-1}]}(x(\\tau_{k-1}), \\delta_{k})$ for a given $\\delta_{k}$ where $\\D_{[0, \\tau_k - \\tau_{k-1}]}(x(\\tau_{k-1}), \\delta_{k})$ denotes a $\\delta_k$-approximation of $\\D_{[0, \\tau_k - \\tau_{k-1}]}(x(\\tau_{k-1}))$.\nThen for this $\\delta_{k-1}$, it is clear that $\\D_{\\tau_k}(\\B_{\\delta_{k-1}}(x(\\tau_{k-1})))$ $\\subseteq$ $\\B_{\\delta_{k}}(x(\\tau_k))$.\nUsing the same argument, we can find $\\delta_{k-2}, \\delta_{k-3}, \\cdots, \\delta_1$.\nThen from Lemma \\ref{lem:theory:over:cont:1}, we know that there exists a $\\delta_0 > 0$ such that $\\D_{\\tau_1}(\\B_{\\delta_0}(x_0))$ $\\subseteq$ $\\B_{\\delta_1}(x(\\tau_1))$.\nSince $\\D_{\\tau_2 - \\tau_1}(\\B_{\\delta_1}(x(\\tau_1)))$ $\\subseteq$ $\\B_{\\delta_2}(x(\\tau_2))$, we have $\\D_{\\tau_2}(\\B_{\\delta_0}(x_0))$ $\\subseteq$ $\\B_{\\delta_2}(x(\\tau_2))$.\nThis relation holds for each $\\tau_i$ where $i = 1, 2, \\cdots, k$.\nTherefore, $\\D_{\\tau_k}(\\B_{\\delta_0}(x_0))$ $\\subseteq$ $\\B_{\\delta_k}(x(\\tau_k))$. 
\n\\end{proof}\n\n\nWe now present our main result for the bounded $\\epsilon$-reachability of a DTLHA.\n\n\\begin{thm} \\label{thm:theory:thm}\nGiven $\\epsilon > 0$, a time bound $T > 0$, a discrete transition bound $N \\in \\mathbb{N}$, and a DTLHA $\\A$ starting from an initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\mathbb{R}^n$, there exist $\\delta > 0$, $\\gamma > 0$, and a sampling period $h > 0$ satisfying $h < \\gamma\/\\bar{v}$ such that \n\\begin{equation} \\label{eq:theory:thm}\n\t\\R_{t_f}(x_0) \\subseteq \\R_{t_f}(\\B_{\\delta}(x_0), \\gamma) \\subseteq \\R_{t_f}(x_0, \\epsilon),\n\\end{equation}\nwhere $t_f := \\min \\{\\tau_N, T\\}$ and $\\tau_N$ is the time of the $N$-th discrete transition. \n\\end{thm}\n\\begin{proof}\nLet $C_i := \\max_{0 \\le t \\le t_f} \\{ e^{\\Vert A_i \\Vert t}\\}$ for a location $l_i \\in \\mathbb{L}$ and $C := \\max_{l_i \\in \\mathbb{L}} \\{ C_i \\}$.\nFor a given $\\epsilon > 0$, suppose $\\delta_k < \\epsilon\/(4 C)$ at each $\\tau_k$ up to $t_f$ where $\\delta_k$ is as defined in Lemma \\ref{lem:theory:over:cont:2}.\nThen, from Lemmas \\ref{lem:theory:over}, \\ref{lem:theory:over:cont:1}, and \\ref{lem:theory:over:cont:2}, we know that there exists a $\\delta' > 0$ such that $\\D_{\\tau_k}(\\B_{\\delta'}(x_0)) \\subseteq \\B_{\\delta_{k}}(x(\\tau_k))$ where $x(t)$ is the execution of a DTLHA $\\A$ starting from $x_0$ at time zero.\nFurthermore, from Lemmas \\ref{lem:theory:trans:exit} and \\ref{lem:theory:over}, there also exist $h > 0$ and $\\delta'' > 0$ such that \n\\begin{inparaenum}[(i)]\n\t\\item $h < \\Delta_k$ and\n\t\\item $h$ and $\\delta''$ satisfy Lemma \\ref{lem:theory:trans:exit}\n\\end{inparaenum}\nat every $\\tau_k$ up to $t_f$, where $\\Delta_k$ is the $\\Delta$ that is defined in Lemma \\ref{lem:theory:over} for the $k$-th deterministic and transversal discrete transition.\n\nLet $\\hat{\\delta} := \\min \\{ \\delta', \\delta'' \\}$.\nThen, with $\\hat{\\delta}$ and $h$, we can determine 
every discrete transition event and also construct an over-approximation of the discrete transition state as long as it is deterministic and transversal.\nSince $\\hat{\\delta} \\le \\delta'$, $\\D_{\\tau_k}(\\B_{\\hat{\\delta}}(x_0)) \\subseteq \\B_{\\delta_{k}}(x(\\tau_k))$ at each $\\tau_k$ up to $t_f$.\nThus, for any $\\gamma > 0$, \n\\[ \n\t\\D_{[0, \\tau_k^{k+1}]}(\\D_{\\tau_k}(\\B_{\\hat{\\delta}}(x_0)), \\gamma) \\subseteq \\D_{[0, \\tau_k^{k+1}]}(\\B_{\\delta_k}(x(\\tau_k)), \\gamma)\n\\]\nwhere $ \\tau_k^{k+1} := \\tau_{k+1} - \\tau_k$.\n\nNow, we notice that if $\\gamma < \\epsilon\/4$, then from Lemma \\ref{lem:theory:cont:2},\n\\[\n\t\\D_{[0, \\tau_k^{k+1}]}(\\D_{\\tau_k}(\\B_{\\hat{\\delta}}(x_0)), \\gamma) \\subseteq \\D_{[0, \\tau_k^{k+1}]}(x(\\tau_k), \\epsilon),\n\\]\nfor each $\\tau_k$ up to $t_f$, where the left hand side is a segment of $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ for $[\\tau_k, \\tau_{k+1}]$, and the right hand side is a segment of $\\R_{t_f}(x_0, \\epsilon)$ for $[\\tau_k, \\tau_{k+1}]$ that is defined as $\\bigcup_{n = 0}^{N_k -1} \\B_{\\epsilon}(x(\\tau_k + n h))$ where $N_k := \\lceil (\\tau_{k+1} - \\tau_k)\/h\\rceil$.\n\nFurthermore, if $h < \\gamma\/\\bar{v}$, then from (\\ref{eq:h:ineq}) with $\\epsilon$ replaced by $\\gamma$, it is clear that \n\\[\n\t\\D_{[0, \\tau_k^{k+1}]}(\\D_{\\tau_k}(x_0)) \\subseteq \\D_{[0, \\tau_k^{k+1}]}(\\D_{\\tau_k}(\\B_{\\hat{\\delta}}(x_0)), \\gamma),\n\\]\nwhere the left hand side is a segment of $\\R_{t_f}(x_0)$ for $[\\tau_k, \\tau_{k+1}]$.\nTherefore, the result holds.\n\\end{proof}\n\n\n\n\n\n\n\\section{Computing a Bounded $\\epsilon$-Reach Set of a DTLHA} \\label{sec:cond}\n\nFrom Theorem \\ref{thm:theory:thm}, we know that a set $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, a bounded $\\epsilon$-reach set of a DTLHA, can be computed for some $\\delta, \\gamma$, and $h$. 
\nIn this section, we discuss how to compute $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$.\nMore precisely, we derive a set of conditions, based on the results in Section \\ref{sec:theory}, that are needed to correctly detect a deterministic and transversal discrete state transition event and also to determine whether the values for the parameters $\\delta, \\gamma$, and $h$ are appropriate so as to ensure that $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ is a correct bounded $\\epsilon$-reach set.\nFurthermore, later in this section, we extend these conditions to incorporate the errors caused by finite-precision numerical computation. \n\n\n\n\\subsection{Conditions for Bounded $\\epsilon$-Reach Set Computation} \\label{sec:cond:exact}\n\nWe first note some properties that a set $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ needs to satisfy so that it can be considered a bounded $\\epsilon$-reach set of a DTLHA.\n\n\\begin{rem} \\label{rem:cond:exact}\nNotice that any $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ that can be determined by $\\delta, \\gamma$, and $h$ in Theorem \\ref{thm:theory:thm} for a given $\\epsilon > 0$ needs to satisfy the following properties.\n\\begin{enumerate}[(i)]\n\t\\item $d_H(\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma), \\R_{t_f}(x_0)) \\le \\epsilon$, \n\t\\item $\\R_{t_f}(\\B_{\\delta}(x_0)) \\subset \\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, and\n\t\\item $d_H(\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma), \\R_{t_f}(\\B_{\\delta}(x_0))) \\le \\gamma$.\t\n\\end{enumerate}\n\\end{rem}\n\n\nFor given $\\delta$ and $h$, the following lemma shows how we can detect a discrete state transition event if there is one.\n\n\n\\begin{lem} \\label{lem:cond:exact:trans}\nGiven a location $l_c$ and a DTLHA $\\A$, \nif $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset (Inv_c)^{\\circ}$ and $\\D_t(\\B_{\\delta}(x_0)) \\subset Inv_c^C$ for some $\\delta > 0$ and $h > 0$, where $\\B_{\\delta}(x_0)$ is a $\\delta$-neighborhood of the initial state 
$x_0$, then there is a discrete transition from the location $l_c$ to some other locations at some time in $(t-h, t)$.\n\\end{lem}\n\\begin{proof}\nRecall that $\\D_t(x_0)$ denotes the reached state of $\\A$ at time $t$ from $x_0$.\nThen it is clear that $\\D_t(x_0) \\in \\D_t(\\B_{\\delta}(x_0))$. \nSimilarly, $\\D_{t-h}(x_0) \\in \\D_{t-h}(\\B_{\\delta}(x_0))$. \nHence, from the hypothesis, $\\D_t(x_0) \\in Inv_c^C$ and $\\D_{t-h}(x_0) \\in (Inv_c)^{\\circ}$.\nThis implies that there exists $\\tau \\in (t-h ,t)$ such that $\\D_{s}(x_0) \\in Inv_c^{\\circ}$ for $s \\in [t-h, \\tau)$ and $\\D_{s}(x_0) \\in Inv_c^C$ for $s \\in (\\tau, t]$.\nTherefore, there is a discrete transition at some time $\\tau \\in (t-h, t)$.\n\\end{proof}\n\n\nOnce a discrete state transition is detected, then, by Lemma \\ref{lem:cond:exact:det}, we can check if it is deterministic or not.\n\n\n\\begin{lem} \\label{lem:cond:exact:det}\nGiven an initial state $x_0$ and a DTLHA $\\A$,\nsuppose that there is a discrete transition from a location $l_c$ to some other locations at time $t$, i.e., $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset (Inv_c)^{\\circ}$ and $\\D_t(\\B_{\\delta}(x_0)) \\subset Inv_c^C$ for some $\\delta > 0$ and $h > 0$. 
\nThen the discrete transition is deterministic if there exists a location $l_n$ such that $l_n \\ne l_c$ and $\\D_t(\\B_{\\delta}(x_0)) \\subset (Inv_n)^{\\circ}$.\n\\end{lem}%\n\\begin{proof}\nThis follows from the definition of a deterministic discrete transition in Definition \\ref{def:transversal}.\n\\end{proof}\n\n\nWe now present conditions to determine the transversality of a discrete state transition; this is more involved than the conditions in the previous two lemmas.\nThe main idea of the conditions in the following Lemma \\ref{lem:cond:exact:transv} is that \n\\begin{inparaenum}[(i)]\n\t\\item $\\delta$ and $\\gamma$ have to be small enough so that every state in an over-approximation of a deterministic and transversal discrete transition state, which can be computed by $\\delta$ and $\\gamma$, is also deterministic and transversal, and also\n\t\\item the sampling period $h$ should be small enough so that any reached states right after a discrete state transition can be captured correctly. \n\\end{inparaenum}\n\n\n\\begin{lem} \\label{lem:cond:exact:transv}\nGiven $\\gamma > 0$ and $h > 0$ satisfying $h < \\gamma\/\\bar{v}$,\nsuppose that there is a deterministic discrete transition from a location $l_c$ to another location $l_n$ at time $t$, i.e., $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset (Inv_c)^{\\circ}$ and $\\D_t(\\B_{\\delta}(x_0)) \\subset (Inv_n)^{\\circ}$ for some $\\delta > 0$ and $h > 0$. 
\nThen for any $\\epsilon > 0$, the discrete transition is transversal if the following conditions hold:\n\\begin{itemize}\n\\item[(i)] $h < (dia(\\mathcal{J}_{c,n})\/2)\/(2 \\bar{v})$, \n\\item[(ii)] $\\D_0(\\mathcal{J}_{c,n}, dia(\\mathcal{J}_{c,n})\/2) \\subset (Inv_c \\cup Inv_n)$, and\n\\item[(iii)] $\\langle \\dot{x}_c, \\vec{n}_c \\rangle \\ge \\epsilon \\land \\langle \\dot{x}_n, \\vec{n}_c \\rangle \\ge \\epsilon, ~~\\forall x \\in \\V(\\mathcal{J}_{c,n}')$,\n\\end{itemize}\nwhere $\\mathcal{J}_{c,n} := \\D_t(\\B_{\\delta}(x_0), \\gamma) \\cap Inv_c \\cap Inv_n$, $\\mathcal{J}_{c,n}' := \\D_0(\\mathcal{J}_{c,n},$ $dia(\\mathcal{J}_{c,n})\/2) \\cap Inv_c \\cap Inv_n$, $\\bar{v}$ is as defined in (\\ref{eq:h:ineq}), $\\V(\\P)$ is a set of vertices of a polyhedron $\\P$, $\\vec{n}_c$ is an outward normal vector of $\\partial Inv_c$, and $\\dot{x}_i$ is the vector flow evaluated with respect to the LTI dynamics of location $l_i \\in \\mathbb{L}$.\n\\end{lem}\n\\begin{proof}\nNotice that $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset \\D_t(\\B_{\\delta}(x_0), \\gamma)$ since $\\gamma$ and $h$ satisfy $h < \\gamma\/\\bar{v}$.\nIn fact, $\\bigcup_{z \\in \\D_{t-h}(\\B_{\\delta}(x_0))} x(\\tau;z) \\subset \\D_{t}(\\B_{\\delta}(x_0), \\gamma)$ for $\\tau \\in [0, h]$ where $x(\\tau;z) := e^{A_c \\tau} z + \\int_{0}^{\\tau} e^{A_c s} u_c ds$ under the LTI dynamics of the location $l_c$.\nSince $\\D_{t-h}(x_0) \\in \\D_{t-h}(\\B_{\\delta}(x_0))$ and $\\D_t(x_0) \\in \\D_{t}(\\B_{\\delta}(x_0))$, $\\D_{\\tau'}(x_0) \\in \\mathcal{J}_{c,n}$ for some $\\tau' \\in (t-h, t)$ where $\\D_{\\tau'}(x_0)$ is a discrete transition state from $l_c$ to $l_n$ at time $\\tau'$.\nThus $\\mathcal{J}_{c,n} \\ne \\emptyset$ (more precisely, $\\mathcal{J}_{c,n}^{\\circ} \\ne \\emptyset$) and it is in fact an over-approximation of the deterministic discrete transition state $x_{\\tau'} \\in Inv_c \\cap Inv_n$.\n\nIf (ii) and (iii) hold, then it is easy to see that $z'$ satisfies the 
deterministic and transversal discrete transition condition in Definition \\ref{def:transversal} for any $z' \\in \\mathcal{J}'_{c,n}$.\nNow suppose (i) holds \nand let $x(h;z)$ be the state reached from $z$ at time $h$ under the LTI dynamics of the location $l_n$; then, for any $z \\in \\mathcal{J}_{c,n}$, \n\\[\n\t\\Vert x(h;z) - z \\Vert \\le \\bar{v}h < dia(\\mathcal{J}_{c,n})\/2.\n\\] \n\nIf we now consider the fact that $dia(\\mathcal{J}'_{c,n}) \\ge 2 \\cdot dia(\\mathcal{J}_{c,n})$, then it is easy to see that $x(\\tau;z) \\in Inv_n^{\\circ}$ for $\\tau \\in (0, h)$.\nSince $z \\in \\mathcal{J}_{c,n}$ is arbitrary, we conclude that \n\\[\n\t\\D_{\\tau}(\\mathcal{J}_{c,n}) \\subset Inv_n^{\\circ}\n\\]\nfor all $\\tau \\in (0, h)$.\nThus, the discrete transition state $\\D_t(x_0) \\in \\mathcal{J}_{c,n}$ is transversal and it can be determined through $\\mathcal{J}_{c,n}$ with $h$ satisfying (i).\n\\end{proof}\n\n\n\n\\subsection{Finite Precision Basic Calculations} \\label{sec:cond:basic}\n\nNotice that the results in Section \\ref{sec:cond:exact} are based on the assumption that the following quantities can be computed exactly:\n\\begin{itemize}\n\\item $x(t;x_0) = e^{At}x_0 + \\int_0^t e^{As} u ds$.\n\\item $\\H \\cap \\P$, where $\\H$ is a hyperplane and $\\P$ is a polyhedron.\n\\item $hull(\\mathcal{V})$, where $hull(\\mathcal{V})$ is the convex hull of a finite set of points $\\mathcal{V}$ in $\\mathbb{R}^n$.\n\\end{itemize}\nHowever, these exact computation assumptions cannot be satisfied in practice and we can only compute each of these with possibly arbitrarily small computation error. \nTherefore, instead of assuming exact computation capabilities for $x(t;x_0)$, $\\H \\cap \\P$, and $hull(\\mathcal{V})$, we now assume that the following basic calculation capabilities are available for approximately computing these quantities, and it is only these that we can use to compute a bounded $\\epsilon$-reach set. 
\nMore precisely, we assume that for given $\\mu_c > 0$ and $\\mu_h > 0$, \n\\begin{itemize}\n\t\\item $a(\\mathcal{H} \\cap \\mathcal{P}, \\mu_c)$ and $a(hull(\\mathcal{V}), \\mu_h)$\n\\end{itemize}\nare available such that $d_H(x, a(x,y)) \\le y$, where $a(x, y)$ denotes an approximate computation of $x$, with $y > 0$ as an upper bound on the approximation error.\nWe also assume that for given $\\sigma_e > 0$ and $\\sigma_i > 0$,\n\\begin{itemize}\n\t\\item $a(e^{At}, \\sigma_e)$, and $a(\\int_0^t e^{A \\tau} d\\tau, \\sigma_i)$\n\\end{itemize}\nare available as an approximate computation of $x(t;x_0)$ such that $\\Vert x - a(x,y) \\Vert \\le y$.\nNotice that from these basic calculation capabilities for $x(t;x_0)$, we can compute $a(x(t;x_0), \\mu_x)$ with an approximation error denoted as $\\mu_x$, which is upper bounded by a finite value as shown below.\n\n\nWe first note that, for all approximate computations $a(x,y)$ that are used for computing $x(t;x_0)$, we have \n\\begin{equation}\nx - y \\cdot {\\bf{1}}_{n \\times m} \\le a(x, y) \\le x + y \\cdot {\\bf{1}}_{n \\times m},\n\\end{equation}\nwhere $x \\in \\mathbb{R}^{n \\times m}$ and ${\\bf{1}}_{n \\times m}$ is an $n$ by $m$ matrix whose every element is $1$, and the inequalities hold elementwise.\nWith this, an upper bound of $\\mu_x$ can be derived as follows:\n\\begin{equation} \\nonumber\ne^{At} -\\sigma_e \\cdot {\\bf{1}}_{n \\times n} \\le a( e^{At}, \\sigma_e) \\le e^{At} + \\sigma_e \\cdot {\\bf{1}}_{n \\times n}.\n\\end{equation}\nSimilarly, \n\\begin{eqnarray} \\nonumber\n\\int_0^t e^{As}ds - \\sigma_i \\cdot {\\bf{1}}_{n \\times n} \\le a(\\int_0^t e^{As}ds, \\sigma_i) \\\\ \\nonumber\n\\le \\int_0^t e^{As}ds + \\sigma_i \\cdot {\\bf{1}}_{n \\times n}.\n\\end{eqnarray}\nHence, we have \n\\begin{equation} \\nonumber\nx(t;x_0) - \\delta_x \\le a(x(t;x_0), \\delta_x) \\le x(t;x_0) + \\delta_x,\n\\end{equation}\nwhere $\\delta_x := (\\sigma_e \\vert x_0 \\vert + \\sigma_i \\vert u \\vert) \\cdot 
{\\bf{1}}_{n \\times 1}$.\n\nNow, we know that $\\mu_x$ is upper bounded by the maximum of $\\vert \\delta_x \\vert$ over the continuous state space $\\X$ and the control input domain $\\U$,\n\\begin{equation} \\label{eq:mux}\n\t\\mu_x \\le \\max_{x \\in \\X, u \\in \\U} \\vert \\delta_x \\vert.\n\\end{equation}\n\n\n\n\\subsection{Conditions for Computation under Finite Precision Calculations} \\label{sec:cond:inexact}\n\nIn this section, we extend the results in Section \\ref{sec:cond:exact} to derive a set of conditions for a bounded $\\epsilon$-reach set computation of the DTLHA under finite precision numerical calculation capabilities.\nThe following remark is an immediate extension of Remark \\ref{rem:cond:exact} in Section \\ref{sec:cond:exact}.\n\nIn the sequel, for simplicity of notation, we use $\\hat{x}$ to denote $a(x,\\rho)$ for a given approximation error bound $\\rho > 0$.\n\n\\begin{rem} \\label{rem:cond:inexact}\nLet $\\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ be an approximation of $\\R_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ that is determined by $\\delta, \\gamma$, and $h$ in Theorem \\ref{thm:theory:thm} and approximate calculations for $x(t;x_0)$, $\\H \\cap \\P$, and $hull(\\mathcal{V})$ defined in Section \\ref{sec:cond:basic}.\nThen, for a given $\\epsilon > 0$, it is sufficient for $\\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma)$ to be a bounded $\\epsilon$-reach set of a DTLHA $\\A$ if the following properties hold.\n\\begin{enumerate}[(i)]\n\t\\item $d_H(\\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma), \\R_{t_f}(x_0)) \\le \\epsilon$, \n\t\\item $\\R_{t_f}(\\B_{\\delta}(x_0)) \\subset \\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, and\n\t\\item $d_H(\\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma), \\R_{t_f}(\\B_{\\delta}(x_0))) \\le \\gamma$.\n\\end{enumerate}\n\\end{rem}\n\n\nNext, we discuss how the relation between $h$ and $\\gamma$ can be modified so as to satisfy (ii) and (iii) in Remark \\ref{rem:cond:inexact} when there is 
numerical calculation error in computing $x(t;x_0)$. \n\n\n\\begin{lem} \\label{lem:cond:inexact:isover}\nGiven a DTLHA $\\A$ and its reached state $x(t)$ at time $t$ starting from an initial condition $x(0)$, \nlet $\\rho > 0$ be an upper bound on the approximation errors such that $\\Vert x(t) - \\hat{x}(t) \\Vert \\le \\rho$.\nIf a given sampling period $h$ satisfies $h < (\\gamma - \\rho)\/\\bar{v}$ for a given $\\gamma$ satisfying $\\gamma > \\rho$, where $\\bar{v}$ is as defined in (\\ref{eq:h:ineq}), then the following property holds at any location $l_i \\in \\mathbb{L}$ of $\\A$:\n\\begin{equation}\n\tx(t+\\tau) \\subset \\B_{\\gamma}(\\hat{x}(t)), ~~ \\forall \\tau \\in [0, h],\n\\end{equation}\nwhere $x(t+\\tau) = e^{A_i \\tau} x(t) + \\int_0^{\\tau} e^{A_i s} u_i ds$.\n\\end{lem}\n\\begin{proof}\nSince $\\Vert x(t) - \\hat{x}(t) \\Vert \\le \\rho$, $x(t) \\in \\B_{\\rho}(\\hat{x}(t))$.\nMoreover, from (\\ref{eq:h:ineq:1}), we know that for any $x(t) \\in \\X$,\n\\begin{eqnarray} \\nonumber\n\t\\max_{\\tau \\in [0,h]} \\Vert x(t+\\tau) - x(t) \\Vert &\\le& \\max_{\\tau \\in [0,h]} \\int_t^{t+\\tau} \\Vert \\dot{x}(s) \\Vert ds \\\\ \\nonumber\n\t &\\le& \\bar{v} h. \n\\end{eqnarray}\nHence, if $h < (\\gamma - \\rho)\/\\bar{v}$, then, for any $x(t) \\in \\X$, \n\\[\n\t\\max_{\\tau \\in [0,h]} \\Vert x(t+\\tau) - x(t) \\Vert < \\gamma - \\rho. 
\n\\] \nThis means that $x(t+\\tau) \\in \\B_{\\gamma - \\rho}(x(t))$ for $\\tau \\in [0, h]$.\nTherefore, for $\\tau \\in [0, h]$, \n\\begin{eqnarray} \\nonumber\n\t\\Vert \\hat{x}(t) - x(t+\\tau)\\Vert &\\le& \\Vert \\hat{x}(t) - x(t) \\Vert + \\Vert x(t) -x(t+\\tau) \\Vert \\\\ \\nonumber\n\t&\\le& \\rho + (\\gamma -\\rho).\n\\end{eqnarray}\nThus $\\Vert \\hat{x}(t) - x(t+\\tau)\\Vert \\le \\gamma$.\n\\end{proof}\n\nNotice that Lemma \\ref{lem:cond:inexact:isover} says that if $h < (\\gamma - \\rho)\/\\bar{v}$ for a given $\\rho > 0$, then a $\\gamma$-neighborhood of a sampled state is indeed an over-approximation of a trajectory over the time interval $h$.\nWe now extend the result in Lemma \\ref{lem:cond:inexact:isover} to the case where we need to compute a $\\gamma$-approximation of a polyhedron.\n\n\\begin{lem} \\label{lem:cond:inexact:over}\nGiven a DTLHA $\\A$ and its reached states $\\D_t(\\B_{\\delta}(x_0))$ at some time $t$ from initial states in $\\B_{\\delta}(x_0)$, \nlet $\\rho > 0$ be an upper bound on the approximation errors such that $d_H(\\D_t(\\B_{\\delta}(x_0)), \\hat{\\D}_t(\\B_{\\delta}(x_0))) \\le \\rho$.\nIf a given sampling period $h$ satisfies the following inequality \n\\begin{equation} \\label{eq:lem:cond:inexact:over}\nh < \\frac{\\gamma - \\rho}{\\bar{v}} ,\n\\end{equation}\nthen, for a given $\\gamma$ satisfying $\\gamma > \\rho$, \n\\begin{equation}\n\t\\D_{t+ \\tau}(\\B_{\\delta}(x_0)) \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma), \\quad \\forall \\tau \\in [0, h], \n\\end{equation}\n where $\\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)$ is a $\\gamma$-approximation of $\\hat{\\D}_{t}(\\B_{\\delta}(x_0))$ that is constructed as the convex hull of the set of extreme points of a polyhedral $\\gamma$-neighborhood of all vertices of $\\hat{\\D}_{t}(\\B_{\\delta}(x_0))$ and $\\bar{v}$ is as defined in (\\ref{eq:h:ineq}).\n\\end{lem}\n\\begin{proof}\nLet $\\V$ and $\\hat{\\V}$ be the set of extreme points of 
$\\D_t(\\B_{\\delta}(x_0))$ and $\\hat{\\D}_t(\\B_{\\delta}(x_0))$, respectively. \nSince $d_H(\\D_t(\\B_{\\delta}(x_0)), \\hat{\\D}_t(\\B_{\\delta}(x_0))) \\le \\rho$ and $\\gamma > \\rho$, it is clear that $\\D_{t}(\\B_{\\delta}(x_0)) \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$.\nFrom Lemma \\ref{lem:cond:inexact:isover}, we know that for each $x(t) \\in \\V$, $x(t+\\tau) \\subset \\B_{\\gamma}(\\hat{x})$ for all $\\tau \\in [0, h]$ where $\\hat{x} \\in \\hat{\\V}$ corresponding to $x(t)$.\nLet $\\V_{t+\\tau}$ be the set of extreme points of $\\D_{t+ \\tau}(\\B_{\\delta}(x_0))$. \nThen $\\V_{t+\\tau} \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$ for all $\\tau \\in [0, h]$ since \n\\begin{inparaenum}[(i)]\n\t\\item for each $x(t) \\in \\V$, $x(t+\\tau) \\subset \\B_{\\gamma}(\\hat{x})$ for all $\\tau \\in [0, h]$ and \n\t\\item from the construction of $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$, $\\B_{\\gamma}(\\hat{x}) \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$ for each $\\hat{x} \\in \\hat{\\V}$.\n\\end{inparaenum}\nTherefore, the convex hull of $\\V_{t+\\tau}$, which is $\\D_{t+ \\tau}(\\B_{\\delta}(x_0))$, has to be contained in $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$ for all $\\tau \\in [0,h]$ since $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$ is convex and $\\V_{t+\\tau} \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)$ for all $\\tau \\in [0, h]$.\n\\end{proof}\n\n\nFor (i) in Remark \\ref{rem:cond:inexact}, Lemma \\ref{lem:cond:inexact:eps} below shows that the diameter of a set $\\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)$ has to be smaller than a given $\\epsilon > 0$.\n\n\n\n\\begin{lem} \\label{lem:cond:inexact:eps}\nGiven $\\epsilon > 0$, $\\delta > 0$, $\\gamma >0$, $\\rho > 0$, and a DTLHA $\\A$, \nsuppose a given sampling period $h > 0$ satisfies the inequality (\\ref{eq:lem:cond:inexact:over}).\nThen $\\D_{[t, t+h]}(x_0) \\subset \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)$ and $d_H(\\hat{\\D}_{t}(\\B_{\\delta}(x_0), 
\\gamma), \\D_{[t, t+h]}(x_0)) \\le \\epsilon$, if the following hold:\n\\begin{equation} \\label{eq:lem:cond:inexact:eps}\n\tdia(\\hat{\\D}_t(\\B_{\\delta}(x_0), \\gamma)) \\le \\epsilon,\n\\end{equation}\nwhere $\\D_{[t, t+h]}(x_0)$ is the set of reached states of $\\A$ starting from $x_0$ during the time interval $[t, t+h]$. \n\\end{lem}\n\\begin{proof}\nSince $h$ satisfies (\\ref{eq:lem:cond:inexact:over}), it is trivial to see that $\\D_{[t, t+h]}(x_0) \\subset \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)$ holds from Lemma \\ref{lem:cond:inexact:over}.\nMoreover, if (\\ref{eq:lem:cond:inexact:eps}) is also true, then for any $z \\in \\D_{[t, t+h]}(x_0)$, $\\max_{y \\in \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)} \\Vert y - z \\Vert \\le \\epsilon$ since $z \\in \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma)$.\nTherefore, it is clear that $d_H(\\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma), \\D_{[t, t+h]}(x_0)) \\le \\epsilon$ if (\\ref{eq:lem:cond:inexact:over}) and (\\ref{eq:lem:cond:inexact:eps}) hold.\n\\end{proof}\n\nNow we can extend the results of Lemmas \\ref{lem:cond:exact:trans}, \\ref{lem:cond:exact:det}, and \\ref{lem:cond:exact:transv} to incorporate a numerical calculation error $\\rho > 0$.\n\n\\begin{lem} \\label{lem:cond:inexact:istrans}\nGiven $\\rho > 0$, a location $l_c$, and $\\hat{\\D}_t(\\B_{\\delta}(x_0))$ at time $t$,\nif\n\\begin{enumerate}[(i)]\n\\item $\\hat{\\D}_{t-h}(\\B_{\\delta}(x_0),$ $\\rho) \\subset (Inv_c)^{\\circ}$, and\n\\item $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\rho) \\subset Inv_c^C$ \n\\end{enumerate} \nfor some $\\delta > 0$ and $h > 0$, then there is a discrete transition from the location $l_c$ to some other locations.\n\\end{lem}\n\\begin{proof}\nNotice that $d_H(\\D_t(\\B_{\\delta}(x_0)), \\hat{\\D}_t(\\B_{\\delta}(x_0))) \\le \\rho$, which implies $\\D_t(\\B_{\\delta}(x_0)) \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\rho)$.\nSimilarly, $\\D_{t-h}(\\B_{\\delta}(x_0))$ $\\subset \\hat{\\D}_{t-h}(\\B_{\\delta}(x_0), 
\\rho)$.\nHence if (i) and (ii) hold, then it is clear that $\\D_t(\\B_{\\delta}(x_0)) \\subset Inv_c^C$ and $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset (Inv_c)^{\\circ}$.\nThen the result follows from Lemma \\ref{lem:cond:exact:trans}. \n\\end{proof}\n\n\\begin{lem} \\label{lem:cond:inexact:det}\nGiven $\\rho > 0$, a location $l_c$, and $\\hat{\\D}_t(\\B_{\\delta}(x_0))$ at time $t$, suppose that a discrete transition from a location $l_c$ to some other locations is determined as in Lemma \\ref{lem:cond:inexact:istrans}.\nThen the discrete transition is a deterministic discrete transition from $l_c$ to $l_n$ if there exists a location $l_n$ such that $l_n \\ne l_c$ and $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\rho) \\subset (Inv_n)^{\\circ}$.\n\\end{lem}\n\\begin{proof}\nNotice that $\\D_{t-h}(\\B_{\\delta}(x_0)) \\subset (Inv_c)^{\\circ}$ from the result in Lemma \\ref{lem:cond:inexact:istrans}.\nSince $\\D_t(\\B_{\\delta}(x_0)) \\subset \\hat{\\D}_t(\\B_{\\delta}(x_0), \\rho)$, if $\\hat{\\D}_t(\\B_{\\delta}(x_0), \\rho) \\subset (Inv_n)^{\\circ}$, then $\\D_t(\\B_{\\delta}(x_0)) \\subset (Inv_n)^{\\circ}$.\nThus by Lemma \\ref{lem:cond:exact:det}, the conclusion holds.\n\\end{proof}\n\n\n\n\n\\begin{lem} \\label{lem:cond:inexact:transv}\nGiven $\\rho > 0$, $\\gamma > 0$ and $h > 0$ satisfying (\\ref{eq:lem:cond:inexact:over}), suppose that a deterministic discrete transition from a location $l_c$ to another location $l_n$ is determined as in Lemma \\ref{lem:cond:inexact:istrans} and Lemma \\ref{lem:cond:inexact:det}, i.e., $\\hat{\\D}_{t-h}(\\B_{\\delta}(x_0), \\rho) \\subset (Inv_c)^{\\circ}$ and $\\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\rho) \\subset (Inv_n)^{\\circ}$.\nThen, for any $\\epsilon > 0$, the discrete transition is transversal if the following conditions hold:\n\\begin{itemize}\n\\item[(i)] $h < (dia(\\hat{\\mathcal{J}}_{c,n})\/2)\/(2 \\bar{v})$, \n \\item[(ii)] $\\D_0(\\hat{\\mathcal{J}}_{c,n}, dia(\\hat{\\mathcal{J}}_{c,n})\/2 + \\rho) \\subset (Inv_c \\cup 
Inv_n)$, and \n \\item[(iii)] $\\langle \\dot{x}_c, \\vec{n}_c \\rangle \\ge \\epsilon \\land \\langle \\dot{x}_n, \\vec{n}_c \\rangle \\ge \\epsilon,~~\\forall x \\in \\V(\\hat{\\mathcal{J}}_{c,n}')$,\n \\end{itemize}\n %\n where $\\hat{\\mathcal{J}}_{c,n} := \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma + \\rho) \\cap Inv_c \\cap Inv_n$, $\\hat{\\mathcal{J}}_{c,n}' := \\D_0(\\hat{\\mathcal{J}}_{c,n},$ $dia(\\hat{\\mathcal{J}}_{c,n})\/2 + \\rho) \\cap Inv_c \\cap Inv_n$, $ \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma + \\rho)$ is a $(\\gamma + \\rho)$-approximation of $ \\hat{\\D}_{t}(\\B_{\\delta}(x_0))$, and $\\dot{x}_i$ and $\\vec{n}_c$ are as defined in Lemma \\ref{lem:cond:exact:transv}. \n\\end{lem}\n\\begin{proof}\nNotice that $\\D_{t}(\\B_{\\delta}(x_0), \\gamma) \\subset \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\gamma + \\rho)$.\nThen, by the definition of $\\mathcal{J}_{c,n}$ given in Lemma \\ref{lem:cond:exact:transv} and $\\hat{\\mathcal{J}}_{c,n}$, we know $\\mathcal{J}_{c,n} \\subset \\hat{\\mathcal{J}}_{c,n}$.\nHence, $\\hat{\\mathcal{J}}_{c,n} \\ne \\emptyset$ since $\\mathcal{J}_{c,n} \\ne \\emptyset$ by the construction of $\\mathcal{J}_{c,n}$.\nNow if (i) holds, then it is easy to see that $\\D_{\\tau}(\\hat{\\mathcal{J}}_{c,n}) \\subset \\D_0(\\hat{\\mathcal{J}}_{c,n}, dia(\\hat{\\mathcal{J}}_{c,n})\/2)$ for $\\tau \\in (0, h)$.\nMoreover, (ii) and (iii) imply that $\\D_{\\tau}(\\hat{\\mathcal{J}}_{c,n})$ is in fact contained in $Inv_n^{\\circ}$ for $\\tau \\in (0, h)$.\n\\end{proof}\n\n\n\n\n\n\\section{Architecture and Algorithm for Bounded $\\epsilon$-Reach Set Computation of a DTLHA} \\label{sec:design}\n\nWe are now in a position to propose an algorithm for bounded $\\epsilon$-reach set computation of a DTLHA.\nBefore proving its correctness, we first describe its architecture.\n\n\n\nFor \\emph{flexibility}, we decouple the higher levels of the algorithm, called \\emph{Policy}, from the component, called \\emph{Mechanisms}, where specific steps of 
calculations are performed through some numerical routines.\nThe proposed architecture of the algorithm, shown in Fig. \\ref{fig:arch}, consists of roughly five different components \\emph{Policy}, \\emph{Mechanism}, \\emph{System Description}, \\emph{Data}, and \\emph{Numerics}.\nA more detailed explanation of each of these modules is given below. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 9cm]{arch4algo.png} \n\\caption{An architecture for bounded $\\epsilon$-reach set computation.}\n\\label{fig:arch}\n\\end{center}\n\\end{figure}\n\n\nThe System Description contains all information describing a problem of a bounded $\\epsilon$-reach set computation of a DTLHA.\nThis consists of $\\X$, the domain of continuous state space, a DTLHA $\\A$, and an initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\X$. \nAlso, an upper bound $T \\in \\mathbb{R}^+$ on terminal time, an upper bound $N \\in \\mathbb{N}$ on the total number of discrete transitions, and an approximation parameter $\\epsilon > 0$, are described.\nA bounded $\\epsilon$-reach set of a DTLHA $\\A$ is computed in the Mechanism component based on a given set of numerical calculation algorithms in Numerics, as well as a given Policy, which captures some of the higher level choices of the algorithm's outer loops. \nIn the Data component, all computation data that is relevant to a computed bounded $\\epsilon$-reach set, generated on-the-fly in the Mechanism part, are stored. \nEach of the functions in Numerics is in fact an implementation of some numerical computation algorithms. \nAs an example, $e^{At}$ can be computed in many different ways as shown in \\cite{moler:78} and each of the different algorithms can compute the value with a certain accuracy. \nHere we assume that a set of such numerical computation algorithms for basic calculations are given\\footnote{In this way, we decouple the low-level numerical calculations from our bounded $\\epsilon$-reach set algorithm. 
This is the reason why the Numerics component is represented separately from the Mechanism component.} \nand the corresponding approximation error bounds, i.e., $\\sigma_e, \\sigma_i, \\mu_c$, and $\\mu_h$, are known a priori.\nThe Policy component represents user-defined rules that choose appropriate values of the parameters, especially $\\delta > 0$, $\\gamma > 0$, and $h > 0$, which are needed to continue computing a bounded $\\epsilon$-reach set of a DTLHA when the bounded $\\epsilon$-reach set algorithm in Mechanism fails to determine some events or to satisfy some required properties during its computation.\nThe Mechanism component represents the core of the bounded $\\epsilon$-reach set algorithm based on the theoretical results in Sections \\ref{sec:theory} and \\ref{sec:cond}, and is detailed in Section \\ref{sec:design:algo}. \nGiven values for the parameters $\\delta > 0$, $\\gamma > 0$, and $h > 0$, it computes a bounded $\\epsilon$-reach set of a DTLHA $\\A$ until it either successfully finishes its computation or cannot make further progress, which happens when some required conditions or properties are not met.\nNotice that, as stated in Section \\ref{sec:cond}, there is a set of conditions and properties that a computed set needs to satisfy to be a correct bounded $\\epsilon$-reach set.\nIf the algorithm fails to resolve a computation, then it returns to Policy indicating the problems so that a user-defined rule in Policy can choose another set of values for the parameters to resolve the problems.\nEvery computation result is stored in the Data component to be possibly used later in Policy and Mechanism.\n\n\n\n\n\\subsection{Core Algorithm for Bounded $\\epsilon$-Reach Set of a DTLHA} \\label{sec:design:algo}\n\n\n\n\n\nAn algorithm to compute a bounded $\\epsilon$-reach set of a DTLHA is proposed and shown in Algorithm \\ref{alg:main}.\nLet $k$ indicate the computation step from which the proposed algorithm starts its bounded 
$\\epsilon$-reach set computation.\nAll computation history up to the $(k-1)$-th computation step is stored as data, called {\\tt Reached}, in the Data component.\nThen, given an input $(k, \\delta_k, \\gamma_k, h_k)$ from Policy, the algorithm first retrieves the computation data at the $(k-1)$-th computation step from {\\tt Reached} and starts its $k$-th computation step using this data.\nAs shown in Algorithm \\ref{alg:main}, the algorithm continues its computation until it either\n\\begin{inparaenum}[(i)]\n\t\\item returns {\\tt done} when it has successfully finished computing a bounded $\\epsilon$-reach set or\n\t\\item returns {\\tt error} when it encounters some erroneous situations during the execution of a function, called {\\tt Post()}.\n\\end{inparaenum}\nIf the algorithm returns an {\\tt error}, it also indicates the cause of the {\\tt error} so that a user-defined rule in Policy can choose appropriate values for the input parameters.\n\n\n\\begin{algorithm} \n\\SetAlgoLined \n\\BlankLine\n\\KwIn{$k, \\delta_k, \\gamma_k, h_k$ from Policy} \n\\BlankLine\n\\Compute $\\mu_x$ from $(\\sigma_e, \\sigma_i)$\\\\\n\\BlankLine\n\\While{\\True} {\n\\Get data at $(k-1)$-th step from \\Reached \\\\\n\\If{$\\delta_k \\ne \\delta_{k-1}$} {\n\t\\Compute $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$\\\\\n\t\\Update $\\rho_{k-1}$\n}\n$t_k \\leftarrow t_{k-1}+h_k$\\\\\n\\Call \\Post \\\\\n\\Store $k$-th computation data into \\Reached \\\\\n$k \\leftarrow k+1$\\\\\n\\lIf{$(t_k \\ge T) \\lor (\\Jump \\ge N)$}{\\Return \\Done}\n}\n\\BlankLine\n\\caption{An algorithm for bounded $\\epsilon$-reach set computation of a DTLHA.}\n\\label{alg:main}\n\\end{algorithm}\n\n\\begin{algorithm}\n\\SetAlgoLined \n\\BlankLine\n\\KwIn{$h_k, \\gamma_k, l_{k-1}, \\rho_{k-1}, \\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$} \n\\BlankLine\n\\Compute $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ from $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$\\\\\n\\Compute $ \\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0),\\gamma_k)$ 
from $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$\\\\\n\\Update $\\rho_k \\leftarrow \\rho_{k-1} + \\mu_x$\\\\\n\\BlankLine\n\\lIf{$h_k \\ge (\\gamma_k - \\rho_k)\/\\bar{v}$}{\\Return \\Err}\\\\\n\\lIf{$dia(\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)) \\ge \\epsilon$} {\\Return \\Err} \\\\\n\\uIf{$\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0)) \\cap Inv(l_{k-1}) = \\emptyset$}{\n\t\\If{$\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0)) \\subset Inv(l_{k-1})^{\\circ}$} {\n\t\t\\uIf{$\\Deterministic \\land \\Transversal$}{\n\t\t\t\\Update $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ and $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$\\\\\n\t\t\t\\Update $\\rho_k \\leftarrow \\rho_k + \\mu_x + \\mu_c + \\mu_h$\\\\\n\t\t\t\\Update $l_k$\\\\\n\t\t\t \\Jump $\\leftarrow$ \\Jump + 1\n\t\t}\n\t\t\\lElse{\\Return \\Err}\n\t}\n}\n\\uElseIf{$\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0)) \\not\\subset Inv(l_{k-1})^{\\circ}$}{\\Return \\Err}\n\\lElse{$l_k \\leftarrow l_{k-1}$}\n\\BlankLine\n\\caption{A function {\\tt Post()}.}\n\\label{alg:post}\n\\end{algorithm}\n\n\nIn {\\tt Post()}, $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ is computed from $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$ as follows:\n\nGiven a polyhedron $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$, we first compute the set of vertices of $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$, denoted as $\\V$.\nThen for each $v_i \\in \\V$, we compute \n\\[\n\tv_i(h_k) := e^{A_k h_k} v_i + \\int_0^{h_k} e^{A_k s} u_k ds\n\\] \nwhere $A_k$ and $u_k$ are given by the linear dynamics of the location $l_k$ in which the linear image of $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$ is computed at the $k$-th computation step in Algorithm \\ref{alg:main}. \nIf we let $\\V_h := \\{v_i(h_k) : v_i \\in \\V \\}$, then we can compute $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ as follows: \n\\[\n\t\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0)) := hull(\\V_h)\n\\]\nwhere $hull(\\V_h)$ is the convex hull of $\\V_h$. 
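For readers who want to experiment with this vertex-image construction, a minimal Python sketch is given below. It is purely illustrative, not the paper's implementation: the function names are our own, the matrix exponential is a plain truncated Taylor series rather than a routine with certified error bounds $\sigma_e, \sigma_i$, and the final convex hull step $hull(\V_h^{\gamma})$ is left to any standard hull routine.

```python
import numpy as np
from itertools import product

def expm_taylor(M, terms=40):
    """Truncated Taylor series for e^M. Adequate for the small norm(M*h)
    used in this sketch; a certified tool would also bound the truncation
    error (the sigma_e of the text)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def lti_step(A, h):
    """Return (Phi, Gamma) with x(h) = Phi @ x0 + Gamma @ u, where
    Phi = e^{A h} and Gamma = int_0^h e^{A s} ds, via the augmented-matrix
    identity exp([[A, I], [0, 0]] h) = [[Phi, Gamma], [0, I]], which also
    works when A is singular."""
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A
    M[:n, n:] = np.eye(n)
    E = expm_taylor(M * h)
    return E[:n, :n], E[:n, n:]

def post_vertices(V, A, u, h):
    """v_i(h) = e^{A h} v_i + (int_0^h e^{A s} ds) u for each vertex v_i."""
    Phi, Gam = lti_step(A, h)
    return [Phi @ v + Gam @ u for v in V]

def gamma_corners(V_h, gamma):
    """The point set V_h^gamma: all corners of a hypercubic
    gamma-neighborhood around each mapped vertex. Feeding these points to a
    convex hull routine yields the gamma-approximation hull(V_h^gamma)."""
    n = len(V_h[0])
    return [v + gamma * np.array(s)
            for v in V_h
            for s in product((-1.0, 1.0), repeat=n)]
```

For example, for a double integrator $A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}$ and $h = 0.25$, {\tt lti\_step} returns $\Phi = \begin{bmatrix}1 & 0.25\\ 0 & 1\end{bmatrix}$; applying a hull routine (e.g., {\tt scipy.spatial.ConvexHull}) to {\tt gamma\_corners(...)} then realizes the construction above.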
\n\nOnce we have $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$, we compute $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ in the following way.\nTo compute $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ for a given $\\gamma_k$, we first construct a hypercubic $\\gamma_k$-neighborhood of $v_i(h_k)$ for each $v_i(h_k) \\in \\V_h$.\nLet $\\B_{\\gamma_k}(v_i(h_k))$ be such a $\\gamma_k$ hypercubic neighborhood of $v_i(h_k)$ and $\\V_{h}^{\\gamma}$ be the set of vertices of $\\B_{\\gamma_k}(v_i(h_k))$ for all $v_i(h_k) \\in \\V_h$.\nThen we can compute $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ as follows:\n\\begin{equation} \\label{sec:design:algo:hull}\n\t\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k) := hull(\\V_h^{\\gamma}).\n\\end{equation}\n\nThis process of polyhedral image computation under a linear dynamics is illustrated in Fig. \\ref{fig:post:linear}.\nWe now show that $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ that is computed as in (\\ref{sec:design:algo:hull}) is indeed a $\\gamma_k$-approximation of $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ for a given $\\gamma_k$.\n\n\n\n\n\\begin{figure} [htbp]\n\\begin{center}\n\t\\includegraphics[width=8cm]{linear_image.png}\n\\caption{The image computation under a linear dynamics.}\n\\label{fig:post:linear}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\begin{lem}\nLet $\\H$ be the convex hull of $\\V_h^{\\gamma}$. 
\nThen $\\H$ is exactly the closed $\\gamma$-neighborhood of the convex hull of $\\V_h$.\n\\end{lem}\n\\begin{proof}\nSuppose $\\bar{w} \\in \\H$ and $\\bar{w} \\not\\in hull(\\V_h)$.\nThen $\\bar{w} = \\lambda \\bar{y}_1 + (1-\\lambda)\\bar{y}_2$ for some $\\bar{y}_1$ and $\\bar{y}_2$ such that $\\Vert \\bar{y}_1 - v_1 \\Vert \\le \\gamma$ and $\\Vert \\bar{y}_2 - v_2 \\Vert \\le \\gamma$ for some $v_1, v_2 \\in \\V_h$ and $0 \\le \\lambda \\le 1$.\nThen there exists $v = \\lambda v_1 + (1-\\lambda)v_2 \\in hull(\\V_h)$ such that\n\\begin{eqnarray} \\nonumber\n\\Vert \\bar{w} -v \\Vert &=& \\Vert \\lambda(\\bar{y}_1 - v_1) + (1-\\lambda)(\\bar{y}_2 - v_2) \\Vert \\nonumber \\\\\n&\\le& \\lambda \\Vert \\bar{y}_1 - v_1 \\Vert + (1-\\lambda) \\Vert \\bar{y}_2 - v_2 \\Vert \\nonumber \\\\\n&\\le& \\gamma. \\nonumber\n\\end{eqnarray}\nThus $\\bar{w}$ is in the $\\gamma$-neighborhood of the convex hull of $\\V_h$.\n\nFor the converse, consider $\\bar{z}$ in the $\\gamma$-neighborhood of the convex hull of $\\V_h$.\nThen for some $\\lambda_i \\ge 0$, $\\sum_i \\lambda_i = 1$, $\\Vert \\bar{z} - \\sum_i \\lambda_i v_i(h) \\Vert \\le \\gamma$, where $v_i(h) \\in \\V_h$.\nLet $s := \\bar{z} - \\sum_i \\lambda_i v_i(h)$.\nNow $\\bar{z} = \\sum_i \\lambda_i(v_i(h) + s)$.\nSo $\\bar{z}$ is in the convex hull of $\\lbrace v_i(h) + s \\rbrace$.\nHowever each $v_i(h) + s \\in \\B_{\\gamma}(v_i(h))$.\nHence each $v_i(h) + s$ is in the convex hull of the vertices of $\\B_{\\gamma}(v_i(h))$ which is $\\H$.\nThus $\\bar{z}$ is in $\\H$.\n\\end{proof}\n\n\n\nNotice that the first update of $\\rho_k$ in {\\tt Post()} is due to the computation of $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ from $\\hat{\\D}_{t_{k-1}}(\\B_{\\delta_k}(x_0))$ over the time interval $h_k$ under the linear dynamics of $l_{k-1}$.\nThe second update after a deterministic and transversal discrete transition is due to a series of computations from $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ that is used to determine 
such a discrete transition to a new $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ that represents the states reached at time $t_k$ right after a deterministic and transversal discrete transition.\nAs described in Lemma \\ref{lem:cond:inexact:transv}, the steps involved during this discrete transition are to compute\n\\begin{inparaenum}[(i)]\n\t\\item $\\hat{\\mathcal{J}}_{c,n}$ from $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ and\n\t\\item $\\D_{h_k}(\\hat{\\mathcal{J}}_{c,n})$ from $\\hat{\\mathcal{J}}_{c,n}$. \n\\end{inparaenum}\nNotice that (i) requires an intersection between a hyperplane and a polyhedron as well as a convex hull computation.\nMoreover, for (ii), we need to compute a polyhedral image under the linear dynamics of a new location that is determined in {\\tt Post()}.\nRecall that we have derived a set of conditions in Lemmas \\ref{lem:cond:inexact:istrans}, \\ref{lem:cond:inexact:det}, and \\ref{lem:cond:inexact:transv} to determine a deterministic and transversal discrete transition event.\nThese conditions are used in {\\tt Post()} to determine such an event.\nFurthermore, we also use the conditions derived in Lemmas \\ref{lem:cond:inexact:over} and \\ref{lem:cond:inexact:eps} to ensure that a set $\\hat{\\R}_{t_f}(\\B_{\\delta}(x_0), \\gamma)$, which can be constructed as a collection of $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ as shown in the following theorem, satisfies the properties given in Remark \\ref{rem:cond:inexact}.\n\n\n\nNow, we present our main result for the problem of computing a bounded $\\epsilon$-reach set of a DTLHA.\n\n\n\\begin{thm}\nGiven input $(\\X, \\A, l_0, x_0, T,$ $N, \\epsilon)$ for the problem of computing a bounded $\\epsilon$-reach set of a DTLHA $\\A$, \nif Algorithm \\ref{alg:main} returns {\\tt done}, then a bounded $\\epsilon$-reach set of $\\A$ defined over the continuous state domain $\\X$ starting from the initial condition $(l_0, x_0) \\in \\mathbb{L} \\times \\mathbb{R}^n$, denoted as 
$\\hat{\\R}_{t_f}(x_0, \\epsilon)$, is the following:\n\\begin{equation}\n\\hat{\\R}_{t_f}(x_0, \\epsilon) := \\bigcup_{k = 1}^{K} \\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0),\\gamma_k),\n\\end{equation}\nfor some $K \\in \\mathbb{N}$ where $t_f := \\min \\{T, \\tau_N \\}$ and $\\tau_N$ is the time at the $N$-th discrete transition. \n\\end{thm}\n\\begin{proof}\nFor each $k \\le K$, \n\\begin{inparaenum}\n\\item[(i)] $\\gamma_k, h_k, \\rho_k$ satisfy Lemma \\ref{lem:cond:inexact:over}, and \n\\item[(ii)] $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0),\\gamma_k)$ satisfies Lemma \\ref{lem:cond:inexact:eps}. \n\\end{inparaenum}\nHence $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ is guaranteed to satisfy \n$\\D_{[t, t+h]}(x_0) \\subset \\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k)$ and $d_H(\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k), \\D_{[t, t+h]}(x_0)) \\le \\epsilon$.\nFurthermore, if a deterministic and transversal discrete transition is detected at the $k$-th step by $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$, then\n\\begin{inparaenum}\n\t\\item[(iii)] by Lemmas \\ref{lem:cond:inexact:istrans}, \\ref{lem:cond:inexact:det}, and \\ref{lem:cond:inexact:transv}, there is in fact a deterministic and transversal discrete transition in $(t_{k-1}, t_k)$. \n\\end{inparaenum}\nThis implies that a deterministic and transversal discrete transition event is correctly determined. \nFinally, if the proposed algorithm returns {\\tt done}, then this implies that \n\\begin{inparaenum}\n\t\\item[(iv)] either $t_k \\ge T$ or ${\\tt jump} \\ge N$. 
\n\\end{inparaenum}\nHence, $t_f$ is $\\min \\{ T, \\tau_N\\}$.\nTherefore, $\\hat{\\R}_{t_f}(x_0, \\epsilon)$ is a bounded $\\epsilon$-reach set of $\\A$ from $x_0$ by (i), (ii), (iii), and (iv).\n\\end{proof}\n\n\n\n\n\\section{Optimization and Implementation of the Proposed Algorithm} \\label{sec:imp}\n\nA prototype software tool has been implemented, based on the architecture and the algorithm proposed in Section \\ref{sec:design}, to demonstrate the idea of a bounded $\\epsilon$-reach set computation. \nIn our implementation, we use the Multi-Parametric Toolbox \\cite{mpt} for polyhedral operations and also use some built-in Matlab functions for other calculations. \n\n\nNotice that the size of $\\hat{\\D}_t(\\B_{\\delta}(x_0))$ right after a discrete transition increases roughly by the amount $\\gamma$ through the computation of $\\hat{\\mathcal{J}}_{c,n}$. \nThis can potentially affect the capability to determine a discrete transition event.\nHence, we determine a smaller value of $\\gamma$ to construct a tighter over-approximation of a discrete transition state. \nSuppose that a discrete transition from a location $l_i$ to some other location $l_j$ has already been determined by the proposed algorithm for given $h > 0$, $\\hat{\\D}_{t-h}(\\B_{\\delta}(x_0), \\rho)$, and $\\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\rho)$ at some time $t$.\nThen the procedure for construction of a tight over-approximation of a discrete transition state $x(\\tau_k)$ for some $\\tau_k \\in (t-h, t)$ can be improved as shown in Algorithm \\ref{alg:opt}.\n\n\n\\begin{algorithm}\n\\SetAlgoLined \n\\BlankLine\n1. Partition the time interval $[t-h, t]$ into a finite sequence of subintervals $\\{ I_m \\}_{m=1}^M$ for some $M \\in \\mathbb{N}$, where\n\\[ I_m := [t-h +(m-1) \\cdot \\Delta h, t-h + m \\cdot \\Delta h] \\]\nfor some $\\Delta h \\ll h$. \\\\\n\\BlankLine\n2. 
Find a time $\\tau := t-h + m \\cdot \\Delta h \\in (t-h, t)$ such that \n\t\\begin{itemize}\n\t\t\\item $volume(\\P_{\\tau}^i) > volume(\\P_{\\tau}^j)$ and\n\t\t\\item $volume(\\P_{\\tau + \\Delta h}^i) < volume(\\P_{\\tau + \\Delta h}^j)$, \n\t\\end{itemize}\n\twhere $\\P_t^k := Inv_k \\cap \\hat{\\D}_{t}(\\B_{\\delta}(x_0), \\rho)$. \\\\\n\\BlankLine\n3. Construct $\\hat{\\D}_{\\tau+\\Delta h}(\\B_{\\delta}(x_0), \\gamma' + \\rho)$ where $\\gamma' > \\Delta h \\cdot \\bar{v}$. \\\\\n\\BlankLine\n4. Compute an over-approximate discrete transition state\n\\[ \\hat{\\mathcal{J}}_{i,j} := \\hat{\\D}_{\\tau+\\Delta h}(\\B_{\\delta}(x_0), \\gamma' + \\rho) \\cap Inv_i \\cap Inv_j. \\]\n\\caption{A procedure to compute a tight over-approximation of a discrete transition state.}\n\\label{alg:opt}\n\\end{algorithm}\n\n\n\n\\subsection{An Example of Bounded $\\epsilon$-Reach Set Computation} \\label{sec:imp:example}\n\nAs an example to evaluate the proposed algorithm for a bounded $\\epsilon$-reach set computation of a DTLHA $\\A$, we consider an LHA $\\A := (\\mathbb{L}, Inv, A, u, \\xrightarrow{G})$ over a continuous state space $\\X := [-8, 8] \\times [-8, 8] \\subset \\mathbb{R}^2$ where\n\\begin{enumerate}[(i)]\n\t\\item $\\mathbb{L} = \\{Up, Down,$ $Left, Right \\}$,\n\t\\item $A(l)$ and $u(l)$ for each location $l \\in \\mathbb{L}$ are defined as shown in Table \\ref{tab:ex:Au}, \n\t\\item The invariant set for each location $l \\in \\mathbb{L}$, $Inv(l)$, is defined as shown in Fig. \\ref{fig:eg:rho}, and\n\t\\item $\\xrightarrow{G}$ holds at the intersection between invariant sets of different locations. \n\\end{enumerate}\n\nNotice that all the LTI dynamics defined in the given LHA $\\A$ are asymptotically stable.\nMoreover, from the vector fields determined by $A(l)$ and $u(l)$ for each $l \\in \\mathbb{L}$, every discrete transition which occurs along the boundary of the invariant set between different locations is deterministic and transversal. 
\nHence the given LHA $\\A$ is in fact a DTLHA.\n\n\n\n\\begin{table}[htdp]\n\\caption{$A(l)$ and $u(l)$ for each $l \\in \\mathbb{L}$ of $\\A$}\n\\begin{center}\n{\\small\n\\begin{tabular}{c|p{0.15\\textwidth}|c}\n\\toprule\n$l$ & \\centering{$A(l)$} & $u(l)$ \\\\ \n\\midrule\n$Up$ & \\centering{$\\begin{pmatrix} -0.2 & -1 \\\\ 3 & -0.2 \\end{pmatrix}$} & $\\begin{pmatrix} 0.1 \\\\ 0.1 \\end{pmatrix}$ \\\\ \n$Down$ & \\centering{$\\begin{pmatrix} -0.2 & -1 \\\\ 3 & -0.2 \\end{pmatrix}$} & $\\begin{pmatrix} -0.2 \\\\ -0.2 \\end{pmatrix}$ \\\\ \n$Left$ & \\centering{$\\begin{pmatrix} -0.2 & -3 \\\\ 1 & -0.2 \\end{pmatrix}$} & $\\begin{pmatrix} 0.15 \\\\ 0.15 \\end{pmatrix}$ \\\\ \n$Right$ & \\centering{$\\begin{pmatrix} -0.2 & -3 \\\\ 1 & -0.2 \\end{pmatrix}$} & $\\begin{pmatrix} 0.3 \\\\ 0.3 \\end{pmatrix}$ \\\\ \n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:ex:Au}\n\\end{table}%\n\nThe bounded $\\epsilon$-reach set computation problem is specified by $(\\A, l_0, x_0, T, N, \\epsilon)$ where $l_0 = Up$, $x_0 = (2.5, 6)^T$, $T = 20$ sec., $N = 10$, and $\\epsilon = 0.5$.\n\nIn this example, we also assume that numerical calculation algorithms are available for the basic calculations defined in Section \\ref{sec:cond:basic}, such as $a(e^{At}, \\rho)$, $a(\\int_{0}^{t} e^{A \\tau} d \\tau, \\rho)$, $a(\\H \\cap \\P, \\rho)$, and $a(hull(\\V), \\rho)$, where $\\rho$ is specified as $10^{-15}$.\n\n\n\\begin{figure}\n\\begin{center}\n\t\\includegraphics[width=8.5cm]{case_02_loc.png}\n\\caption{A bounded $\\epsilon$-reach set of a DTLHA $\\A$ starting from $(Up, [2.5, 6]^T)$.}\t\n\\label{fig:eg:rho}\n\\end{center}\n\\end{figure}\n\n\n\nA policy that is used to choose values for $(k, \\delta_k, \\gamma_k, h_k)$ is as follows:\n\\begin{enumerate}[(i)]\n\t\\item $k$ is chosen in a non-decreasing manner, \n\t\\item $\\delta_k := 10^{-5}$ to define a fixed sufficiently small $\\B_{\\delta}(x_0)$,\n\t\\item $\\gamma_k := (\\epsilon - 
dia(\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\rho_k)))\/2$, and\n\t\\item $h_k := (\\gamma_k\/2)\/ \\bar{v}$ where $\\bar{v}$ is as defined in (\\ref{eq:h:ineq}).\n\\end{enumerate}\nNotice that (i) means that whenever the proposed $\\epsilon$-reach set algorithm fails to continue its computation at the $k$-th computation step, the policy restarts the computation from the $k$-th step with different values of the other parameters.\nRecall that $\\rho_k$ denotes the approximation error of $\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0))$ when the algorithm computes $\\D_{t_k}(\\B_{\\delta_k}(x_0))$ at time $t_k$.\nAs shown in (iii), for a given $\\epsilon$, the policy chooses the largest value of $\\gamma_k$ at each computation step.\nThe equation for $\\gamma_k$ given in (iii) can easily be derived by considering \n\\begin{equation} \\nonumber\n\tdia(\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\gamma_k + \\rho_k)) \\le dia(\\hat{\\D}_{t_k}(\\B_{\\delta_k}(x_0), \\rho_k)) + 2 \\gamma_k.\n\\end{equation}\nIf we upper bound the right-hand side by $\\epsilon$, then we have (iii).\n\n\nFig. \\ref{fig:eg:rho} shows the computation result.\nAs shown in Fig. \\ref{fig:eg:rho}, a bounded $\\epsilon$-reach set is successfully computed.\nIn this example, the algorithm terminates at computation step $k = 2259$, right after the algorithm makes the tenth discrete transition from location $Left$ to location $Down$ at time $t = 12.1415$ sec. 
and {\\tt jump $= 10$}.\nFor given $\\rho := 10^{-15}$, the accumulated numerical calculation error $\\rho_k$ for $\\D_{t_k}(\\B_{\\delta_k}(x_0))$ at this termination time is $2.5638 \\times 10^{-11}$.\n\n\n\n\n\\section{Conclusion} \\label{sec:con}\nWe have defined a special class of hybrid automata, called Deterministic and Transversal Linear Hybrid Automata (DTLHA), for which we can address the problem of bounded $\\epsilon$-reach set computation starting from an initial state.\nFor this class, we can also incorporate the impact of numerical calculation errors caused by finite precision numerical computation.\n\nIt is of importance to determine more general and useful models of hybrid systems that permit computational verification of safety properties. Hybrid linear systems that incorporate linear models widely employed in control systems are a natural candidate around which to build such a theory of verification and validation.\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfaiq b/data_all_eng_slimpj/shuffled/split2/finalzzfaiq new file mode 100644 index 0000000000000000000000000000000000000000..db56a71599691908fa4e17aea4f564c7b079dfd7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfaiq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\emph{This Game Is Not Going To Load Itself} (TGINGTLI{}) \\cite{tgingtli} is a free game created in 2015 by Roger ``atiaxi'' Ostrander for the Loading Screen Jam, a game jam hosted on \\protect\\url{itch.io},\nwhere it finished $7$th overall out of $46$ entries.\nThis game jam was a celebration of the expiration of US Patent 5,718,632 \\cite{US5718632}, which covered the act of including mini-games during video game loading screens.\nIn this spirit, TGINGTLI{} is a real-time puzzle game themed around the player helping a game load three different resources of itself --- save data, gameplay, and music, colored 
red, green, and blue --- by placing arrows on the grid cells to route data entering the grid to a corresponding sink cell.\nFigure~\\ref{fig:realplay} shows an example play-through.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.63]{figs\/realplay_input}\\hfill\n \\includegraphics[scale=0.63]{figs\/realplay}\n \\caption{%\n Left: The (eventual) input for a real-world Level 16 in TGINGTLI{}.\n Right: A successful play-through that\n routes every packet to its corresponding sink.}\n \n \\label{fig:realplay}\n\\end{figure}\n\nWe formalize TGINGTLI{} as follows.\nYou are given an $m \\times n$ grid where each unit-square cell is either empty,\ncontains a data sink, or contains an arrow pointing\nin one of the four cardinal directions.\n(In the implemented game, $m=n=12$ and no arrows are placed initially.)\nEach data sink and arrow has a color (resource) of red, green, or blue;\nand there is exactly one data sink of each color in the grid.\nIn the online version (as implemented), sources appear throughout the game;\nin the offline version considered here, all sources are known a priori.\nNote that an outer edge of the grid may have multiple sources of different colors.\nFinally, there is a loading bar that starts at an integer $k_0$\nand has a goal integer~$k^*$.\n\nDuring the game,\neach source periodically produces data packets of its color,\nwhich travel at a constant speed into the grid.\nIf a packet enters the cell of an arrow of the same color,\nthen the packet will turn in that direction.\n(Arrows of other colors are ignored.)\nIf a packet reaches the sink of its color,\nthen the packet disappears and the loading bar increases by one unit of data.\nIf a packet reaches a sink of the wrong color, or exits the grid entirely,\nthen the packet disappears and the loading bar decreases by one unit of data, referred to as taking damage.\nPackets may also remain in the grid indefinitely\nby going around a cycle of arrows;\nthis does not increase or 
decrease the loading bar.\nThe player may at any time permanently fill an empty cell with an arrow, which may be of any color and pointing in any of the four directions.\nIf the loading bar hits the target amount $k^*$, then the player wins; but if the loading bar goes below zero, then the player loses.\n\nIn Section~\\ref{sec:NP-hardness},\nwe prove NP-hardness of the {TGINGTLI} decision problem:\ngiven a description of the grid\n(including sources, sinks, and preplaced arrows),\ncan the player place arrows to win?\nThis reduction works even for just six sources and three colors;\nit introduces a new problem, \\defn{3DSAT}, where variables have three\ndifferent colors and each clause mixes variables of all three colors.\nIn Section~\\ref{sec:sigma-2}, we introduce more detailed models for the\nperiodic behavior of sources, and show that many sources of differing periods\nenable both NP- and coNP-hardness of winning the game,\n\\emph{even without player input} (just simulating the game).\nOn the positive side, we prove that these problems are in $\\Sigma_2^P$;\nand in NP when the source periods are all equal,\nas in our first NP-hardness proof, so this case is in fact NP-complete.\n\nIn Section~\\ref{sec:perfect-layouts}, we consider how levels start in\nthe implemented game: a grid with placed sinks but no preplaced arrows.\nWe give a full characterization of when there is a \\defn{perfect layout}\nof arrows, where all packets are guaranteed to route to the correct sink,\n\\emph{no matter where sources get placed}.\nIn particular, this result provides a winning strategy for most\nsink arrangements in the implemented game.\nNotably, because this solution works independently of the sources,\nit works in the online setting.\n\n\\section{NP-Hardness for Three Colors and Six 
Sources}\n\\label{sec:NP-hardness}\n\nWe first prove that TGINGTLI{} is NP-hard by reducing from a new problem called\n\\defn{3-Dimensional SAT (3DSAT)},\ndefined by analogy to 3-Dimensional Matching (3DM).\n3DSAT is a variation of 3SAT where, in addition to a 3CNF formula,\nthe input specifies one of three colors (red, green, or blue) to each variable\nof the CNF formula, and the CNF formula is constrained to have trichromatic\nclauses, i.e., to have exactly one variable (possibly negated) of each color.\n\n\\begin{lemma}\n 3DSAT is NP-complete.\n\\end{lemma}\n\n\\begin{proof}\n We reduce from 3SAT to 3DSAT by converting a 3CNF formula $F$\n into a 3D CNF formula~$F'$.\n For each variable $x$ of~$F$,\n we create three variables $x^{(1)}, x^{(2)}, x^{(3)}$ in $F'$\n (intended to be equal copies of $x$ of the three different colors)\n and add six clauses to $F'$ to force $x^{(1)} = x^{(2)} = x^{(3)}$:\n %\n \\def\\halfup#1{\\raisebox{2.5ex}{\\smash{$#1$}}}\n \\begin{align*}\n \\lnot x^{(1)} \\lor x^{(2)} \\lor x^{(3)} &\\iff (x^{(1)} \\to x^{(2)}) \\lor x^{(3)} \\\\\n \\lnot x^{(1)} \\lor x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(1)} \\to x^{(2)}) \\lor \\lnot x^{(3)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(1)} \\to x^{(2)}} \\\\\n x^{(1)} \\lor \\lnot x^{(2)} \\lor x^{(3)} &\\iff (x^{(2)} \\to x^{(3)}) \\lor x^{(1)} \\\\\n \\lnot x^{(1)} \\lor \\lnot x^{(2)} \\lor x^{(3)} &\\iff (x^{(2)} \\to x^{(3)}) \\lor \\lnot x^{(1)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(2)} \\to x^{(3)}} \\\\\n x^{(1)} \\lor x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(3)} \\to x^{(1)}) \\lor x^{(2)}\\\\\n x^{(1)} \\lor \\lnot x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(3)} \\to x^{(1)}) \\lor \\lnot x^{(2)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(3)} \\to x^{(1)}}\n \\end{align*}\n %\n Thus the clauses on the left are equivalent to the implication loop\n $x^{(1)} \\implies x^{(2)} \\implies x^{(3)} \\implies x^{(1)}$,\n which is equivalent to $x^{(1)} = x^{(2)} = x^{(3)}$.\n \n For each clause $c$ of $F$ using 
variables $x$ in the first literal,\n $y$ in the second literal, and $z$ in the third literal,\n we create a corresponding clause $c'$ in $F'$ using\n $x^{(1)}$, $y^{(2)}$, and $z^{(3)}$ (with the same negations as in~$c$).\n All clauses in $F'$ (including the variable duplication clauses above)\n thus use a variable of the form $x^{(i)}$ in the\n $i$th literal for $i \\in \\{1,2,3\\}$,\n so we can 3-color the variables accordingly.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:NP-hard36}\n {TGINGTLI} is NP-hard, even with three colors and six sources\n of equal period.\n\\end{theorem}\n\n\\begin{proof}\n Our reduction is from 3DSAT.\n Figure~\\ref{fig:reduction-sketch} gives a high-level sketch:\n each variable gadget has two possible routes\n for a packet stream of the corresponding color,\n and each clause gadget allows at most two colors of packets\n to successfully route through.\n When a clause is satisfied by at least one variable,\n the clause gadget allows the other variables to successfully pass\n through; otherwise, at least one of the packet streams enters a cycle.\n Variables of the same color are chained together\n to re-use the packet stream of that color.\n\n \\begin{figure}\n \\centering\n \\subcaptionbox{Satisfied clause}{\\includegraphics[scale=0.5]{figs\/reduction-sketch-sat}}\\hfil\n \\subcaptionbox{Unsatisfied clause}{\\includegraphics[scale=0.5]{figs\/reduction-sketch-unsat}}\n \\caption{Sketch of our NP-hardness reduction.}\n \\label{fig:reduction-sketch}\n \\end{figure}\n \n In detail, most cells of the game board\n will be prefilled, leaving only a few empty cells\n (denoted by question marks in our figures) that the player can fill.\n \n \n \n\n For each color, say red, we place a red source gadget\n on the left edge of the construction.\n Then, for each red variable $x$ in sequence,\n we place a variable gadget of Figure~\\ref{fig:gadget:variable}\n at the end of the red stream.\n To prevent the packets from entering a loop,\n the player 
must choose between sending the stream upward or downward,\n which results in it following one of the two rightward paths,\n representing the literals $x$ and $\\overline x$ respectively.\n The path followed by the packet stream is viewed as \\emph{false};\n an empty path is viewed as \\emph{true}.\n\n\\begin{figure}[t]\n \\centering\n \\begin{minipage}[b]{0.51\\linewidth}\n \\centering\n \\includegraphics[scale=0.9]{figs\/clause.pdf}\n \\caption{Clause gadget. At most two streams of data, representing false literals, can pass through the gadget (by placing upwards arrows in the ``?'' cells) without entering a cycle.\n Placing any other direction of arrow also puts the stream into a cycle.}\n \\label{fig:gadget:clause}\n \n \\end{minipage}\\hfill\n \\begin{minipage}[b]{0.22\\linewidth}\n \\centering\n \\includegraphics[scale=0.9]{figs\/variable.pdf}\n \\caption{\\centering Variable gadget lets the player route packets from a source to one of two literal paths.}\n \\label{fig:gadget:variable}\n \\end{minipage}\\hfill\n \\begin{minipage}[b]{0.22\\linewidth}\n \\centering\n \\includegraphics[scale=0.9]{figs\/merge.pdf}\n \\caption{\\centering Merge gadget combines two literal paths back into one.\\linebreak~}\n \\label{fig:gadget:merge}\n \\end{minipage}\n \n \\bigskip\n \n \\begin{minipage}[b]{0.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.9]{figs\/crossover.pdf}\n \\caption{Crossover gadget between two literal paths of same or different colors. 
(The center cell is colored different from both paths.)}\n \\label{fig:gadget:crossover}\n \\end{minipage}\\hfill\n \n \n \n \n \n \n \\begin{minipage}[b]{0.46\\linewidth}\n \\includegraphics[scale=0.9]{figs\/death.pdf}\n \\caption{Damage gadget forces damage at a unit rate after a desired start delay.\\hspace{\\fill}\\linebreak~}\n \\label{fig:gadget:damage}\n \\end{minipage}\n\\end{figure}\n\n Then we route each literal path to sequentially visit\n every clause containing it.\n Figure~\\ref{fig:gadget:crossover} shows a crossover gadget\n to enable such routing.\n (Note that it still works if both lines are the same color.)\n\n Figure~\\ref{fig:gadget:clause} shows the clause gadget,\n which allows at most two packet streams to pass through it.\n If all three literals are false then at least\n one stream of data must be placed into a cycle.\n On the other hand, if the clause is satisfied, then the literal paths\n carrying data can pass their data on to the next clause, and so on.\n Note that the length of the diverted path through the clause\n is the same for all three colors.\n\n After all uses of the red variable \\(x\\),\n we lengthen the two red literal paths\n corresponding to \\(x\\) and \\(\\overline{x}\\) to have the same length,\n then combine them back together\n using the merge gadget of Figure~\\ref{fig:gadget:merge}.\n We then route this red path into the variable gadget\n for the next red variable, and so on.\n Finally, after all red variables, we connect the red stream\n to a red sink.\n\n We lengthen the red, green, and blue streams\n to all have the same length~$\\ell$.\n If the player successfully satisfies all clauses, then\n they will increase the loading bar by $3$ units ($1$~per color)\n after an initial delay of~$\\ell$.\n We set the parameters so that the player wins in this case:\n $k^* - k_0 = 3$.\n Otherwise, the loading rate is at most~$2$.\n To ensure that the player loses in this case, we add\n $3$ damage gadgets of 
Figure~\\ref{fig:gadget:damage},\n each incurring damage at a rate of $1$ after an initial delay of $\\ell+1$.\n Thus we obtain a net of $-1$ per period,\n so the player eventually loses even if $k_0$ is large.\n\\end{proof}\n\n\nThis NP-hardness result does not need a very specific model of sources and\nhow they emit packets.\nTo understand whether the problem is in NP, we need a more specific model,\nwhich is addressed in the next section.\n\n\n\\section{Membership in $\\Sigma_2^P$ and Hardness from Source Periodicity}\n\\label{sec:sigma-2}\n\nIn this section, we consider the effect of potentially differing periods\nfor different sources emitting packets.\nSpecifically, we show that carefully setting periods together with\nthe unbounded length of the game results in both NP- and coNP-hardness\nof determining the outcome of {TGINGTLI},\n\\emph{even when the player is not making moves}.\nConversely, we prove that the problem is in~$\\Sigma_2^P$,\neven allowing player input.\n\n\\subsection{Model and Problems}\n\nMore precisely, we model each source $s$ as emitting data packets of its color\ninto the grid with its own period $p_s$, after a small warmup time $w_s$\nduring which the source may emit a more specific pattern of packets.\n{TGINGTLI} as implemented has a warmup behavior of each source\ninitially (upon creation) waiting 5 seconds before the first emitted packet,\nthen emitting a packet after 2 seconds, after 1.999 seconds,\nafter 1.998 seconds, and so on,\nuntil reaching a fixed period of 0.5 seconds.\nThis is technically a warmup period of 1881.25 seconds with\n1500 emitted packets, followed by a period of 0.5 seconds.\n\nIn the \\defn{simulation} problem, we are given the initial state of the grid,\na list of timestamped \\defn{events} for\nwhen each source emits a packet 
during its warmup period,\nwhen each source starts periodic behavior,\nand when the player will place each arrow.\nWe assume that timestamps are encoded in binary but\n(to keep things relatively simple)\nperiods and warmup times are encoded in unary.\nThe problem then asks to predict whether the player wins;\nthat is, the loading bar reaches~$k^*$ before a loss occurs.\n\nIn the \\defn{game} problem, we are not given the player's arrow placements.\nIf we allow nondeterministic algorithms, the game problem reduces to the\nsimulation problem: just guess what arrows we place, where, and at what times.\n\nA natural approach to solving the simulation problem is to simulate the game\nfrom the initial state to each successive event.\nSpecifically, given a state of the game (a grid with sinks, sources of varying periods and offsets, placed arrows, and the number of in-flight packets at each location) and a future timestamp \\(t\\),\nwe wish to determine the state of the game at time \\(t\\).\nUsing this computation, we can compute future states of the game quickly by\n``skipping ahead'' over the time between events.\nOn an $m \\times n$ grid, there are $O(m n)$ events, so we can determine the state of the game at any time \\(t\\) by simulating polynomially many phases between events.\n\nThis computation is easy to do.\nGiven the time \\(t\\), we can divide by each source's period and the period of each cycle of arrows to determine\nhow many packets each source produces and where the arrows route them ---\neither to a sink which affects loading, off the grid, stuck in a cycle,\nor in-flight outside a cycle ---\nand then sum up the effects to obtain the new amount loaded and the number of packets at each location.\n\nHowever, being able to compute future states\ndoes not suffice to solve the simulation and game problems because there might be an intermediate time\nwhere the loading amount drops below $0$ or reaches the target 
amount~$k^*$.\nNonetheless, this suffices to show that the problems are in \\(\\Sigma_2^P\\),\nby guessing a win time and verifying there are no earlier loss times:\n\n\\begin{lemma}\n The simulation and game problems are in \\(\\Sigma_2^P\\).\n\\end{lemma}\n\\begin{proof}\n The player wins if there exists a time with a win such that all smaller times are not losses.\n To solve the simulation problem,\n nondeterministically guess the winning time and verify that it is a win\n by computing the state at that time.\n Then check using a coNP query that there was no loss before that time,\n again using the ability to quickly compute states at individual timestamps.\n\n To solve the game problem, we first existentially guess the details of the arrow placements,\n then solve the resulting simulation problem as before.\n\\end{proof}\n\nAn easier case is when the source periods are all the same\nafter warmup, as implemented in the real game.\nTheorem~\\ref{thm:NP-hard36} proved this version of the game NP-hard,\nand we can now show that it is NP-complete:\n\n\\begin{lemma}\n If all sources have the same polynomial-length period after a\n polynomial number $t_p$ of time steps,\n then the simulation problem is in P and the game problem is in NP.\n\\end{lemma}\n\n\\begin{proof}\n In this case, we can check for wins or losses in each phase between events by explicitly simulating longer than all packet paths,\n at which point the loading bar value becomes periodic with the common source period.\n (Cycles of arrows may have different periods but these do not affect the loading bar, and thus do not matter when checking for wins and losses.)\n We skip over each phase, checking for win or loss along the way.\n If the game continues past the last event, we measure the sign of the net score change over the period.\n If it is positive, the player will eventually win;\n if it is negative, the player will eventually lose; and\n if it is zero, the game will go on 
forever.\n\\end{proof}\n\n\nIn the remainder of this section, we consider the case where each source can be assigned any integer period, and the period does not change over time.\n\n\\subsection{Periodic Sum Threshold Problem}\n\nWith varying source periods, the challenge is that the overall periodic\nbehavior of the game can have an extremely large (exponential) period.\n\nWe can model this difficulty via the\n\\defn{Periodic Sum Threshold Problem}, defined as follows.\nWe are given a function $f(x) = \\sum_{i} g_i(x)$\nwhere each $g_i$ has unary integer period $T_i$\nand unary maximum absolute value $M_i$.\nIn addition, we are given a unary integer $\\tau > 0$\nand a binary integer time $x^*$.\nThe goal is to determine whether there exists an integer $x$\nin $[0, x^*)$ such that $f(x) \\geq \\tau$.\n(Intuitively, reaching $\\tau$ corresponds to winning.)\n\n\\begin{theorem}\n\\label{thm:pstp-np-complete}\n The Periodic Sum Threshold Problem is NP-complete,\n even under the following restrictions:\n \\begin{enumerate}\n \\item \\label{prop:one-hot}\n Each $|g_i|$ is a one-hot function, i.e.,\n $g_i(x) = 0$ everywhere except for exactly one $x$ in its period\n where $g_i(x) = \\pm 1$.\n \\item \\label{prop:lambda}\n We are given a unary integer $\\lambda < 0$ such that\n $f(x) > \\lambda$ for all $0 \\leq x < x^*$ and $f(x^*) \\leq \\lambda$.\n (Intuitively, dipping down to $\\lambda$ corresponds to losing.)\n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n First, the problem is in NP: we can guess $x \\in [0,x^*)$\n and then evaluate whether $f(x) \\geq \\tau$ in polynomial time.\n\n For NP-hardness, we reduce from 3SAT.\n We map each variable $v_i$ to the $i$th prime number $p_i$ excluding~$2$.\n Using the Chinese Remainder Theorem,\n we can represent a Boolean assignment $\\phi$ as a single integer $0 \\le x < \\prod_i p_i$\n where $x \\equiv 1 \\mod p_i$ when $\\phi$ sets $v_i$ to true,\n and $x \\equiv 0 \\mod p_i$ when $\\phi$ sets $v_i$ to false.\n (This 
mapping does not use other values of $x$ modulo~$p_i$.\n In particular, it leaves $x \\equiv -1 \\mod p_i$ unused,\n because $p_i \\geq 3$.)\n\n Next we map each clause such as $C = (v_i \\vee v_j \\vee \\neg v_k)$\n to the function\n %\n $$g_C(x) = \\max\\{[x \\equiv 1 \\mod p_i], [x \\equiv 1 \\mod p_j], [x \\equiv 0 \\mod p_k]\\},$$\n %\n i.e., positive literals check for $x \\equiv 1$\n and negated literals check for $x \\equiv 0$.\n This function is $1$ exactly when $x$ corresponds\n to a Boolean assignment that satisfies~$C$.\n \n \n \n This function has period $p_i p_j p_k$,\n whose unary value is bounded by a polynomial.\n Setting $\\tau$ to the number of clauses,\n there is a value $x$ where the sum is $\\tau$ if and only if\n there is a satisfying assignment for the 3SAT formula.\n (Setting $\\tau$ smaller, we could reduce from Max 3SAT.)\n\n To achieve Property~\\ref{prop:one-hot}, we split each $g_C$ function\n into a sum of polynomially many one-hot functions\n (bounded by the period). 
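As a sanity check, the encoding above can be simulated directly. The following sketch (our illustrative code, not part of the proof; it uses the small primes 3, 5, 7 for three variables, and all names are ours) brute-forces the Chinese-Remainder encoding of assignments and evaluates one clause function:

```python
import math

# Toy instance of the reduction's encoding (3 variables); names are illustrative.
primes = [3, 5, 7]  # p_i: the i-th prime excluding 2, one per variable v_i

def assignment_to_x(phi):
    """The unique x in [0, prod p_i) with x = 1 mod p_i iff phi[i] is true.
    Brute force stands in for the Chinese Remainder Theorem at this scale."""
    return next(x for x in range(math.prod(primes))
                if all(x % p == (1 if bit else 0) for p, bit in zip(primes, phi)))

def g_clause(literals):
    """g_C for a clause given as (variable index, negated?) pairs: the function
    is 1 exactly on encodings of satisfying assignments, 0 otherwise."""
    def g(x):
        return 1 if any(x % primes[i] == (0 if neg else 1)
                        for i, neg in literals) else 0
    return g

# Clause C = (v_0 OR v_1 OR NOT v_2):
g_C = g_clause([(0, False), (1, False), (2, True)])
assert g_C(assignment_to_x((True, False, True))) == 1   # satisfied via v_0
assert g_C(assignment_to_x((False, False, True))) == 0  # no literal satisfied
assert g_C(0) == g_C(0 + 3 * 5 * 7)  # g_C has period p_i * p_j * p_k
```

Summing such clause functions and setting the threshold to the number of clauses reproduces the reduction on this toy scale.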
In fact, seven functions per clause suffice,\n one for each satisfying assignment of the clause.\n\n To achieve Property~\\ref{prop:lambda},\n for each prime $p_i$, we add the function\n $h_i(x) = -[x \\equiv -1 \\mod p_i]$.\n This function is $-1$ only for unused values of \\(x\\)\n which do not correspond to any assignment \\(\\phi\\),\n so it does not affect the argument above.\n Setting $-\\lambda$ to the number of primes (variables)\n and $x^* = \\prod_i p_i - 1$,\n we have $\\sum_i h_i(x^*) = \\lambda$\n because $x^* \\equiv -1 \\mod p_i$ for all~$i$,\n while $\\sum_i h_i(x) > \\lambda$ for all $0 \\leq x < x^*$.\n All used values \\(x\\) are smaller than \\(x^*\\).\n\n In total, $f(x)$ is the sum of the constructed functions\n and we obtain the desired properties.\n\\end{proof}\n\n\n\\subsection{Simulation Hardness for Two Colors}\n\nWe can use our hardness of the Periodic Sum Threshold Problem\nto prove hardness of simulating {TGINGTLI}, even without player input.\n\n\\begin{theorem} \\label{thm:sim NP-hard}\n Simulating {TGINGTLI} and determining whether the player wins\n is NP-hard, even with just two colors.\n\\end{theorem}\n\\begin{proof}\n We reduce from the Periodic Sum Threshold Problem proved NP-complete\n by Theorem~\\ref{thm:pstp-np-complete}.\n\n For each function $g_i$ with one-hot value $g_i(x_i) = 1$ and period~$T_i$,\n we create a blue source $b_i$ and a red source $r_i$,\n of the same emitting period~$T_i$,\n and route red and blue packets from these sources to the blue sink.\n By adjusting the path lengths and\/or the warmup times of the sources,\n we arrange for a red packet to arrive one time unit after each blue packet,\n which happens at times $\\equiv x_i \\mod T_i$.\n Thus the net effect on the loading bar value is $+1$ at time $x_i$\n but returns to $0$ at time $x_i + 1$.\n Similarly, for each function $g_i$ with one-hot value $g_i(x_i) = -1$,\n we perform the same construction but swapping the roles of red and blue.\n\n Setting 
$k_0 = -\\lambda-1 \\geq 0$, the loading bar goes negative\n (and the player loses)\n exactly when the sum of the functions $g_i$ goes down to~$\\lambda$.\n Setting $k^* = k_0 + \\tau$, the loading bar reaches $k^*$\n (and the player wins)\n exactly when the sum of the functions $g_i$ goes up to~$\\tau$.\n\\end{proof}\n\nThis NP-hardness proof relies on completely different aspects of the game\nfrom the proof in Section~\\ref{sec:NP-hardness}: instead of using player input,\nit relies on varying (but small in unary) periods for different sources.\nMore interesting is that we can also prove the same problem coNP-hard:\n\n\\begin{theorem}\n Simulating {TGINGTLI} and determining whether the player wins\n is coNP-hard, even with just two colors.\n\\end{theorem}\n\\begin{proof}\n We reduce from the complement of the Periodic Sum Threshold Problem,\n which is coNP-complete by Theorem~\\ref{thm:pstp-np-complete}.\n The goal in the complement problem is to determine whether\n there is \\emph{no} integer $x$ in $[0, x^*)$ such that $f(x) \\geq \\tau$.\n The idea is to negate all the values to flip the roles of winning and losing.\n \n For each function $g_i$, we construct two sources and wire them to a sink\n in the same way as Theorem~\\ref{thm:sim NP-hard}, but negated:\n if $g_i(x_i) = \\pm 1$, then we design the packets to have a net effect\n of $\\mp 1$ at time $x_i$ and $0$ otherwise.\n\n Setting $k_0 = \\tau-1$, the loading bar goes negative\n (and the player loses)\n exactly when the sum of the functions $g_i$ goes up to~$\\tau$, i.e.,\n the Periodic Sum Threshold Problem has a ``yes'' answer.\n Setting $k^* = k_0 - \\lambda$, the loading bar reaches $k^*$\n (and the player wins)\n exactly when the sum of the functions $g_i$ goes down to $\\lambda$, i.e.,\n the Periodic Sum Threshold Problem has a ``no'' answer.\n\\end{proof}\n\n\n\\section{Characterizing Perfect Layouts}\n\\label{sec:perfect-layouts}\n\nSuppose we are given a board which is empty except for the 
location of the\nthree data sinks.\nIs it possible to place arrows such that all possible input packets\nget routed to the correct sink?\nWe call such a configuration of arrows a \\defn{perfect layout}.\nIn particular, such a layout guarantees victory,\nregardless of the data sources.\nIn this section, we give a full characterization of boards\nand sink placements that admit a perfect layout.\nSome of our results work for a general number $c$ of colors,\nbut the full characterization relies on $c=3$.\n\n\\subsection{Colors Not Arrows}\n\nWe begin by showing that we do not need to consider the directions of the arrows, only their colors and locations in the grid.\n\nLet \\(B\\) be a board with specified locations of sinks,\nand let \\(\\partial B\\) be the set of edges on the boundary of \\(B\\).\nSuppose we are given an assignment of colors to the cells of \\(B\\)\nthat agrees with the colors of the sinks;\nlet \\(C_i\\) be the set of grid cells colored with color \\(i\\).\nWe call two cells of \\(C_i\\), or a cell of \\(C_i\\) and a boundary edge\n$e \\in \\partial B$, \\defn{visible} to each other if and only if\nthey are in the same row or the same column\nand no sink of a color other than \\(i\\) is between them.\nLet \\(G_i\\) be the graph whose vertex set is \\(C_i \\cup \\partial B\\),\nwith edges between pairs of vertices that are visible to each other.\n\n\\begin{lemma}\n \\label{lem:perfect-colors}\n Let \\(B\\) be a board with specified locations of sinks.\n Then \\(B\\) admits a perfect layout if and only if it is possible to choose\n colors for the remaining cells of the grid such that, for each color~\\(i\\),\n the graph \\(G_i\\) is connected.\n\\end{lemma}\n\\begin{proof}\n ($\\Longrightarrow$)\n Without loss of generality, assume that\n the perfect layout has the minimum possible number of arrows.\n Color the cells of the board with the same colors as the sinks and arrows in the perfect layout.\n (If a cell is empty in the perfect layout,\n then 
give it the same color as an adjacent cell;\n this does not affect connectivity.)\n Fix a color \\(i\\).\n Every boundary edge is connected to the sink of color \\(i\\)\n by the path a packet of color \\(i\\) follows when entering from that edge.\n (In particular, the path cannot go through a sink of a different color.)\n By minimality of the number of arrows in the perfect layout,\n every arrow of color \\(i\\) is included in such a path.\n Therefore \\(G_i\\) is connected.\n\n ($\\Longleftarrow$)\n We will replace each cell by an arrow of the same color to form a perfect layout.\n Namely, for each color~\\(i\\), choose a spanning tree of \\(G_i\\)\n rooted at the sink of color~\\(i\\),\n and direct arrows from children to parents in this tree.\n By connectivity, any packet entering from a boundary edge\n will be routed to the correct sink, walking up the tree to its root.\n\\end{proof}\n\n\\subsection{Impossible Boards}\n\\label{sec:impossible-boards}\n\nNext we show that certain boards cannot have perfect layouts.\nFirst we give arguments about boards containing sinks\ntoo close to the boundary or each other.\nThen we give an area-based constraint on board size.\n\n\\begin{lemma}\n \\label{lem:sink-distance}\n If there are fewer than $c-1$ blank cells in a row or column\n between a sink and a boundary of the grid,\n then there is no perfect layout.\n\\end{lemma}\n\\begin{proof}\n A perfect layout must prevent packets of the other \\(c-1\\) colors\n entering at this boundary from reaching this sink;\n this requires enough space for \\(c-1\\) arrows.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:impossible-c3}\n For $c=3$, a board has no perfect layout if either\n (as shown in Figure~\\ref{fig:impossible})\n \\begin{enumerate}[(a)]\n \\item a data sink is two cells away from three boundaries and adjacent to another sink;\n \\item a data sink is two cells away from two incident boundaries and is adjacent to two other sinks;\n \\item a data sink is two cells away 
from two opposite boundaries and is adjacent to two other sinks; or\n \\item a data sink is two cells away from three boundaries and is one blank cell away from a pair of adjacent sinks.\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{figure}\n \\centering\n \\subcaptionbox{}{\\includegraphics[scale=0.75]{figs\/impossible1}}\\hfil\n \\subcaptionbox{}{\\includegraphics[scale=0.75]{figs\/impossible2}}\\hfil\n \\subcaptionbox{}{\\includegraphics[scale=0.75]{figs\/impossible3}}\\hfil\n \\subcaptionbox{}{\\includegraphics[scale=0.75]{figs\/impossible4}}\n \\caption{Sink configurations with no perfect layout.\n Dots indicate arrows of forced colors\n (up to permutation within a row or column).}\n \\label{fig:impossible}\n\\end{figure}\n\n\\begin{proof}\n Assume by symmetry that, in each case, the first mentioned sink is red.\n\n Cases (a), (b), and (c):\n The pairs of cells between the red sink and the boundary\n (marked with dots in the figure) must contain a green arrow and a blue arrow\n to ensure those packets do not reach the red sink.\n Thus there are no available places to place a red arrow\n in the same row or column as the red sink,\n so red packets from other rows or columns cannot reach the red sink.\n \n Case (d): The pairs of cells between the red sink and the boundary\n (marked with green and blue dots in the figure)\n must contain a green arrow and a blue arrow to ensure those packets\n do not collide with the red sink.\n Thus the blank square between the red sink and the other pair of sinks\n must be a red arrow pointing toward the red sink,\n to allow packets from other rows and columns to reach the red sink.\n Assume by symmetry that the sink nearest the red sink is green.\n As in the other cases, the pairs of cells between the green sink and the\n boundary must be filled with red and blue arrows.\n Thus there are no green arrows to route green packets from other\n rows or columns to the green sink.\n \n \n \n\\end{proof}\n\nWe now prove a constraint on the 
sizes of boards that admit a perfect layout.\n\n\\begin{lemma}\n \\label{lem:size-constraint}\n Let \\(c\\) be the number of colors.\n Suppose there is a perfect layout on a board where \\(m\\) and \\(n\\) are respectively the number of rows and columns, and \\(p\\) and \\(q\\) are respectively the number of rows and columns that contain at least one sink.\n Then\n \\begin{equation}\n \\label{eqn:size-constraint}\n c(m + n) + (c-2)(p + q) \\le m n - c.\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\n Each of the \\(m - p\\) unoccupied rows must contain \\(c\\) vertical arrows in order to redirect packets of each color out of the row.\n Each of the \\(p\\) occupied rows must contain \\(c-1\\) vertical arrows to the\n left of the leftmost sink in order to redirect incorrectly colored packets\n from the left boundary edge away from that sink;\n similarly, there must be \\(c-1\\) vertical arrows to the right of the\n rightmost sink.\n Thus we require \\(c(m - p) + 2(c - 1)p = c m + (c - 2)p\\)\n vertical arrows overall.\n By the same argument, we must have \\(c n + (c - 2)q\\) horizontal arrows,\n for a total of\n \\(c(m + n) + (c - 2)(p + q)\\)\n arrows.\n There are \\(m n - c\\) cells available for arrows, which proves the claim.\n\\end{proof}\n\nUp to insertion of empty rows or columns, rotations, reflections, and\nrecolorings, there are six different configurations that $c=3$ sinks\nmay have with respect to each other, shown in Figure~\\ref{fig:3-sink-configs}.\nWe define a board's \\defn{type} according to this configuration of its sinks\n(C, I, J, L, Y, or \/).\n\n\\begin{figure}[htbp]\n \\centering\n \\hfil\n \\subcaptionbox{C}{\\includegraphics[scale=0.9]{figs\/config-C}}\\hfil\n \\subcaptionbox{I}{\\includegraphics[scale=0.9]{figs\/config-I}}\\hfil\n \\subcaptionbox{J}{\\includegraphics[scale=0.9]{figs\/config-J}}\\hfil\n \\subcaptionbox{L}{\\includegraphics[scale=0.9]{figs\/config-L}}\\hfil\n 
\\subcaptionbox{Y}{\\includegraphics[scale=0.9]{figs\/config-Y}}\\hfil\n \\subcaptionbox{\/}{\\includegraphics[scale=0.9]{figs\/config-slash}}%\n \\hfil\n \\caption{The six possible configurations of three sinks up to rotations, reflections, recolorings, and removal of empty rows.}\n \\label{fig:3-sink-configs}\n\\end{figure}\n\nA board's type determines the values of \\(p\\) and \\(q\\)\nand thus the minimal board sizes as follows.\nDefine a board to have size \\defn{at least} $m \\times n$ if\nit has at least $m$ rows and at least $n$ columns, or vice versa.\n\n\\begin{lemma}\n \\label{lem:size-constraint-c3}\n For a perfect layout to exist with \\(c=3\\), it is necessary that:\n \\begin{itemize}\n \\item Boards of type Y or \/ have size at least $7\\times8$.\n \\item Boards of type C or J have size at least $7\\times8$ or $6\\times9$.\n \\item Boards of type L have size at least $7\\times7$ or $6\\times9$.\n \\item Boards of type I have size at least $7\\times7$, $6\\times9$, or $5\\times11$.\n \\end{itemize}\n\\end{lemma}\n\\begin{proof}\n These bounds follow from Lemma~\\ref{lem:size-constraint} together with the requirement from Lemma~\\ref{lem:sink-distance} that it be possible to place sinks at least two cells away from the boundary.\n\\end{proof}\n\n\\subsection{Constructing Perfect Layouts}\n\nIn this section, we complete our characterization of boards\nwith perfect layouts for \\(c=3\\).\nWe show that Lemmas \\ref{lem:sink-distance}, \\ref{lem:impossible-c3}, and\n\\ref{lem:size-constraint-c3} are the only obstacles to a perfect layout:\n\n\\begin{theorem}\n \\label{thm:characterization-c3}\n A board with $c=3$ sinks has a perfect layout if and only if the following conditions all hold:\n \\begin{enumerate}\n \\item All sinks are at least two cells away from the boundary\n (Lemma~\\ref{lem:sink-distance}).\n \\item The board does not contain any of the four unsolvable configurations in Figure~\\ref{fig:impossible} (Lemma~\\ref{lem:impossible-c3}).\n 
\\item The board obeys the size bounds of Lemma~\\ref{lem:size-constraint-c3}.\n \\end{enumerate}\n\\end{theorem}\n\nWe call a board \\defn{minimal} if it has one of the minimal dimensions\nfor its type as defined in Lemma~\\ref{lem:size-constraint-c3}.\nOur strategy for proving Theorem~\\ref{thm:characterization-c3}\nwill be to reduce the problem to the finite set of minimal boards,\nwhich we then verify by computer.\nWe will accomplish this by removing empty rows and columns\nfrom non-minimal boards to reduce their size,\nwhich we show can always be done while preserving the above conditions.\n\n\\begin{lemma}\n \\label{lem:characterization-c3-minimal}\n All minimal boards satisfying the three conditions of\n Theorem~\\ref{thm:characterization-c3} have a perfect layout.\n\\end{lemma}\n\\begin{proof}\n \\renewcommand{\\qedsymbol}{\\(\\blacksquare\\)}\n The proof is by exhaustive computer search of all such minimal boards.\n We wrote a Python program to generate all possible board patterns,\n reduce each perfect layout problem to Satisfiability Modulo Theories\n (SMT), and then solve it using Z3 \\cite{z3}.\n The results of this search are in Appendix~\\ref{apx:computer-solutions}.\n\\end{proof}\n\nIf \\(B_0\\) and \\(B_1\\) are boards, then we define \\defn{\\(B_0 \\pmb{\\lessdot} B_1\\)}\nto mean that \\(B_0\\) can be obtained by removing a single empty row or column\nfrom \\(B_1\\).\n\n\\begin{lemma}\n \\label{lem:add-row}\n If \\(B_0 \\lessdot B_1\\) and \\(B_0\\) has a perfect layout,\n then \\(B_1\\) also has a perfect layout.\n\\end{lemma}\n\\begin{proof}\n By symmetry, consider the case where $B_1$ has an added row.\n By Lemma~\\ref{lem:perfect-colors}, it suffices to show that we can color\n the cells of the new row while preserving connectivity in each color.\n We do so by duplicating the colors of the cells (including sinks)\n in an adjacent row.\n Connectivity of the resulting coloring follows from that of the original.\n\\end{proof}\n\n\\begin{lemma}\n 
\\label{lem:characterization-c3-nonminimal}\n Let \\(B_1\\) be a non-minimal board satisfying the three conditions\n of Theorem~\\ref{thm:characterization-c3}.\n Then there exists a board \\(B_0\\) that also satisfies all three conditions\n and such that \\(B_0 \\lessdot B_1\\).\n\\end{lemma}\n\\begin{proof}\n By symmetry, suppose $B_1$ is non-minimal in its number $m$ of rows.\n By removing a row from $B_1$ that is not among the first or last two rows\n and does not contain a sink, we obtain a board \\(B'_0\\) satisfying conditions\n (1) and (3) such that \\(B'_0 \\lessdot B_1\\).\n If \\(B'_0\\) also satisfies condition (2), then we are done,\n so we may assume that it does not.\n\n Then \\(B'_0\\) must contain one of the four unsolvable configurations,\n and \\(B_1\\) is obtained by inserting a single empty row or column to\n remove the unsolvable configuration.\n Figure~\\ref{fig:perfect-reductions} shows all possibilities for \\(B'_0\\),\n as well as the locations where rows or columns may be inserted to yield\n a corresponding possibility for \\(B_1\\).\n (\\(B'_0\\) may have additional empty rows and columns beyond those shown,\n but this does not affect the proof.)\n For each such possibility, Figure~\\ref{fig:perfect-reductions} highlights\n another row or column which may be deleted from \\(B_1\\) to yield\n \\(B_0 \\lessdot B_1\\) where \\(B_0\\) satisfies all three conditions.\n\\end{proof}\n\n\\begin{figure}\n \\centering\n \\hfil\n \\subcaptionbox{$L$, $7\\times7$}{\\includegraphics[scale=0.6]{solver\/reductions\/L7_7_reduction}}\\hfill\n \\subcaptionbox{$L$, $6\\times9$}{\\includegraphics[scale=0.6]{solver\/reductions\/L6_9_reduction}}\\hfill\n \\subcaptionbox{\\label{fig:perfect-reductions-row-choice}$I$, $5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_0_reduction}}\\hfill\n \\subcaptionbox{$I$, $5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_1_reduction}\\vspace{0.2in}}\\hfill\n \\subcaptionbox{$I$, 
$5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_2_reduction}}%\n \\hfil\n \\caption{\n All boards satisfying conditions (1) and (3) but not (2),\n up to rotations, reflections, and recolorings.\n An empty row or column may be inserted in any of the locations\n marked ``$+$'' to yield a board satisfying all three conditions.\n Removing the row or column marked ``$-$'' then preserves the conditions.\n In case~(c),\n \n remove a row that does not contain the blue sink.\n In case~(d),\n $\\vcenter{\\hbox{\\includegraphics[scale=0.6]{figs\/dotdotdot}}}$\n denotes zero or more rows.\n }\n \\label{fig:perfect-reductions}\n\\end{figure}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:characterization-c3}]\n It follows from Lemmas \\ref{lem:sink-distance}, \\ref{lem:impossible-c3},\n and \\ref{lem:size-constraint-c3} that all boards with perfect layouts\n must obey the three properties of the theorem.\n We prove that the properties are also sufficient\n by induction on the size of the board.\n As a base case, the claim holds for minimal boards\n by Lemma~\\ref{lem:characterization-c3-minimal}.\n For non-minimal boards \\(B_1\\),\n Lemma~\\ref{lem:characterization-c3-nonminimal} shows that\n there is a smaller board \\(B_0\\)\n that satisfies all three conditions and such that \\(B_0 \\lessdot B_1\\).\n By the inductive hypothesis, \\(B_0\\) has a perfect layout.\n Lemma~\\ref{lem:add-row} shows that \\(B_1\\) also has a perfect layout.\n\\end{proof}\n\n\n\\section{Open Questions}\n\nThe main complexity open question is whether {TGINGTLI} is $\\Sigma_2^P$-complete.\nGiven our NP- and coNP-hardness results, we suspect that this is true.\n\nOne could also ask complexity questions of more restrictive versions of the game.\nFor example, what if the board has a constant number of rows?\n\nWhen characterizing perfect layouts, we saw many of our lemmas generalized to different numbers of colors. 
It may be interesting to further explore the game and try to characterize perfect layouts with more than three colors.\n\nA related problem is which boards and configurations of sinks\nadmit a \\defn{damage-free} layout, where any packet entering from\nthe boundary either reaches the sink of the correct color\nor ends up in an infinite loop. Such a layout avoids losing,\nand in the game as implemented, such a layout actually wins the game\n(because the player wins if there is ever insufficient room\nfor a new source to be placed).\nCan we characterize such layouts like we did for perfect layouts?\n\nPerfect and damage-free layouts are robust to any possible sources.\nHowever, for those boards that do not admit a perfect or damage-free layout,\nit would be nice to have an algorithm that determines whether a given\nset of sources or sequence of packets still has a placement of arrows\nthat will win on that board.\nBecause the board starts empty except for the sinks,\nour hardness results do not apply.\n\nHaving a unique solution is often a desirable property of puzzles. Thus it is natural to ask about ASP-hardness and whether counting the number of solutions is \\#P-hard.\n\n\n\n\\section*{Acknowledgments}\n\nThis work was initiated during open problem solving in the MIT class on\nAlgorithmic Lower Bounds: Fun with Hardness Proofs (6.892)\ntaught by Erik Demaine in Spring 2019.\nWe thank the other participants of that class\nfor related discussions and providing an inspiring atmosphere.\nIn particular, we thank Quanquan C. 
Liu for helpful discussions\nand contributions to early results.\n\nMost figures of this paper were drawn using SVG Tiler\n[\\url{https:\/\/github.com\/edemaine\/svgtiler}].\nIcons (which match the game) are by looneybits and\nreleased in the public domain\n[\\url{https:\/\/opengameart.org\/content\/gui-buttons-vol1}].\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Introduction}\nThere are insufficiently large or no datasets for research questions on camera-based AI systems from the field of medical operating rooms. Use cases for AI systems in this context include sterility of health professionals, whether and where certain medical devices are located or action recognition of health professionals. Furthermore, datasets are, if at all, only institution-related and not publicly available due to data protection regulations and ethics requirements.\n\nThe successes of deep learning in recent years are among others, due to the availability of large datasets such as Imagenet \\cite{ILSVRC15} for Image Classification, or MS COCO \\cite{DBLP:journals\/corr\/LinMBHPRDZ14} for Object Bounding Box Detection. In addition, the research of new methods or architectures like \\cite{He2016}\\cite{Lin2020}\\cite{Tan_2020}\\cite{Bochkovskiy2020} and the research of high performance hardware for parallel computing are to be mentioned. The authors of \\cite{Sze2018}\\cite{LeCun2019} analyze the hardware topic in depth. In this work however, special focus is put on the availability of datasets and methods for dataset generation in order to reduce the necessary amount of real data from the target domain. \n\nSeveral works already deal with the generation of synthetic data and with the goal of reducing the reality gap between synthetic and real data. Among these are the work on Domain Randomization (DR) and Structured Domain Randomization (SDR) \\cite{DrTobin}\\cite{DrTremblay}\\cite{Prakash_2019}. 
In addition to other work, it has already been shown that the use of synthetic image data can decrease the required amount of real data \\cite{DrTremblay}. Likewise, synthetic data of persons exist \\cite{h36m_pami}. However, the challenge lies in the specific characteristics of a medical intervention space. Health professionals wear special clothing with sometimes multiple layers, wear sterile gloves, masks and hairnets. The differences between conventional human data and data from the medical field are large. Nevertheless, Domain Randomization techniques sound promising for use in research questions around medical interventions.\n\nThis work presents a comparison in terms of detection accuracy and generalizability of different methods for synthetic clothing generation using either 3D clothing scans (SCANS) or designed CAD clothing (CAD) with SMPL models \\cite{SMPL:2015}. The comparison is performed using the example of medical clothing object detection. Both methods (SCANS, CAD) are incorporated into a domain randomization environment called NDDS \\cite{to2018ndds} and a Structured Domain Randomization environment implemented in Unity, based on \\cite{Technologies2017}, to generate synthetic training data. Likewise, the aim of the presented methodology is to explore a pipeline for the generation of synthetic data for the medical field, so that further research questions from the intervention space can be explored. In addition to synthetic data, real data of different persons are recorded in front of a green screen and in a clinical environment. All data are split into train\/val\/test sets while the clinical dataset serves as a test set for all methods.\n\n\n\\section{Related Work}\n\\label{sec:relatedWork}\nWith the rise of synthetic data generation methods, for example Domain Randomization \\cite{DrTobin}, it has already been shown that synthetic data can reduce the amount of real data required \\cite{DrTremblay}. 
However, one focus of research is the reduction of the reality gap between the synthetic data and the target domain.\n\nHere, the aforementioned Domain Randomization (DR) has turned out to be one way to reduce the gap. One idea of DR is that if enough variance can be generated in the synthetic data, reality represents just another variation of the target domain \\cite{DrTobin}.\n\nThe work of \\textit{Tobin et al.} \\cite{DrTobin} and \\textit{Tremblay et al.} \\cite{DrTremblay} showed that an object detection network for robot grasping or car detection can be trained from synthetic images with random positioning, random lighting, random backgrounds, distractor objects and non-realistic textures alone. In addition, the work of \\textit{Tremblay et al.} showed that the necessary amount of real target-domain data can be reduced while maintaining adequate accuracy when pretraining with DR-generated images.\n\nLikewise, the work of \\textit{Borkman et al.} \\cite{Borkman2021} showed that when using Unity Perception for synthetic data generation, the amount of real-world data could be reduced to 10\\% when used together with the synthetic data, while achieving a better AP score than with all real-world data alone.\n\nDomain Randomization has already been successfully applied in various fields. In addition to the mentioned areas of car detection and robot grasping, the work of Sadeghi et al. 
\\cite{Sadeghi2017} for flying a quadrocopter through indoor environments, \\textit{Zhang et al.} \\cite{DBLP:journals\/corr\/abs-1709-05746} for a table-top object reaching task through clutter or \\textit{James et al.} \\cite{DBLP:journals\/corr\/JamesDJ17} for grasping a cube and placing it in a basket can be named.\n\nThis leads us to believe that DR is a suitable approach for the medical intervention room domain, where no real data is largely available and access to that domain is widely restricted.\n\nAblation studies of \\cite{DrTremblay} and \\cite{DrTobin} showed, that high resolution textures and higher numbers of unique textures in the scene improve performance.\nAlso, \\cite{DBLP:journals\/corr\/abs-1709-05746} come to the conclusion, after testing their hypothesis, that using complex textures yields better performance than using random colors.\n\nIn contrast to the domain randomization approach is the photorealistic rendering of the scene and objects. A number of datasets have been created for this purpose in recent years. Here the works of \\cite{Tremblay2018a} \\cite{h36m_pami} \\cite{Varol2017LearningFS} or \\cite{GraspingDRphoto} are to be mentioned. Some of these works combine real image data with domain randomization and photorealistic rendered image data.\n\nIn \\cite{Tremblay2018a} a photorealistic rendered dataset was created for 21 objects of the YCB dataset. Here, the objects are rendered in different scenes with collision properties when falling down. The dataset is intended to accelerate progress in the area of object detection and pose estimation. \n\nIn \\cite{GraspingDRphoto}, domain randomization is combined with photo realistically rendered image data, for robotic grasping of household objects. Using the data generated in this way, the authors have managed to explore a real-time system for object detection and robot grasping with sufficient accuracy. 
They also showed that the combination of both domains improved performance as opposed to either one alone.\n\nIn the field of human pose estimation, the works of \\cite{h36m_pami} and \\cite{Varol2017LearningFS} need to be mentioned. Both works were able to show that the performance of networks can be increased by using synthetic and animated persons, respectively.\n\nThe work of \\cite{Varol2017LearningFS} generates photo-realistic synthetic image data and the corresponding ground truth for body part classification.\n\nIn \\cite{h36m_pami}, animated persons are integrated into mixed reality environments. The movements were recorded by actors in a motion capture scenario and transferred to 3D-scanned meshes. In their experiments, they were able to achieve a 20\\% increase in performance compared to the largest training set available in this domain.\n\nState-of-the-art models for realistic human body shapes are the SMPL models introduced by \\cite{SMPL:2015} and improved by STAR in \\cite{Osman2020}. SMPL stands for Skinned Multi-Person Linear Model, which according to the authors is a skinned vertex-based model that represents a wide variety of human shapes. In their work, they learn male and female body shapes from the CAESAR dataset \\cite{CAESAR}. Their model is compatible with a wide variety of rendering engines like Unity or Unreal and is therefore highly suited for synthetic data generation of humans. There also exist extensions to the SMPL model, like MANO and SMPL-H, which introduce a deformable hand model into the framework. MANO \\cite{MANO:SIGGRAPHASIA:2017} is learned from 1000 high-resolution 3D scans of various hand poses.\n\n\\section{Method}\n\\label{sec:methods}\nAs previously mentioned, real-world data collection in medical intervention rooms is complex, costly, and requires approval from an ethics board and the persons involved. 
\nAs shown in the previous section, DR can help train an object detection network with sufficient performance in real-world applications.\n \nHowever, one challenge in dataset generation for the medical intervention space is domain-specific clothing. We argue that randomizing the clothing textures with random textures would help improve detection rates of the clothing types, but when applied in real-world applications, for example, a colored T-shirt would not be distinguishable from the targeted blue-colored area clothing. For the general detection of cars as in \\cite{DrTremblay}, the randomization technique makes sense, but for the domain-specific use case presented here, something else should be used in our opinion.\n\n\nThe questions we try to address in this work are:\n\\begin{enumerate}\n\\item How can health professionals be modeled for synthetic data generation?\n\\item Which techniques are best suited for SDR\/DR clothing generation?\n\\item Can we close the reality gap further by including greenscreen data (Mixed Reality, MR)?\n\\item Can the required amount of real data be reduced by using SDR\/DR\/MR?\n\\end{enumerate}\n\nFor point 1) we argue for using a deformable human shape model like the SMPL models. This provides sufficient variance of different human shapes and sizes. For point 2) we explore two different methods of clothing generation. First, we 3D-scan various persons wearing medical clothing and generate a database of different medical clothing scans for each clothing type, which we call SCANS. Second, we commission a professional graphics designer to create assets based on the area clothing, which we call CAD. Regarding point 3) we take images in front of a greenscreen of different persons wearing medical clothing, which we label by hand. 
For point 4) we explore whether and by how much the required amount of real data can be reduced.\n\nTo address these questions, we set up detection experiments with the Scaled-YOLOv4 object detector \\cite{Wang_2021_CVPR}.\n\nThe classes to be detected are:\n\\begin{itemize}\n\t\\item humans\n\t\\item area clothing shirt\n\t\\item area clothing pants\n\t\\item sterile gown\n\t\\item medical face mask\n\t\\item medical hairnet\n\t\\item medical gloves\n\\end{itemize}\n\nExamples of the medical clothing are given in figure \\ref{fig:expClothing}.\n\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/shirt}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/glove} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/pants}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/mask} \\\\\n\t\\end{tabular}\t\n\t\\caption{Medical clothing examples like area shirt, pants, mask and glove. These clothing types among others represent the target clothing for our object detection network.}\n\t\\label{fig:expClothing}\n\\end{figure}\n\n\n\\subsection{Character Creation}\n\\label{sec:charCreation}\nThe medical characters we use in SDR\/DR are built through a combination of SMPL body models, textures, animations and clothing assets. In the following sections, each of these components is described in detail. \n\n\\textbf{Body Model}\\\\\nAs the base of our characters we use the male and female variants of the SMPL+H model from \\cite{MANO:SIGGRAPHASIA:2017}. The models cover a huge variety of realistic human shapes, which can be randomized through ten blend shapes. We decide to use the extended SMPL+H model instead of the original SMPL model \\cite{SMPL:2015}.
This is because one of our cloth items is gloves, and the hand rig of the SMPL+H model allows us to create more realistic deformations of the glove asset. \n\n\\textbf{Human Texture}\\\\\nTo add more variation and realism to the appearance of the characters, the texture maps from \\cite{Varol2017LearningFS} are used.\nOut of the 930 textures, only 138 (69 per gender) have been used. Since we created our own cloth assets, only the textures of people in undergarments were relevant. Those texture maps were created out of 3D body scans from the CAESAR dataset \\cite{CAESAR} and cover a variety of skin colors and identities; all of the faces have been anonymized \\cite{Varol2017LearningFS}. \n\n\\textbf{Pose} \\\\\nTo provide a variety of realistic body poses, the models were animated through Motion Capture (MoCap) data, which has been captured within our laboratory. We track the movement of 74 joints down to the fingertips. We use an inertial Motion Capture suit called Perception Neuron Studio together with its glove add-on \\footnote{Perception Neuron Studio suite and gloves addon: https:\/\/neuronmocap.com\/perception-neuron-studio-system}. In order to keep the dataset simple, we only used one animation in our experiments. More varied animations could, however, easily be added.\\\\\n\n\\textbf{3D-scanned Cloth Assets (SCANS)} \\\\\nA 3D scanner called Artec Leo\\footnote{3D Scanner Artec Leo: https:\/\/www.artec3d.com\/portable-3d-scanners\/artec-leo } with a 3D resolution of 0.2mm was used to capture the medical clothes. For our synthetic training dataset we used clothing scans of 4 male and 4 female models. In this way, variations of the real-world textures, including reflections, wrinkles, colors and surface texture information, are collected. After building an initial model from the 3D scanner, we adapt the clothes to fit the standard male and female SMPL+H characters using 3D modelling techniques.
As the gloves should match the character's fingers exactly to ensure correct finger animations, they have been modeled instead of scanned. Afterwards, the textures of scanned gloves have been applied to the models. When the cloth assets are fitted, we create cloth blend shapes, which match the blend shapes of the SMPL+H model, in order to make them adaptable to the character. Additionally, the cloth meshes are bound to the same rig as the character by transferring the skin weights of the SMPL+H models. In this way, the cloth assets are just as adaptable in shape and pose as the body model. \nAccording to our research, medical clothes usually come in the colors blue, green and light pink. To cover this variation in our dataset, without scanning more cloth or performing augmentations on the whole image, we augmented the texture maps. Examples of the scanned and rigged clothing assets can be seen in figure \\ref{fig:expClothingScans}.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/mask-and-net}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/mask-and-net-1} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/shirt}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/gown} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of our 3D-scanned clothing assets with color augmentation.}\n\t\\label{fig:expClothingScans}\n\\end{figure}\n\n\n\\textbf{Designed Cloth Assets (CAD)} \\\\\nTo evaluate the performance of the 3D-scanned cloth assets, we compare them to hand-designed clothing assets. For this purpose, we asked a designer\\footnote{azeemdesigns: https:\/\/www.fiverr.com\/azeemdesigns?\\\\source=order\\_page\\_user\\_message\\_link} on Fiverr to model the clothes. Examples of those assets can be seen in figure \\ref{fig:expClothingCad}.
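The texture-map color augmentation mentioned above can be illustrated with a small, hypothetical sketch (the hue values and the `recolor_texture` helper are our own illustration, not the exact pipeline): the hue of every pixel is moved to the target color family while the scanned lightness and saturation, i.e. wrinkles and shading, are preserved.

```python
import colorsys

# Illustrative target hues for the three observed medical clothing colors.
TARGET_HUES = {"blue": 0.61, "green": 0.33, "light_pink": 0.92}

def recolor_texture(texture, target):
    """Shift every pixel's hue to the target color family while keeping
    the scanned lightness/saturation (wrinkles, shading, reflections).
    A texture is modeled here as nested lists of RGB triples in [0, 1]."""
    target_h = TARGET_HUES[target]
    out = []
    for row in texture:
        new_row = []
        for (r, g, b) in row:
            h, l, s = colorsys.rgb_to_hls(r, g, b)
            new_row.append(colorsys.hls_to_rgb(target_h, l, s))
        out.append(new_row)
    return out

# A tiny 1x2 "texture": a dark and a light blue pixel, recolored to green.
tex = [[(0.1, 0.2, 0.8), (0.5, 0.6, 0.9)]]
green_tex = recolor_texture(tex, "green")
```

This keeps the per-pixel lightness intact, so the surface detail captured by the 3D scan survives the recoloring.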
We first evaluated to what extent freely available assets from asset stores can be used for this purpose. However, there are no assets available that fully match our specific clothing. Therefore, we have decided to have the assets designed. The designed assets have been processed in the same way as our scanned assets. They are also deformable and are bound to the same rig.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/mask-and-hat-1}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/mask-and-hat-2} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/shirt-1}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/gown-1} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of the designed clothing assets with color augmentation.}\n\t\\label{fig:expClothingCad}\n\\end{figure}\n\n\\textbf{Modular Character DR, NDDS} \\\\\nTo create the DR datasets, the body models, animations, cloth assets and all other components are combined in a character blueprint in Unreal Engine 4.22.3. Here we have created a modular character that is able to take on random shapes, textures and combinations of clothing. Both the 3D-scanned and the designed clothes vary in shape with the character and move with the animation. When the data capturing begins, the character first iterates over 1700 body shapes. The SMPL shape parameters for these body shapes have been taken from \\cite{Varol2017LearningFS} and represent 1700 male and female body types taken from the CAESAR dataset. Next, one randomly sampled human texture is applied to the body model. Afterwards, one cloth item out of each category is randomly sampled, textured and added to the body model. After the blend shapes of all cloth assets are adapted to the current body shape, one animation is chosen.
When the animation has finished playing, the next body shape is chosen and the process repeats itself. We create two separate datasets for DR, one with 3D-scanned (SCANS) assets and another with designed (CAD) assets. An activity diagram, which represents the blueprint for Modular Character creation in NDDS, is given in figure \\ref{fig:flowChartBlueprint}.\\\\\n\n\\textbf{Modular Character SDR, Unity}\\\\\nTo be able to use SMPL models in Unity, the male and female models are converted to a Humanoid character and an avatar is created from the model. Variation of clothing in Unity is made possible using a custom-made component which selects random pieces of clothing from each category and applies them to a randomly selected SMPL model. In order to enable the clothing to animate together with the model, the bones of each piece of clothing are changed to match those of the model using said component. Afterwards, a random animation can be assigned to the model using the Animator component with a custom Animator Controller by setting the Motion parameter. Finally, the model including all selected clothing is instantiated and randomly placed inside the scene. The animation is then moved along during data generation by setting the Motion Time parameter of the Animator Controller to a random value. The character itself is adapted using the custom-made randomization components by varying the texture of the models as well as the material of the clothing using predefined color variations. Models as well as clothing and position are varied each frame. The shape of the models is not modified as it resulted in clipping of the clothing and other unrealistic behavior.
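The per-frame randomization just described can be summarized by the following minimal sketch (the asset names and the `sample_frame` helper are hypothetical; the actual logic runs inside Unity components):

```python
import random

# Illustrative asset pools; the real pipeline draws from the SCANS/CAD databases.
CLOTHING = {
    "area_shirt": ["shirt_a", "shirt_b"],
    "area_pants": ["pants_a", "pants_b"],
    "sterile_gown": ["gown_a"],
    "face_mask": ["mask_a", "mask_b"],
    "hairnet": ["net_a"],
    "gloves": ["gloves_a"],
}
COLORS = ["blue", "green", "light_pink"]
BODY_MODELS = ["smplh_male", "smplh_female"]

def sample_frame(rng):
    """One randomized character configuration per rendered frame:
    body model, one clothing asset per category, a color variant,
    and a random normalized animation (Motion Time) value."""
    return {
        "body": rng.choice(BODY_MODELS),
        "clothing": {cat: rng.choice(assets) for cat, assets in CLOTHING.items()},
        "color": rng.choice(COLORS),
        "motion_time": rng.random(),
    }

rng = random.Random(0)  # seeded for reproducible dataset generation
frames = [sample_frame(rng) for _ in range(3)]
```

Every frame thus receives a full, independently sampled outfit, which mirrors the modular character behavior described above.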
An activity diagram is shown in figure \\ref{fig:flowChartUnity}.\n\n\\begin{figure}\n\t\\centering\t\n\t\\includegraphics[width=.49\\textwidth]{figures\/activity} \\\\\t\n\t\\caption{Activity Diagram of the modular character blueprint used in Unreal with NDDS.}\n\t\\label{fig:flowChartBlueprint}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\t\n\t\\includegraphics[width=.49\\textwidth]{figures\/activityUnity} \\\\\t\n\t\\caption{Activity Diagram of the character creation used in Unity for SDR.}\n\t\\label{fig:flowChartUnity}\n\\end{figure}\n\n\n\n\n\\textbf{Domain Randomization} \\\\\nFor the synthetic data generation of DR, an Unreal Engine 4 plugin called NDDS \\cite{to2018ndds} was used. This allows the generation of RGB images at rates similar to real cameras, as well as depth image data and segmentation masks of the scene within Unreal Engine 4 (UE4). The plugin also creates bounding box labeling data for each object in the scene in 2D and 3D. The tool was specifically developed for Domain Randomization and therefore provides tools for scene randomization like object or camera position, lighting and distractor objects, among others. Using the aforementioned Modular Character blueprint, NDDS enables the generation of synthetic datasets for sterile clothing using either the 3D-scanned or the designed clothing assets. Example images are given in the top row of figure \\ref{fig:expSynth}.\n\n\\textbf{Structured Domain Randomization} \\\\\nFor dataset generation of SDR, we used a Unity plugin called ML-ImageSynthesis \\cite{Technologies2017} as a base and adapted it to work with the universal rendering pipeline (URP) for quality improvement. Using Unity 2020.3.32f1, additional components have been added to enable an export of additional metadata regarding each generated image such as camera parameters, bounding boxes and world position in .json format.
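A minimal sketch of such a per-image metadata export could look as follows (the field names are illustrative assumptions, not the exact NDDS or Unity schema):

```python
import json

def export_frame_metadata(path_stub, camera, objects):
    """Serialize camera parameters and per-object labels to JSON,
    one record per rendered frame."""
    record = {
        "camera": camera,  # e.g. field of view and world pose
        "objects": [
            {
                "class": o["class"],
                # [x_min, y_min, x_max, y_max] in pixel coordinates
                "bbox_2d": o["bbox_2d"],
                # [x, y, z] position in scene units
                "world_position": o["world_position"],
            }
            for o in objects
        ],
    }
    text = json.dumps(record, indent=2)
    # In the real pipeline this would be written next to the image, e.g.:
    # open(path_stub + ".json", "w").write(text)
    return text

meta = export_frame_metadata(
    "frame_000014",
    {"fov": 60.0, "position": [0.0, 1.7, -2.0]},
    [{"class": "sterile_gown", "bbox_2d": [120, 40, 260, 400],
      "world_position": [0.3, 0.0, 1.2]}],
)
```

Storing one such JSON record per image keeps the labels directly convertible into the bounding-box format expected by the object detector.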
SDR is made possible by making use of a variety of custom-made components which allow the randomization of parameters such as lighting, material, texture and position. The plugin ProBuilder provided by Unity was used to build an intervention room based on the target domain of the real dataset (Klinikum). Scene randomization is achieved by utilizing the aforementioned randomization components.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth, trim={5cm 2cm 25cm 2cm},clip]{figures\/NDDS_exp\/000014.jpg}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 15cm 9.5cm},clip]{figures\/NDDS_exp\/000024.jpg} \\\\\n\t\t\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 2.5cm 9.5cm},clip]{figures\/SDR_exp\/scans.jpg}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 2.5cm 4.5cm},clip]{figures\/SDR_exp\/cad.jpg} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of synthetic RGB image data from DR and SDR datasets (top: DR, bottom: SDR, left: SCANS, right: CAD).}\n\t\\label{fig:expSynth}\n\\end{figure}\n\n\n\\subsection{Datasets}\n\\label{sec:dataset}\nTo investigate the potential accuracy difference between 3D scanned clothing (SCANS) and designed clothing (CAD) and the amount of necessary real data, different datasets were generated.\n\nFirst, synthetic datasets of DR and SDR were generated for both scanned and designed clothing, using the presented pipelines in Unreal Engine and Unity. Second, a dataset in front of a greenscreen was collected (MR-DR). It consists of 8 persons in the training dataset and 2 persons in the validation dataset. The recorded persons move in front of the green screen with a certain grasping motion, which is also used as motion animation for the synthetic data. Finally, a dataset of the target domain was recorded (Klinikum). It serves as a baseline comparison for all models and also provides the test data.
All datasets are divided into training and validation data. \n\nExamples of real data in front of the green screen with exchanged backgrounds can be seen in figure \\ref{fig:realDataExps}. Examples of the synthetic data can be seen in figure \\ref{fig:expSynth} and finally examples from the clinical test data can be seen in figure \\ref{fig:expKlinikumTest}.\n\nTable \\ref{tab:datasetDistr} shows a breakdown of the sizes and distributions of the datasets.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth,trim={15cm 5cm 25cm 2cm},clip]{figures\/gsExps\/i-1617957374561}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={20cm 2cm 20cm 5cm}, clip]{figures\/gsExps\/i-1617958913261}\n\t\\end{tabular}\t\n\t\\caption{Examples of the greenscreen dataset with exchanged backgrounds.}\n\t\\label{fig:realDataExps}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth,trim={20cm 5cm 20cm 5cm},clip]{figures\/expKlinikum\/i-1630600026553}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={20cm 5cm 20cm 5cm}, clip]{figures\/expKlinikum\/i-1630600245053}\n\t\\end{tabular}\t\n\t\\caption{Examples of the Klinikum test dataset.}\n\t\\label{fig:expKlinikumTest}\n\\end{figure}\n\n\\begin{table}\n\t\\caption{Sizes of the training, validation and test splits of the different datasets.}\n\t\\centering\n\t\\begin{tabular}{llll}\n\t\t\\toprule\n\t\t\\multicolumn{4}{c}{dataset distributions} \\\\\n\t\t\\midrule\n\t\tdataset name \t& train \t& validation \t& test \\\\\n\t\t\\midrule\n\t\tDRscans \t\t& 10000\t\t& 1000\t\t\t& \/ \t\\\\\n\t\tDRcad \t\t\t& 10000\t\t& 1000\t\t\t& \/\t\t\\\\\n\t\tSDRscans\t\t& 8000\t\t& 2000\t\t\t& \/\t \t\\\\\n\t\tSDRcad\t\t\t& 8000\t\t& 2000\t\t\t& \/\t \t\\\\\n\t\tMR-DR(GS) \t\t& 6443\t\t& 1536\t\t\t& \/\t\\\\\n\t\tKlinikum\t\t& 660\t\t& 110\t\t\t&
331\t\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{tab:datasetDistr}\n\\end{table}\n\n\\section{Experiment Results}\n\\label{sec:experiments}\n\n\nFor the experimental investigation of whether and how well 3D scanned clothing compares to designed CAD clothing for detection in the medical environment and how much additional real data is needed to achieve sufficient accuracy, various tests were carried out.\n\nFor our experiments we used the Scaled-YOLOv4 implementation from GitHub \\cite{Wang_2021_CVPR}.\nAt first, 6 different baseline networks were trained to show a basic comparison of the different methods. These baseline models include trainings with synthetic (DRscans, DRcad, SDRscans, SDRcad), mixed-reality (MR-DR) and real data from the clinic domain (Klinikum train).\n\nTraining was conducted with the default fine-tuning parameters provided by Scaled-YOLOv4\\footnote{https:\/\/github.com\/WongKinYiu\/ScaledYOLOv4}. Only the Mosaic Augmentation ratio parameters $\\alpha$ and $\\beta$ were increased from $8.0$ to $20.0$. Additionally, a green-channel augmentation was used when MR data was present in the training dataset in order to reduce the influence of greenscreen reflections, which caused trouble in some classes. We found experimentally that this gives the best results. All networks were trained for 300 epochs and achieved convergence. Tests were performed with an IoU threshold of 0.5 and a confidence threshold of 0.2. The network used was yolov4-p5 with an image size of 896 in training and test, starting from the provided pretrained weights.\n\nAll trained models were tested on the Klinikum test-set. The results of the baseline models are displayed in table \\ref{tab:baselinesTableResults}.\n\nThe results show that CAD-based synthetic data generally gives better results than SCAN-based data in this experiment.
Since the CAD-based data performed best, we used the SDRcad dataset as the baseline for all follow-up experiments.\n\nTo investigate by how much the amount of real data can be reduced when used together with synthetic or MR data, while maintaining sufficient accuracy, experiments were conducted with a percentage distribution of the Klinikum training data. We decided to use the mosaic augmentation during these experiments as well and to use all datasets as training data instead of running a fine-tuning experiment. We argue that the network can better learn relevant features, while maintaining the advantages of the additional synthetic data, when seeing a variation of all used datasets mixed together with mosaic augmentation than when only fine-tuning. During these experiments we included the aforementioned green-channel augmentation in all trainings. Additionally, the real-data runs are trained with an equal step size.\n\nThe follow-up results with SDRcad, MR-DR as well as real data are shown in table \\ref{tab:followUpResults}. While generally reaching worse results than the SDR-based datasets, the MR dataset provides an additional boost in accuracy when used together with the SDRcad dataset.\n\nThese results show that when using synthetic data, MR data and all real data or only 15\\% real data, the mAP accuracy could be improved further compared to when only using real data. Additionally, they show that with only 15\\% of the real data the gap between the synthetic, MR and real data could be closed.\n\n\n\\begin{table}[ht]\n\t\\small\n\t\\caption{Results on Klinikum test-set for baseline trainings.
*additional green-channel augmentation used}\n\t\\centering\n\t\\begin{tabular}{llllll}\n\t\t\\multicolumn{5}{c}{\\textbf{all}}\\\\\n\t\t\\toprule\t \n\t\tExperiment \t\t& $mAP$ \t\t\t& $mAP50$ \t\t\t& P \t\t\t\t& R \\\\\n\t\t\\toprule\n\t\tKlinikum train \t \t& 81.60\t\t\t\t& 98.28 \t\t\t& 88.99\t\t\t\t& 98.73 \\\\\n\t\t\\midrule\n\t\tDRscans \t\t& 46.06\t\t\t\t& 75.51 \t\t\t& 76.89\t\t\t\t& 77.83 \\\\\n\t\tDRcad \t \t\t& 51.50\t\t\t\t& 77.85 \t\t\t& 76.40\t\t\t\t& 81.56 \\\\\n\t\tSDRscans\t\t\t& 65.52\t\t\t\t& 87.12 \t\t\t& 77.92\t\t\t\t& 89.31 \\\\\n\t\tSDRcad \t\t\t& \\textbf{67.44}\t& \\textbf{90.72} \t& \\textbf{81.98}\t& \\textbf{92.02} \\\\\n\t\tMR-DR* \t\t\t& 60.94\t\t\t\t& 82.29 \t\t\t& 79.54\t\t\t\t& 84.04 \\\\\n\t\t\\bottomrule\t \n\t\\end{tabular}\n\t\\label{tab:baselinesTableResults}\n\\end{table}\n\n\\begin{table}\n\t\\caption{Results on Klinikum test-set for follow-up experiments. For these experiments we used the additional green-channel augmentation on all reported trainings.}\n\t\\centering\n\t\\begin{tabular}{llllll}\n\t\t\\multicolumn{5}{c}{\\textbf{all}}\\\\\n\t\t\\toprule\t \n\t\tExperiment \t\t& $mAP$ \t\t\t& $mAP50$ \t\t\t& P \t\t\t\t& R \\\\\n\t\t\\toprule\n\t\t\\midrule\n\t\tKlinikum(100) \t \t& 81.95\t\t\t\t& \\textbf{98.57} \t& \\textbf{88.88}\t& \\textbf{98.85} \\\\\n\t\t\\midrule\n\t\tSDR+MR+\\\\real(100) & \\textbf{83.35}\t& 98.14 \t\t\t& 88.41\t\t\t\t& 98.59 \\\\\n\t\t\\midrule\n\t\tKlinikum(15) \t\t& 77.52\t\t\t\t& 96.67 \t\t\t& 87.83\t\t\t\t& 97.37 \\\\\n\t\t\\midrule\n\t\tSDR+MR+\\\\real(15)\t& 80.05\t\t\t\t& 96.92 \t\t\t& 87.06\t\t\t\t& 97.64 \\\\\n\t\t\\midrule\n\t\tSDR+MR \t\t\t& 72.00\t\t\t\t& 92.27 \t\t\t& 83.81\t\t\t\t& 94.07 \\\\\n\t\t\\bottomrule\t \n\t\\end{tabular}\n\t\\label{tab:followUpResults}\n\\end{table}\n\n\nInference result images of SDR+MR+real(100) can be seen in figure \\ref{fig:bestScoreKlinikExp}.
Here we used a slightly higher confidence threshold of 0.4 and an IoU threshold of 0.5.\n\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth,trim={15cm 5cm 25cm 0cm},clip]{figures\/resultsKlinikumTest\/i_1630599989803.jpg}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth,trim={20cm 8cm 25cm 2cm},clip]{figures\/resultsKlinikumTest\/i_1630600036303.jpg} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth,trim={25cm 5cm 25cm 10cm},clip]{figures\/resultsKlinikumTest\/i_1630600109803.jpg}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth,trim={20cm 5cm 25cm 5cm},clip]{figures\/resultsKlinikumTest\/i_1630600252803.jpg} \\\\\n\t\\end{tabular}\t\n\t\\caption{Inference results with the SDR+MR+real(100) trained net. Confidence threshold of 0.4 and IoU threshold of 0.5.}\n\t\\label{fig:bestScoreKlinikExp}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe were able to show that the use of SMPL models together with scanned or designed medical clothing is a suitable method for modeling healthcare professionals for AI questions in the intervention space, using the example of medical clothing detection.\nDuring our experiments we found that the designed clothing generally performed better on our test dataset than the 3D-scanned clothes. This result surprised us, as we expected the potentially more accurate textures of the 3D scans to have a positive impact on detection rates. However, it cannot be ruled out that undetected artifacts in the rendering or pre-processing pipeline influenced this result.
In order to make a final statement about the potential of 3D scanned clothing for the modeling of health professionals, further experiments should be conducted.\nUsing Mixed-Reality data together with the synthetic data closed the gap further, and while the margin is quite small, we could show that with synthetic, mixed-reality and 15\\% real data the remaining gap towards 100\\% real data could be nearly closed.\n\nFinally, it can be stated that the presented modeling of health professionals is a promising methodology to address the challenge of missing datasets from medical intervention rooms. We will further investigate it on various tasks around the medical field.\n\n\\section{Acknowledgements}\nThis research project is part of the Research Campus M2OLIE and funded by the German Federal Ministry of Education and Research (BMBF) within the Framework \"Forschungscampus \u2013 public-private partnership for Innovations\" under the funding code 13GW0389C.\n\nDisclaimer: The methods and information presented in this work are based on research and are not commercially available.\n\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet the domain of interest be $D=(0,1)^d$ where $d=2,3$. The Helmholtz\nequation is\n\\begin{equation}\n \\Delta u(x)+\\dfrac{\\omega^2}{c^2(x)}u(x)=f(x),\\quad \\forall x\\in D,\n \\label{eq:helm}\n\\end{equation}\nwhere $u(x)$ is the time-independent wave field generated by the\ntime-independent force $f(x)$, $\\omega$ is the angular frequency and\n$c(x)$ is the velocity field. Commonly used boundary conditions are\napproximations of the Sommerfeld radiation condition. By rescaling the system, we assume $c_{\\min}\\le c(x)\\le c_{\\max}$ where $c_{\\min}$\nand $c_{\\max}$ are of $\\Theta(1)$.
Then $\\omega\/(2\\pi)$ is the typical\nwave number and $\\lambda=2\\pi\/\\omega$ is the typical wavelength.\n\nSolving the equation numerically is challenging in high frequency\nsettings for two reasons. First, in most applications, the equation is\ndiscretized with at least a constant number of points per wavelength,\nwhich makes the number of points in each direction $n=\\Omega(\\omega)$\nand the total degree of freedom $N=n^d=\\Omega(\\omega^d)$ very\nlarge. Second, the system is highly indefinite and has a very\noscillatory Green's function, which makes most of the classical\niterative methods no longer effective.\n\nThere has been a sequence of papers on developing iterative methods\nfor solving \\eqref{eq:helm}. The AILU method by Gander and Nataf\n\\cite{ailu} is the first to use the incomplete LU factorization to\nprecondition the equation. Engquist and Ying \\cite{sweephmf,sweeppml}\ndeveloped a series of sweeping preconditioners based on approximating\nthe inverse of the Schur complements in the LDU factorization and\nobtained essentially $\\omega$-independent iteration numbers. In\n\\cite{stolk2013domaindecomp}, Stolk proposed a domain decomposition\nmethod based on the PML which constructs delicate transmission\nconditions between the subdomains by considering the ``pulses''\ngenerated by the intermediate waves. In \\cite{vion2014doublesweep},\nVion and Geuzaine proposed a double sweep preconditioner based on the\nDirichlet-to-Neumann (DtN) map and several numerical simulations of\nthe DtN map were compared. In\n\\cite{chen2013sourcetrans,chen2013sourcetrans2}, Chen and Xiang\nintroduced a source transfer domain decomposition method which\nemphasizes on transferring the sources between the subdomains. 
In\n\\cite{demanet}, Zepeda-N{\\'u}{\\~n}ez and Demanet developed a novel\ndomain decomposition method for the 2D case by pairing up the waves\nand their normal derivatives at the boundary of the subdomains and\nsplitting the transmission of the waves into two directions. Most\nrecently in \\cite{Liu2015}, Liu and Ying proposed a recursive sweeping\npreconditioner for 3D Helmholtz problems. Other progress includes\n\\cite{parallelsweep,sweepem,sweepemfem,sweepspectral} and we refer to\n\\cite{advances} by Erlangga and \\cite{why} by Ernst and Gander for a\ncomplete discussion.\n\nInspired by \\cite{stolk2013domaindecomp} and these previous\napproaches, we propose a new domain decomposition method in this paper\nwhich shares some similarities with\n\\cite{sweeppml,stolk2013domaindecomp}. The novelty of this new\napproach is that the transmission conditions are built with the\nboundary values of the intermediate waves directly. For each wave\nfield on the subdomains, we divide it into three parts -- the waves\ngenerated by the force to the left of the subdomain, to the right of\nthe subdomain, and within the subdomain itself. This corresponds to an\n$L+D+U$ decomposition of the Green's matrix $G$ as the sum of its\nlower triangular part, upper triangular part and diagonal part. This\nis why we call this new preconditioner the additive sweeping\npreconditioner.\n\nThe rest of this paper is organized as follows. First in Section\n\\ref{sec:1D} we use the 1D case to illustrate the idea of the\nmethod. Then in Section \\ref{sec:2D} we introduce the preconditioner\nin 2D and present the 2D numerical results. Section\n\\ref{sec:3D} discusses the 3D case. Conclusions and some future\ndirections are provided in Section \\ref{sec:Conclusion}.\n\n\\section{1D Illustration}\n\\label{sec:1D}\n\nWe use the PML \\cite{berenger1994pml,chew1994pml,johnson2008pmlnotes}\nto simulate the Sommerfeld condition.
The PML introduces the auxiliary\nfunctions\n\\begin{align*}\n\\sigma(x) &:=\n\\begin{dcases}\n\\dfrac{C}{\\eta}\\left(\\dfrac{x-\\eta}{\\eta}\\right)^2,\\quad &x\\in[0,\\eta),\\\\\n0,\\quad &x\\in[\\eta,1-\\eta],\\\\\n\\dfrac{C}{\\eta}\\left(\\dfrac{x-1+\\eta}{\\eta}\\right)^2,\\quad &x\\in(1-\\eta,1],\n\\end{dcases}\\\\\ns(x) &:= \\left(1+\\ii\\dfrac{\\sigma(x)}{\\omega}\\right)^{-1},\n\\end{align*}\nwhere C is an appropriate positive constant independent of $\\omega$,\nand $\\eta$ is the PML width which is typically around one wavelength.\n\nThe Helmholtz equation with PML in 1D is\n\\begin{equation*}\n\\begin{dcases}\n\\left((s(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)u(x)=f(x),\\quad \\forall x\\in (0,1),\\\\\nu(0)=0,\\\\\nu(1)=0. \n\\end{dcases}\n\\end{equation*}\nWe discretize the system with step size $h=1\/(n+1)$, then $n$\nis the degree of freedom. With the standard central difference\nnumerical scheme the discretized equation is\n\\begin{equation}\n\\label{eqn:1D}\n\\dfrac{s_{i}}{h}\\left(\\dfrac{s_{i+1\/2}}{h}(u_{i+1}-u_{i})-\\dfrac{s_{i-1\/2}}{h}(u_{i}-u_{i-1})\\right)+\\dfrac{\\omega^2}{c_i^2}u_{i}=f_{i}, \\quad \\forall 1\\le i\\le n,\n\\end{equation}\nwhere the subscript $i$ means that the corresponding function is evaluated at $x=ih$.\n\nWe denote Equation \\eqref{eqn:1D} as $A\\pmb u=\\pmb f$, where $\\pmb u$ and $\\pmb f$ are the discrete array of the wave field and the\nforce\n\\begin{align*}\n\\pmb u:=[u_1,\\dots,u_n]^T,\\quad \\pmb f:=[f_1,\\dots,f_n]^T. \n\\end{align*}\nIn 1D, $A$ is tridiagonal and Equation \\eqref{eqn:1D} can be solved\nwithout any difficulty. However, here we are aiming at an approach\nwhich can be generalized to higher dimensions so the rest of this\nsection takes another point of view to solve \\eqref{eqn:1D} instead of\nexploiting the sparsity structure of $A$ directly.\n\nWith the Green's matrix $G=A^{-1}$, $\\pmb u$ can be written\nas $\\pmb u=G\\pmb f$. 
Now let us divide the discrete grid into $m$ parts. We\nassume that $\eta=\gamma h$ and $n=2\gamma+mb-2$ where $\gamma$ and $b$ are some\nsmall constants and $m$ is comparable to $n$, and we define\n\begin{align*}\nX_1 &:= \{ih:1 \le i \le \gamma + b-1 \}, \\\nX_p &:= \{ih:\gamma + (p-1) b \le i \le \gamma + pb-1 \},\quad p=2,\dots, m-1, \\\nX_m &:= \{ih:\gamma + (m-1) b \le i \le 2 \gamma + mb-2 \}, \n\end{align*}\nwhich means that $X_1$ is the leftmost part containing the left PML of the\noriginal problem and a small piece of grid with $b$ points, $X_m$ is\nthe rightmost part containing the right PML and a grid of $b$ points,\nand $X_p, p=2,\dots,m-1$ are the middle parts each of which contains\n$b$ points. $\pmb u_p$ and $\pmb f_p$ are defined as the restrictions\nof $\pmb u$ and $\pmb f$ on $X_p$ for $p=1,\dots,m$, respectively,\n\begin{align*}\n\pmb u_1 &:= [u_1,\dots,u_{\gamma+b-1}]^T,\\\n\pmb u_p &:= [u_{\gamma+(p-1)b},\dots,u_{\gamma + pb-1}]^T,\quad p=2,\dots,m-1,\\\n\pmb u_m &:= [u_{\gamma + (m-1) b},\dots,u_{2 \gamma + m b -2}]^T,\\\n\pmb f_1 &:= [f_1,\dots,f_{\gamma+b-1}]^T,\\\n\pmb f_p &:= [f_{\gamma+(p-1)b},\dots,f_{\gamma + pb-1}]^T,\quad p=2,\dots,m-1,\\\n\pmb f_m &:= [f_{\gamma + (m-1) b},\dots,f_{2 \gamma + m b -2}]^T.\n\end{align*}\nThen $\pmb u=G\pmb f$ can be written as\n\begin{align*}\n\begin{bmatrix}\n\pmb u_1\\\pmb u_2\\ \vdots\\\pmb u_m\n\end{bmatrix}\n=\n\begin{bmatrix}\nG_{1,1}&G_{1,2}&\ldots&G_{1,m}\\\nG_{2,1}&G_{2,2}&\ldots&G_{2,m}\\\n\vdots&\vdots&&\vdots\\\nG_{m,1}&G_{m,2}&\ldots&G_{m,m}\n\end{bmatrix}\n\begin{bmatrix}\n\pmb f_1\\\pmb f_2\\ \vdots\\\pmb f_m\n\end{bmatrix}.\n\end{align*}\n\nBy introducing $\pmb u_{p,q}:=G_{p,q}\pmb f_q$ for $1\le p,q\le m$,\none can write $\pmb u_p=\sum_{q=1}^m \pmb u_{p,q}$.
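The additive decomposition $\pmb u_p=\sum_{q=1}^m \pmb u_{p,q}$ can be checked on a toy problem; the sketch below uses a small illustrative tridiagonal matrix in place of the actual Helmholtz discretization and verifies that the blockwise contributions $G_{p,q}\pmb f_q$ sum to the full solution.

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (complex-capable)."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, m):
            fac = M[r][k] / M[k][k]
            for c in range(k, m + 1):
                M[r][c] -= fac * M[k][c]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (M[k][m] - sum(M[k][c] * x[c] for c in range(k + 1, m))) / M[k][k]
    return x

# Illustrative complex tridiagonal system standing in for A.
n = 6
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -2.0 + 0.5j
    if i > 0:
        A[i][i - 1] = 1.0
    if i < n - 1:
        A[i][i + 1] = 1.0

f = [1.0, 2.0, 0.0, -1.0, 0.5, 3.0]
u = solve(A, f)

# Partition the grid into m = 3 parts of b = 2 points each.
parts = [[0, 1], [2, 3], [4, 5]]

# u_{p,q}: wave on part p generated by the force restricted to part q,
# obtained by solving with f zeroed outside X_q and restricting mentally to X_p.
u_pq = []
for Xq in parts:
    fq = [f[i] if i in Xq else 0.0 for i in range(n)]
    u_pq.append(solve(A, fq))

# By linearity, the contributions of all parts add up to the full wave field.
u_sum = [sum(u_pq[q][i] for q in range(len(parts))) for i in range(n)]
```

The point of the preconditioner is that each $\pmb u_{p,q}$ will later be *approximated* by cheap local solves with auxiliary PMLs rather than computed from the full $G$ as in this sketch.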
The physical\nmeaning of $\pmb u_{p,q}$ is the contribution of the force $\pmb f_q$\ndefined on the grid $X_q$ acting upon the grid $X_p$. If we know the\nmatrix $G$, the computation of $\pmb u_{p,q}$ can be carried out\ndirectly. However, computing $G$, or even applying $G$ to the vector\n$\pmb f$, is computationally expensive. The additive sweeping method\ncircumvents this difficulty by approximating the blocks of $G$\nsequentially, and the idea carries over to higher dimensions. In what follows,\nwe shall use $\td{\pmb u}_{p,q}$ to denote the approximations of $\pmb u_{p,q}$.\n\n \n\subsection{Approximating $\pmb u_{p,q}$ with auxiliary PMLs}\n\n\subsubsection{Wave generated by $\pmb f_1$}\n\nThe components ${\pmb u}_{p,1}$ for $p=1,\dots,m$ can be regarded as a\nsequence of right-going waves generated by $\pmb f_1$. Note that the\nboundary condition of the system is the approximated Sommerfeld\ncondition. If we assume that the reflection during the transmission of\nthe wave is negligible, then, to approximate $\pmb u_{1,1}$, we can\nsimply put an artificial PML on the right of the grid $X_1$ to solve a\nmuch smaller problem, since the domain of interest here is only $X_1$\n(see Figure \ref{fig:approxU11}). To be precise, we define\n\begin{align*}\n\sigma_1^{M}(x) &:= \n\begin{dcases}\n\dfrac{C}{\eta}\left(\dfrac{x-\eta}{\eta}\right)^2,\quad &x\in[0,\eta),\\\n0,\quad &x\in[\eta,\eta+(b-1)h],\\\n\dfrac{C}{\eta}\left(\dfrac{x-(\eta+(b-1)h)}{\eta}\right)^2,\quad &x\in(\eta+(b-1)h,2\eta+(b-1)h],\n\end{dcases}\\\ns_1^{M}(x) &:= \left(1+\ii\dfrac{\sigma_1^{M}(x)}{\omega}\right)^{-1}.
\n\\end{align*}\nWe consider a subproblem on the auxiliary domain $D_1^M := (0,2\\eta + (b - 1)h)$\n\\begin{align*}\n\\begin{dcases}\n\\left((s_1^{M}(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),\\quad &\\forall x\\in D_1^M,\\\\\nv(x)=0, \\quad & \\forall x\\in \\partial D_1^M.\n\\end{dcases}\n\\end{align*}\nWith the same discrete numerical scheme and step size $h$, we have the\ncorresponding discrete system $H_1^{M} \\pmb v=\\pmb g$ on the extended grid\n\\begin{align*}\nX_1^{M}:=\\{ih :1\\le i\\le 2\\gamma+b-2\\}. \n\\end{align*}\nFigure \\ref{fig:Xp} shows a graphical view of $X_1^M$, as well as other extended grids which we will see later.\n\nWith the discrete system $H_1^M \\pmb v = \\pmb g$, we can define an operator $\\td{G}_{1}^{M}:\\pmb y\\to \\pmb z$, which is an approximation of\n$G_{1,1}$, by the following:\n\\begin{enumerate}\n\\item \n Introduce a vector $\\pmb g$ defined on $X_1^{M}$ by setting\n $\\pmb y$ to $X_1$ and zero everywhere else.\n\\item \n Solve $H_1^{M} \\pmb v=\\pmb g$ on $X_1^{M}$.\n\\item \n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_1$.\n\\end{enumerate}\nThen $\\td{\\pmb u}_{1,1}$ can be set as\n\\begin{align*}\n\\td{\\pmb u}_{1,1}:=\\td{G}_{1}^{M}\\pmb f_1.\n\\end{align*}\n\n\\begin{figure}[h!]\n \\centering\n \n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw\n (0.0 , 0.0)--(7.0 , 0.0)\n \n (0.0 , 1.5)--(2.0 , 1.5)\n (2.5 , 1.5)--(4.5 , 1.5)\n (5.0 , 1.5)--(7.0 , 1.5)\n \n (0.0 , 1.0)--(1.5 , 1.0)\n (2.5 , 1.0)--(4.0 , 1.0)\n \n (3.0 , 2.0)--(4.5 , 2.0)\n (5.5 , 2.0)--(7.0 , 2.0)\n ;\n \n \\draw\n (0.0 , -0.1)--(0.0 , 0.1)\n (1.5 , -0.1)--(1.5 , 0.1)\n (3.0 , -0.1)--(3.0 , 0.1)\n (4.0 , -0.1)--(4.0 , 0.1)\n (5.5 , -0.1)--(5.5 , 0.1)\n (7.0 , -0.1)--(7.0 , 0.1)\n \n (0.0 , 1.0 - 0.1)--(0.0 , 1.0 + 0.1)\n (1.5 , 1.0 - 0.1)--(1.5 , 1.0 + 0.1)\n (2.5 , 1.0 - 0.1)--(2.5 , 1.0 + 0.1)\n (4.0 , 1.0 - 0.1)--(4.0 , 1.0 + 0.1)\n \n (3.0 , 2.0 - 0.1)--(3.0 , 2.0 + 0.1)\n (4.5 , 2.0 - 0.1)--(4.5 , 2.0 + 
0.1)\n (5.5 , 2.0 - 0.1)--(5.5 , 2.0 + 0.1)\n (7.0 , 2.0 - 0.1)--(7.0 , 2.0 + 0.1)\n \n (0.0 , 1.5 - 0.1)--(0.0 , 1.5 + 0.1)\n (2.0 , 1.5 - 0.1)--(2.0 , 1.5 + 0.1)\n (2.5 , 1.5 - 0.1)--(2.5 , 1.5 + 0.1)\n (4.5 , 1.5 - 0.1)--(4.5 , 1.5 + 0.1)\n (5.0 , 1.5 - 0.1)--(5.0 , 1.5 + 0.1)\n (7.0 , 1.5 - 0.1)--(7.0 , 1.5 + 0.1)\n ;\n \n \\draw\n (0.0 , 0.0)node[anchor=south west]{PML}\n (0.0 , 1.0)node[anchor=south west]{PML}\n (0.0 , 1.5)node[anchor=south west]{PML}\n (2.5 , 1.0)node[anchor=south west]{PML}\n (2.5 , 1.5)node[anchor=south west]{PML}\n (5.0 , 1.5)node[anchor=south west]{PML}\n (7.0 , 0.0)node[anchor=south east]{PML}\n (7.0 , 1.5)node[anchor=south east]{PML}\n (7.0 , 2.0)node[anchor=south east]{PML}\n (4.5 , 1.5)node[anchor=south east]{PML}\n (4.5 , 2.0)node[anchor=south east]{PML}\n (2.0 , 1.5)node[anchor=south east]{PML}\n ;\n \n \\draw\n (1.0 , 0.0)node[anchor = north]{$X_1$}\n (1.0 , 1.0)node[anchor = north]{$X_1^L$}\n (1.0 , 1.5)node[anchor = north]{$X_1^M$}\n \n (6.0 , 0.0)node[anchor = north]{$X_m$}\n (6.0 , 2.0)node[anchor = north]{$X_m^R$}\n (6.0 , 1.5)node[anchor = north]{$X_m^M$}\n \n (3.5 , 0.0)node[anchor = north]{$X_p$}\n (3.5 , 1.0)node[anchor = north]{$X_p^L$}\n (3.5 , 1.5)node[anchor = north]{$X_p^M$}\n (3.5 , 2.0)node[anchor = north]{$X_p^R$}\n ;\n \n \\draw\n (2.25 , 0.0)node[fill=white]{$\\ldots$}\n \n (4.75 , 0.0)node[fill=white]{$\\ldots$}\n \n ;\n \\end{tikzpicture}\n \n \\caption{This figure shows how the grids $X_p$ are extended with auxiliary PMLs.}\n \\label{fig:Xp}\n\\end{figure}\n\nOnce we have computed $\\td{\\pmb u}_{1,1}$, we can use the right\nboundary value of $\\td{\\pmb u}_{1,1}$ to compute $\\td{\\pmb u}_{2,1}$\nby introducing an auxiliary PML on the right of $X_2$ and solving the\nboundary value problem with the left boundary value at\n$x=(\\gamma+b-1)h$ equal to the right boundary value of $\\td{\\pmb\n u}_{1,1}$. 
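The three-step construction of $\td{G}_{1}^{M}$ (extend the source by zero onto the auxiliary grid, solve the small system, restrict back) can be sketched as follows; the matrix `H` is a random complex stand-in for the discretized PML subproblem $H_1^M$, not an actual Helmholtz discretization, and the sizes are illustrative.

```python
import numpy as np

# Schematic "pad, solve, restrict" pattern behind the operator G~_1^M.
# H is a hypothetical stand-in for the auxiliary PML system H_1^M.
n1, npml = 6, 4            # points in X_1 and in the auxiliary PML
N = n1 + npml              # size of the extended grid X_1^M
rng = np.random.default_rng(2)
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

def G1M(y):
    g = np.zeros(N, dtype=complex)
    g[:n1] = y                    # step 1: y on X_1, zero elsewhere
    v = np.linalg.solve(H, g)     # step 2: solve the auxiliary system
    return v[:n1]                 # step 3: restrict back to X_1

f1 = rng.standard_normal(n1)
u11 = G1M(f1)                     # plays the role of u~_{1,1}
assert u11.shape == (n1,)
```

Because each of the three steps is linear, $\td{G}_1^{M}$ is itself a linear operator, which is what the boundary-value accumulation later relies on.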
The same process can be repeated to compute $\\td{\\pmb\n u}_{p+1,1}$ by exploiting the right boundary value of $\\td{\\pmb\n u}_{p,1}$ recursively for $p=2,\\dots,m-1$ (see Figure\n\\ref{fig:approxUp1}). In the remainder of this section, we\nuse the notation $g^{L}, g^{R}$ for a vector $\\pmb\ng=[g_1,\\dots,g_s]^T$, defined by\n\\begin{align*}\ng^{L}:=g_1,\\quad g^{R}:=g_s,\n\\end{align*}\nthat is, $g^{L}$ and $g^{R}$ are the\nleftmost and the rightmost elements of $\\pmb g$.\n\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[The wave $\\pmb u_1$ (shown as the gray arrow) generated by $\\pmb f_1$.]{\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(4,0.5);\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(2.0,0.5)node[below=3]{$\\pmb u_1$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\draw(0,0)node[anchor=south west]{PML}(4,0)node[anchor=south east]{PML};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td{\\pmb u}_{1,1}$ is computed by introducing an auxiliary PML on the right of $X_1$.
The dotted gray arrow stands for the restriction of ${\\pmb u}_1$ on $X_2\\cup\\dots \\cup X_m$, which is to be approximated.]{\n \\label{fig:approxU11}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(1,0.5);\n \\draw[->,dotted,gray,very thick](1,0.5)--(4,0.5);\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(0.5,0.5)node[below=3]{$\\td{\\pmb u}_{1,1}$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\draw(0,0)node[anchor=south west]{PML}(1,0)node[anchor=south west]{PML};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,1}$ for $p=2,\\dots,m$ are computed sequentially.]{\n \\label{fig:approxUp1}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(1,0.5);\n \\foreach \\x in{1,2,3}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(0.5,0.5)node[below=3]{$\\td{\\pmb u}_{1,1}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,1}$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,m}$ for $p=m,\\dots,1$ are computed sequentially.]\n {\n \\label{fig:approxUpm}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(2.5,0)node[below=3]{$X_{m-1}$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](3,0.5)--(4,0.5);\n \\foreach \\x in{3,2,1}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x-1,0.5);\n }\n \\draw(3.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb 
u}_{m-1,m}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \\draw(1.5,0.5)node[fill=white]{$\\ldots$}(1.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,q}$ are computed for $p=q$ first, and then for $p=q+1,\\dots,m$ and for $p=q-1,\\dots,1$ sequentially.]\n {\n \\label{fig:approxUpq}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(2.5,0)node[below=3]{$X_{q-1}$}(3.5,0)node[below=3]{$X_q$}(4.5,0)node[below=3]{$X_{q+1}$}(6.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](3,0.5)--(4,0.5);\n \\foreach \\x in{3,2,1}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x-1,0.5);\n \\draw[->,gray,very thick](\\x+3,0.5)--(\\x+4,0.5);\n }\n \\draw(3.5,0.5)node[fill=white]{$\\pmb f_q$};\n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,q}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{q-1,q}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{q,q}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{q+1,q}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,q}$};\n \\draw(1.5,0.5)node[fill=white]{$\\ldots$}(1.5,0)node[fill=white]{$\\ldots$}(5.5,0.5)node[fill=white]{$\\ldots$}(5.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\caption{This figure shows how $\\td {\\pmb u}_{p,q}$ are generated. The direction of the arrows indicates the computing orders of the approximating waves.}\n\\end{figure}\n\n\nTo formalize the definition of $\\td{\\pmb u}_{p,1}$ for each\n$p=2,\\dots,m$, we introduce the auxiliary domain $D_p^R$, which will be defined below, to simulate the right-transmission of the waves. The superscript $R$ means that the auxiliary domain is intended for approximating the right-going waves. The left boundary of $D_p^R$ will be denoted as $\\partial^L D_p^R$, on which the boundary value will be used to approximate the wave transmission as we shall see. 
We also extend $X_p$ with an auxiliary PML on the right to form an extended grid $X_p^R$ (see Figure \\ref{fig:Xp}), which corresponds to the discretization of $D_p^R$. To be specific, we define\n\\begin{align*}\nD_p^R &:= (\\eta+((p-1)b-1)h,2\\eta+(pb-1)h), \\\\\n\\partial^L D_p^R &:= \\{\\eta+((p-1)b-1)h\\}, \\\\\n X_p^{R} &:= \\{ih:\\gamma+(p-1)b\\le i\\le 2\\gamma+pb-2\\}.\n\\end{align*}\nNote that the grid $X_m^{R}$ is $X_m$ itself since\n$X_m$ already contains the original right PML region. The purpose of the notation $X_m^{R}$ is to simplify the description of the algorithm.\n\nFor the PML on $D_p^{R}$, we define\n\\begin{align*}\n \\sigma_p^{R}(x) &:= \n \\begin{dcases}\n 0,\\quad &x\\in[\\eta+((p-1)b-1)h,\\eta+(pb-1)h],\\\\\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(\\eta+(pb-1)h)}{\\eta}\\right)^2,\\quad &x\\in(\\eta+(pb-1)h,2\\eta+(pb-1)h],\n \\end{dcases}\\\\\n s_p^{R}(x) &:= \\left(1+\\ii\\dfrac{\\sigma_p^{R}(x)}{\\omega}\\right)^{-1}.\n\\end{align*} \nWe consider the following subproblem\n\\begin{align*}\n \\begin{dcases}\n \\left((s_p^{R}(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^R,\\\\\n v(x)=w,\\quad & \\forall x \\in \\partial^L D_p^R,\\\\\n v(x)=0, \\quad & \\forall x \\in \\partial D_p^R \\setminus\\partial^L D_p^R,\n \\end{dcases}\n\\end{align*}\nwhere $w$ is the left boundary value of the unknown $v(x)$. We define $H_p^{R} \\pmb v=\\pmb g$ as the discretization of this\nproblem on $X_p^{R}$, where the right-hand side $\\pmb g$ is given by\n$\\pmb g:= (-1\/h^2)[w,0,\\dots,0]^T$ as a result of the central\ndiscretization. The subproblem $H_p^{R} \\pmb v=\\pmb g$ for each\n$p=2,\\dots,m$ induces the approximation operator $\\td{G}_{p}^{R}:w \\mapsto\n\\pmb z$ by the following procedure:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[w,0,\\dots,0]^T$. \n\\item\n Solve $H_p^{R} \\pmb v =\\pmb g$ on $X_p^{R}$.
\n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$.\n\\end{enumerate}\nThen $\\td{\\pmb u}_{p,1}$ can be defined recursively for $p=2,\\dots,m$ by\n\\begin{align*}\n \\td{\\pmb u}_{p,1}:=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1}^{R}.\n\\end{align*}\nNote that, the operator $\\td{G}_p^{R}$ is not an approximation of the\nmatrix block $G_{p,1}$, since $\\td{G}_p^R$ maps the right boundary\nvalue of $\\td{\\pmb u}_{p-1,1}$ to $\\td{\\pmb u}_{p,1}$ while $G_{p,1}$\nmaps $\\pmb f_1$ to $\\pmb u_{p,1}$.\n\n\\subsubsection{Wave generated by $\\pmb f_m$}\n\nThe components ${\\pmb u}_{p,m}$ for $p=1,\\dots,m$ can be regarded as a\nsequence of left-going waves generated by $\\pmb f_m$. The method for\napproximating them is similar to what was done for $\\pmb f_1$ (see\nFigure \\ref{fig:approxUpm}). More specifically, for $\\td{\\pmb\n u}_{m,m}$ we define\n\\begin{align*}\n D_m^M &:= (1-2\\eta-(b-1)h,1),\\\\\n X_m^{M} &:= \\{ih:(m-1)b+1\\le i \\le 2\\gamma+mb-2\\},\\\\\n \\sigma_m^{M}(x) &:= \n \\begin{dcases}\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(1-\\eta-(b-1)h)}{\\eta}\\right)^2,\\quad &x\\in[1-2\\eta-(b-1)h,1-\\eta-(b-1)h),\\\\\n 0,\\quad &x\\in[1-\\eta-(b-1)h,1-\\eta],\\\\\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(1-\\eta)}{\\eta}\\right)^2,\\quad &x\\in(1-\\eta,1],\n \\end{dcases}\\\\\n s_m^{M}(x) &:= \\left(1+\\ii\\dfrac{\\sigma_m^{M}(x)}{\\omega}\\right)^{-1}. 
\n\\end{align*}\nWe consider the continuous problem\n\\begin{align*}\n \\begin{dcases}\n \\left((s_m^{M}(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),\\quad & \\forall x\\in D_m^M,\\\\\n v(x)=0,\\quad & \\forall x\\in \\partial D_m^M,\n \\end{dcases}\n\\end{align*}\nand define $H_m^{M}\\pmb v=\\pmb g$ as its discretization on $X_m^{M}$.\nThe operator $\\td{G}_{m}^{M}: \\pmb y\\to \\pmb z$ can be defined as:\n\\begin{enumerate}\n\\item\n Introduce a vector $\\pmb g$ defined on $X_m^{M}$ by setting $\\pmb y$ to\n $X_m$ and zero everywhere else.\n\\item\n Solve $H_m^{M} \\pmb v=\\pmb g$ on $X_m^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_m$. \n\\end{enumerate}\nThen\n\\begin{align*}\n \\td{\\pmb u}_{m,m}:=\\td{G}_{m}^{M}\\pmb f_m. \n\\end{align*}\n\nFor each $\\td{\\pmb u}_{p,m},p=1,\\dots,m-1$, we introduce the auxiliary domain $D_p^L$, the right boundary $\\partial ^R D_p^L$, the extended grid $X_p^L$, and the corresponding PML functions $\\sigma_p^L(x), s_p^L(x)$ as follows\n\\begin{align*}\n D_p^L &:= ((p-1)bh,\\eta+pbh),\\\\\n \\partial ^R D_p^L &:= \\{\\eta + pbh\\},\\\\\n X_p^{L} &:= \\{x_i:(p-1)b+1\\le i\\le \\gamma+pb-1\\},\\\\\n \\sigma_p^{L}(x) &:= \n \\begin{dcases}\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(\\eta+(p-1)bh)}{\\eta}\\right)^2,\\quad &x\\in[(p-1)bh,\\eta+(p-1)bh),\\\\\n 0,\\quad &x\\in[\\eta+(p-1)bh,\\eta+pbh],\n \\end{dcases}\\\\\n s_p^{L}(x) &:= \\left(1+\\ii\\dfrac{\\sigma_p^{L}(x)}{\\omega}\\right)^{-1}, \n\\end{align*}\nand we consider the continuous problem\n\\begin{align*}\n \\begin{dcases}\n \\left((s_p^{L}(x)\\dfrac{\\dd}{\\dd\n x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad & \\forall x\\in D_p^L,\\\\\n v(x) = w,\\quad & \\forall x \\in \\partial^R D_p^L,\\\\\n v(x) = 0,\\quad & \\forall x \\in \\partial D_p^L \\setminus \\partial^R D_p^L,\n \\end{dcases}\n\\end{align*}\nwhere $y$ is the right boundary value of $v(x)$. 
Let $H_p^{L}\\pmb v=\\pmb g$ be its discretization on $X_p^{L}$ with\n$\\pmb g:=(-1\/h^2)[0,\\dots,0,w]^T$. We introduce the operator\n$\\td{G}_{p}^{L}:w\\mapsto \\pmb z$ by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[0,\\dots,0,w]^T$. \n\\item\n Solve $H_p^{L} \\pmb v =\\pmb g$ on $X_p^{L}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\nThen $\\td{\\pmb u}_{p,m}$ can be defined recursively for\n$p=m-1,\\dots,1$ by\n\\begin{align*}\n\\td{\\pmb u}_{p,m}:=\\td{G}_{p}^{L}\\td{\\pmb u}_{p+1,m}^{L}. \n\\end{align*}\n\n\\subsubsection{Wave generated by $\\pmb f_q$ for $q=2,\\dots,m-1$}\n\nFor each $q$, the components ${\\pmb u}_{p,q}$ for $p=1,\\dots,m$ can\nbe regarded as a sequence of left- and right-going waves generated by\n$\\pmb f_q$ (see Figure \\ref{fig:approxUpq}). For $\\td{\\pmb u}_{q,q}$,\nwe introduce\n\\begin{align*}\nD_q^M &:= ((q-1)bh,2\\eta+(qb-1)h),\\\\\nX_q^{M} &:= \\{ih:(q-1)b+1\\le i \\le 2\\gamma+qb-2\\},\\\\\n\\sigma_q^{M}(x) &:= \n\\begin{dcases}\n\\dfrac{C}{\\eta}\\left(\\dfrac{x-(\\eta+(q-1)bh)}{\\eta}\\right)^2,\\quad &x\\in[(q-1)bh,\\eta+(q-1)bh),\\\\\n0,\\quad &x\\in[\\eta+(q-1)bh,\\eta+(qb-1)h],\\\\\n\\dfrac{C}{\\eta}\\left(\\dfrac{x-(\\eta+(qb-1)h)}{\\eta}\\right)^2,\\quad &x\\in(\\eta+(qb-1)h,2\\eta+(qb-1)h],\n\\end{dcases}\\\\\ns_q^{M}(x) &:= \\left(1+\\ii\\dfrac{\\sigma_q^{M}(x)}{\\omega}\\right)^{-1}, \n\\end{align*}\nand define $H_q^{M} \\pmb v=\\pmb g$ as the discretization of the continuous problem\n\\begin{align*}\n\\begin{dcases}\n\\left((s_q^{M}(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),\\quad & \\forall x\\in D_q^M,\\\\\nv(x)=0, \\quad & \\forall x \\in \\partial D_q^M.\n\\end{dcases}\n\\end{align*}\nWe introduce the operator $\\td{G}_q^{M}: \\pmb y\\mapsto \\pmb z$ as:\n\\begin{enumerate}\n\\item\n Introduce a vector $\\pmb g$ defined on $X_q^{M}$ by setting it equal to $\\pmb y$ on $X_q$\n and to zero everywhere else.\n\\item\n Solve $H_q^{M} \\pmb v=\\pmb g$ on 
$X_q^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_q$. \n\\end{enumerate}\nThen\n\\begin{align*}\n \\td{\\pmb u}_{q,q}:= \\td{G}_q^{M} \\pmb f_q. \n\\end{align*}\nFollowing the above discussion, the remaining components $\\td{\\pmb\n u}_{p,q}$ are defined recursively as\n\\begin{align*}\n\\td{\\pmb u}_{p,q} &:= \\td{G}_{p}^{R} \\td{\\pmb u}_{p-1,q}^{R}, \\quad \\text{for }p=q+1,\\dots,m,\\\\\n\\td{\\pmb u}_{p,q} &:= \\td{G}_{p}^{L}\\td{\\pmb u}_{p+1,q}^{L},\\quad \\text{for } p=q-1,\\dots,1. \n\\end{align*}\n\n\\subsection{Accumulating the boundary values}\nAfter all the above are done, an approximation of $\\pmb u_p$ is\ngiven by (see Figure \\ref{fig:approxUseparate})\n\\begin{align*}\n\\td{\\pmb u}_p:=\\sum_{q=1}^m\\td{\\pmb u}_{p,q},\\quad p=1,\\dots,m.\n\\end{align*}\n\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[$\\td {\\pmb u}_p$ is a superposition of $\\td {\\pmb u}_{p,q},q=1,\\dots,m$.]{ \n \\label{fig:approxUseparate}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(2.5,0)node[below=3]{$X_3$}(4.5,0)node[below=3]{$X_{m-2}$}(5.5,0)node[below=3]{$X_{m-1}$}(6.5,0)node[below=3]{$X_m$};\n \\foreach \\x in{0,...,5}\n {\n \\draw[->,gray,very thick](\\x+1,2.5)--(\\x+2,2.5);\n \\draw[<-,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw[<->,gray,very thick](6,0.5)--(7,0.5);\n \\draw[<->,gray,very thick](0,2.5)--(1,2.5);\n \n \\foreach \\x in{4,5,6}\n \\draw[->,gray,very thick](\\x,1.5)--(\\x+1,1.5);\n \\foreach \\x in{0,1,2}\n \\draw[<-,gray,very thick](\\x,1.5)--(\\x+1,1.5);\n \\draw[<->,gray,very thick](3,1.5)--(4,1.5);\n \n \\draw(6.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,2.5)node[fill=white]{$\\pmb f_1$};\n \n \\foreach \\x in{0.5,1.5,2.5}\n \\draw(3.5,\\x)node[fill=white]{$\\ldots$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\foreach \\y in{1.0,2.0}\n \\draw(\\x,\\y)node{$\\vdots$};\n \n 
\\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,m}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{3,m}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-2,m}$}(5.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-1,m}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,1.5)node[below=3]{$\\td {\\pmb u}_{1,q}$}(1.5,1.5)node[below=3]{$\\td {\\pmb u}_{2,q}$}(2.5,1.5)node[below=3]{$\\td {\\pmb u}_{3,q}$}(4.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-2,q}$}(5.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-1,q}$}(6.5,1.5)node[below=3]{$\\td {\\pmb u}_{m,q}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,2.5)node[below=3]{$\\td {\\pmb u}_{1,1}$}(1.5,2.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(2.5,2.5)node[below=3]{$\\td {\\pmb u}_{3,1}$}(4.5,2.5)node[below=3]{$\\td {\\pmb u}_{m-2,1}$}(5.5,2.5)node[below=3]{$\\td {\\pmb u}_{m-1,1}$}(6.5,2.5)node[below=3]{$\\td {\\pmb u}_{m,1}$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\draw[->,>=latex,line width=3 pt](\\x,2.7)--(\\x,3.0);\n \n \\draw(0.5,3.0)node[above]{$\\td {\\pmb u}_1$}(1.5,3.0)node[above]{$\\td {\\pmb u}_2$}(2.5,3.0)node[above]{$\\td {\\pmb u}_3$}(4.5,3.0)node[above]{$\\td {\\pmb u}_{m-2}$}(5.5,3.0)node[above]{$\\td {\\pmb u}_{m-1}$}(6.5,3.0)node[above]{$\\td {\\pmb u}_m$};\n \\end{tikzpicture}\n }\n \n \\subfigure[$\\td {\\pmb u}_p$ is a superposition of $\\td {\\pmb u}_{p,1:p-1}$, $\\td {\\pmb u}_{p,p}$ and $\\td {\\pmb u}_{p,p+1:m}$.]{\n \\label{fig:approxUaccumulate}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(2.5,0)node[below=3]{$X_3$}(4.5,0)node[below=3]{$X_{m-2}$}(5.5,0)node[below=3]{$X_{m-1}$}(6.5,0)node[below=3]{$X_m$};\n \n \\foreach \\x in{0,...,5}\n {\n \\draw[->,gray,very thick](\\x+1,1.5)--(\\x+2,1.5);\n \\draw[<-,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw[<->,gray,very thick](6,0.5)--(7,0.5);\n 
\\draw[<->,gray,very thick](0,1.5)--(1,1.5);\n \n \\draw(6.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,1.5)node[fill=white]{$\\pmb f_1$};\n \n \\foreach \\x in{0.5,1.5}\n \\draw(3.5,\\x)node[fill=white]{$\\ldots$};\n \n \\foreach \\x in{1,...,5}\n {\n \\draw[<->,gray,very thick](\\x,0.5)--(\\x+1,1.5);\n \\fill[white](3.35,0.7)rectangle(3.65,1.3);\n }\n \n \\draw(1.5,1.0)node[fill=white]{$\\pmb f_2$}(2.5,1.0)node[fill=white]{$\\pmb f_3$}(3.5,1.0)node[fill=white]{$\\ldots$}(4.5,1.0)node[fill=white]{$\\pmb f_{m-2}$}(5.5,1.0)node[fill=white]{$\\pmb f_{m-1}$};\n \n \\draw(1.5,1.0)node[below=3]{$\\td {\\pmb u}_{2,2}$}(2.5,1.0)node[below=3]{$\\td {\\pmb u}_{3,3}$}(4.5,1.0)node[below=3]{$\\td {\\pmb u}_{m-2,m-2}$}(5.5,1.0)node[below=3]{$\\td {\\pmb u}_{m-1,m-1}$};\n \n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,2:m}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,3:m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{3,4:m}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-2,m-1:m}$}(5.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-1,m}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,1.5)node[below=3]{$\\td {\\pmb u}_{1,1}$}(1.5,1.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(2.5,1.5)node[below=3]{$\\td {\\pmb u}_{3,1:2}$}(4.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-2,1:m-3}$}(5.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-1,1:m-2}$}(6.5,1.5)node[below=3]{$\\td {\\pmb u}_{m,1:m-1}$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\draw[->,>=latex,line width=3 pt](\\x,1.7)--(\\x,2.0);\n \n \\draw(0.5,2.0)node[above]{$\\td {\\pmb u}_1$}(1.5,2.0)node[above]{$\\td {\\pmb u}_2$}(2.5,2.0)node[above]{$\\td {\\pmb u}_3$}(4.5,2.0)node[above]{$\\td {\\pmb u}_{m-2}$}(5.5,2.0)node[above]{$\\td {\\pmb u}_{m-1}$}(6.5,2.0)node[above]{$\\td {\\pmb u}_m$};\n \n \\end{tikzpicture}\n }\n \\caption{This figure shows how the boundary values are accumulated after each step. The thin arrows indicate the transmission directions of the waves. 
The bold, up-pointing arrows symbolize that summing up the corresponding waves on $X_p$ gives the superposition wave $\\td {\\pmb u}_p$.}\n\\end{figure}\n\n\nIn the algorithm described above, the computation of each component\n$\\td{\\pmb u}_{p,q}$ requires a separate solution of a problem of the form\n$H_p^{R} \\pmb v =\\pmb g$ or $H_p^{L} \\pmb v =\\pmb g$. Since there are $O(m^2)$ such\ncomponents, the algorithm is computationally expensive. A key\nobservation is that the computation associated with each $p$ can be\ncombined in a single shot by accumulating the boundary values of the waves. More precisely, we define\n\\begin{align*}\n\\td{\\pmb u}_{p,q_1:q_2}:=\\sum_{t=q_1}^{q_2} \\td{\\pmb u}_{p,t}, \n\\end{align*}\nwhich is the total contribution of the waves generated by $\\pmb\nf_{q_1},\\dots,\\pmb f_{q_2}$ restricted to the grid $X_{p}$. The\nquantity $\\td{\\pmb u}_{p,1:p-1}$, which is the total right-going wave\ngenerated by $\\pmb f_1,\\dots,\\pmb f_{p-1}$ upon $X_p$, can be computed\nsequentially for $p=2,\\dots,m$ without computing each component and\nthen adding them together as we described above, as long as we\naccumulate the boundary values after each intermediate\nstep. Specifically, we first compute $\\td{\\pmb\n u}_{q,q}=\\td{G}_q^{M}\\pmb f_{q}$ for $q=1,\\dots,m$. This step is similar to what we did above. Then, to compute $\\td{\\pmb\n u}_{p,1:p-1}$ we carry out the following steps\n\\begin{align*}\n \\td{\\pmb u}_{p,1:p-1}=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R},\\quad \\td{\\pmb u}_{p,1:p}^{R}=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R}, \\quad \\text{for } p=2,\\dots,m.
\n\\end{align*}\nThis means, before computing the total right-going wave $\\td{\\pmb u}_{p+1,1:p}$ on subdomain $X_{p+1}$, the boundary values of the previous right-going\nwaves, $\\td{\\pmb u}_{p,1:p-1}^{R}$ and $\\td{\\pmb u}_{p,p}^{R}$, are added together, so\nthat the the current right-going wave $\\td{\\pmb u}_{p+1,1:p}$ can be computed in one shot, eliminating the\ntrouble of solving the subproblems for many times and adding the\nresults together (see Figure \\ref{fig:approxUaccumulate}).\n\nFor the left going waves $\\td{\\pmb u}_{p,p+1:m}$, a\nsimilar process gives rise to the recursive formula\n\\begin{align*}\n\\td{\\pmb u}_{p,p+1:m}=\\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L},\\quad \\td{\\pmb u}_{p,p:m}^{L}=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}, \\quad \\text{for } p=m-1,\\dots,1.\n\\end{align*}\n\nFinally, each $\\td{\\pmb u}_p$ can be computed by summing $\\td{\\pmb\n u}_{p,1:p-1}$, $\\td{\\pmb u}_{p,p}$ and $\\td{\\pmb u}_{p,p+1:m}$\ntogether (for the leftmost and the rightmost one, $\\td{\\pmb u}_1$ and\n$\\td{\\pmb u}_m$, only two terms need to be summed), i.e.,\n\\begin{align*}\n\\td{\\pmb u}_1 &= \\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m},\\\\\n\\td{\\pmb u}_p &= \\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m}, \\quad p=2,\\dots,m-1,\\\\\n\\td{\\pmb u}_m &= \\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}. \n\\end{align*}\n\nWe see that, by accumulating the boundary values after each intermediate step, we only need to solve $O(m)$ subproblems instead of $O(m^2)$.\n\nIn this algorithm, the approximation $\\td{\\pmb u}_{p}$ on each small\nsubdomain is divided into three parts. From a matrix point of view,\nthis is analogous to splitting the block matrix $G$ into its lower\ntriangular part, diagonal part and upper triangular part, and then\napproximating each part as an operator to get the intermediate waves and then summing the intermediate results\ntogether. This is why we call it the additive sweeping method. 
\n\nEquation \\eqref{eqn:LDUsplit} illustrates this procedure, where the matrix $G$ is split into $3m - 2$ blocks, each of which corresponds to one subproblem solve:\n\\begin{align*}\n\\td{\\pmb u}_{q,q} &\\approx {\\pmb u}_{q,q} = G_{q,q} {\\pmb f}_q,\\quad q = 1,\\dots, m,\\\\\n\\td{\\pmb u}_{p,1:p-1} &\\approx {\\pmb u}_{p,1:p-1} = \\sum_{q=1}^{p-1} G_{p,q} {\\pmb f}_q,\\quad p = 2,\\dots, m,\\\\\n\\td{\\pmb u}_{p,p+1:m} &\\approx {\\pmb u}_{p,p+1:m} = \\sum_{q=p+1}^{m} G_{p,q} {\\pmb f}_q,\\quad p = 1,\\dots, m-1.\n\\end{align*}\n\n\\begin{equation}\n \\label{eqn:LDUsplit}\n \\begin{bmatrix}\n \\pmb u_1\\\\\\pmb u_2\\\\ \\vdots\\\\\\pmb u_m\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\pmb u_{1,1}+\\pmb u_{1,2:m}\\\\\\pmb u_{2,1}+\\pmb u_{2,2}+\\pmb u_{2,3:m}\\\\ \\vdots\\\\\\pmb u_{m,1:m-1}+\\pmb u_{m,m}\n \\end{bmatrix}\n =\n \\left[\n \\begin{array}{ccccc}\n \\multicolumn{1}{c|}{G_{1,1}} & G_{1,2} & \\dots & & G_{1,m} \\\\ \n \\hline\n G_{2,1} & \\multicolumn{1}{|c|}{G_{2,2}} & G_{2,3} & \\ldots & G_{2,m} \\\\ \n \\hline\n & & \\ddots & & \\\\ \n \\hline\n G_{m,1} & G_{m,2} & \\ldots & G_{m,m-1} & \\multicolumn{1}{|c}{G_{m,m}}\n \\end{array} \n \\right]\n \\begin{bmatrix}\n \\pmb f_1\\\\\\pmb f_2\\\\ \\vdots\\\\\\pmb f_m\n \\end{bmatrix}\n\\end{equation}\n\nWhen combined with standard iterative solvers, the approximation\nalgorithm serves as a preconditioner for Equation \\eqref{eqn:1D}, and it generalizes naturally to higher dimensions. In the following sections, we will discuss the details of the algorithm in 2D and 3D. To be structurally consistent, we will keep the notation for 2D and 3D the same as in the 1D case, without causing ambiguity.
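The splitting can be mimicked on a small dense matrix; the block sizes below are illustrative and `G` is a random matrix rather than an actual discrete Green's matrix.

```python
import numpy as np

# Block lower + block diagonal + block upper splitting of G, mirroring
# u_p = u_{p,1:p-1} + u_{p,p} + u_{p,p+1:m}. Sizes are illustrative.
m, b = 4, 3
rng = np.random.default_rng(1)
G = rng.standard_normal((m * b, m * b))

low = np.zeros_like(G); diag = np.zeros_like(G); up = np.zeros_like(G)
for p in range(m):
    for q in range(m):
        tgt = diag if p == q else (low if p > q else up)
        tgt[p * b:(p + 1) * b, q * b:(q + 1) * b] = \
            G[p * b:(p + 1) * b, q * b:(q + 1) * b]

f = rng.standard_normal(m * b)
# applying the three parts separately and summing reproduces G f
assert np.allclose(low @ f + diag @ f + up @ f, G @ f)
```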
Some of the key notations and concepts are listed below as a reminder to the reader:\n\\begin{itemize}\n\\item\n$\\{X_p\\}_{p=1}^m$\\quad The sliced partition of the discrete grid.\n\\item\n$\\{D_q^M\\}_{q=1}^m$\\quad The auxiliary domains with two-sided PML padding.\n\\item\n$\\{D_p^R\\}_{p=2}^m$\\quad The auxiliary domains with right-side PML padding.\n\\item\n$\\{D_p^L\\}_{p=1}^{m-1}$\\quad The auxiliary domains with left-side PML padding.\n\\item\n$\\{X_q^M\\}_{q=1}^m$\\quad $X_q$ with two-sided PML padding, the discretization of $D_q^M$.\n\\item\n$\\{X_p^R\\}_{p=2}^m$\\quad $X_p$ with right-side PML padding, the discretization of $D_p^R$.\n\\item\n$\\{X_p^L\\}_{p=1}^{m-1}$\\quad $X_p$ with left-side PML padding, the discretization of $D_p^L$.\n\\item\n$\\{\\tilde{G}_q^M\\}_{q=1}^m$ \\quad The auxiliary Green's operators each of which maps the force on $X_q$ to the approximation of the wave field restricted to $X_q$.\n\\item\n$\\{\\tilde{G}_p^R\\}_{p=2}^m$ \\quad The auxiliary Green's operators each of which maps the left boundary value to the approximated wave field restricted to $X_p$, which simulates the right-transmission of the waves.\n\\item\n$\\{\\tilde{G}_p^L\\}_{p=1}^{m-1}$ \\quad The auxiliary Green's operators each of which maps the right boundary value to the approximated wave field restricted to $X_p$, which simulates the left-transmission of the waves.\n\\end{itemize}\n\n\\section{Preconditioner in 2D}\n\\label{sec:2D}\n\n\\subsection{Algorithm}\n\nThe domain of interest is $D=(0,1)^2$. We put PML on the two\nopposite sides of the boundary, $x_2=0$ and $x_2=1$, to illustrate the\nidea. 
The resulting equation is\n\\begin{align*}\n \\begin{dcases}\n \\left(\\partial_1^2+(s(x_2)\\partial_2)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)u(x)=f(x),&\\quad \\forall x=(x_1,x_2)\\in D,\\\\\n u(x)=0,&\\quad \\forall x\\in \\partial D. \n \\end{dcases}\n\\end{align*}\nWe discretize $D$ with step size $h=1\/(n+1)$ in each direction, which\nresults in the Cartesian grid\n\\begin{align*}\n X:=\\{(i_1h,i_2h):1\\le i_1,i_2\\le n\\}, \n\\end{align*} \nand the discrete equation\n\\begin{equation}\n \\label{eqn:2D}\n \\begin{gathered}\n \\dfrac{s_{i_2}}{h}\\left(\\dfrac{s_{i_2+1\/2}}{h}(u_{i_1,i_2+1}-u_{i_1,i_2})-\\dfrac{s_{i_2-1\/2}}{h}(u_{i_1,i_2}-u_{i_1,i_2-1})\\right)\\\\\n +\\dfrac{u_{i_1+1,i_2}-2u_{i_1,i_2}+u_{i_1-1,i_2}}{h^2}+\\dfrac{\\omega^2}{c_{i_1,i_2}^2}u_{i_1,i_2}=f_{i_1,i_2}, \\quad \\forall 1\\le i_1,i_2\\le n,\n \\end{gathered}\n\\end{equation}\nwhere the subscript $(i_1,i_2)$ means that the corresponding function\nis evaluated at $(i_1h,i_2h)$, and since $s(x_2)$ is a function of\n$x_2$ only, we omit the $i_1$ subscript.
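As a consistency check of this stencil, switching the PML off ($s\equiv 1$) and taking constant $c$ reduces the scheme to the standard 5-point Laplacian plus the zeroth-order term, for which the discrete sine mode $\sin(\pi x_1)\sin(\pi x_2)$ is an exact eigenvector. A small sketch with illustrative parameter values:

```python
import numpy as np

# With s = 1 and constant c, the 2D scheme is the 5-point Laplacian
# plus the omega^2/c^2 term; check it on an exact discrete eigenmode.
n = 8
h = 1.0 / (n + 1)
omega, c = 10.0, 1.0

T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2          # 1D second difference
I = np.eye(n)
A = np.kron(I, T) + np.kron(T, I) + (omega**2 / c**2) * np.eye(n * n)

x = np.arange(1, n + 1) * h
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x)).reshape(-1, order='F')
lam = 2.0 * (np.cos(np.pi * h) - 1.0) / h**2        # 1D eigenvalue of T
expected = (2.0 * lam + omega**2 / c**2) * u
assert np.allclose(A @ u, expected)
```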
$\\pmb u$ and $\\pmb f$ are\ndefined to be the column-major orderings of the discrete arrays $u$ and\n$f$ on the grid $X$\n\\begin{align*}\n \\pmb u:=[u_{1,1},\\dots,u_{n,1},\\dots,u_{n,n}]^T,\\quad \\pmb f:=[f_{1,1},\\dots,f_{n,1},\\dots,f_{n,n}]^T.\n\\end{align*}\nNow \\eqref{eqn:2D} can be written as $A\\pmb u=\\pmb f$.\n\nWe divide the grid into $m$ parts along the $x_2$ direction\n\\begin{align*}\n X_1 &:= \\{(i_1h,i_2h):1\\le i_1\\le n,1 \\le i_2 \\le \\gamma + b-1 \\}, \\\\\n X_p &:= \\{(i_1h,i_2h):1\\le i_1\\le n,\\gamma + (p-1) b \\le i_2 \\le \\gamma + pb-1 \\},\\quad p=2,\\dots, m-1, \\\\\n X_m &:= \\{(i_1h,i_2h):1\\le i_1\\le n,\\gamma + (m-1) b \\le i_2 \\le 2 \\gamma + mb-2 \\}, \n\\end{align*}\nand we define $\\pmb u_p$ and $\\pmb f_p$ as the column-major orderings of the\nrestrictions of $u$ and $f$ on $X_p$\n\\begin{align*}\n \\pmb u_1 &:= [u_{1,1},\\dots,u_{n,1},\\dots,u_{n,\\gamma+b-1}]^T,\\\\\n \\pmb u_p &:= [u_{1,\\gamma+(p-1)b},\\dots,u_{n,\\gamma+(p-1)b},\\dots,u_{n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n \\pmb u_m &:= [u_{1,\\gamma + (m-1) b},\\dots,u_{n,\\gamma + (m-1) b},\\dots,u_{n,2 \\gamma + mb-2}]^T,\\\\\n \\pmb f_1 &:= [f_{1,1},\\dots,f_{n,1},\\dots,f_{n,\\gamma+b-1}]^T,\\\\\n \\pmb f_p &:= [f_{1,\\gamma+(p-1)b},\\dots,f_{n,\\gamma+(p-1)b},\\dots,f_{n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n \\pmb f_m &:= [f_{1,\\gamma + (m-1) b},\\dots,f_{n,\\gamma + (m-1) b},\\dots,f_{n,2 \\gamma + mb-2}]^T,\n\\end{align*}\nthen $\\pmb u=G\\pmb f$ for $G=A^{-1}$ can be written as\n\\begin{align*}\n\\begin{bmatrix}\n\\pmb u_1\\\\\\pmb u_2\\\\ \\vdots\\\\\\pmb u_m\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nG_{1,1}&G_{1,2}&\\ldots&G_{1,m}\\\\\nG_{2,1}&G_{2,2}&\\ldots&G_{2,m}\\\\\n\\vdots&\\vdots&&\\vdots\\\\\nG_{m,1}&G_{m,2}&\\ldots&G_{m,m}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\pmb f_1\\\\\\pmb f_2\\\\ \\vdots\\\\\\pmb f_m\n\\end{bmatrix}.\n\\end{align*}\n\n\\paragraph{Auxiliary domains.}\nFollowing the 1D case, the extended subdomains and 
the\ncorresponding left and right boundaries are defined by\n\\begin{align*}\n D_q^{M} &= (0,1)\\times((q-1)bh,2\\eta+(qb-1)h),\\quad q=1,\\dots,m,\\\\\n D_p^{R} &= (0,1)\\times(\\eta+((p-1)b-1)h,2\\eta+(pb-1)h),\\quad p=2,\\dots,m,\\\\\n D_p^{L} &= (0,1)\\times((p-1)bh,\\eta+pbh),\\quad p=1,\\dots,m-1,\\\\\n \\partial^{L} D_p^{R} &= (0,1)\\times\\{\\eta+((p-1)b-1)h\\},\\quad p=2,\\dots,m,\\\\\n \\partial^{R} D_p^{L} &= (0,1)\\times\\{\\eta+pbh\\},\\quad p=1,\\dots,m-1.\n\\end{align*}\nThe extended grids for these domains are\n\\begin{align*}\n X_q^{M} &:= \\{(i_1h,i_2h):1\\le i_1\\le n,(q-1)b+1\\le i_2 \\le 2\\gamma+qb-1\\}, \\quad q=1,\\dots,m, \\\\\n X_p^{R} &:= \\{(i_1h,i_2h):1\\le i_1\\le n,\\gamma+(p-1)b\\le i_2\\le 2\\gamma+pb-2\\}, \\quad p=2,\\dots,m, \\\\\n X_p^{L} &:= \\{(i_1h,i_2h):1\\le i_1\\le n,(p-1)b+1\\le i_2\\le \\gamma+pb-1\\}, \\quad p=1,\\dots,m-1. \n\\end{align*}\n\n\\paragraph{Auxiliary problems.}\nFor $q=1,\\dots,m$, we define $H_q^{M} \\pmb v=\\pmb g$ to be the\ndiscretization on $X_q^{M}$ of the problem\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+(s_q^{M}(x_2)\\partial_2)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),\\quad &\\forall x\\in D_q^{M},\\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_q^{M}. \n \\end{dcases}\n\\end{align*}\nFor $p=2,\\dots,m$, $H_p^{R} \\pmb v=\\pmb g$ is the discretization on\n$X_p^{R}$ of the problem\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+(s_p^{R}(x_2)\\partial_2)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{R},\\\\\n v(x)=w(x_1),\\quad &\\forall x\\in \\partial^{L} D_p^{R}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{R} \\setminus \\partial^{L} D_p^{R},\n \\end{dcases}\n\\end{align*}\nwhere $\\pmb g:=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$ and $\\pmb w:= [w_1,\\dots,w_n]^T$ is the discrete value of $w(x_1)$.
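The boundary data $w$ enters these auxiliary problems only through the right-hand side. As a sketch of the construction just described (Python; the function name is illustrative, not from the paper):

```python
import numpy as np

def boundary_rhs(w, n_layers, h, side="L"):
    """Build the RHS of the auxiliary half-space problems: the discrete
    boundary values w (length n) occupy the first layer scaled by -1/h^2
    (side='L', i.e. g = (-1/h^2)[w^T,0,...,0]^T, used for H_p^R) or the
    last layer (side='R', the mirrored RHS used for H_p^L); all other
    layers are zero."""
    n = len(w)
    g = np.zeros(n * n_layers, dtype=complex)
    if side == "L":
        g[:n] = -np.asarray(w) / h**2
    else:
        g[-n:] = -np.asarray(w) / h**2
    return g
```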
Finally, for\n$p=1,\\dots,m-1$, $H_p^{L} \\pmb v=\\pmb g$ is the discretization on\n$X_p^{L}$ of the problem\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+(s_p^{L}(x_2)\\partial_2)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{L},\\\\\n v(x)=w(x_1),\\quad &\\forall x\\in \\partial^{R} D_p^{L}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{L} \\setminus \\partial^{R} D_p^{L},\n \\end{dcases}\n\\end{align*}\nwhere $\\pmb g:=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$ and $\\pmb w:= [w_1,\\dots,w_n]^T$.\n\n\\paragraph{Auxiliary Green's operators.}\nFor $q=1,\\dots,m$, we define $\\td{G}_q^{M}:\\pmb y\\mapsto \\pmb z$ to be\nthe operator defined by the following operations:\n\\begin{enumerate}\n\\item\n Introduce a vector $\\pmb g$ defined on $X_q^{M}$ by setting it equal to $\\pmb y$ on $X_q$ and to zero everywhere else. \n\\item\n Solve $H_q^{M} \\pmb v=\\pmb g$ on $X_q^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_q$. \n\\end{enumerate}\nFor $p=2,\\dots,m$, the operator $\\td{G}_p^{R}:\\pmb w\\mapsto \\pmb z$\nis given by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$. \n\\item\n Solve $H_p^{R} \\pmb v =\\pmb g$ on $X_p^{R}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\nFinally, for $p=1,\\dots,m-1$, $\\td{G}_p^{L}:\\pmb w\\mapsto \\pmb z$ is\ndefined as:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$. \n\\item\n Solve $H_p^{L} \\pmb v =\\pmb g$ on $X_p^{L}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$.
\n\\end{enumerate}\n\n\\paragraph{Putting together.}\nAs in the previous section, we introduce the left boundary value\n$\\pmb g^{L}$ and the right boundary value $\\pmb g^{R}$ for a\ncolumn-major ordering array\n$\\pmb g=[g_{1,1},\\dots,g_{s_1,1},\\dots,g_{s_1,s_2}]^T$\ninduced from some grid with size $s_1\\times s_2$ by\n\\begin{align*}\n \\pmb g^{L}:=[g_{1,1},\\dots,g_{s_1,1}]^T, \\quad \\pmb g^{R}:=[g_{1,s_2},\\dots,g_{s_1,s_2}]^T. \n\\end{align*}\nThen the approximations for $\\pmb u_p,p=1,\\dots,m$, can be defined step by step as\n\\begin{align*}\n \\td{\\pmb u}_{q,q} &:= \\td{G}_q^{M} \\pmb f_q,\\quad q=1,\\dots,m,\\\\\n \\td{\\pmb u}_{p,1:p-1} &:= \\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R},\\quad \\td{\\pmb u}_{p,1:p}^{R}:=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R},\\quad \\text{for } p=2,\\dots,m,\\\\\n \\td{\\pmb u}_{p,p+1:m} &:= \\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L},\\quad \\td{\\pmb u}_{p,p:m}^{L}:=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}, \\quad \\text{for } p=m-1,\\dots,1,\\\\\n \\td{\\pmb u}_1 &:= \\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m},\\\\\n \\td{\\pmb u}_p &:= \\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m},\\quad p=2,\\dots,m-1,\\\\\n \\td{\\pmb u}_m &:= \\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}. \n\\end{align*}\n\nTo solve the subproblems $H_q^{M} \\pmb v=\\pmb g$, $H_p^{R}\\pmb v=\\pmb\ng$ and $H_p^{L}\\pmb v=\\pmb g$, we notice that they are indeed quasi-1D\nproblems since $\\gamma$ and $b$ are small constants. Therefore,\nfor each one of them, we can reorder the system by grouping\nthe elements along dimension 2 first and then dimension 1, which\nresults in a banded linear system that can be solved by the LU\nfactorization efficiently.
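The layer-first reordering just described can be sketched as follows; this is a minimal Python illustration that uses SciPy's sparse LU as a stand-in for a dedicated banded solver, with the permutation made explicit.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def banded_factor(H, n, ell):
    """Factor a quasi-1D operator H on an n-by-ell grid (ell layers along
    x2, stored column-major with i1 fastest).  Grouping the ell layer
    indices fastest reduces the bandwidth from O(n) to O(ell), so the LU
    factorization costs O(n*ell^3) and each solve O(n*ell^2)."""
    idx = np.arange(n * ell).reshape(ell, n)  # idx[i2, i1] = i2*n + i1
    perm = idx.T.ravel()                      # new position i1*ell + i2
    P = sp.identity(n * ell, format="csr")[perm]
    lu = spla.splu((P @ H @ P.T).tocsc())
    # solve in the original ordering: v = P^T (P H P^T)^{-1} P g
    return lambda g: P.T @ lu.solve(P @ g)
```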
These factorization processes induce the\nfactorizations for the operators $\\td{G}_q^{M}$, $\\td{G}_p^{R}$ and\n$\\td{G}_p^{L}$ symbolically, which leads to our setup algorithm of the\npreconditioner in 2D as described in Algorithm \\ref{alg:2dsetup} and\nthe application algorithm as described in Algorithm \\ref{alg:2dapp}.\n\n\\begin{algorithm}[h!]\n \\caption{Construction of the 2D additive sweeping preconditioner of\n Equation \\eqref{eqn:2D}. Complexity\n $=O(n^2(b+\\gamma)^3\/b)=O(N(b+\\gamma)^3\/b)$.}\n \\label{alg:2dsetup}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE Construct the LU factorization of $H_q^{M}$, which defines $\\td{G}_q^{M}$.\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE Construct the LU factorization of $H_p^{R}$, which defines $\\td{G}_p^{R}$.\n \\ENDFOR\n \\FOR {$p=1,\\dots,m-1$}\n \\STATE Construct the LU factorization of $H_p^{L}$, which defines $\\td{G}_p^{L}$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[h!]\n \\caption{Computation of $\\td{\\pmb u}\\approx G \\pmb f$ using the\n preconditioner from Algorithm \\ref{alg:2dsetup}.
Complexity\n $=O(n^2(b+\\gamma)^2\/b)=O(N(b+\\gamma)^2\/b)$.}\n \\label{alg:2dapp}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{q,q}=\\td{G}_q^{M} \\pmb f_q$\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{p,1:p-1}=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R}$\\\\\n $\\td{\\pmb u}_{p,1:p}^{R}=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R}$\n \\ENDFOR\n \\FOR {$p=m-1,\\dots,1$}\n \\STATE\n $\\td{\\pmb u}_{p,p+1:m}=\\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L}$\\\\\n $\\td{\\pmb u}_{p,p:m}^{L}=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_1=\\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m}$\\\\\n \\FOR {$p=2,\\dots,m-1$}\n \\STATE\n $\\td{\\pmb u}_p=\\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_m=\\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}$ \n \\end{algorithmic}\n\\end{algorithm}\n\nTo analyze the complexity, we note that, in the setup process, there\nare $O(n\/b)$ subproblems, each of which is a quasi-1D problem with\n$O(\\gamma+b)$ layers along the second dimension. Therefore, the setup\ncost of each subproblem by the LU factorization is $O(n(\\gamma+b)^3)$\nand the application cost is $O(n(\\gamma+b)^2)$. So the total setup\ncost is $O(n^2(\\gamma+b)^3\/b)$. Besides, one needs to solve each\nsubproblem once during the application process, so the total\napplication cost is $O(n^2(\\gamma+b)^2\/b)$.\n\nThere are some differences when implementing the method in practice:\n\\begin{enumerate}\n\\item\n In the above setting, PMLs are put only on two opposite sides of the\n unit square for illustration purposes. In reality, PMLs can be put on\n other sides of the domain if needed.
As long as there are two\n opposite sides with PML boundary conditions, the method can be\n implemented.\n\\item\n The thickness of the auxiliary PMLs introduced in the interior part\n of the domain need not be the same as the thickness of the PML\n at the boundary. In fact, the auxiliary PMLs are typically thinner\n in order to improve efficiency.\n\\item\n The widths of the subdomains are completely arbitrary and they need\n not be equal. In practice, the widths can be chosen to be\n larger for subdomains where the velocity field varies strongly.\n\\item\n The symmetric version of the equation can be adopted to save memory\n and computational cost.\n\\end{enumerate}\n\n\\subsection{Numerical results}\n\\label{sec:2Dnumerical}\n\nHere, we present some numerical results in 2D to illustrate the\nefficiency of the algorithm. The proposed method is implemented in\nMATLAB and the tests are performed on a 2.0 GHz computer with 256 GB\nmemory. GMRES is used as the iterative solver with relative residual\nequal to $10^{-3}$ and restart value equal to $40$. PMLs are put on\nall sides of the unit square.
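The solver configuration just stated can be reproduced with SciPy's GMRES by wrapping the preconditioner application (the sweep of Algorithm \ref{alg:2dapp}) as a linear operator; `apply_prec` below is a hypothetical callable standing in for that sweep, and this sketch is not the authors' MATLAB driver.

```python
import scipy.sparse.linalg as spla

def sweep_gmres(A, apply_prec, f):
    """Solve A u = f with restarted GMRES (restart 40, relative residual
    1e-3, matching the settings in the text), preconditioned by the
    additive sweep f -> tilde u supplied as `apply_prec`."""
    M = spla.LinearOperator(A.shape, matvec=apply_prec)
    try:
        return spla.gmres(A, f, M=M, restart=40, rtol=1e-3, atol=0.0)
    except TypeError:  # older SciPy spells the relative tolerance `tol`
        return spla.gmres(A, f, M=M, restart=40, tol=1e-3, atol=0.0)
```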
The velocity fields tested are given in\nFigure \\ref{fig:2D}:\n\\begin{enumerate}[(a)]\n\\item\n A converging lens with a Gaussian profile at the center of the domain.\n\\item\n A vertical waveguide with a Gaussian cross-section.\n\\item\n A random velocity field.\n\\end{enumerate}\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Da.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Db.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Dc.pdf}}\n \\caption{The three velocity fields tested in 2D.}\n \\label{fig:2D}\n\\end{figure}\n\nFor each velocity field, two external forces are tested:\n\\begin{enumerate}[(a)]\n\\item\n A Gaussian point source centered at $(1\/2,1\/8)$.\n\\item\n A Gaussian wave packet with wavelength comparable to the typical\n wavelength of the domain. The packet is centered at $(1\/8,1\/8)$ and\n points in the direction $(1\/\\sqrt{2},1\/\\sqrt{2})$.\n\\end{enumerate}\n\n\\begin{table}[h!]\n \\centering\n \\begin{overpic}\n [width=0.45\\textwidth]{2Daa.pdf}\n \\put(40,6){force (a)}\n \\end{overpic}\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dab.pdf}\n \\put(40,6){force (b)}\n \\end{overpic}\n \\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (a)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 8.1669e$-$01 & 4 & 5.3199e$-$01 & 4 & 2.5647e$-$01 \\\\\n 32 & $255^2$ & 3.4570e$+$00 &4& 7.3428e$-$01 &4& 7.2807e$-$01\\\\ \n 64 & $511^2$ & 1.5150e$+$01 &5& 3.6698e$+$00 &4& 3.7239e$+$00\\\\ \n 128 & $1023^2$ & 6.2713e$+$01 &5& 1.6812e$+$01 &4& 1.6430e$+$01\\\\\n 256 & $2047^2$ & 2.6504e$+$02 &6& 7.8148e$+$01 &4& 5.6936e$+$01\\\\\n \\hline \n \\end{tabular} \n \\caption{Results for velocity field (a) in 2D.
Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n \\label{tab:2domegaa}\n\\end{table}\n\\begin{table}[h!]\n \\centering\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dba.pdf}\n \\put(40,6){force (a)}\n \\end{overpic}\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dbb.pdf}\n \\put(40,6){force (b)}\n \\end{overpic}\n \\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (b)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 7.0834e$-$01 &6& 2.9189e$-$01 &4& 1.9408e$-$01\\\\\n 32 & $255^2$ & 3.2047e$+$00 &8& 1.6147e$+$00 &4& 7.9303e$-$01\\\\ \n 64 & $511^2$ & 1.4079e$+$01 &8& 6.3057e$+$00 &4& 3.9008e$+$00\\\\ \n 128 & $1023^2$ & 6.0951e$+$01 &8& 2.9097e$+$01 &4& 1.5287e$+$01\\\\\n 256 & $2047^2$ & 2.6025e$+$02 &8& 1.1105e$+$02 &5& 7.2544e$+$01\\\\\n \\hline \n \\end{tabular} \n \\caption{Results for velocity field (b) in 2D. 
Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n \\label{tab:2domegab}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n [width=0.45\\textwidth]{2Dca.pdf}\n \\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n [width=0.45\\textwidth]{2Dcb.pdf}\n \\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (c)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 7.0495e$-$01 &5& 2.4058e$-$01 &6& 2.8347e$-$01\\\\\n 32 & $255^2$ & 3.1760e$+$00 &5& 1.0506e$+$00 &5& 9.9551e$-$01\\\\ \n 64 & $511^2$ & 1.4041e$+$01 &6& 4.7083e$+$00 &7& 6.7852e$+$00\\\\\n 128 & $1023^2$ & 6.1217e$+$01 &6& 1.8652e$+$01 &6& 1.9792e$+$01\\\\\n 256 & $2047^2$ & 2.5762e$+$02 &8& 1.1214e$+$02 &6& 8.6936e$+$01\\\\\n \\hline \n\\end{tabular} \n\\caption{Results for velocity field (c) in 2D. Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n\\label{tab:2domegac}\n\\end{table}\n\nIn these tests, each typical wavelength is discretized with 8\npoints. The widths of the PML at the boundary and of the PMLs introduced\nin the interior parts of the domain are both $9h$, i.e., $\\gamma=9$. The number of layers in each interior subdomain is $b=8$,\nthe number of layers in the leftmost subdomain is $b+\\gamma-1=16$, and\nthat in the rightmost is $b+\\gamma-2=15$. \n\nWe vary the typical wave number $\\omega\/(2\\pi)$ and test the behavior\nof the algorithm. The test results are presented in Tables\n\\ref{tab:2domegaa}, \\ref{tab:2domegab} and\n\\ref{tab:2domegac}. $T_\\text{setup}$ is the setup time of the\nalgorithm in seconds. $T_{\\text{solve}}$ is the total solve time in\nseconds and $N_{\\text{iter}}$ is the iteration number.
From these\ntests we see that the setup time, as well as the solve time per\niteration, scales like $O(N)$, which is consistent with the algorithm\ncomplexity analysis. The iteration number remains constant or grows at\nmost logarithmically, which shows the efficiency of the\npreconditioner.\n\n\\section{Preconditioner in 3D}\n\\label{sec:3D}\n\n\\subsection{Algorithm}\n\nIn this section we briefly state the preconditioner in the 3D case. The\ndomain of interest is $D=(0,1)^3$. PMLs are put on two opposite faces\nof the unit cube, $x_3=0$ and $x_3=1$, which results in the equation\n\\begin{align*}\n \\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)u(x)=f(x),&\\quad \\forall x=(x_1,x_2,x_3)\\in D,\\\\\n u(x)=0,&\\quad \\forall x\\in \\partial D. \n \\end{dcases}\n\\end{align*}\nDiscretizing $D$ with step size $h=1\/(n+1)$ gives the grid\n\\begin{align*}\n X:=\\{(i_1h,i_2h,i_3h):1\\le i_1,i_2,i_3\\le n\\}, \n\\end{align*} \nand the discrete equation\n\\begin{equation}\n \\label{eqn:3D}\n \\begin{gathered}\n \\dfrac{s_{i_3}}{h}\\left(\\dfrac{s_{i_3+1\/2}}{h}(u_{i_1,i_2,i_3+1}-u_{i_1,i_2,i_3})-\\dfrac{s_{i_3-1\/2}}{h}(u_{i_1,i_2,i_3}-u_{i_1,i_2,i_3-1})\\right)\\\\\n +\\dfrac{u_{i_1+1,i_2,i_3}-2u_{i_1,i_2,i_3}+u_{i_1-1,i_2,i_3}}{h^2}+\\dfrac{u_{i_1,i_2+1,i_3}-2u_{i_1,i_2,i_3}+u_{i_1,i_2-1,i_3}}{h^2}\\\\\n +\\dfrac{\\omega^2}{c_{i_1,i_2,i_3}^2}u_{i_1,i_2,i_3}=f_{i_1,i_2,i_3}, \\quad \\forall 1\\le i_1,i_2,i_3\\le n. \n \\end{gathered}\n\\end{equation}\n$\\pmb u$ and $\\pmb f$ are defined as the column-major orderings of $u$ and $f$ on the grid $X$\n\\begin{align*}\n \\pmb u:=[u_{1,1,1},\\dots,u_{n,1,1},\\dots,u_{n,n,1},\\dots,u_{n,n,n}]^T,\\quad \\pmb f:=[f_{1,1,1},\\dots,f_{n,1,1},\\dots,f_{n,n,1},\\dots,f_{n,n,n}]^T.
\n\\end{align*}\n$X$ is divided into $m$ parts along the $x_3$ direction\n\\begin{align*}\n X_1 &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,1 \\le i_3 \\le \\gamma + b-1 \\}, \\\\\n X_p &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma + (p-1) b \\le i_3 \\le \\gamma + pb-1 \\}, \\quad p=2,\\dots, m-1, \\\\\n X_m &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma + (m-1) b \\le i_3 \\le 2 \\gamma + mb-2 \\}. \n\\end{align*}\n$\\pmb u_p$ and $\\pmb f_p$ are the column-major ordering restrictions of $u$ and $f$ on $X_p$ \n\\begin{align*}\n\\pmb u_1 &:= [u_{1,1,1},\\dots,u_{n,1,1},\\dots,u_{n,n,1},\\dots,u_{n,n,\\gamma+b-1}]^T,\\\\\n\\pmb u_p &:= [u_{1,1,\\gamma+(p-1)b},\\dots,u_{n,1,\\gamma+(p-1)b},\\dots,u_{n,n,\\gamma+(p-1)b},\\dots,u_{n,n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n\\pmb u_m &:= [u_{1,1,\\gamma + (m-1) b},\\dots,u_{n,1,\\gamma + (m-1) b},\\dots,u_{n,n,\\gamma + (m-1) b},\\dots,u_{n,n,2 \\gamma + mb-2}]^T,\\\\\n\\pmb f_1 &:= [f_{1,1,1},\\dots,f_{n,1,1},\\dots,f_{n,n,1},\\dots,f_{n,n,\\gamma+b-1}]^T,\\\\\n\\pmb f_p &:= [f_{1,1,\\gamma+(p-1)b},\\dots,f_{n,1,\\gamma+(p-1)b},\\dots,f_{n,n,\\gamma+(p-1)b},\\dots,f_{n,n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n\\pmb f_m &:= [f_{1,1,\\gamma + (m-1) b},\\dots,f_{n,1,\\gamma + (m-1) b},\\dots,f_{n,n,\\gamma + (m-1) b},\\dots,f_{n,n,2 \\gamma + mb-2}]^T. 
\n\\end{align*}\n\n\\paragraph{Auxiliary domains.}\nThe extended subdomains, the extended grids, and the corresponding\nleft and right boundaries are defined by\n\\begin{align*}\nD_q^{M} &:= (0,1)\\times(0,1)\\times((q-1)bh,2\\eta+(qb-1)h), \\quad q=1,\\dots,m,\\\\\nD_p^{R} &:= (0,1)\\times(0,1)\\times(\\eta+((p-1)b-1)h,2\\eta+(pb-1)h), \\quad p=2,\\dots,m,\\\\\nD_p^{L} &:= (0,1)\\times(0,1)\\times((p-1)bh,\\eta+pbh), \\quad p=1,\\dots,m-1,\\\\\n\\partial^{L} D_p^{R} &:= (0,1)\\times(0,1)\\times\\{\\eta+((p-1)b-1)h\\}, \\quad p=2,\\dots,m,\\\\\n\\partial^{R} D_p^{L} &:= (0,1)\\times(0,1)\\times\\{\\eta+pbh\\}, \\quad p=1,\\dots,m-1, \\\\\nX_q^{M} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,(q-1)b+1\\le i_3 \\le 2\\gamma+qb-1\\}, \\quad q=1,\\dots,m, \\\\\nX_p^{R} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma+(p-1)b\\le i_3\\le 2\\gamma+pb-2\\}, \\quad p=2,\\dots,m, \\\\\nX_p^{L} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,(p-1)b+1\\le i_3\\le \\gamma+pb-1\\}, \\quad p=1,\\dots,m-1. 
\n\\end{align*}\n\n\\paragraph{Auxiliary problems.}\nFor each $q=1,\\dots,m$, $H_q^{M} \\pmb v=\\pmb g$ is defined as the\ndiscretization on $X_q^{M}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_q^{M}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),&\\quad \\forall x\\in D_q^{M},\\\\\n v(x)=0,&\\quad \\forall x\\in \\partial D_q^{M}. \n \\end{dcases}\n\\end{align*}\nFor $p=2,\\dots,m$, $H_p^{R} \\pmb v=\\pmb g$ is defined as the\ndiscretization on $X_p^{R}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_p^{R}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{R},\\\\\n v(x)=w(x_1,x_2),\\quad &\\forall x\\in \\partial^{L} D_p^{R}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{R} \\setminus \\partial^{L} D_p^{R}, \n \\end{dcases}\n\\end{align*}\nwhere $\\pmb\ng:=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$ and $\\pmb w := [w_{1,1},\\dots,w_{n,1},\\dots,w_{n,n}]^T$ is the discrete boundary value. Finally,\nfor $p=1,\\dots,m-1$, $H_p^{L} \\pmb v=\\pmb g$ is the discretization on\n$X_p^{L}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_p^{L}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{L},\\\\\n v(x)=w(x_1,x_2),\\quad &\\forall x\\in \\partial^{R} D_p^{L}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{L} \\setminus \\partial^{R} D_p^{L},\n \\end{dcases}\n\\end{align*}\nwhere $\\pmb g:=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$ and $\\pmb w := [w_{1,1},\\dots,w_{n,1},\\dots,w_{n,n}]^T$. \n\n\\paragraph{Auxiliary Green's operators.}\nFor $q=1,\\dots,m$, $\\td{G}_q^{M}:\\pmb y\\mapsto \\pmb z$ is defined\nusing the following operations:\n\\begin{enumerate}\n\\item\n Introduce a vector $\\pmb g$ defined on $X_q^{M}$ by setting it equal to $\\pmb y$ on $X_q$ and to zero everywhere else. \n\\item\n Solve $H_q^{M} \\pmb v=\\pmb g$ on $X_q^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_q$.
\n\\end{enumerate}\nFor $p=2,\\dots,m$, $\\td{G}_p^{R}:\\pmb w\\mapsto \\pmb z$ is given by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$. \n\\item\n Solve $H_p^{R} \\pmb v =\\pmb g$ on $X_p^{R}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\nFinally, for $p=1,\\dots,m-1$, the operator $\\td{G}_p^{L}:\\pmb w\\mapsto \\pmb z$ is defined by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$. \n\\item\n Solve $H_p^{L} \\pmb v =\\pmb g$ on $X_p^{L}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\n\n\\paragraph{Putting together.}\nIn the 3D case, $\\pmb g^{L}$ and $\\pmb g^{R}$ for the column-major ordering array\\\\\n$\\pmb\ng=[g_{1,1,1},\\dots,g_{s_1,1,1},\\dots,g_{s_1,s_2,1},\\dots,g_{s_1,s_2,s_3}]^T$\ninduced from some 3D grid with size\\\\\n$s_1\\times s_2 \\times s_3$ are\ngiven by\n\\begin{align*}\n \\pmb g^{L}:=[g_{1,1,1},\\dots,g_{s_1,1,1},\\dots,g_{s_1,s_2,1}]^T, \\quad \\pmb g^{R}:=[g_{1,1,s_3},\\dots,g_{s_1,1,s_3},\\dots,g_{s_1,s_2,s_3}]^T. \n\\end{align*}\n\nThe subproblems $H_q^{M} \\pmb v=\\pmb g$, $H_p^{R} \\pmb v=\\pmb g$ and\n$H_p^{L} \\pmb v=\\pmb g$ are quasi-2D. To solve them, we group the\nelements along dimension 3 first, and then apply the nested dissection\nmethod \\cite{george1973nested,duff1983multifrontal} to them, as in \\cite{sweeppml}. This gives the setup process of the 3D\npreconditioner in Algorithm \\ref{alg:3dsetup} and the application\nprocess in Algorithm \\ref{alg:3dapp}.\n\\begin{algorithm}[h!]\n \\caption{Construction of the 3D additive sweeping preconditioner of\n the system \\eqref{eqn:3D}.
Complexity\n $=O(n^4(b+\\gamma)^3\/b)=O(N^{4\/3}(b+\\gamma)^3\/b)$.}\n \\label{alg:3dsetup}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE Construct the nested dissection factorization of $H_q^{M}$, which defines $\\td{G}_q^{M}$.\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE Construct the nested dissection factorization of $H_p^{R}$, which defines $\\td{G}_p^{R}$.\n \\ENDFOR\n \\FOR {$p=1,\\dots,m-1$}\n \\STATE Construct the nested dissection factorization of $H_p^{L}$, which defines $\\td{G}_p^{L}$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[h!]\n \\caption{Computation of $\\td{\\pmb u}\\approx G \\pmb f$ using the\n preconditioner from Algorithm \\ref{alg:3dsetup}. Complexity\n $=O(n^3\\log n (b+\\gamma)^2\/b)=O(N\\log N(b+\\gamma)^2\/b)$.}\n \\label{alg:3dapp}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{q,q}=\\td{G}_q^{M} \\pmb f_q$\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{p,1:p-1}=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R}$\\\\\n $\\td{\\pmb u}_{p,1:p}^{R}=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R}$\n \\ENDFOR\n \\FOR {$p=m-1,\\dots,1$}\n \\STATE\n $\\td{\\pmb u}_{p,p+1:m}=\\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L}$\\\\\n $\\td{\\pmb u}_{p,p:m}^{L}=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_1=\\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m}$\\\\\n \\FOR {$p=2,\\dots,m-1$}\n \\STATE\n $\\td{\\pmb u}_p=\\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_m=\\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}$ \n \\end{algorithmic}\n\\end{algorithm}\n\nFor the algorithm analysis, we notice that each quasi-2D subproblem\nhas $O(\\gamma+b)$ layers along the third dimension. Therefore, the\nsetup cost for each subproblem is $O((\\gamma+b)^3n^3)$ and the\napplication cost is $O((\\gamma+b)^2n^2\\log n)$.
Taking the total\nnumber of subproblems into account, the total setup cost for the 3D\npreconditioner is $O(n^4(b+\\gamma)^3\/b)$ and the total application\ncost is $O(n^3\\log n (b+\\gamma)^2\/b)$.\n\n\\subsection{Numerical results}\n\\label{sec:3Dnumerical}\n\nHere we present the numerical results in 3D. All the settings and\nnotations are kept the same as in Section \\ref{sec:2Dnumerical}\nunless otherwise stated. The PMLs are put on all sides of the domain\nand the symmetric version of the equation is adopted to save memory\ncost. The PML width is $\\eta= 9h$ for the boundary and is\n$\\eta_\\text{aux}=5h$ for the interior auxiliary ones. The number of\nlayers in each subdomain is $b=4$ for the interior ones,\n$b+\\gamma-1=12$ for the leftmost one and $b+\\gamma-2=11$ for the\nrightmost one.\n\nThe velocity fields tested are (see Figure \\ref{fig:3D}): \n\\begin{enumerate}[(a)]\n\\item\n A converging lens with a Gaussian profile at the center of the domain.\n\\item\n A vertical waveguide with a Gaussian cross-section.\n\\item\n A random velocity field.\n\\end{enumerate}\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Da.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Db.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Dc.pdf}}\n \\caption{The three velocity fields tested in 3D.}\n \\label{fig:3D}\n\\end{figure}\n\nThe forces tested for each velocity field are: \n\\begin{enumerate}[(a)]\n\\item\n A Gaussian point source centered at $(1\/2,1\/2,1\/4)$.\n\\item\n A Gaussian wave packet with wavelength comparable to the typical\n wavelength of the domain.
The packet is centered at $(1\/2,1\/4,1\/4)$ and\n points in the direction $(0,1\/\\sqrt{2},1\/\\sqrt{2})$.\n\\end{enumerate}\n\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Daa.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dab.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (a)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.3304e$+$01 &3& 2.9307e$+$00 &4& 3.7770e$+$00\\\\\n10 & $79^3$ & 3.2935e$+$02 &3& 3.6898e$+$01 &4& 4.6176e$+$01\\\\ \n20 & $159^3$ & 4.2280e$+$03 &4& 4.3999e$+$02 &4& 4.6941e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (a) in 3D. Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegaa}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dba.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dbb.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (b)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.1315e$+$01 &3& 2.7740e$+$00 &3& 2.7718e$+$00\\\\\n10 & $79^3$ & 3.4256e$+$02 &4& 4.4286e$+$01 &3& 3.4500e$+$01\\\\ \n20 & $159^3$ & 4.3167e$+$03 &5& 5.7845e$+$02 &4& 4.6462e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (b) in 3D.
Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegab}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dca.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dcb.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (c)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.1063e$+$01 &4& 3.8074e$+$00 &4& 3.7975e$+$00\\\\\n10 & $79^3$ & 3.4735e$+$02 &4& 4.4550e$+$01 &4& 4.5039e$+$01\\\\ \n20 & $159^3$ & 4.3391e$+$03 &4& 4.4361e$+$02 &5& 5.8090e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (c) in 3D. Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegac}\n\\end{table}\n\nThe results are given in Tables \\ref{tab:3domegaa}, \\ref{tab:3domegab}\nand \\ref{tab:3domegac}. From these tests we see that the iteration\nnumber grows mildly as the problem size grows. We also notice that the\nsetup cost scales even better than $O(N^{4\/3})$, mainly because MATLAB\nperforms dense linear algebra operations in a parallel way, which\ngives some extra advantages to the nested dissection algorithm as the\nproblem size grows.\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\nIn this paper, we proposed a new additive sweeping preconditioner for\nthe Helmholtz equation based on the PML. When combined with the\nstandard GMRES solver, the iteration number grows mildly as the\nproblem size grows. The novelty of this approach is that the unknowns\nare split in an additive way and the boundary values of the\nintermediate results are utilized directly.
The disadvantage is that,\nfor each subdomain, three subproblems need to be built up, which is\ntime-consuming compared to \\cite{sweeppml} and\n\\cite{stolk2013domaindecomp}. However, the costly parts of the\nalgorithm, i.e., the whole setup process and the solve processes of the\nsubproblems $H_q^{M} \\pmb v=\\pmb g$, can be done in parallel. The only\nparts that must be implemented sequentially are the accumulations of\nthe left-going and right-going waves, where only the solve processes\nof the subproblems $H_p^{L} \\pmb v=\\pmb g$ and $H_p^{R} \\pmb v=\\pmb g$\nare involved, which are the cheapest parts of the algorithm. Besides,\nwe think that the whole approximation process is simple and\nstructurally clear from a physical point of view, and the idea might be\neasy to generalize to other equations.\n\nThere are also some other directions for potential\nimprovements. First, other numerical schemes for the equation and other\napproximations of the Sommerfeld radiation condition can be used to\ndevelop more efficient versions of this additive\npreconditioner. Second, a parallel version of the nested dissection\nalgorithm can be employed to solve large-scale problems. Last, in the\n3D case, the quasi-2D subproblems can be solved recursively by\nsweeping along the $x_2$ direction with the same technique, which\nreduces the theoretical setup cost to $O(N)$ and the application cost\nto $O(N)$. However, compared to \\cite{sweeppml}, the coefficient of\nthe complexity in this new method is larger, so it is not clear\nwhether or not the recursive approach will be more efficient\npractically. Nevertheless, it is of great theoretical interest to look\ninto it.\n\n\\bibliographystyle{abbrv}\n\n\\section{Introduction} \\label{sec: intro}\nInterstellar dust grains participate in many important physical and chemical processes in the interstellar medium (ISM).
For example, the surface of dust is the catalyst for formation of some molecules, especially $\\rm H_2$ \\citep{GOULD63, CAZAUX04}. Dust also shields gas from the interstellar radiation field (ISRF), and allows the low temperatures crucial to star formation to emerge deep within molecular clouds \\citep{KRUMHOLZ11, YAMASAWA11, GLOVER12}. Dust plays an important role in the observed spectral energy distribution (SED) of galaxies: it absorbs and scatters starlight, and reemits the absorbed energy at infrared (IR) wavelengths \\citep{CALZETTI01, BUAT12}. Thus, it is important to understand the properties of dust before we can fully understand the ISM and the observed SED from galaxies.\n\nThe amount of interstellar dust depends on the balance between dust formation and dust destruction. The mechanisms of dust destruction include supernovae (SNe) shocks, thermal evaporation, cosmic rays, and dust incorporated into newly formed stars \\citep{DWEK98, HIRASHITA99}. The dust formation mechanisms include accretion of metals in the ISM onto existing dust grains, formation of new dust grains in the winds of AGB stars, and dust formation in type II SNe \\citep{DWEK98, ASANO13}. Different dominant dust destruction and formation mechanisms would result in a different dust-to-gas mass ratio (DGR):\n\\begin{equation}\n{\\rm DGR} \\equiv \\Sigma_d \/ \\Sigma_{\\rm gas}\n\\end{equation}\nand dust-to-metals ratio (DTM):\n\\begin{equation}\n {\\rm DTM} \\equiv {\\rm DGR} \/ Z,\n\\end{equation}\nwhere $\\Sigma_d$ is the dust mass surface density, $\\Sigma_{\\rm gas}$ is the total gas mass surface density, which includes the contribution from \\textsc{HI}, H$_2$ and He, and $Z$ is the metallicity. Note that some authors replace $\\Sigma_{\\rm gas}$ with hydrogen mass surface density in the definition of DGR, e.g., \\citet{DRAINE14, GORDON14}. 
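As a concrete reading of the definitions above, the sketch below computes the DGR and DTM from surface densities. The conversion from $\rm 12+\logt({\rm O\/H})$ to a metal mass fraction is an added assumption, not part of the definitions: it takes solar calibration values ($Z_\sun \approx 0.0134$, $\rm 12+\logt({\rm O\/H})_\sun \approx 8.69$) and assumes all metal abundances scale with oxygen.

```python
Z_SUN = 0.0134      # assumed solar metal mass fraction (calibration choice)
LOG_OH_SUN = 8.69   # assumed solar 12 + log10(O/H) (calibration choice)

def dgr(sigma_dust, sigma_gas):
    """Dust-to-gas ratio: Sigma_d / Sigma_gas (gas includes HI, H2, He)."""
    return sigma_dust / sigma_gas

def metallicity(log_oh):
    """Metal mass fraction Z from 12+log10(O/H), assuming solar-scaled
    abundance patterns."""
    return Z_SUN * 10.0 ** (log_oh - LOG_OH_SUN)

def dtm(sigma_dust, sigma_gas, log_oh):
    """Dust-to-metals ratio: DTM = DGR / Z."""
    return dgr(sigma_dust, sigma_gas) / metallicity(log_oh)
```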
Other than the formation and destruction mechanisms affecting DGR and DTM, the DTM itself can directly impact the ISM dust accretion rate \\citep{DWEK98}. Thus, studying DGR and DTM provides key insights into the dust life cycle.\n\nTheoretical dust life cycle models yield varying predictions for the DTM as a function of metallicity and local environment. Models in \\citet{SODROSKI97, DWEK98} show that the DGR gradient scales linearly with the metallicity gradient, and the DTM is nearly a constant. This can be achieved by a constant rate of dust formation and destruction, which results in a constant fraction of metals incorporated into dust, and thus a constant DTM at all chemical evolution stages \\citep{GALLIANO08}. Other studies show that the DTM is not always a constant, but instead varies in distinct stages as metallicity increases. At low metallicity, ISM accretion is less effective and the dust production rate is dominated by stellar ejecta, which could result in a locally constant DTM in this low metallicity regime \\citep{HIRASHITA11}. Above a certain critical metallicity, the efficiency of dust accretion may increase, which would result in a DTM increasing with metallicity \\citep{ZHUKOVSKA08, HIRASHITA11, FELDMANN15}. The critical metallicity depends on the model and choice of parameters, and usually falls between $\\rm 12+\\logt({\\rm O\/H}) = 7.5$ and $8.5$ \\citep{HIRASHITA99, ZHUKOVSKA08, HIRASHITA11, ASANO13, ZHUKOVSKA16}.\n\nSeveral observational studies support a constant DTM. In \\citet{ISSA90}, the authors collated the DGR gradients and metallicity gradients from previous studies of M31, M33, and M51, and reached the conclusion that the slopes of DGR and metallicity with galactic radius are consistent with each other. In \\citet{LEROY11}, the authors followed the approaches in \\citet{DRAINE07} to derive the dust masses in Local Group galaxies. They showed that the DTM is a constant across $8.0 \\la \\rm 12+\\logt({\\rm O\/H}) \\la 9.0$. 
In \\citet{DRAINE14}, the authors fit the IR SED in M101 to a renormalized version of the dust model described in \\citet{DRAINE07}. The authors showed that their derived DGR scales linearly with metallicity where metallicity measurements are reported by \\citet{ZURITA12}. Importantly, the relation between dust and metallicity is consistent with $M_d\/M_H \\sim 0.0091 Z\/Z_\\sun$, a prediction from depletion conditions in the cloud toward $\\zeta$Oph in the Milky Way (MW) \\citep{DRAINE11, DRAINE14}.\n\nThere are also observational results supporting a varying DTM. In \\citet{LISENFELD98}, the authors studied the DTM in 44 dwarf galaxies, and found a varying DTM. In \\citet{HIRASHITA02}, the authors studied 16 blue compact dwarf (BCD) galaxies, and found that $\\log_{10}(\\rm DGR)$ spreads from $-3.3$ to $-4.6$ within $\\rm 7.9 < \\rm 12+\\logt({\\rm O\/H}) < 8.6$, indicating a variable DTM because the slope between DGR and metallicity is not unity. The authors hypothesized that this phenomenon is the result of the variation in dust destruction efficiency by SNe, which depends on the star formation history of the region. \\citet{HUNT05} also showed a 2 dex spread of DGR at $8 \\leq \\rm 12+\\logt({\\rm O\/H}) \\leq 9$. They also reported that the BCD SBS 0335$-$052, which has a metallicity $\\rm 12+\\logt({\\rm O\/H}) = 7.32$, has an extremely low dust mass, two orders of magnitude below a linear trend with metallicity. Similarly, \\citet{HERRERA-CAMUS12} and \\citet{FISHER14} showed that the local dwarf galaxy I Zw 18 has a DGR two orders of magnitude below the linear trend derived from local galaxies. In \\citet{REMY-RUYER14}, the authors compiled DGR measurements for 126 galaxies, with 30\\% of their sample having $\\rm 12+\\logt({\\rm O\/H})\\leq 8.0$. They showed that there might be a discontinuity in the linear DTM relation at oxygen abundance $\\rm 12+\\logt({\\rm O\/H}) = 8$, and the galaxies below that metallicity have $\\rm DGR \\propto Z^{3.1}$. 
That is, instead of a simple linear relation between DGR and $Z$, the authors suggest a broken power law. In \\citet{ROMAN-DUVAL17}, the authors showed that the DGR changes by factors of 3 to 7 in the Magellanic Clouds, where metallicity is considered to be constant. This result also indicates a variable DTM. In \\citet{GIANNETTI17}, the authors found ${\\rm DGR}(Z) \\propto Z^{1.4}$ in a sample set composed of 23 massive and dense star-forming regions in the far outer MW.\n\nIn this work, we revisit the possible variation of DTM in a single galaxy, M101. There are several benefits to studying DTM within a single galaxy. First, metallicity measurements are calibrated more uniformly within one galaxy than across galaxies, which is crucial for studying DTM variation \\citep{REMY-RUYER14, BERG15, CROXALL16}. Moreover, focusing on one galaxy avoids the problem, present in galaxy-integrated results, that the DTM can be underestimated by integrating over dust-poor HI in outer disks \\citep{DRAINE07}. By comparing the DTM within one galaxy and across galaxies, we will also be able to determine whether the possible variation in DTM depends more on local physical properties or galactic properties. Lastly, observations within one galaxy minimize differences in MW foreground, calibration, and background level estimation, which makes the data more uniform.\n\nM101 is an ideal target for this study for four reasons: 1) M101 has one of the most detailed studies of its metallicity from the Chemical Abundances Of Spirals survey \\citep[CHAOS,][]{BERG15, CROXALL16}, based on electron temperature ($T_e$) derived from auroral line measurements. 2) M101 has the largest metallicity gradient among those galaxies where direct $T_e$-based metallicity measurements are available, ranging over $7.5 \\la \\rm 12+\\logt({\\rm O\/H}) \\la 8.8$ \\citep{CROXALL16}. 
This range extends from values as high as the solar neighborhood down to the turning point of the \\citet{REMY-RUYER14} broken power law. 3) M101 has a good radial resolution even in far-infrared (FIR) observations because it is nearby (distance $\\sim \\rm 6.7~Mpc$), physically large (the 25th magnitude isophote in B band, or r$_{25}$, is $0.2^\\circ=23.4~{\\rm kpc}$ at distance $\\rm 6.7~Mpc$), and relatively face-on \\citep[inclination $\\approx 16^\\circ$,][]{FREEMAN01, MAKAROV14}. 4) M101 also has high-sensitivity \\textsc{Hi} and CO maps \\citep{WALTER08, LEROY09}, which let us map the total gas distribution.\n\nThis paper is organized as follows. \\secref{sec: observations} presents FIR, \\textsc{Hi}, CO, and other supporting data used in this study, with our data processing procedures. The five modified blackbody (MBB) model variants and the fitting methodologies are described in \\secref{Sec: methods}. We present our fitting results in \\secref{sec: results}, and compare them with known physical limitations and statistical properties. In \\secref{sec: discussions}, we discuss the implications of our results, and the relation between our DTM and previous findings. Finally, we give our conclusions in \\secref{sec: conclusions}.\n\n\\section{Observations} \\label{sec: observations}\n\\subsection{Data}\\label{sec: data}\nIn this section, we introduce the multi-wavelength measurements of M101 from several surveys and their uncertainties, which we adopted for this study. The physical properties (position, distance and orientation) of M101 adopted for this study are listed in Table \\ref{tab: samples}.\n\\begin{deluxetable}{lll}\n\\tablecaption{Properties of M101. \\label{tab: samples}}\n\\tablehead{\\colhead{Property} & \\colhead{Value} & \\colhead{Reference}}\n\\startdata\nR.A. 
(2000.0)\t& 14h~03m~12.6s & (1) \\\\\nDec (2000.0)\t& +54d~20m~57s & (1) \\\\\nDistance\t\t& 6.7~Mpc\\tablenotemark{$\\dagger$} &(2) \\\\\nr$_{25}$\t\t\t& $0^\\circ.19990$ & (1) \\\\\nInclination\t\t& $16^\\circ$ & (1) \\\\\nP.A.\t\t\t& $38^\\circ$ &(3) \\\\\n$\\alpha_{{\\rm CO}~J=(2-1)}$\\tablenotemark{*}\t& $\\rm (2.9\/R_{21})~M_\\sun~pc^{-2} (K~km~s^{-1})^{-1}$ & (4) \\\\\n$R_{21}$ & 0.7 & (4) \\\\\n\\enddata\n\\tablenotetext{\\dagger}{Consistent with the value in \\citet{SHAPPEE11}.}\n\\tablenotetext{*}{See \\secref{subsec: CO} for discussion of the $\\alpha_{\\rm CO}$ factor we use.}\n\\tablerefs{(1) HyperLeda database (\\url{http:\/\/leda.univ-lyon1.fr\/}), \\citet{MAKAROV14}; (2) \\citet{FREEMAN01}; (3) \\citet{SOFUE99}; (4) \\citet{SANDSTROM13}.}\n\\end{deluxetable}\n\n\\subsubsection{Infrared Imaging} \\label{subsec: IR}\nWe use FIR images from the ``Key Insights on Nearby Galaxies: A Far-Infrared Survey with \\textit{Herschel}'' survey \\citep[KINGFISH,][]{KENNICUTT11} to fit dust surface densities in M101. KINGFISH imaged 61 nearby galaxies in the FIR with the \\textit{Herschel Space Observatory} \\citep{PILBRATT10}, covering $70~\\micron$, $100~\\micron$, and $160~\\micron$ from the Photoconductor Array Camera and Spectrometer \\citep[PACS,][]{POGLITSCH10}, and $250~\\micron$, $350~\\micron$, and $500~\\micron$ from the Spectral and Photometric Imaging Receiver \\citep[SPIRE,][]{GRIFFIN10}. We do not include the $70~\\micron$ flux in our SED modeling because stochastic heating from small dust grains makes a non-negligible contribution in that spectral range \\citep{DRAINE07}, which is not accounted for by the simple SED models we employ in this study. The PACS images were processed from level 1 with \\texttt{Scanamorphos v16.9} \\citep{ROUSSEL13} by the KINGFISH team. 
The SPIRE images were processed with \\texttt{HIPE} \\citep{OTT10} version \\texttt{spire-8.0.3287} and from level 1 to final maps with \\texttt{Scanamorphos v17.0} \\citep{ROUSSEL13} by the KINGFISH team. According to the KINGFISH DR3 user guide \\citep{KINGFISH13}, the SPIRE images have been multiplied by correction factors of 0.9282, 0.9351, and 0.9195 for SPIRE250, SPIRE350, and SPIRE500, respectively, due to improved effective beam size estimation. The FWHMs are approximately $7\\arcsec.0 = 0.23~\\rm kpc$, $11\\arcsec.2 = 0.36~\\rm kpc$, $18\\arcsec.2 = 0.59~\\rm kpc$, $24\\arcsec.9 = 0.81~\\rm kpc$, and $36\\arcsec.1 = 1.17~\\rm kpc$ for the 100\\micron, 160\\micron, 250\\micron, 350\\micron, and 500\\micron\\ band images, respectively.\n\n\\subsubsection{\\textsc{Hi}} \\label{subsec: HI}\nWe obtain \\textsc{Hi} 21 cm line data from ``The \\textsc{Hi} Nearby Galaxy Survey'' \\citep[THINGS,][]{WALTER08}. The images were obtained at the Very Large Array (VLA)\\footnote{The VLA is operated by the National Radio Astronomy Observatory (NRAO), which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}. The M101 dataset in this survey has (10\\arcsec.8, 10\\arcsec.2)$\\sim$(0.35~kpc, 0.33~kpc) angular resolution and $\\rm 5.2~km~s^{-1}$ velocity resolution with natural weighting. The observed 21 cm emission can be converted to \\textsc{Hi} column density ($N_{\\rm HI}$) via Eq. (1) and Eq. (5) in \\citet{WALTER08} assuming it is optically thin, and then further converted to surface density $\\Sigma_{\\rm HI}$ by multiplying by the atomic weight of hydrogen. 
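As a minimal sketch of the last conversion step (column density to mass surface density, using standard physical constants; this is an illustration, not the THINGS pipeline itself):

```python
# Convert an optically thin HI column density N_HI [cm^-2] into a mass
# surface density Sigma_HI [Msun/pc^2] by multiplying by the mass of a
# hydrogen atom and converting units. Standard constants:
M_H = 1.6738e-24   # hydrogen atom mass [g]
MSUN = 1.989e33    # solar mass [g]
PC = 3.0857e18     # parsec [cm]

def sigma_HI(N_HI):
    """Sigma_HI in Msun/pc^2 for a column density N_HI in cm^-2."""
    return N_HI * M_H / MSUN * PC**2

# Sanity check: N_HI ~ 1.25e20 cm^-2 corresponds to ~1 Msun/pc^2.
print(sigma_HI(1.25e20))
```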
The uncertainty in the THINGS survey is dominated by the estimated zero-point uncertainty in \\textsc{Hi}, which is around $1~\\rm M_\\sun\/pc^2$, corresponding to 0.04 to 0.17 dex in the center of M101 (the molecular-gas-dominated region), 0.03 to 0.04 dex for most of the atomic-gas-dominated region, and above 0.08 dex for the outermost pixels.\n\n\\subsubsection{CO and Total Gas} \\label{subsec: CO}\nWe obtain CO emission line measurements from the ``HERA CO Line Extragalactic Survey'' \\citep[HERACLES,][]{LEROY09, SCHRUBA11, SCHRUBA12, LEROY13}, a survey mapping the ${\\rm ^{12}CO}~J=(2-1)$ rotational line at $230.538~\\rm GHz$ of 48 nearby galaxies, including M101. The observation was carried out with the Heterodyne Receiver Array \\citep[HERA,][]{SCHUSTER04} on the IRAM 30-m telescope\\footnote{IRAM is supported by CNRS\/INSU (France), the MPG (Germany) and the IGN (Spain).}. The survey has 13\\arcsec\\ angular resolution and $\\rm 2.6~km~s^{-1}$ velocity resolution. The CO line integrated intensity can be converted to surface density of $\\rm H_2$ plus He ($\\Sigma_{\\rm mol}$) by:\n\\begin{equation}\n \\Sigma_{\\rm mol} = \\alpha_{\\rm CO}\\frac{I_{{\\rm CO}~J=(2-1)}}{R_{21}},\n\\end{equation}\nwhere $\\alpha_{\\rm CO}$ is the CO-to-$\\rm H_2$ conversion factor (see Table \\ref{tab: samples}). The standard $\\alpha_{\\rm CO}$ is quoted for $I_{{\\rm CO}~J=(1-0)}$, thus we convert the $I_{{\\rm CO}~J=(2-1)}$ with a fixed line ratio\\footnote{We adopt the $\\alpha_{\\rm CO}$ value from \\citet{SANDSTROM13}, which the authors originally derived with $I_{{\\rm CO}~J=(2-1)}$ data and convert with $R_{21}=0.7$. Thus we need to use the same $R_{21}$ for consistency.} $R_{21}=(2-1)\/(1-0)=0.7$ \\citep{SANDSTROM13}.\n\nWith $\\Sigma_{\\rm HI}$ and $\\Sigma_{\\rm mol}$, we calculate the total gas mass surface density ($\\Sigma_{\\rm gas}$) with Eq. \\ref{eq: total gas}. A multiplicative factor of 1.36 is included in $\\Sigma_{\\rm mol}$ to account for helium mass \\citep{SANDSTROM13}. 
We multiply the $\\Sigma_{\\rm HI}$ by this factor to calculate the total gas surface density correctly:\n\\begin{equation} \\label{eq: total gas}\n\\Sigma_{\\rm gas} = 1.36~\\Sigma_{\\rm HI} + \\alpha_{\\rm CO}\\frac{I_{{\\rm CO}~J=(2-1)}}{R_{21}}\n\\end{equation}\n\nWe have checked that a metallicity-dependent $\\alpha_{\\rm CO}$ \\citep{WOLFIRE10, BOLATTO13} would make no significant difference in $\\Sigma_{\\rm gas}$ because in the region where H$_2$ is important in M101, the metallicity is still relatively high. See more discussion in \\secref{sec: alpha_CO discussion}.\n\n\\subsubsection{Metallicity\\label{subsec: metal_data}}\nWe obtained metallicity measurements from the CHAOS survey \\citep{CROXALL16}. Measurements were taken in 109 \\textsc{Hii} regions by the Multi-Object Double Spectrographs (MODS) on the Large Binocular Telescope \\citep[LBT,][]{POGGE10}. They derived $T_e$ from a three-zone model with \\textsc{[Oiii]}, \\textsc{[Siii]}, and \\textsc{[Nii]} line ratios. The electron densities are derived from \\textsc{[Sii]} line ratios. This gives us gas phase oxygen abundances in 74 \\textsc{Hii} regions inside M101, as well as an average metallicity gradient over the galactocentric radii considered in this study. We will compare our derived DGR with their derived metallicity gradient \\citep[Eq. 10 in][second line\\footnote{Instead of the 7.4 Mpc distance quoted in \\citet{CROXALL16}, we used a galaxy distance of 6.7 Mpc, thus we multiplied the slope in their Eq. 10 by $\\frac{7.4}{6.7}$ to account for the difference.}]{CROXALL16}. 
The uncertainty in $\\rm 12+\\logt({\\rm O\/H})$ from the average metallicity gradient is $\\sim0.02$ dex in the center and $\\sim0.07$ dex in the outermost part.\n\n\\subsubsection{Star formation rate and stellar mass\\label{subsec: other}}\nWe calculate star formation rate surface density ($\\Sigma_{\\rm SFR}$) from the Galaxy Evolution Explorer (GALEX) FUV \\citep{MARTIN05} and \\textit{Spitzer} Multiband Imaging Photometer (MIPS) 24 \\micron\\ data \\citep{WERNER04, RIEKE04}, and stellar mass surface density ($\\Sigma_\\star$) from \\textit{Spitzer} Infrared Array Camera (IRAC) 3.6 \\micron. These data are from the Local Volume Legacy survey \\citep[LVL,][]{DALE09}.\n\nWe use the following equation to convert observed FUV and IR emission to $\\Sigma_{\\rm SFR}$:\n\\begin{equation} \\label{eq: SFR}\n\\Sigma_{\\rm SFR}=(8.1\\times 10^{-2} I_{\\rm FUV} + 3.2 \\times 10^{-3} I_{24})\\cos i,\n\\end{equation}\nwhere $i$ is the inclination of M101. $\\Sigma_{\\rm SFR}$ is in $M_{\\sun}~\\rm kpc^{-2}~yr^{-1}$, and both $I_{\\rm FUV}$ and $I_{24}$ are in $\\rm MJy~sr^{-1}$. Eq. \\ref{eq: SFR} is adopted from \\cite{LEROY08}, and it is functionally similar to the prescription in \\citet{KENNICUTT12}.\n\nTo convert the 3.6 \\micron\\ intensity to $\\Sigma_\\star$, we use the relation:\n\\begin{equation}\n \\Sigma_\\star = 350I_{3.6}\\cos i,\n\\end{equation}\nwhere $\\Sigma_\\star$ is in $M_{\\sun}~\\rm pc^{-2}$, and $I_{3.6}$ is in $\\rm MJy~sr^{-1}$. Note that the appropriate mass-to-light ratio ($\\Upsilon_\\star^{3.6}$) remains a topic of research \\citep{MCGAUGH14, MEIDT14}. Here, we assume $\\Upsilon_\\star^{3.6}=0.5$ \\citep{MCGAUGH14}; see discussions in \\citet{LEROY08} and A. K. Leroy et al. (2018, in preparation).\n\n\\subsection{Data processing\\label{sec: data proc}}\n\\subsubsection{Background subtraction} \\label{subsec: background error}\nThe IR and GALEX images that we use include contributions from various backgrounds and foregrounds. 
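The $\Sigma_{\rm SFR}$ and $\Sigma_\star$ conversions defined above can be sketched as follows; the coefficients are those quoted in the text, while the example intensities are hypothetical, not M101 measurements:

```python
import math

# Sketch of the Sigma_SFR and Sigma_star conversions quoted above.
# Coefficients come from the text; the example intensities below are
# hypothetical values chosen only for demonstration.
def sigma_sfr(I_fuv, I_24, incl_deg=16.0):
    """Sigma_SFR [Msun kpc^-2 yr^-1] from FUV and 24um intensities [MJy/sr]."""
    return (8.1e-2 * I_fuv + 3.2e-3 * I_24) * math.cos(math.radians(incl_deg))

def sigma_star(I_36, incl_deg=16.0):
    """Sigma_star [Msun pc^-2] from 3.6um intensity [MJy/sr], Upsilon*=0.5."""
    return 350.0 * I_36 * math.cos(math.radians(incl_deg))

print(sigma_sfr(0.5, 5.0))  # hypothetical intensities
print(sigma_star(0.2))
```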
Throughout this study, we will neglect the structure in the MW foreground over the relatively small angular ($r_{25}=0^\\circ.2$) extent of M101. To estimate the foreground\/background (hereafter referred to as background) level for each image, we need a uniform definition of the background region. We define our background region as where $N_{\\rm HI} < 1.0\\times \\rm 10^{18}~cm^{-2}$. For the GALEX map, we take the mean value in the background region as recommended due to the Poisson statistics of the GALEX counts. For the IR images, we fit a tilted plane and iteratively reject outliers. This involves several steps: (1) we fit a tilted plane to all the background pixels; (2) we subtract the tilted plane from the data, calculate the absolute deviation (AD) from the median for all pixels, and derive the median absolute deviation (MAD); (3) we refit the tilted plane using only the pixels with AD smaller than three times the MAD. We iterate steps (2) and (3) five times, keeping the last fitted tilted plane as the background to be removed.\n\nAfter background subtraction and convolution (\\secref{subsec: convolution}), we calculate the covariance matrix\\footnote{A matrix with its i-j element as the i-band to j-band covariance. Our covariance matrix has a dimension of 5x5, corresponding to the 100$-$500 \\micron\\ bands in \\textit{Herschel}.} in the background region of the five \\textit{Herschel} bands. This covariance matrix ($\\mathcal{C}_{\\rm bkg}$) will play an important role in the calculation of the likelihood in our fitting procedure because it incorporates the observed band-to-band correlation in the noise due to confusion and other astronomical sources into our fitting (\\secref{subsec: fitting}). \n\n\\subsubsection{Convolution}\\label{subsec: convolution}\nMaps obtained from different surveys do not have the same pixel scale and point spread function (PSF). 
In order to compare them pixel-by-pixel, we first convolve all the maps to match the PSF of SPIRE500 using the \\texttt{convolve\\_fft} function in \\texttt{astropy.convolution} \\citep{ASTROPY13}. Most kernels in this study were adapted from \\citet{ANIANO11}, except the Gaussian kernels for the THINGS and HERACLES surveys. For these two surveys, we built elliptical or circular Gaussian kernels according to their beam sizes \\citep{WALTER08, LEROY09} to convolve them to match a Gaussian PSF with 25\\arcsec\\ FWHM. Then, we convolve the images with a second kernel from \\citet{ANIANO11}, which convolves a Gaussian PSF with 25\\arcsec\\ FWHM to the SPIRE500 PSF.\n\n\\subsubsection{Alignment\\label{subsec: alignment}}\nAfter convolution, we align the coordinates of all the images with the SPIRE500 image and its pixel scale using the function \\texttt{reproject\\_exact} in \\texttt{reproject}, an \\texttt{astropy} affiliated package. The final pixel scale is 14.0\\arcsec, or $\\rm \\sim 0.45~kpc$, which is smaller than half of the SPIRE500 PSF FWHM (36\\arcsec), and thus sufficient to properly sample the PSF. In the final images, one resolution element contains $\\sim 5.2$ pixels; therefore, neighboring pixels are not independent.\n\n\\subsubsection{Binning\\label{subsec: voronoi}}\nOne of our main interests is to analyze DTM in regions with $\\rm 12+\\logt({\\rm O\/H}) \\la 8.0$, where the relation of DTM with metallicity is expected to change \\citep{HIRASHITA99, HIRASHITA11, REMY-RUYER14}. However, individual pixels in the low metallicity region, or outer disk, tend to have insufficient signal-to-noise ratio (SNR) for analysis. 
One way we can solve this problem is to bin neighboring pixels together and average the measured quantities in those pixels to increase SNR according to:\n\n\\begin{equation}\n {\\rm SNR_{avg}} = \\frac{(\\sum_i {\\rm Signal}_i)\/n}{\\sqrt{(\\sum_i {\\rm Noise}_i^2)\/n^2}},\n\\end{equation}\n\nwhere the summation is over resolution elements inside the binned region, and $n$ is the number of resolution elements. As a consequence, uniform binning requires all regions on the map to sacrifice their spatial resolution in order to recover the regions with lower SNR, which means some structures that could have been resolved would be smoothed out in the binning process. To optimize the resolution and extend to the outer disk simultaneously, we choose to use adaptive binning: binning more pixels together in the low SNR region, while binning fewer pixels together or leaving pixels as individuals in the high SNR region.\n\nThe adaptive binning method we choose is the \\texttt{voronoi\\_2d\\_binning} function \\citep{CAPPELLARI03}. Instead of directly applying the algorithm to the entire SED map, we execute some extra procedures listed below in order to preserve radial information:\n\\begin{enumerate}\n \\item We calculate the SNR map for all five \\textit{Herschel} bands, using the square root of the diagonal terms in the covariance matrix ($\\mathcal{C}_{\\rm bkg}$), i.e., the variance of each band, as the noise of each band.\n \\item For each pixel, we select the lowest SNR among the five bands at that pixel to build the worst SNR map, which is plotted in Figure \\ref{fig: voronoi} (a). This worst SNR map is used for the subsequent binning process in order to make sure all five bands will reach the target SNR with the same binned regions. 58\\% of pixels have their worst SNR from PACS100.\n \\item We cut the target galaxy into concentric rings with equal radial spacing, which is set to be the same as the FWHM of the SPIRE500 PSF. 
This initial radial cut is shown in Figure \\ref{fig: voronoi} (b).\n \\item Starting from the outermost ring, if the average SNR of all pixels within a ring is lower than the target SNR, we combine it with the ring inside it until the target SNR is achieved. This final radial cut is shown in Figure \\ref{fig: voronoi} (c). The target SNR is set to be 5. However, since the pixels are oversampled with the SPIRE500 PSF (see \\secref{subsec: alignment}), the effective target SNR is $ 5 \/ \\sqrt{5.2} \\sim 2.2 $.\n \\item We apply \\texttt{voronoi\\_2d\\_binning} with \\texttt{targetSN} set to 5 to each ring from Step 4 and the worst SNR map from Step 2 to generate the final binned regions, as shown in Figure \\ref{fig: voronoi} (d).\n\\end{enumerate}\nNote that we discard the \\texttt{roundness} threshold in the original function \\citep{CAPPELLARI03}. This \\texttt{roundness} threshold makes sure all binned regions are nearly circular, which would malfunction because we cut the image into concentric rings at the beginning. All pixels within radius 7.4 kpc ($0.3~\\rm r_{25}$) have high enough SNR and thus remain unbinned.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Voronoi_NGC5457.pdf}\n\\caption{Voronoi binning process in this study. (a) The worst-SNR map. Among the 16,403 points, 58\\% have their worst SNR in PACS100. Both PACS160 and SPIRE500 take around 18\\%. (b) The initial radial cut. (c) The final radial cut after grouping rings according to target SNR. (d) The final binned regions. The white circles in panels (a) and (d) show the radius 7.4 kpc. All pixels within 7.4 kpc remain unbinned.\\label{fig: voronoi}}\n\\end{figure}\n\n\\section{Methods\\label{Sec: methods}}\n\\subsection{Models\\label{Sec: Model}}\nIn this work, we focus on the FIR part of the dust emission SED. 
It is reasonable to assume that emission from dust grains in thermal equilibrium dominates the FIR range \\citep{LI01, BLAIN02, GORDON14}; therefore, we start by fitting the FIR emission with a modified blackbody (MBB) model:\n\\begin{equation}\nI_\\nu = \\kappa_{\\nu}\\Sigma_d B_\\nu(T_d),\n\\end{equation}\nwhere $I_\\nu$ is the specific intensity, $\\kappa_{\\nu}$ is the wavelength-dependent emissivity, $\\Sigma_d$ is the dust surface density, and $B_\\nu(T_d)$ is the blackbody spectral radiance at dust temperature $T_d$. An empirical power-law emissivity is often assumed, that is, $\\kappa_\\nu = \\kappa_{\\nu_0}(\\nu\/\\nu_0)^\\beta$, where the emissivity index $\\beta$ is a constant and $\\nu_0=c\/\\lambda_0$. Throughout this study, $\\lambda_0=160~\\micron$ is used.\n\nThere are a few possible drawbacks to this simple model: some are physical, and the others are inherent to the process of fitting the model. The physical drawbacks include: 1) The simple model above does not allow for wavelength or environmental dependence of $\\beta$, which might exist \\citep{REACH95, FINKBEINER99, LI01, GORDON14}. 2) The model does not include stochastic heating \\citep{DRAINE07}, which might contribute to our shortest wavelength observation due to the width of the response functions of the PACS instruments. 3) The model does not include the broadening in the SED due to multiple heating conditions involved in one resolution element \\citep{DALE01}. The fitting process drawbacks include: 1) $\\kappa_{\\nu_0}$ and $\\Sigma_d$ are completely degenerate, thus there will be an inherent uncertainty in $\\Sigma_d$ from how we determine the $\\kappa_{\\nu_0}$ value. 2) Due to the nature of this model, $\\beta$ and $T_d$ are covariant, since they both shift the peak wavelength of the SED. Thus, there might be an artificial correlation between them. 
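As an illustration of how this MBB spectrum behaves, the following minimal Python sketch evaluates $I_\nu$ at the \textit{Herschel} bands; the $\kappa_{\nu_0}$ and $\Sigma_d$ values are placeholders, not the calibrated quantities used in this paper:

```python
import math

# Minimal numerical sketch of the MBB model above:
#   I_nu = kappa_nu0 * (nu/nu0)^beta * Sigma_d * B_nu(T_d).
# kappa0 and sigma_d below are placeholder (hypothetical) values,
# not the calibrated quantities used in this paper.
H = 6.626e-27   # Planck constant [erg s]
C = 2.998e10    # speed of light [cm/s]
KB = 1.381e-16  # Boltzmann constant [erg/K]

def planck(nu, T):
    """Blackbody spectral radiance B_nu [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def mbb(lam_um, sigma_d=1.0, T_d=20.0, beta=2.0, kappa0=10.0, lam0_um=160.0):
    """MBB intensity at wavelength lam_um (microns), power-law emissivity."""
    nu = C / (lam_um * 1e-4)    # micron -> cm -> Hz
    nu0 = C / (lam0_um * 1e-4)
    return kappa0 * (nu / nu0)**beta * sigma_d * planck(nu, T_d)

# For T_d ~ 20 K and beta = 2 the SED peaks near ~145 um, so the
# 160 um band sits close to the peak:
sed = {lam: mbb(lam) for lam in (100.0, 160.0, 250.0, 350.0, 500.0)}
```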
\\citet{KELLY12} demonstrated this artificial correlation with traditional $\\chi^2$-minimization fitting.\n\nWe calibrate $\\kappa_{\\nu_0}$ with the high-latitude MW diffuse ISM following the approach in \\citet{GORDON14} (see \\secref{Sec: calibration}). It is possible that this calibration is not appropriate under all local environmental conditions, which would result in a systematic uncertainty in our results (see \\secref{subsec: discuss_emissivity} for further discussion). We also use a probabilistic fitting procedure following \\citet{GORDON14} that lets us assess the correlations between fit parameters and properly marginalize over the degeneracy between $\\beta$ and $T_d$. Still, there is no simple way to resolve all the physical drawbacks of the MBB model. To address them, we construct five variant models, each targeting one shortcoming of the MBB. They are not all mutually exclusive, and a full model \\citep[e.g.,][]{DRAINE07} might incorporate several of these. Our goal here is to identify the simplest possible modifications that yield a good fit to the IR SED. These variants are listed below:\n\n\\subsubsection{Simple emissivity (SE)\\label{subsubsec: SE}}\nHere, we assume a simple power-law emissivity, which gives a dust emission SED described by the following equation:\n\\begin{equation} \\label{eq: SE}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta \\Sigma_d B_\\nu(T_d).\n\\end{equation}\nThe free parameters in this model are $\\Sigma_d$, $T_d$ and $\\beta$. This method allows $\\beta$ to vary spatially, and thus can partially avoid the environment-dependent $\\beta$ drawback. However, it is also heavily affected by the possible artificial correlation between $\\beta$ and $T_d$.\n\n\\subsubsection{Fixing $\\beta$ (FB)\\label{subsubsec: FB}}\nUsing the same functional form as Eq. \\ref{eq: SE}, we can also fix the $\\beta$ value. 
This is one way to remove the inherent covariance between $T_d$ and $\\beta$ based on what is expected for the optical properties of ISM dust grain materials. In some previous studies \\citep{MENNELLA98, BOUDET05, GALLIANO17} and in our preliminary test of the SE method, fitting results show anti-correlated $T_d$ and $\\beta$. This could mean that $\\beta$ is a function of $T_d$; however, due to the degeneracy of $T_d$ and $\\beta$ in the model, it is also possible that this anti-correlation is wholly or partially artificial \\citep{SHETTY09, SHETTY09B, KELLY12}. In the latter case, fixing $\\beta$ can improve the accuracy of the fitted $T_d$ \\citep{SHETTY09B}. Thus, we adopted $\\beta=2$ from previous studies \\citep{REACH95, DUNNE01, DRAINE07} as a variant of the MBB spectrum. We also tested $\\beta$ values of 1.6, 1.8, and 2.2, and the differences in $\\Sigma_d$ and chi-square values between them and the $\\beta=2$ results are insignificant. The insensitivity of the resulting $\\Sigma_d$ to our choice of $\\beta$ results from the fact that we calibrate the emissivity for each $\\beta$ value accordingly. The process of emissivity calibration is described in \\secref{Sec: calibration}. The same holds for the other methods, where $\\beta$ is likewise fixed at 2 at short wavelengths or over the whole spectral range.\n\n\\subsubsection{Broken Emissivity (BE)}\nIt is possible that the dust emissivity is not a simple power law, but varies with wavelength. Previous studies have shown that the emissivity at the long wavelength end tends to be flatter than at the short wavelength end. Thus, many authors including \\citet{REACH95} and \\citet{GORDON14} have tried to build more complicated forms of emissivity as a function of wavelength. Here, we adapted the BEMBB model in \\citet{GORDON14}: assuming $\\beta$ is a step function in wavelength, which makes the emissivity a broken power law (Eq. 
\\ref{eq: BE}).\n\\begin{equation}\\label{eq: BE}\n\\kappa_\\nu=\\left\\{\\begin{array}{ll}\n \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^{\\beta} & \\lambda < \\lambda_c \\\\\n \\kappa_{\\nu_0}(\\frac{\\nu_c}{\\nu_0})^{\\beta}(\\frac{\\nu}{\\nu_c})^{\\beta_2} & \\lambda \\geq \\lambda_c\n \\end{array}\\right.\n\\end{equation}\n$\\lambda_c$ is the critical wavelength corresponding to the break, and $\\nu_c$ is the frequency corresponding to $\\lambda_c$. $\\lambda_c$ is fixed at 300~\\micron\\ in this study. We explored varying the break wavelength within the spectral range of 50 to 600 \\micron\\ and found it had no major impact on the results. $\\beta_2$ is the dust emissivity index at long wavelengths. The short-wavelength dust emissivity index $\\beta$ is fixed at 2 in this study.\n\n\\subsubsection{Warm dust component (WD)}\nIn the spectral region below $100~\\micron$, it is possible that the SED is affected by stochastic emission from small grains \\citep{DRAINE07}, which is within the effective bandpass of the PACS100 response function (around $80$ to $120~\\micron$). In this model, we add a second MBB component with $T_d=\\rm 40~K$ to our SED, called ``warm dust'', to simulate the contribution from stochastically heated dust. We made this choice of $T_d$ to place the peak of the warm dust SED at the boundary of the PACS100 response function. The fraction of warm dust relative to total dust is denoted $f_W$. The fitting model in this method becomes (note that both components have a power-law emissivity with $\\beta=2$):\n\\begin{equation}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta\\Sigma_d \\Big((1-f_W) B_\\nu(T_d) + f_W B_\\nu(\\rm 40~K) \\Big).\n\\end{equation}\n\nTo properly take this effect into account, one would need to adopt a complete physical dust model. However, among the dust properties, we are mainly interested in $\\Sigma_d$, which is necessary for calculating DGR and DTM, and which does not require adopting a full dust model. 
This is because within our current understanding of dust heating and the dust grain size distribution, only a small fraction of the dust mass is stochastically heated \\citep{DRAINE07}. Our preliminary test confirms this: the mass fraction of stochastically heated dust in the WD modeling is usually under 1\\%. This means that we can still acquire reasonable accuracy in $\\Sigma_d$ even when the SED of stochastically heated dust is not modeled with high accuracy. \n\n\\subsubsection{Power Law distribution (PL)}\\label{sec: PL}\nAt the SPIRE500 resolution, the FWHM of the PSF corresponds to a large physical size ($\\sim 1.22~{\\rm kpc}$). Thus, it is likely that there are various dust heating conditions within one resolution element. To attempt to model such a distribution of heating conditions, we adopt a model wherein a fraction ($1-\\gamma$) of the dust mass is heated by a single-valued ISRF $U_{\\rm min}$, while the other $\\gamma$ fraction is heated by a distribution of ISRF between $U_{\\rm min}$ and $U_{\\rm max}$ with $\\frac{d\\Sigma_d}{dU}\\propto U^{-\\alpha}$ \\citep{DALE01, DRAINE07}. Each mass fraction emits an FB MBB spectrum, which makes the total emission\\footnote{The normalization factor $\\frac{1-\\alpha}{U_{\\rm max}^{1-\\alpha} - U_{\\rm min}^{1-\\alpha}}$ in Eq. \\ref{eq: PL} only works when $\\alpha \\neq 1$. 
For $\\alpha = 1$ (which is excluded in this study), one should use $\\frac{1}{\\ln(U_{\\rm max}\/U_{\\rm min})}$ instead.}:\n\\begin{equation} \\label{eq: PL}\n\\begin{array}{ll}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta\\Sigma_d & \\Big((1-\\gamma) B_\\nu(U_{\\rm min}) + \\\\\n& \\gamma \\frac{1-\\alpha}{U_{\\rm max}^{1-\\alpha} - U_{\\rm min}^{1-\\alpha}}\\int^{U_{\\rm max}}_{U_{\\rm min}}U^{-\\alpha}B_\\nu(U)dU \\Big).\n\\end{array}\n\\end{equation}\nTo calculate the equivalent MBB temperature, we convert $U$ to $T_d$ as $U \\propto T_d^{\\beta + 4}$, with a normalization of $U=1$ corresponding to $T_d=\\rm 18~K$ \\citep{DRAINE14}. This approach adds several free parameters, however, since we do not have good constraints for all of them, we fix some parameters before fitting: $U_{\\rm max}$ is fixed at $10^7$ \\citep[following][]{ANIANO12}, and $\\beta$ is fixed at 2. Thus, the number of free parameters is 4, which is not a major difference from the other models.\n\n\\subsection{Fitting techniques} \\label{subsec: fitting}\n\\begin{deluxetable}{lllll}\n\\tablecaption{Grid parameters for fitting.\\label{tab: grid space}}\n\\tablecolumns{4}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Parameter} &\n\\colhead{Range} &\n\\colhead{Spacing} &\n\\colhead{Range$_c$\\tablenotemark{f}} &\n\\colhead{Spacing$_c$}}\n\\startdata\n$\\log_{10}\\Sigma_d$ & -4 to 1\\tablenotemark{a} & 0.025 & $\\pm0.2$ & 0.002\\\\\n$T_d$ & 5 to 50\\tablenotemark{b} & 0.5 & $\\pm1.5$ & 0.1\\\\\n$\\beta$ & -1.0 to 4.0\\tablenotemark{c} & 0.1 & $\\pm0.3$ & 0.02\\\\\n$\\lambda_c$ & 300\\tablenotemark{d} & N\/A & 300 & N\/A\\\\\n$\\beta_2$ & -1.0 to 4.0 & 0.25 & $\\pm0.3$ & 0.02\\\\\n$f_W$ & 0.0 to 0.05 & 0.002 & $\\pm0.006\\tablenotemark{g}$ & 0.0005\\\\\n$\\alpha$ & 1.1 to 3.0 & 0.1 & $\\pm0.3$ & 0.01\\\\\n$\\log_{10}\\gamma$ & -4.0 to 0.0 & 0.2 & $\\pm0.3$ & 0.1\\\\\n$\\log_{10} U_{\\rm min}$ & -2.0 to 1.5\\tablenotemark{e} & 0.1 & $\\pm0.1$ & 0.01\\\\\n$\\log_{10} U_{\\rm max}$ & 7 & 
N\/A & 7 & N\/A \\\\\n\\enddata\n\\tablecomments{(a) $\\Sigma_d$ in $M_\\sun ~{\\rm pc}^{-2}$. (b) In K. (c) For SE only. All the others are fixed at $\\beta=2$. (d) In \\micron. (e) $9.3\\leq T_d \\leq 35.6~\\rm K$ under our conversion. (f) Range for second iteration during calibration. (g) Restricted to non-negative values.}\n\\end{deluxetable}\nWe follow the fitting techniques in \\citet{GORDON14}: we build model SEDs on discrete grids in parameter space, and then calculate the likelihood for all models given the SED in each binned region. The multi-dimensional (3-dimensional for the SE, BE and WD methods, 2 for FB and 4 for PL) grids have axes defined in \\secref{Sec: Model}, and grid spacing defined in Table \\ref{tab: grid space}.\n\nFor each grid point, we can generate a model SED $M_{ij...d}(\\nu)$, where the subscript represents a unique combination of parameters in the grid with $d$ dimensions. The calculated model is a continuous function of frequency $\\nu$. To compare with the real observations, we integrate $M_{ij...d}(\\nu)$ over the response function $R^n(\\nu)$ of each band $n$ in PACS and SPIRE with the following integral:\n\\begin{equation}\n\\overline{M^n_{ij...d}} = \\frac{\\int^\\infty_0 R^n(\\nu)M_{ij...d}(\\nu)d\\nu}{\\int^\\infty_0 R^n(\\nu)(\\nu_n\/\\nu)d\\nu}\n\\end{equation}\nNote that the denominator is added to account for the fact that \\textit{Herschel} intensities are quoted assuming a spectrum with $S(\\nu) \\propto \\nu^{-1}$ within the response function. 
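As a numerical sketch of this band-averaging step (the Gaussian response below is purely illustrative; an actual fit would use the released PACS and SPIRE response curves):

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def band_average(nu, model, response, nu_n):
    """Band-average a model SED over an instrument response function:
    M_bar = int R(nu) M(nu) dnu / int R(nu) (nu_n / nu) dnu,
    matching the convention of intensities quoted against a nu^-1
    reference spectrum."""
    return trapz(response * model, nu) / trapz(response * (nu_n / nu), nu)

# Consistency check: a model of exactly S(nu) = A * (nu_n / nu) must
# band-average to A for any response shape, since both integrals then
# share the same weighting and quadrature errors cancel.
nu = np.linspace(1e12, 4e12, 2001)             # Hz, hypothetical band
nu_n = 3e12                                    # representative frequency
response = np.exp(-((nu - nu_n) / 5e11) ** 2)  # hypothetical response
model = 7.0 * (nu_n / nu)
print(band_average(nu, model, response, nu_n))  # 7.0, up to roundoff
```

The check exploits the convention itself: a model proportional to $\\nu_n\/\\nu$ band-averages to its own amplitude regardless of the response shape.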
The $\\nu_n$ values are the frequencies corresponding to the representative wavelength at each band, that is, 100, 160, 250, 350, and 500 \\micron.\n\nNext, in each binned region, we calculate the relative likelihood ($\\mathcal{L}$) of the model SED ($\\overline{M_{ij...d}}$) given the observed SED ($I_{\\rm obs}$) assuming Gaussian errors\\footnote{See \\citet{GORDON14} for discussion about statistical advantages of this matrix form definition}, that is:\n\\begin{equation}\\label{eq: likelihood}\n\\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs}) = \\exp\\big( -\\frac{1}{2} \\chi^2_{ij...d} \\big),\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq: chi2}\n\\chi_{ij...d}^2 \\equiv (\\overline{M_{ij...d}}-I_{\\rm obs})^T \\mathcal{C}^{-1} (\\overline{M_{ij...d}}-I_{\\rm obs})\n\\end{equation}\nand\n\\begin{equation}\\label{eq: covariance_sum}\n\\mathcal{C} = \\mathcal{C}_{\\rm bkg} + \\mathcal{C}_{\\rm cal}.\n\\end{equation}\nThe $^T$ sign represents the transpose matrix, and $^{-1}$ sign represents the inverse matrix. $\\mathcal{C}_{\\rm bkg}$ is the background covariance matrix discussed in \\secref{subsec: background error} with values:\n\\begin{eqnarray}\n\\mathcal{C}_{\\rm bkg} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 1.548 & 0.09 & 0.057 & 0.025 & 0.01 \\\\\n 0.09 & 0.765 & 0.116 & 0.079 & 0.04 \\\\\n 0.057 & 0.116 & 0.098 & 0.071 & 0.037 \\\\\n 0.025 & 0.079 & 0.071 & 0.063 & 0.033 \\\\\n 0.01 & 0.04 & 0.037 & 0.033 & 0.028 \\\\\n\\end{array}\\right].\n\\end{eqnarray}\nAs described in \\secref{subsec: voronoi}, $\\mathcal{C}_{\\rm bkg}$ will be lower for resolution elements binned together. 
For a binned region with a number of pixels greater than one resolution element (5.2 pixels, see \\secref{subsec: alignment}), $\\mathcal{C}_{\\rm bkg}$ is divided by the number of resolution elements in the region.\n\n$\\mathcal{C}_{\\rm cal} = I^T \\mathcal{M}_{\\rm fit} I$ is the covariance matrix generated from calibration error, where $\\mathcal{M}_{\\rm fit}$ contains the fractional calibration errors and $I$ is the observed SED at the binned region. There are two kinds of errors from calibration. The first is the absolute calibration uncertainty, estimated from the systematic uncertainty by comparing the calibrator to a model \\citep{BENDO17}. We assume this absolute calibration uncertainty affects all bands calibrated together at the same time, so we enter it both in the diagonal terms and in the band-to-band off-diagonal terms of $\\mathcal{M}_{\\rm fit}$. The second is the relative, or random, uncertainty, which is estimated from the ability of an instrument to reproduce the same measurement \\citep{BENDO17}. We assume this noise is band-independent and thus only include it in the diagonal terms of $\\mathcal{M}_{\\rm fit}$.\n\nAmong the \\textit{Herschel} observations, the SPIRE instruments were calibrated with Neptune, and were estimated to have 4\\% absolute calibration and 1.5\\% relative calibration uncertainty. The PACS instruments were calibrated with 5 stars, and the result gave a 5\\% absolute uncertainty and 2\\% relative uncertainty \\citep{PACS13, BALOG14}. In the diagonal terms of $\\mathcal{M}_{\\rm fit}$, where we need to consider both kinds of uncertainties, it is recommended to take the direct sum of the two errors instead of the quadratic sum \\citep{BALOG14, BENDO17}. Since our object is an extended source, we must also take the uncertainty in the beam shape into account when calculating calibration errors \\citep{BENDO17}. It is recommended that we double the absolute uncertainties for this \\citep{GORDON14}. 
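These rules (absolute uncertainties doubled for the extended-source beam, entered in the correlated within-instrument off-diagonal terms, and direct-summed with the relative uncertainties on the diagonal) can be sketched as follows; the array names are ours, not from a released pipeline:

```python
import numpy as np

# Bands: PACS100, PACS160, SPIRE250, SPIRE350, SPIRE500.
# PACS: 5% absolute / 2% relative; SPIRE: 4% absolute / 1.5% relative;
# absolute terms doubled for the extended-source beam uncertainty.
absolute = np.array([0.05, 0.05, 0.04, 0.04, 0.04]) * 2  # doubled
relative = np.array([0.02, 0.02, 0.015, 0.015, 0.015])

M_fit = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        if i == j:
            M_fit[i, j] = (absolute[i] + relative[i]) ** 2   # direct sum
        elif (i < 2) == (j < 2):  # same instrument (PACS vs SPIRE block)
            M_fit[i, j] = absolute[i] * absolute[j]          # correlated part
# Diagonal: 0.12^2 (PACS), 0.095^2 (SPIRE); off-diagonal: 0.1^2, 0.08^2.
```

The resulting entries reproduce the $0.12^2$/$0.1^2$ PACS block and the $0.095^2$/$0.08^2$ SPIRE block of $\\mathcal{M}_{\\rm fit}$.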
The final $\\mathcal{M}_{\\rm fit}$ is:\n\\begin{eqnarray}\\label{Eq: Mcal}\n\\mathcal{M}_{\\rm fit} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 0.12^2 & 0.1^2 & 0 & 0 & 0 \\\\\n 0.1^2 & 0.12^2 & 0 & 0 & 0 \\\\\n 0 & 0 & 0.095^2 & 0.08^2 & 0.08^2 \\\\\n 0 & 0 & 0.08^2 & 0.095^2 & 0.08^2 \\\\\n 0 & 0 & 0.08^2 & 0.08^2 & 0.095^2 \\\\\n\\end{array}\\right].\n\\end{eqnarray}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_Model_5555_NGC5457_z_merged.pdf}\n\\caption{An example of observed SED versus fitted SED from a single binned region. Red: The observed SED and error used in the fit. The error bars only include the square root of diagonal terms from the complete covariance matrix $\\mathcal{C}$. Green dot: The SED convolved with response function. Orange dashed line: The model SED generated from expectation values in the fit. Gray lines: Some selected models with transparency proportional to $\\mathcal{L}$. For each method, we randomly select 50 models from the subset $\\mathcal{L}(M^n_{ij...d}|I^n) \\geq max\\big(\\mathcal{L}(M^n_{ij...d}|I^n)\\big)\/1000$ for plotting. Note that both WD and PL methods allow FB components with peak wavelength below 100 \\micron\\, where we do not include observational constraint in this study. Therefore, the unusual shape in SED at short wavelength will not affect the fitting qualities of those models. However, we can still get similar expectation values in $\\Sigma_d$ from these methods.\\label{fig: example_model_merged}}\n\\end{figure*}\nWith the relative likelihood $\\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs})$ calculated, we can construct the full probability distribution function (PDF) for each parameter by summing over all other dimensions in parameter space. For example, if the index $i$ corresponds to $\\Sigma_d$, then the PDF of $\\Sigma_d$ with observed $I^n$ would be $P_{\\Sigma_{d,i}} = \\sum_{j...d} \\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs})$. 
We can then calculate the expectation value\\footnote{When calculating the expectation values, we use logarithmic scales for variables with logarithmic spacing in the grid.}, and the probability weighted 16\\% and 84\\% values, which represent the 1-$\\sigma$ confidence interval and are sampled to represent the uncertainty of the fit. An example of observed SED versus fitted models with all methods is shown in Figure \\ref{fig: example_model_merged}. An example of the log-scale likelihood distribution and correlation between fitting parameters is shown in Figure \\ref{fig: example_model_corner}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Corner_5555_NGC5457_BE.pdf}\n\\caption{Likelihood distribution in the parameter space from results of BE method at the same binned region in Figure \\ref{fig: example_model_merged}. Both the histograms and 2-dimensional histograms are shown in log scale. The figure does not include the whole parameter space. It is magnified to emphasize the region with $\\chi^2 \\leq \\big(min(\\chi^2) + 6\\big)$.\\label{fig: example_model_corner}}\n\\end{figure}\n\n\\subsubsection{Calibrating $\\kappa_{160}$ \\label{Sec: calibration}}\nWe use the procedure and integrated dust SED of the MW diffuse ISM from \\citet{GORDON14} to calibrate $\\kappa_{160}$ in our models. The SED was originally measured with Cosmic Background Explorer (COBE), where the $\\lambda \\geq 127~\\micron$ measurements are from Far Infrared Absolute Spectrophotometer (FIRAS) and the 100 \\micron\\ measurement is from Diffuse Infrared Background Experiment (DIRBE). The resulting SED is 0.6887, 1.4841, 1.0476, 0.5432, and 0.2425 ${\\rm MJy~sr^{-1}~(10^{20}~H~atom)^{-1}}$ for the 100, 160, 250, 350, and 500 \\micron\\ bands. These values differ from those given by \\citet{GORDON14} because we include a factor of 0.97 for the molecular cloud correction \\citep{COMPIEGNE11}. 
The ionized gas factor in \\citet{COMPIEGNE11} is excluded because we do not include ionized gas throughout this study, including in the calculation of the average DGR in the MW diffuse ISM \\citep{JENKINS09, GORDON14}. The dust-to-Hydrogen mass ratio appropriate for this high-latitude diffuse region is calculated by averaging the depletion strength factor $\\rm F_\\star$ over sightlines in \\citet{JENKINS09} with hydrogen column densities similar to the observed region. The resulting $\\rm F_\\star$ is 0.36, and the dust-to-Hydrogen mass ratio is $1\/150$, which corresponds to a dust surface density to H column density ratio of $5.30\\times10^{-3}~{\\rm M_\\sun~ pc^{-2}}~(10^{20}~{\\rm H~atom})^{-1}$.\n\nDuring calibration, it is important to use the same models and fitting methods as the real fitting \\citep{GORDON14}. We follow the same steps as our fitting techniques except for four necessary differences: 1) We replace the original $\\mathcal{M}_{\\rm fit}$ with $\\mathcal{M}_{\\rm cali}$ (Eq. \\ref{eq: M_cal cali}) for calibration since the calibration data came from COBE instead of \\textit{Herschel}. Following \\citet{FIXSEN97}, we assume 0.5\\% relative uncertainty and 2\\% absolute uncertainty for FIRAS (calibrating PACS160 and the SPIRE bands), and 1\\% relative uncertainty and 10\\% absolute uncertainty for DIRBE (calibrating PACS100 \\micron).\n\\begin{eqnarray}\\label{eq: M_cal cali}\n\\mathcal{M}_{\\rm cali} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 0.11^2 & 0 & 0 & 0 & 0 \\\\\n 0 & \\cdf & 0.02^2 & 0.02^2 & 0.02^2 \\\\\n 0 & 0.02^2 & \\cdf & 0.02^2 & 0.02^2 \\\\\n 0 & 0.02^2 & 0.02^2 & \\cdf & 0.02^2 \\\\\n 0 & 0.02^2 & 0.02^2 & 0.02^2 & \\cdf \\\\\n\\end{array}\\right]\n\\end{eqnarray}\n2) No $\\mathcal{C}_{\\rm bkg}$ term is applied. $\\mathcal{C}_{\\rm cal}$ is the only variance term considered. 
3) Due to the small uncertainty of the COBE data, the normal parameter spacing is not finely sampled enough to resolve the PDF for all the parameters. Thus, we use a two-step calibration: first, we fit with the normal parameter space; then we reduce the parameter range to a smaller region near the peak with a finer spacing (see the ``Range$_c$'' and ``Spacing$_c$'' columns in Table \\ref{tab: grid space}); last, we fit with this new parameter spacing and report the results. 4) Our SED per hydrogen atom of the MW diffuse ISM is weaker than the one in \\citet{GORDON14} by a factor of 0.97 due to the molecular cloud fraction.\n\n\\begin{deluxetable*}{llcc}\n\\tablecaption{Results of calibrating emissivity to the MW high latitude SED.\\label{tab: cali result}}\n\\tablecolumns{4}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Model} &\n\\colhead{$\\kappa_{160}~({\\rm cm^2~g^{-1}})$} &\n\\colhead{Other parameters} &\n\\colhead{Expectation values}}\n\\startdata\nSE & $10.10\\pm1.42$ & ($T_d$, $\\beta$) & ($20.90\\pm0.62$~K, $1.44\\pm0.08$) \\\\\nFB & $25.83\\pm0.86$ & ($T_d$) & ($17.13\\pm0.12$~K) \\\\\nBE & $20.73\\pm0.97$ & ($T_d$, $\\beta_2$) & ($18.02\\pm0.18$~K, $1.55\\pm0.06$) \\\\\nWD & $27.46\\pm1.14$ & ($T_d$, $f_W$) & ($16.60\\pm0.25$~K, $0.00343\\pm0.00143$) \\\\\nPL & $26.60\\pm0.98$ & ($\\alpha$, $\\log_{10}\\gamma$, $\\log_{10} U_{\\rm min}$) & ($1.69\\pm0.19$, $-1.84\\pm0.21$, $-0.16\\pm0.03$)\\\\\n\\enddata\n\\end{deluxetable*}\nThe calibrated $\\kappa_{160}$ values range from 10.10 to 27.46$~\\rm cm^2~g^{-1}$; see the complete results in Table \\ref{tab: cali result}. This is a fairly large range, which indicates that the choice of model does affect the measurement of dust properties. 
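As a unit cross-check of the dust surface density per H column quoted above (a back-of-envelope sketch with rounded cgs constants; percent-level differences from rounding are expected):

```python
# Dust-to-H mass ratio 1/150 expressed as a dust surface density per
# 10^20 H atoms cm^-2 (constants rounded).
M_H = 1.6726e-24   # g, hydrogen atom (proton) mass
M_SUN = 1.989e33   # g
PC = 3.086e18      # cm

# Surface density of a 10^20 cm^-2 H column, in M_sun pc^-2:
sigma_h = 1e20 * M_H / (M_SUN / PC ** 2)
sigma_d_per_h = sigma_h / 150.0
print(sigma_d_per_h)  # ~5.3e-3 M_sun pc^-2 per 10^20 H atoms cm^-2
```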
Our results are comparable with calculated $\\kappa_{160}$ values in the literature, e.g., the widely used \\citet{DRAINE07} model, with updates in \\citet{DRAINE14}, gives $\\kappa_{160}$ equal to 13.11$~\\rm cm^2~g^{-1}$ for silicates and 10.69$~\\rm cm^2~g^{-1}$ for carbonaceous grains, and 12.51$~\\rm cm^2~g^{-1}$ in the combined model. The standard model in \\citet{GALLIANO11} gives a value of 14$~\\rm cm^2~g^{-1}$, and 16$~\\rm cm^2~g^{-1}$ after replacing graphite with amorphous carbons. A recent calculation by \\citet{RELANO18}, following the \\citet{DESERT90} dust model, gives an equivalent $\\kappa_{160}=22.97~\\rm cm^2~g^{-1}$.\n\nIn the MBB model calibration process in \\citet{GORDON14} and \\citet{GORDON17}, the resulting $\\kappa_{160}$ falls between 30.2 and 36.4$~\\rm cm^2~g^{-1}$, depending on the model used. The directly comparable pair of models is the SMBB in \\citet{GORDON14}, which has $\\kappa_{160}=30.2~\\rm cm^2~g^{-1}$, and our SE, which has $\\kappa_{160}=10.1~\\rm cm^2~g^{-1}$. Our calibration method differs from \\citet{GORDON14} in four ways: 1) With the values of COBE uncertainty we quote, we allow more deviation at 100 \\micron\\ than in the other bands. On the other hand, \\citet{GORDON14} have both correlated and uncorrelated uncertainty values uniform for all bands. 2) We use a $\\mathcal{M}_{\\rm cali}$ which assumes the 100 \\micron\\ calibration is independent of the other bands since DIRBE and FIRAS were calibrated independently. \\citet{GORDON14} assumed that all bands are correlated with the same absolute uncertainties. 3) We use a two-step fitting to increase the accuracy only for calibration, while \\citet{GORDON14} used exactly the same methods for calibration and fitting. 4) Our SED per hydrogen atom of the MW diffuse ISM is weaker by a factor of 0.97 due to the molecular cloud fraction. 
In Section~\\ref{sec:sensitivity} we discuss the sensitivity of the results to choices in the SED fitting and calibration in more detail.\n\n\\section{Results}\\label{sec: results}\nWe fit the SEDs from all binned regions with all five MBB variants introduced in \\secref{Sec: Model}. We calculate the DGR in each bin from the observed $\\Sigma_{\\rm gas}$ and the fitting results of $\\Sigma_d$. Here, we look at the DGR and dust temperature radial gradients for each model, and at the residuals and reduced chi-square values about the best fit. Doing so, we will be interested in which models meet our physically motivated expectations and which models provide good fits to the SED. The complete fitting results are shown in Appendix \\ref{app: fitting}, along with their correlations in Appendix \\ref{app: corner}.\n\n\\subsection{DGR-metallicity relation\\label{sec: max DGR}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DGR-vs-Metallicity_NGC5457_z_merged.pdf}\n\\caption{DGR expectation values versus radius and metallicity. The shaded regions show the intrinsic scatter of DGR from $\\Sigma_d$ fitting results and the zero-point fluctuation of $\\Sigma_{\\rm gas}$ (\\secref{subsec: HI}). MAX is the maximum possible DGR calculated as a function of metallicity. The range is set by the difference between \\citet{LODDERS03} and \\citet{ASPLUND09} chemical composition, which is small at this plotting scale.\\label{fig: DGR_grad_models_merged}}\n\\end{figure}\nIn Figure \\ref{fig: DGR_grad_models_merged}, we plot the DGR-metallicity relation from all fitting methods. The metallicity-radius relation is calculated with Eq. 10 in \\citet{CROXALL16}. 
We first separate M101 into 20 radial regions, and, in each region with $r_i\\leq r < r_j$, we take the sum of the expectation values of dust mass divided by the total gas mass as the expectation value of DGR (${\\rm <DGR>}$) in that region, that is:\n\\begin{equation}\n{\\rm <DGR>}_{ij} = \\frac{\\sum_{r_i\\leq r_k < r_j}<\\Sigma_d>_k A_k}{\\sum_{r_i\\leq r_k < r_j}M_{\\rm gas,k}},\n\\end{equation}\nwhere $<\\Sigma_d>_k$ and $A_k$ are the expectation value of $\\Sigma_d$ and the area of the $k$-th binned region, respectively. We estimate the uncertainties of these expectation values of DGR with the ``realize'' method \\citep{GORDON14}, and the uncertainties are $\\sim$ 0.02 dex in the high metallicity region, $\\sim$ 0.09 dex at $\\rm 12+\\logt({\\rm O\/H})\\sim8.2$, and $\\sim$ 0.6 dex in the lowest metallicity region, which are reasonably small. However, there is also intrinsic scatter of DGR in each radial region, which would be larger than the uncertainties. To estimate this intrinsic scatter of DGR per $M_{\\rm gas}$ within one radial region, we calculate the distribution by summing up the PDFs of DGR from each bin in that radial region, weighted by their $M_{\\rm gas}$. Next, we take the region between the 16th and 84th percentile of the distribution as the range of the intrinsic scatter. This intrinsic scatter is included in Figure \\ref{fig: DGR_grad_models_merged}, along with the zero-point uncertainty in $\\Sigma_{\\rm gas}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_C18_NGC5457_BE.pdf}\n\\caption{Our DGR and DTM versus metallicity from the BE method results. (a) The DGR expectation values fitted by the BE model from each binned region are shown with error bars. Shaded region: The scatter of DGR. The definition is described in Figure \\ref{fig: DGR_grad_models_merged}. (b) The DGR from the BE model with power-law (${\\rm DGR} \\propto Z^x$) fitting as listed in Table \\ref{tab: D2M}. 
Blue: The expectation values calculated from the combined PDF (same for figures in \\secref{sec: discussions}). Orange: The power-law result with the whole data range. Green: Fitting with only $\\rm 12+\\logt({\\rm O\/H}) > 8.2$, where we have a more concentrated data point distribution. (c) The DTM from the BE model. The DTM scatter includes the DGR scatter, the $\\rm 12+\\logt({\\rm O\/H})$ uncertainty \\citep{CROXALL16}, and the $M_{\\rm O}\/M_{\\rm Z}$ uncertainty (\\secref{sec: MOMZ unc}). The horizontal lines mark the DTM = 0.1, 0.2, \\ldots, 1.0 locations.\\label{fig: DGR_grad_models_BE}}\n\\end{figure}\nThe distribution of our original data points is denser in the region with $\\rm 12+\\logt({\\rm O\/H}) \\ga 8.2$, where the original SNR is high. This is illustrated in Figure \\ref{fig: DGR_grad_models_BE} (a) with the results from the BE model. Within this range, all models except SE have their DGR dropping by nearly 1 dex, which is about twice as steep as the metallicity gradient. The SE model has its DGR dropping by 1.5 dex. At $\\rm 12+\\logt({\\rm O\/H}) < 8.0$, the scatter in the PDF is large (generally with $\\sigma \\ga 1~\\rm dex$), which makes determining a trend difficult. By treating metallicity as an independent variable, we fit our DGR versus metallicity with a linear equation ${\\log_{10} \\rm DGR} = a\\times(\\rm 12+\\logt({\\rm O\/H})) +b$, both in the full metallicity range and in the $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$ region only. Results from the BE model are shown as an example in Figure \\ref{fig: DGR_grad_models_BE} (b). The results are listed in Table \\ref{tab: D2M}. All the fitting results indicate a $\\log_{10}{\\rm DGR}$ variation steeper than $\\rm 12+\\logt({\\rm O\/H})$. 
The three methods with $\\beta$ fixed over the whole spectral range, FB, WD, and PL, have fitted slopes closer to one.\n\\begin{table}\n \\centering\n \\caption{$\\log_{10}$DGR versus $\\rm 12+\\logt({\\rm O\/H})$ linear fitting results.\\label{tab: D2M}}\n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n Model & \\multicolumn{2}{c}{Full range} & \\multicolumn{2}{c}{$\\rm 12+\\logt({\\rm O\/H})\\geq 8.2$} \\\\\n & a & b & a & b \\\\\n \\hline\n SE & $2.7\\pm0.3$ & $-25.3\\pm 2.1$ & $3.2\\pm0.2$ & $-29.4\\pm 1.8$ \\\\\n FB & $1.5\\pm0.1$ & $-14.9\\pm0.9$ & $1.5\\pm0.1$ & $-15.3\\pm 0.7$ \\\\\n BE & $1.7\\pm0.1$ & $-16.9\\pm 1.0$ & $1.9\\pm 0.1$ & $-18.1\\pm 0.7$ \\\\\n WD & $1.5\\pm0.2$ & $-14.9\\pm1.4$ & $1.3\\pm0.1$ & $-13.1\\pm0.5$ \\\\\n PL & $1.3\\pm0.1$ & $-13.3\\pm 0.9$ & $1.2\\pm0.1$ & $-12.8\\pm0.5$ \\\\\n \\hline\n \\end{tabular}\n \\raggedright Note: Data are fitted with ${\\log_{10} \\rm DGR} = a\\times(\\rm 12+\\logt({\\rm O\/H})) +b$.\n\\end{table}\n\n\\subsubsection{Physical limitations to DGR}\\label{sec: MOMZ unc}\nDust grains are built from metals. Thus, we can calculate the theoretical upper limit to the DGR by calculating the DGR for the case when all available metals are in dust. If the fitted DGR exceeds this upper limit, we consider the fitting result physically less plausible. To convert from oxygen abundance to total metallicity, we need to assume an ISM chemical composition. We calculate the mass ratio of oxygen to total metals from two literature compilations of the solar chemical composition: 1) \\citet{LODDERS03}, which gives $M_{\\rm O}\/M_Z = 51\\%$, where $M_Z$ is the mass of all metals. This is the composition used in \\citet{JENKINS09}, which we will discuss in \\secref{sec: J09}. 2) A later version in \\citet{ASPLUND09}, which gives $M_{\\rm O}\/M_Z = 44.5\\%$. 
The conversion from $\\rm 12+\\logt({\\rm O\/H})$ to metallicity is given by:\n\\begin{equation}\\label{eq: DGR_limit}\n \\frac{M_Z}{M_{\\rm gas}} = \\frac{M_Z}{M_{\\rm O}}\\frac{M_{\\rm O}}{1.36M_{\\rm H}} = \\frac{\\frac{m_{\\rm O}}{m_{\\rm H}}10^{\\big(\\rm 12+\\logt({\\rm O\/H})\\big)-12}}{\\frac{M_{\\rm O}}{M_Z} \\times 1.36},\n\\end{equation}\nwhere $m_{\\rm O}$ and $m_{\\rm H}$ are the atomic weights of oxygen and hydrogen. The solar $\\rm 12+\\logt({\\rm O\/H})$ adopted in this study is $8.69\\pm0.05$ \\citep{ASPLUND09}. This estimate of the DGR upper limit can be incorrect if the actual chemical composition deviates from this range. For example, \\citet{CROXALL16} showed there is a trend that $\\log_{10}({\\rm N\/O})$ goes from $-0.4$ to $-1.4$ as radius increases in M101, which means we could overestimate the upper limit in the outer disk if other major elements have similar trends.\n\nWe overlay the DGR upper limit calculated between $M_{\\rm O}\/M_Z = 44.5\\%$ and $51\\%$ with our results in Figure \\ref{fig: DGR_grad_models_merged}. We find that in the highest metallicity region, the DGR given by the SE method is greater than the upper limit by a factor of 3, which is outside the 16-84 percentile of intrinsic scatter. This is unlikely to be a result of $\\alpha_{\\rm CO}$ variation because we would need $\\alpha_{\\rm CO}\\sim 9$ in the center of M101 to explain this apparent DGR. Such an $\\alpha_{\\rm CO}$ value is implausible given our knowledge of $\\alpha_{\\rm CO}$ in M101 \\citep{SANDSTROM13} and the metallicity dependence of $\\alpha_{\\rm CO}$ \\citep{BOLATTO13}. We thus consider the results from the SE method less physically plausible.\n\nWe also notice that for all methods listed, there is a DGR spike in expectation value exceeding the upper limit near $\\rm 12+\\logt({\\rm O\/H}) \\sim 7.9$. Nevertheless, all the others still have their 16-84 percentile scatter falling under the DGR upper limit. 
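For reference, Eq. \\ref{eq: DGR_limit} evaluated at solar oxygen abundance recovers the familiar solar metal mass fraction (a quick sketch using the \\citet{ASPLUND09} composition and rounded atomic weights):

```python
# Eq. (DGR_limit) at solar 12+log(O/H) = 8.69 with M_O/M_Z = 44.5%.
m_O, m_H = 15.999, 1.008          # atomic weights, rounded
O_per_H = 10 ** (8.69 - 12)       # oxygen-to-hydrogen number ratio
M_O_over_M_Z = 0.445              # Asplund et al. composition

max_dgr = (m_O / m_H) * O_per_H / (M_O_over_M_Z * 1.36)
print(max_dgr)  # ~0.013: the DGR if all metals were locked in dust
```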
Thus, we consider all methods except SE still reasonable under the DGR upper-limit test. Note that the scatter in the regions with $\\rm 12+\\logt({\\rm O\/H}) < 8.2$ reaches the order of 1 dex, which means the fitted values are less reliable.\n\n\\subsection{Temperature profiles}\\label{sec: T grad}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_T-profile_NGC5457_z_merged.pdf}\n\\caption{Radial profiles of dust temperature, $\\Sigma_{\\rm SFR}$ and $\\Sigma_\\star$. All the profiles are plotted as gas-mass-weighted averages. Top panel: temperature profiles from all fitting methods. The 16-84 percentile scatter from the fitting is shown in shaded areas. Bottom panel: $\\Sigma_{\\rm SFR}$ and $\\Sigma_\\star$ profiles. See \\secref{subsec: other} for data source and calculation. A 10\\% uncertainty, as suggested in \\citet{DALE09}, is plotted as the shaded region.\\label{fig: T_prof_new}}\n\\end{figure}\nIn the top panel of Figure \\ref{fig: T_prof_new}, we plot the $M_{\\rm gas}$-weighted dust temperature as a function of radius for each method. Within a small radial range, we assume that the DGR variation is small, thus the $M_{\\rm gas}$-weighted dust temperature would be a representative $T_d$ in the corresponding radial region. For the PL method, temperature is not a directly fitted variable. Thus, we calculate the dust mass-weighted average $U$, and convert it to temperature according to \\secref{sec: PL}.\n\nThe equilibrium dust temperature depends on the heating radiation field, which should be related to a combination of $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$ here, shown in the bottom panel of Figure \\ref{fig: T_prof_new}. By comparing to the radial trend of heating sources, the one model that stands out is SE: it has a temperature profile rising from the galaxy center to $0.8R_{25}$. 
It is possible to change the relationship between heating sources and dust temperature if the geometry and\/or the opacity of the ISM changes with radius. However, with both heating source tracers having intensity decreasing by more than one dex within $0.8R_{25}$, we expect a decreasing $T_d$ with radius to be the dominant trend. Thus, we reach the same conclusion as in the previous section: the results from the SE method are less physically plausible.\n\n\\subsection{Residual distributions}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_Residual_NGC5457_merged.pdf}\n\\caption{2-dimensional histograms of relative residual versus observed SED at each band. The x-axes have unit in $\\rm MJy~sr^{-1}$. The zero relative residual line is marked in gray.\\label{fig: Residual_merged}}\n\\end{figure*}\nThe residual distribution is one of the most straightforward ways to check the goodness of fit. For each method, we plot the 2-dimensional histogram of relative surface brightness residuals in Figure \\ref{fig: Residual_merged}. We expect that a good fit will give a residual distribution that is symmetric about zero (the gray lines in all panels in Figure \\ref{fig: Residual_merged}) and has no trend with the measured surface brightness. An example of a well-behaved residual distribution can be seen for the BE model at the SPIRE 250 band. Otherwise, there may be an underlying systematic effect which tells us that the model is flawed or an additional free parameter is needed.\n\nThere are two features occurring for all MBB methods: 1) At the high intensity end, all of our methods underestimate PACS160. 2) In general, the relative residuals are smaller at the low intensity end (see more discussion in \\secref{sec: compare-chi2}). The SE method gives the most compact residual distributions. This means that leaving both $T_d$ and $\\beta$ free provides the highest flexibility to fit the SED among all models here. 
However, we should bear in mind that the SE model yields DGR and temperature gradients distinct from the other models and that we consider these results less physically plausible, as previously shown in \\secref{sec: max DGR} and \\ref{sec: T grad}. The FB method yields the residual distribution least consistent with random scatter about the model. It shows the least compact residual distribution, with long tails of positive residuals, especially in PACS100, SPIRE350 and SPIRE500. These positive residuals mainly come from low intensity regions. This indicates the need for $\\beta$ to change between high and low intensity regions. Among the remaining methods, both WD and PL improve the residuals at the short wavelengths covered by PACS100. This reflects the expected presence of warm, possibly out-of-equilibrium dust at these short wavelengths. The BE method has the second most compact residual distribution, and shows a better fit to the long wavelength bands that are crucial to accurately trace $\\Sigma_d$.\n\n\\subsection{The reduced chi-square values\\label{sec: compare-chi2}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Residual_Chi2_NGC5457.pdf}\n\\caption{The $\\tilde{\\chi}^2$ distributions for all fitting methods. Left: 2-dimensional histograms of $\\tilde{\\chi}^2$ with PACS100. The x-axes have unit in $\\rm MJy~sr^{-1}$. Note that the 2-dimensional histograms of $\\tilde{\\chi}^2$ with all five bands demonstrate similar information, thus we only plot the ones from PACS100. Right: the horizontal histograms of $\\tilde{\\chi}^2$. The orange lines show the expected distribution according to DoF.\\label{fig: residuals_chi2}}\n\\end{figure}\nThe reduced chi-square value is defined as $\\tilde{\\chi}^2 \\equiv \\chi^2\/(n - m)$, where $n$ is the number of observations (which is $5$ in our study) and $m$ is the number of fitting parameters (3 for SE, 2 for FB, 3 for BE, 3 for WD and 4 for PL). 
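The per-method DoF bookkeeping, and the rescaled $\\chi^2$-distribution that a well-behaved $\\tilde{\\chi}^2$ histogram should follow, can be sketched as follows (a Monte Carlo stand-in, not our actual fit outputs):

```python
import numpy as np

# DoF bookkeeping: n = 5 bands, m = free parameters per method.
n = 5
m = {"SE": 3, "FB": 2, "BE": 3, "WD": 3, "PL": 4}
dof = {name: n - k for name, k in m.items()}  # SE/BE/WD: 2, FB: 3, PL: 1

# If a model is adequate and the errors are right, chi2 over many regions
# follows a chi2 distribution with k = dof; dividing by k rescales it to
# mean 1, which is what the reduced chi-square histograms should show.
rng = np.random.default_rng(0)
k = dof["BE"]
samples = rng.chisquare(k, size=200_000) / k
print(samples.mean())  # ~1.0
```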
This value takes into account both the uncertainties in the observations and the degrees of freedom (DoF) of the models. The $\\tilde{\\chi}^2$ value indicates how good the fit is and how much an extra fitting parameter improves its quality. We plot the $\\tilde{\\chi}^2$ distribution versus observation in the left panels of Figure \\ref{fig: residuals_chi2}. As we have seen in the residual maps, the FB and WD methods have long tails in the low luminosity region. The FB and WD methods also have $\\tilde{\\chi}^2 \\geq 1$ in the high luminosity region, mainly due to the residuals at long wavelengths, where the corresponding uncertainties are much smaller. The PL method has relatively large $\\tilde{\\chi}^2$ everywhere, which means the extra DoF does not offer an improvement in the quality of the fitting. Note that this result does not imply the physical correctness of a single temperature over an ISRF distribution, but indicates that the DoF from the ISRF distribution is less effective in improving the quality of FIR SED fitting.\n\nAll the methods have a gradually rising $\\tilde{\\chi}^2$ toward the high luminosity region. Calculating the contribution to $\\tilde{\\chi}^2$ from each band shows that the most important contributor to this phenomenon is the PACS160 band. There is in general a $\\sim$20\\% systematic underestimation by the model fits in PACS160 in the center of M101. One possible explanation is that the contribution from the \\textsc{[Cii]} 158 \\micron\\ line is integrated into the PACS160 SED, which makes the PACS160 SED brighter than what is predicted by dust emission models. This effect is shown to be minor by \\citet{GALAMETZ14}, where the authors demonstrated that \\textsc{[Cii]} contributes only around 0.4\\% of the integrated 160~\\micron\\ emission. Another possible explanation is an unknown systematic uncertainty in PACS160. 
Previous work by \\citet{ANIANO12} found that PACS160 was $\\sim$20\\% higher than {\\em Spitzer} MIPS160 measurements in the bright regions of some nearby galaxies.\n\nWe also examine the histograms of $\\tilde{\\chi}^2$ (Figure \\ref{fig: residuals_chi2}, right panels) with two features in mind: 1) the mean value, which is expected to be one; 2) the shape of the histogram, which should resemble the $\\chi^2$-distribution with $k$ DoF\\footnote{We normalized the $\\chi^2$-distribution to a mean value of one, i.e., $k\\times f(k\\tilde{\\chi}^2, k)$.}. The SE method has a mean $\\tilde{\\chi}^2$ of 0.77. The histogram is more compact than a $\\chi^2$-distribution with $k=2$. Both indicate that we might be overestimating the uncertainties in the SE method. FB and WD have mean values of 1.5 and 1.64, respectively, and flatter histograms than expected. BE has a mean value of 0.97 and a distribution resembling what we expected. PL has a mean value of 3.16, which means the extra parameters in the PL model do not help it make a fit precise enough to match its DoF.\n\n\\subsection{Summary of model comparison}\nAmong the MBB variants we have tested, we consider the SE method physically less plausible because the resulting temperature and DGR gradients do not match our physically-motivated expectations. The DGR results from the other four MBB variants are consistent with each other in regions with $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.5$, as illustrated in Figure \\ref{fig: DGR_grad_models_merged}. This implies that the dust masses measured from the MBB fitting are mostly insensitive to the specific choices about the radiation field distribution. According to the residual distribution and $\\tilde{\\chi}^2$ values, the BE model gives the statistically best fit, which means that the most important first-order correction to the basic MBB is to allow $\\beta$ to vary in the long wavelength region.
We will consider BE as the preferred model based on these tests.\n\n\\section{Discussion}\\label{sec: discussions}\n\\subsection{Is DTM Constant in M101?}\\label{sec: dicuss_variable_DTM}\nAll of our models indicate that DGR falls off more steeply than metallicity, showing a variable DTM ratio. Our preferred model (BE) has ${\\rm DGR} \\propto Z^{1.7}$, which is equivalent to DTM changing from 0.25 at $\\rm 12+\\logt({\\rm O\/H})\\sim$7.8 to 1 above $\\rm 12+\\logt({\\rm O\/H})\\sim$8.5. Models with $\\beta$ fixed have smaller power-law indices: specifically, the FB and WD models show ${\\rm DGR} \\propto Z^{1.4}$, and the PL model shows ${\\rm DGR} \\propto Z^{1.2}$. Even if we only consider the region with $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$, where the majority of our data points reside, we still obtain a DGR trend steeper than the metallicity gradient. These results are based on direct-$T_e$ method metallicity measurements \\citep{CROXALL16} with uncertainties in $\\rm 12+\\logt({\\rm O\/H})$ around $0.04-0.08$ dex.\n\nIn order to understand what aspects of the dust life cycle could result in a variable DTM, we look for mechanisms that affect dust mass and metals in the ISM at different rates. The five most important mechanisms of this kind are: 1) accretion of metals in the ISM onto existing dust grains, which raises DTM; 2) ISM enrichment from stellar sources (e.g. AGB stars, SNe), which has a DTM characteristic of the particular stellar source instead of the DTM in the current ISM; 3) dust destruction by SNe, which lowers DTM; 4) infall of circumgalactic medium (CGM) into the galaxy, which dilutes the ISM DTM with the lower DTM in the CGM \\citep{DWEK98, HIRASHITA99, ZHUKOVSKA16}; 5) outflows of dust and metals into the CGM, which increase the ISM DTM because the outflow is less dusty than the ISM \\citep{LISENFELD98}.\n\nAmong these mechanisms, ISM accretion has a rate that increases with ISM density, especially in cold clouds \\citep{DWEK98, ASANO13}.
Observationally, ISM density can be roughly traced by the mass fraction of molecular hydrogen ($\\rm f_{H_2}$)\\footnote{Without knowing the three-dimensional ISM geometry, $\\rm f_{H_2}$ would be a better indicator of ISM density than $\\Sigma_{\\rm gas}$.}. The rate of enrichment from stellar sources should follow the stellar mass surface density ($\\Sigma_\\star$), modulo stellar age effects. The effects of production and destruction of dust by SNe should track both the massive star formation rate ($\\Sigma_{\\rm SFR}$) and the older stellar populations ($\\Sigma_\\star$).\n\\begin{table}\n \\centering\n \\caption{Correlation between $\\log_{10}$DTM and the physical quantities $\\log_{10}\\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$.\\label{tab: Residual trend}}\n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n Quantity & \\multicolumn{2}{c}{Direct} & \\multicolumn{2}{c}{Residual} \\\\\n & $\\rho_S$ & $p$-value\\footnote{The $p$-value is the probability of obtaining a $\\rho_S$ greater than or equal to the calculated value from the given data when the null hypothesis is true. In other words, the $p$-value ranges from 0 to 1, and a smaller $p$-value implies a more significant correlation.} & $\\rho_S$ & $p$-value \\\\\n \\hline\n $\\log_{10}\\rm f_{H_2}$ & 0.80 & $\\ll 1$ & 0.26 & $\\ll 1$ \\\\\n $\\log_{10}\\Sigma_\\star$ & 0.72 & $\\ll 1$ & -0.05 & 0.12 \\\\\n $\\log_{10}\\Sigma_{\\rm SFR}$ & 0.22 & $\\ll 1$ & -0.08 & 0.007 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_residual_trends_NGC5457_BE.pdf}\n\\caption{Relation between DTM and the three physical quantities: $\\rm f_{H_2}$ (a, d), $\\Sigma_\\star$ (b, e) and $\\Sigma_{\\rm SFR}$ (c, f). (a, b, c): Relations in the raw data. (d, e, f): Relations after removing the radial trends in all four quantities: $\\log_{10}$DTM, $\\log_{10}\\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$.
The radial trend removal is done by first fitting the quantities versus radius with linear regression, and then subtracting the regression results from the original data. Radial trend removal is discussed in \\secref{sec: dicuss_variable_DTM}. The mean uncertainty in $\\Sigma_d$ is 0.1 dex. $\\Sigma_\\star$ is in units of $M_{\\sun}~\\rm pc^{-2}$ and $\\Sigma_{\\rm SFR}$ is in units of $M_{\\sun}~\\rm kpc^{-2}~yr^{-1}$.\\label{fig: residual trend}}\n\\end{figure*}\nTo test these potential correlations of DTM with environmental characteristics, we calculate Spearman's rank correlation coefficient ($\\rho_S$) and the $p$-value between $\\log_{10}$DTM and these three quantities. Note that we only include the region with $\\rm f_{H_2}\\geq 5\\%$ for all four quantities, namely DTM, $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$, due to the detection limit of HERACLES. $\\log_{10}$DTM correlates strongly and significantly with both $\\log_{10} \\rm f_{H_2}$ and $\\log_{10} \\Sigma_\\star$, while it shows a weaker but significant correlation with $\\log_{10}\\Sigma_{\\rm SFR}$. This is shown in the ``Direct'' columns in Table \\ref{tab: Residual trend} and the top panels in Figure \\ref{fig: residual trend}.\n\nWhile there are significant correlations between DTM and these environmental characteristics, all the quantities here (DTM, $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$) to first order have major trends that vary with radius. $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$ all have a $\\rho_S$ with radius greater than their $\\rho_S$ with $\\log_{10}$DTM. $\\log_{10}$DTM also has a higher $\\rho_S$ with radius than with the other quantities. The results of calculating $\\rho_S$ and the $p$-value directly will therefore be dominated by this major radial trend. In order to investigate what drives the DTM variation, we need to remove these dominant radial trends.
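The detrending described next amounts to fitting each quantity against radius with linear regression and then correlating the residuals. A minimal sketch, assuming NumPy and SciPy are available, with hypothetical stand-in arrays rather than our measured maps:

```python
import numpy as np
from scipy import stats

def remove_radial_trend(radius, quantity):
    """Fit `quantity` versus `radius` with linear regression and
    return the residuals (data minus the fitted radial trend)."""
    slope, intercept = np.polyfit(radius, quantity, 1)
    return quantity - (slope * radius + intercept)

# Hypothetical per-pixel arrays: both quantities share a strong radial
# gradient plus a common non-radial component that detrending exposes.
rng = np.random.default_rng(0)
radius = np.linspace(0.0, 20.0, 200)                  # kpc
common = rng.normal(0.0, 0.1, radius.size)            # shared scatter
log_dtm = -0.02 * radius + common + rng.normal(0.0, 0.05, radius.size)
log_fh2 = -0.10 * radius + common + rng.normal(0.0, 0.05, radius.size)

# Direct correlation is dominated by radius; the residual one is not.
rho_res, p_res = stats.spearmanr(remove_radial_trend(radius, log_dtm),
                                 remove_radial_trend(radius, log_fh2))
```

With this construction, the residual $\rho_S$ measures the shared non-radial component rather than the dominant radial gradient, which is exactly the quantity reported in the ``Residual'' columns of the table.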
This removal is done by first fitting $\\log_{10}$DTM, $\\log_{10} \\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$ versus radius with linear regression, and then subtracting the regression results from the original data points to get the residuals. The correlations between $\\log_{10}$DTM and $\\log_{10} \\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$ after radial trend removal are shown in the bottom panels in Figure \\ref{fig: residual trend} and the ``Residual'' columns in Table \\ref{tab: Residual trend}.\n\nThe resulting $\\rho_S$ between residual $\\log_{10}$DTM and residual $\\log_{10}\\rm f_{H_2}$ is 0.26, with a $p$-value $\\ll 1$. This indicates that the correlation between them is weak compared to the scatter in the data but significant: the null hypothesis, that the two variables (residual DTM and $\\rm f_{H_2}$) are unrelated, is extremely unlikely to be true. Residual $\\log_{10}\\Sigma_\\star$ and residual $\\log_{10}\\Sigma_{\\rm SFR}$, on the other hand, have their $\\rho_S$ drop relative to the direct correlation; their residual $\\rho_S$ values indicate extremely weak correlations and are thus considered negligible.\n\nBased on this calculation, we suggest that ISM density may be the most important environmental factor affecting DTM in M101. This would explain the correlation between variations of DTM at a fixed radius and variations in $\\rm f_{H_2}$. The stellar sources, traced by $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$, do not correlate significantly with the variations of DTM at a fixed radius.\n\n\\subsubsection{Variable emissivity coefficient}\\label{subsec: discuss_emissivity}\nAlthough we have thus far interpreted our results as changes in DTM, an alternative possibility is that $\\kappa_{160}$ varies with environment instead. As discussed in \\secref{Sec: Model}, all of our MBB variants are subject to the degeneracy between $\\Sigma_d$ and $\\kappa_{160}$.
The way we deal with it is by calibrating $\\kappa_{160}$ with the MW diffuse ISM SED (\\secref{Sec: calibration}) and assuming that all the variation in temperature-corrected SED amplitude is due to $\\Sigma_d$ only. However, this assumption might fail if we observe environments that differ from the high-latitude MW diffuse ISM we used for calibration and if $\\kappa_{160}$ varies with the local environment. In general, our DGR($Z$) does not follow the DGR($Z$) calculated from ${\\rm F_\\star}=0.36$, which has been used for our calibration. This leaves the possibility that the changes we see in DTM are still degenerate with changes in $\\kappa_{160}$.\n\n$\\kappa_{160}$ can be a function of dust size, temperature, and composition, which may change as gas transitions from diffuse to dense phases. The calculations in \\citet{OSSENKOPF94, KOHLER11} show an enhanced dust emissivity due to coagulation of dust particles in dense ISM regions. This phenomenon is also observed by \\citet{PLANCK14} and \\citet{PLANCK15} in the MW, where the authors show an increase in total opacity with increasing ISM density and decreasing $T_d$. However, we note that both \\citet{PLANCK14} and \\citet{PLANCK15} \\textit{assumed} a constant DGR, and explained their observations with a change in the composition and structure of the dust particles.\n\nIn discussing emissivity variation due to coagulation, we focus on the dense regions of M101, where coagulation is more likely to happen. \nWe use the constant DTM in the MW \\citep{DRAINE11} as our reference true DTM and calculate how our DTM deviates from the reference as a function of ISM density, traced by $\\rm f_{\\rm H_2}$, plotted in Figure \\ref{fig: rDGR_vs_metal_BE}.
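The choice among log\/linear parameterizations made in the next paragraph, keeping the combination with the highest Pearson correlation, can be sketched as follows (assuming NumPy and SciPy; the mock power-law data are hypothetical, not our M101 measurements):

```python
import numpy as np
from scipy import stats

def best_pearson_form(x, y):
    """Compare Pearson's r for the four log/linear combinations of an
    x-y relation; return the name of the combination with the largest
    |r| along with all four coefficients."""
    forms = {
        "x-y": (x, y),
        "logx-y": (np.log10(x), y),
        "x-logy": (x, np.log10(y)),
        "logx-logy": (np.log10(x), np.log10(y)),
    }
    r_values = {name: stats.pearsonr(a, b)[0] for name, (a, b) in forms.items()}
    best = max(r_values, key=lambda name: abs(r_values[name]))
    return best, r_values

# Mock data drawn from a power law y ~ x**0.2 with lognormal scatter,
# loosely mimicking the kappa_160 ~ f_H2**0.2 scaling discussed below.
rng = np.random.default_rng(1)
x = 10.0 ** rng.uniform(-1.3, 0.0, 300)        # e.g. f_H2 from ~5% to 100%
y = x ** 0.2 * 10.0 ** rng.normal(0.0, 0.05, x.size)
best, r_values = best_pearson_form(x, y)
```

For data that follow a power law with lognormal scatter, the log-log form should yield the highest correlation, matching the selection made in the text.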
Note that the figure only includes the region with significant detection from HERACLES ($f_{\\rm H_2} \\ga 5\\%$, or $\\rm 12+\\logt({\\rm O\/H}) \\ga 8.4$), not the full range of our DGR-to-metallicity figures.\n\nWe calculate the Pearson's correlation coefficient for all four combinations of the log\/linear $\\frac{DTM}{DTM_{MW}}$-to-$ f_{\\rm H_2}$ relation, i.e., $\\rm \\frac{DTM}{DTM_{MW}}$-to-$f_{\\rm H_2}$, $\\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$, $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$f_{\\rm H_2}$, and $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$. The resulting coefficients are 0.712, 0.790, 0.694, and 0.795, respectively. Thus we continue our analysis with the $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$ relation. From the fit of $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$ to $\\log_{10} f_{\\rm H_2}$, our $\\rm \\frac{DTM}{DTM_{MW}}$ varies from 0.9 to 2.0 in this region. If we attribute this change to an increase in emissivity, then $\\kappa_{160}$ would go from 19 to $41~\\rm cm^2~g^{-1}$ in this region, with a relation of $\\kappa_{160} \\propto f_{\\rm H_2}^{0.2}$. This is comparable to the emissivity changes inferred by \\citet{PLANCK14} using similar reasoning in MW clouds and well within the range allowed by theoretical grain coagulation models \\citep{OSSENKOPF94, KOHLER11}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_relDGR_vs_afH2_NGC5457_.pdf}\n\\caption{Our DTM normalized by the MW DTM \\citep{DRAINE11} plotted as a function of H$_2$ mass fraction ($f_{\\rm H_2}$). The original distribution is shown in blue. A representative error bar (cyan), which includes only the uncertainties in DGR, is shown at the top left. Another error bar including the extra uncertainty in $\\rm 12+\\logt({\\rm O\/H})$, which is considered systematic, is shown in green at the top left. The linear regression of $\\log_{10} \\rm DTM\/DTM_{MW}$-to-$\\log_{10} f_{\\rm H_2}$ is shown in red.
Note that this plot only includes data with $f_{\\rm H_2} \\ga 5\\%$ ($\\rm 12+\\logt({\\rm O\/H}) \\ga 8.4$), and that the y-axis is in log scale.\\label{fig: rDGR_vs_metal_BE}}\n\\end{figure}\n\n\\subsubsection{Variable conversion factor}\\label{sec: alpha_CO discussion}\nAnother potential explanation of the change in DGR (and thereby DTM) is that the conversion factor $\\alpha_{\\rm CO}$ is not a constant, in which case we could be wrong in estimating $\\Sigma_{\\rm H_2}$. There are two major observed trends in $\\alpha_{\\rm CO}$ \\citep{BOLATTO13}. The first trend is a metallicity-dependent $\\alpha_{\\rm CO}$. In the model derived in \\citet{WOLFIRE10}, among others, $\\alpha_{\\rm CO}$ increases as metallicity decreases, which means we could be overestimating DGR in the outer part of M101. Recovering this overestimation would increase the variation in DTM and make the observed trends stronger. Moreover, since $\\rm f_{H_2}$ traced by a fixed $\\alpha_{\\rm CO}$ drops steeply with increasing radius in M101, any modification from a metallicity-dependent $\\alpha_{\\rm CO}$ that can affect DGR in the disk must posit a large and almost totally invisible reservoir of CO-dark molecular gas. \\citet{BOLATTO13} suggest using a constant $\\alpha_{\\rm CO}$ in regions with metallicity above $0.5Z_\\sun$. When we test the total gas mass from a constant $\\alpha_{\\rm CO}$ against the one calculated with the \\citet{WOLFIRE10} metallicity-dependent $\\alpha_{\\rm CO}$, the difference between them is at most 0.12 dex. This small change is due to the fact that in the radial region of M101 where H$_2$ makes a substantial contribution to the total gas mass, the metallicity is greater than $\\rm 12+\\logt({\\rm O\/H})=8.4$, where $\\alpha_{\\rm CO}$ only changes by a small amount.
Considering the unknown uncertainties caused by the constant DTM assumption in the metallicity-dependent model \\citep{BOLATTO13}, we decided to present only the results with a fixed $\\alpha_{\\rm CO}$.\n\nThe second trend is the decrease of $\\alpha_{\\rm CO}$ in the very center of some nearby galaxies, shown by \\citet{SANDSTROM13}. It is worth noting that the \\citet{SANDSTROM13} analysis assumed DGR was locally independent of $\\rm f_{H2}$ to simultaneously solve for $\\alpha_{\\rm CO}$ and DGR in their solution pixels. Over most of M101, however, the average $\\alpha_{\\rm CO}$ they find is similar to the standard MW conversion factor, so using the \\citet{SANDSTROM13} values or making the standard assumption of a MW $\\alpha_{\\rm CO}$ will not greatly impact our results. \\citet{SANDSTROM13} found that M101 has one of the largest observed central decreases in $\\alpha_{\\rm CO}$, showing $\\alpha_{\\rm CO}=0.35^{+0.21}_{-0.13}$ in the central solution pixel, which is far lower than the galaxy-average value. Adopting the galaxy-average value of $\\alpha_{\\rm CO}$ therefore causes us to overestimate the amount of gas in the center and subsequently underestimate the DGR and DTM. As shown in Figure \\ref{fig: DGR_grad_models_BE} (c), we do observe a decrease in the DGR and DTM in the central $\\sim$kpc of M101, which is likely the result of an incorrect conversion factor assumption there. However, since the affected region is small compared to our full M101 maps, we can neglect this effect in the DTM discussion.\n\nBeyond radial trends that alter $\\alpha_{\\rm CO}$ relative to what we have assumed, it is also possible that $\\alpha_{\\rm CO}$ varies from cloud to cloud at a fixed radius. If we overestimate $\\alpha_{\\rm CO}$ for a cloud, the DTM would be underestimated and $\\rm f_{H_2}$ would be overestimated. If we underestimate $\\alpha_{\\rm CO}$, we would underestimate $\\rm f_{H_2}$ and overestimate DTM.
Both overestimation and underestimation work in the opposite sense of the correlation we observe between the residual DTM and $\\rm f_{H_2}$ and, if corrected for, would therefore strengthen our conclusions. Thus, the positive correlation between DTM and $\\rm f_{H_2}$ we calculated previously is not a result of $\\alpha_{\\rm CO}$ variation.\n\n\\subsubsection{Summary of DTM Measurements}\nTo summarize, we can explain our fitting results from all our MBB variants except the SE model with a variable DTM, where ${\\rm DGR} \\propto Z^{1.7}$ in the BE model. The maximum DGR is still within the available total metal abundance limits. By comparing the correlations between DTM and the physical quantities $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{SFR}$, we conclude that the strongest environmental correlation of DTM is with $\\rm f_{H_2}$, which we take to be a reasonable observational indicator of ISM density and thus a tracer of the accretion process. We see no clear trends that indicate correlations of DTM with stellar sources or massive star formation.\n\nOn the other hand, we could also explain the DTM results with enhanced dust emissivity in dense regions due to coagulation. The increase in $\\kappa_{160}$ is at most twice the originally calibrated value, which is consistent with the findings in \\citet{PLANCK14}. A non-extreme metallicity-dependent $\\alpha_{\\rm CO}$ does not affect our DGR trend much due to the low $\\rm f_{H_2}$ in most regions; however, the change of $\\alpha_{\\rm CO}$ in the center is related to our observed decrease of DGR in the central kpc.
Variability of $\\alpha_{\\rm CO}$ from cloud to cloud at fixed radius would lead to a negative correlation between residual DTM and residual $\\rm f_{H_2}$, which is the opposite of what we observe.\n\nBoth explanations, variable DTM and variable emissivity, are within the physically plausible range, thus we cannot definitively conclude whether the variations we see are mainly due to changes in DGR or changes in the emissivity. However, given the observation that elemental depletions in the Milky Way are a function of ISM density and $\\rm f_{H_2}$ \\citep[][see further discussion below]{JENKINS09}, which is equivalent to a variable DTM, we argue that attributing all variation to emissivity is unlikely. To break the degeneracy between emissivity and $\\Sigma_d$, one future path is to calculate emissivity from dust models according to the physical properties of the local ISM. Another is to build an observational database of $\\Sigma_d$-to-SED relations, with known metallicity and ISM density, for future calibration. \nAnother powerful test available in the near future will be to measure the properties of the UV\/optical extinction curve, like $\\rm R_V$, as a tracer for coagulation and processes that can change the IR emissivity in the Local Group, and correlate this extinction curve tracer with quantities observable outside the Local Group.\n\n\\subsection{Comparison with previous DTM studies}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_D14_NGC5457_BE.pdf}\n\\caption{Top: We compare our DGR($Z$) with that from M31 measured by \\citet{DRAINE14}. The solid line is where \\citet{DRAINE14} present their DGR fitting in M31 with observed metallicity, and the dashed line is an extrapolation of their linear DGR($Z$). Within this metallicity region, our M101 results suggest a DTM 2 times higher than in M31.
However, if we instead select the range of radii where the M31 $\\rm f_{H_2}$ matches what we see in M101 (red region), we find a much better agreement between our observed DTM and the extrapolation of \\citet{DRAINE14}. Bottom: Demonstration of how we select the green and red zones. Grey zone: \\citet{DRAINE14} $\\rm f_{H_2}$ range corresponding to the presented $\\rm 12+\\logt({\\rm O\/H})$ range. Blue: $\\rm f_{H_2}$-metallicity relation in M101. Green zone: Region with the same metallicity as the \\citet{DRAINE14} data range. Red zone: Region where M101 $\\rm f_{H_2}$ corresponds to the \\citet{DRAINE14} $\\rm f_{H_2}$.\\label{fig: DTM_D14}}\n\\end{figure}\nIn Figure \\ref{fig: DTM_D14}, we plot our results compared to the linear DGR($Z$) relation discussed in \\citet{DRAINE14}. \\citet{DRAINE14} show that the M31 DTM matches very well with the DTM predicted from depletions along the line of sight to $\\zeta$~Oph in the MW \\citep[${\\rm F_\\star}=1$ line of sight in][]{JENKINS09}. In the corresponding metallicity range, our DGR is larger than the one in \\citet{DRAINE14}, as illustrated by the green zone in Figure \\ref{fig: DTM_D14}. The derived $\\kappa_{160}$ value in \\citet{DRAINE14} is 12.51, which is around 0.75 times our $\\kappa_{160}$ value. Thus, the DGR discrepancy at the high metallicity end is not a result of our choice of $\\kappa_{160}$. Moreover, \\citet{DALCANTON15, PLANCK16} indicate that the \\citet{DRAINE07} model might overestimate $\\Sigma_d$ by $\\sim 2$ times, which would make the difference even larger. Thus, the difference between \\citet{DRAINE14} and our results in the high metallicity region is not due to parameter selection, but due to physical differences between M101 and M31, or differences in the modeling.\n\nInstead of comparing regions with the same metallicity, we can also compare the DTM between regions in M31 and M101 with similar ISM density, traced by $\\rm f_{H_2}$ here.
According to \\citet{NIETEN06}, the region in M31 where \\citet{DRAINE14} give the direct metallicity measurements has $\\rm f_{H_2}$ below 0.2, marked by the horizontal dashed line in Figure \\ref{fig: DTM_D14}. This $\\rm f_{H_2}=0.2$ upper limit meets our M101 data at $\\rm 12+\\logt({\\rm O\/H})=8.44$, indicated where the horizontal dashed line meets the blue curve in Figure \\ref{fig: DTM_D14}. We pick the region between $\\rm 12+\\logt({\\rm O\/H})=8.44$ and where we have the minimum $\\rm f_{H_2}$, shown in red in Figure \\ref{fig: DTM_D14}, as the region with ISM density similar to the M31 data in \\citet{DRAINE14}. Within this region, our DTM is consistent with the extrapolation of the \\citet{DRAINE14} DTM. This suggests that the difference in DTM between our results and \\citet{DRAINE14} may be a consequence of M101 having a higher $\\rm f_{H_2}$, and therefore enhanced depletion (i.e., larger DTM), at the metallicity of M31.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_R14_NGC5457_BE.pdf}\n\\caption{Our DGR versus metallicity with the \\citet{REMY-RUYER14} results (data points in blue). The power law (orange dashed line) and broken power law (green dotted line) fits are quoted with MW conversion factors.\\label{fig: DTM_R14}}\n\\end{figure}\n\\citet{REMY-RUYER14} compiled integrated DGR($Z$) for a large set of galaxies observed by \\textit{Herschel}. In Figure \\ref{fig: DTM_R14} we compare our measured DGR($Z$) with theirs. At the high metallicity end, our slope is shallower than their power law fit, but within the 1-$\\sigma$ confidence level of each other ($2.02 \\pm 0.28$ from \\citet{REMY-RUYER14}). Unfortunately, the turnover point of the broken power law derived in \\citet{REMY-RUYER14} is at $\\rm 12+\\logt({\\rm O\/H}) = 8.10 \\pm 0.43$, and we do not have enough reliable DGR fitting results below that metallicity to compare with.
It is hard to conclude whether a broken power law with a turnover point around $\\rm 12+\\logt({\\rm O\/H})=8.0$ would fit our results better than a single power law. The \\citet{REMY-RUYER14} broken power law in the high metallicity region is essentially identical to the \\citet{DRAINE11} power law.\n\n\\subsection{Comparison with MW depletion\\label{sec: J09}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_J09_NGC5457_BE.pdf}\n\\caption{The comparison of our results with the DTM corresponding to various MW $\\rm F_\\star$ values described in \\citet{JENKINS09}. Most MW measurable regions have $0 \\la {\\rm F_\\star} \\la 1$. ${\\rm F_\\star}=0.36$ represents the average property of our $\\kappa_{160}$ calibration, and ${\\rm F_\\star}=\\rm inf$ means total depletion. The 40\\% $\\rm H_2$ location is marked because all \\citet{JENKINS09} data points have $f_{\\rm H_2} \\la 0.4$.\\label{fig: DTM_J09}}\n\\end{figure}\nStudies of the depletion of heavy elements in the MW \\citep{JENKINS09} also found a dependence of DTM on average ISM density and $\\rm f_{H_2}$. In Figure \\ref{fig: DTM_J09}, we display the DTM corresponding to various MW $F_\\star$ regions described in \\citet{JENKINS09}. All of their original data points have $f_{\\rm H_2} \\la 0.4$ and $17.4 \\la \\log_{10} (N_{\\rm HI}) \\la 21.8$. Regions with ${\\rm F_\\star}=1$ and ${\\rm F_\\star}=0$ are by definition the representative regions of high and low depletion in the diffuse ISM of the MW, respectively. Thus, the region between these two lines corresponds to a DTM similar to the MW range extending to lower metallicity. Most points with $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.4$ fall inside this range. The high-latitude diffuse ISM in the MW used to calibrate our $\\kappa_{160}$ has an ${\\rm F_\\star}$ of 0.36, which is why it was selected for the DGR calculation in that calibration (see \\secref{Sec: calibration}).
The ${\\rm F_\\star}=\\rm inf$ line means total depletion, which is physically the same as the DGR upper limit discussed in \\secref{sec: max DGR}. All our DGR fitting results are within this limit. It is interesting to note that the point where the DGR trend falls below the maximum depletion is at the boundary between the molecular-gas-dominated and atomic-gas-dominated regions (f$_{\\rm H_2}\\sim$0.4).\n\n\\subsection{Sensitivity of results to fitting methods}\\label{sec:sensitivity}\nIt is worth noting that, given the same dust emission SED, the fitting results are sensitive to the methods and parameters in the fitting process. Thus, it is important to be clear and self-consistent about the choices we make for calibration and fitting, as demonstrated by \\citet{GORDON14}. We also need to be careful when comparing results across studies. Here, we use the process of $\\kappa_{160}$ calibration with the SE model, which gives $\\kappa_{160}=10.48\\pm1.48~\\rm cm^2~g^{-1}$ with the SED of the MW diffuse ISM from \\citet{GORDON14}, to illustrate the possible variations in results due to different choices. Since we want to focus only on the methods, we use the MW diffuse ISM from \\citet{GORDON14} in this section instead of ours described in \\secref{Sec: calibration} to eliminate the simple offset.\n\\begin{itemize}\n \\item By changing to different models, $\\kappa_{160}$ can go up to 21.16 (PL model), which is a 100\\% change. Thus, the choice of fitting model strongly affects fitting results.\n \\item By making the fitting grid spacing coarser, from the original 0.002 spacing to a 0.1 spacing in $\\log_{10}\\kappa_{160}$, the resulting $\\kappa_{160}$ becomes 11.7, which is a 10\\% change. This has a mild effect on fitting results, and is especially important when the grid spacing is larger than the adopted uncertainties.\n \\item The matrix form and values of the covariance matrix can affect the fitting results.
By changing the covariance matrix from ours to the one in \\citet{GORDON14} and keeping all other factors the same, the resulting $\\kappa_{160}$ goes to 17.9, which is a 70\\% change. This also affects the results strongly.\n \\item The covariance matrix can also change the fitting residuals. For example, \\citet{GORDON14} assume a flat uncertainty across the five bands and equal correlation, which results in similar residuals among the five bands. On the other hand, we assume different values and correlations between the DIRBE and FIRAS bands, which results in better residuals in the FIRAS bands and worse residuals in the DIRBE band.\n\\end{itemize}\n\n\\section{Conclusions} \\label{sec: conclusions}\nWe present dust SED fitting results from five MBB variants in M101 with kpc-scale spatial resolution. We compare the resulting $\\Sigma_d$ and $T_d$ with known physical limitations, and conclude that the results from a simple, variable-emissivity, modified blackbody model are not physically plausible. The other four models have results consistent with each other at $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.5$, which demonstrates the robustness of the modified blackbody model under many conditions. Among the four models, the one with a single-temperature blackbody modified by a broken power-law emissivity has the highest fitting quality in residuals and $\\tilde{\\chi}^2$ distribution. Thus, the first-order correction to the MBB, necessitated by our observed SEDs in M101, is to add flexibility in the emissivity spectral index at long wavelengths.\n\nThe resulting DTM, derived from our dust and gas surface densities and direct $\\rm T_e$-based metallicities, is not constant with radius or metallicity in M101 in any of the five models. From the preferred BE model, a relation of $\\rm DGR \\propto Z^{1.7}$ is observed overall, and $\\rm DGR \\propto Z^{1.9}$ in the region with $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$.
We try to explain this variable DTM by searching for correlations between DTM and tracers of the formation and destruction mechanisms of dust. By comparing the correlations between DTM and the physical quantities ($\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$) after removing the major radial trend, we argue that the accretion of metals in the ISM onto existing dust grains could be a cause of this variable DTM, while we do not see evidence for correlations with stellar or SNe-related production and destruction.\n\nIt is also possible that the change in DTM is actually an enhancement of emissivity due to coagulation. In the center of M101, if we assume the \\citet{DRAINE14} DTM and calculate the possible change in emissivity, the resulting $\\kappa_{160}$ would be $\\sim$19 to $41~\\rm cm^2~g^{-1}$, corresponding to factors of 0.9 to 2.0 relative to the originally calibrated value of $16.52~\\rm cm^2~g^{-1}$ in the high-latitude diffuse ISM in the MW. This change is still within the range of previous observational and theoretical calculations. Both changes in DTM and in emissivity are possible according to our current knowledge.\n\nWhen comparing with previous DTM studies, our DTM is 2 times larger than the \\citet{DRAINE14} results in the same metallicity region, but our DTM is consistent with their DTM extrapolated to the region with similar $\\rm f_{H_2}$. Comparing with \\citet{REMY-RUYER14}, our DTM has a slope consistent with their power-law fitting slope. Unfortunately, we do not have enough low-metallicity data to compare with their broken power law. When comparing with known depletion relations from the MW and the amount of available metals in the central 5 kpc of M101, our DTM suggests essentially all available heavy elements are in dust, which is consistent with the ${\\rm F_\\star}={\\rm inf}$ line from extrapolating the \\citet{JENKINS09} calculations, and also larger than most of the previous studies.
Our DTM results in the lower metallicity region would fall between ${\\rm F_\\star}=1$ and ${\\rm F_\\star}=0$ in the MW. This suggests that even in the lowest metallicity regime of our study, we have not yet probed conditions where the dust life cycle differs in major ways from that in the Milky Way.\n\nDuring the fitting process, we found that the fitting results from the likelihood calculated with a multi-dimensional Gaussian distribution and a complete covariance matrix are sensitive to the choice of model and covariance matrix. Therefore, it is important to be self-consistent between calibration and fitting processes. It is also important to note the covariance matrix adopted when comparing fitting results across studies because the fitting results could change by 70\\% with different covariance matrices.\n\n\\acknowledgments\nWe thank the referee for useful comments that helped to improve the quality of the manuscript. We gratefully acknowledge the hard work of the KINGFISH, THINGS, HERACLES, LVL, and CHAOS teams and thank them for making their data publicly available. We acknowledge the usage of the HyperLeda database (http:\/\/leda.univ-lyon1.fr). IC thanks K. Gordon for helpful conversations regarding calibration and fitting. IC thanks Y.-C. Chen for helpful conversations. The work of KS, IC, AKL, DU and JC is supported by National Science Foundation grant No. 1615728 and NASA ADAP grants NNX16AF48G and NNX17AF39G. The work of AKL and DU is partially supported by the National Science Foundation under Grants No. 1615105, 1615109, and 1653300.\n\nThis work uses observations made with \\textit{Herschel}. \\textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 
\nPACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI\/OAA\/OAP\/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA\/CNES (France), DLR (Germany), ASI\/INAF (Italy), and CICYT\/MCYT (Spain). \nSPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA).\n\nThis work uses observations based on the National Radio Astronomy Observatory (NRAO) Karl G. Jansky Very Large Array. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\nThis work uses observations based on HERA on the IRAM 30-m telescope. IRAM is supported by CNRS\/INSU (France), the MPG (Germany) and the IGN (Spain). \nThis work uses observations made with the \\textit{Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. \nThis research has made use of NASA's Astrophysics Data System. 
This research has made use of the NASA\/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\n\n\\vspace{5mm}\n\\facilities{Herschel(PACS and SPIRE), VLA, GALEX, IRAM(HERA), Spitzer(MIPS and IRAC), LBT(MODS)}\n\\software{astropy \\citep{ASTROPY13},\n matplotlib \\citep{HUNTER07},\n numpy \\& scipy \\citep{VANDERWALT11},\n pandas \\citep{MCKINNEY10},\n voronoi\\_2d\\_binning \\citep{CAPPELLARI03},\n corner \\citep{Foreman-Mackey16}, \n Scanamorphos \\citep[v16.9 and v17.0;][]{ROUSSEL13}, \n HIPE \\citep[vspire-8.0.3287;][]{OTT10}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}\n\\vspace{-1cm}\n\n\n\n\n\\footnotetext{\\textit{$^{a}$~Laboratory of Nonlinear Chemical Dynamics, Institute of Chemistry, ELTE E\\\"otv\\\"os Lor\\'and University, Budapest, Hungary.}}\n\\footnotetext{\\textit{$^{b}$~Budapest University of Technology and Economics, Department of Analysis, Budapest, Hungary. Fax: +361 463 3172; Tel: +361 463 2314; E-mail: jtoth@math.bme.hu}}\n\\footnotetext{\\textit{$^{c}$~Chemical Kinetics Laboratory, Institute of Chemistry, ELTE E\\\"otv\\\"os Lor\\'and University, Budapest, Hungary.}}\n\n\\footnotetext{\\dag~Electronic Supplementary Information (ESI) available: [details of any supplementary information available should be included here]. See DOI: 10.1039\/cXCP00000x\/}\n\n\\footnotetext{\\ddag~Based on the talk given at the 2nd International Conference on Reaction Kinetics, Mechanisms and Catalysis. 20--22 May 2021, Budapest, Hungary}\n\n\\section{Introduction} \nThe concept of reaction extent (most often denoted by \\(\\xi\\)) is more than 100 years old \\cite{dedondervanrysselberghe}. \nIts importance is emphasized by the mere fact that it has been included in the IUPAC Green Book\\cite{millscvitashomankallaykuchitsu} (see page 43). 
\nTwo definitions are given that are equivalent in the very simple special case treated there, that of a\nsingle reaction step:\n\\begin{equation}\\label{eq:cm}\n\\ce{$\\sum_{m=1}^M\\alpha_{m}$X($m$) = $\\sum_{m=1}^M\\beta_{m}$X($m$)}; \n\\end{equation}\nwhere \\(M\\) is the number of chemical species \\ce{X(1)}, \\ce{X(2)}, \\dots, \\ce{X($M$)};\nand the integers \\(\\alpha_m\\) and \\(\\beta_m\\) are the corresponding stoichiometric coefficients of the reactant and product species, respectively.\nThe first definition is:\n\\begin{equation}\\label{eq:iupac1def}\n n_{\\ce{X($m$)}}=n^0_{\\ce{X($m$)}}+ \\nu_{m}\\xi, \n\\end{equation}\nwhere \\(n_{\\ce{X($m$)}}\\) and \\(n^0_{\\ce{X($m$)}}\\) are the actual and initial quantities (numbers of moles) of the species \\ce{X($m$)}, respectively. The symbol \\(\\nu_m\\) is the generalized stoichiometric number. It is negative for a reactant and positive for a product species.\nThe second definition is\n\\begin{equation}\\label{eq:iupac2def}\n\\Delta\\xi=\\frac{\\Delta n_{\\ce{X($m$)}}}{\\nu_m}=\n\\frac{n_{\\ce{X($m$)}}-n^0_{\\ce{X($m$)}}}{\\nu_m}.\n\\end{equation}\nA slightly different version is given in Ref.\\cite{goldloeningmcnaughtsehmi} and by the electronic version \\url{https:\/\/goldbook.iupac.org\/terms\/view\/E02283} called the IUPAC Gold Book\\cite{mcnaughtwilkinson}:\n\\begin{equation}\\label{eq:iupac3def}\n\\mathrm{d}\\xi=\\frac{\\mathrm{d} n_{\\ce{X($m$)}}}{\\nu_m}.\n\\end{equation}\nThe above-cited definitions have been summarized in the book by Stepanov et al.\\cite{stepanoverlikinafilippov}. 
The authors also give a good introduction to the methods of linear algebra applied in reaction kinetics.\n\nWith an eye on the applicability of the concept in modern formal reaction kinetics (or chemical reaction network theory) \nas exposed by Feinberg\\cite{feinbergbook} and T\\'oth et al.\\cite{tothnagypapp}, the following points seem crucial:\n\\begin{enumerate}\n\\item\nStarting from the original definition by De Donder and Van Rysselberghe\\cite{dedondervanrysselberghe}, we extend the definition to an \\emp{arbitrary} number of reaction steps.\n\\item\nWe do not restrict ourselves to reversible steps.\n\\item\nWe do not require linear independence of the reaction steps.\n\\item\nWe do not \"order the steps to one side\", which would result in hiding the difference between steps like \\ce{X -> Y} and \\ce{X + Y -> 2Y}.\n\\item\nWe do not take into account the atomic (or molecular) structure of the species.\n\\item \nWe do not use differentials when introducing the concept\n(cf. p. 61. of Ref.\\cite{truesdell}).\n\\item\nWe shall give an explicit definition more similar to Eq. \\eqref{eq:iupac2def} than to Eq. \\eqref{eq:iupac1def}.\n\\item\nWe take into consideration the volume of the reacting mixture to be able to calculate the number of individual reaction events.\n\\end{enumerate}\n\nThe structure of our paper is as follows. \nSection \\ref{sec:concept} introduces the concept for reaction networks of arbitrary complexity: for any number of reaction steps and species, and without assuming mass action kinetics. \nAs it is a usual requirement that the reaction extent tends to 1 when \"the reaction tends to its end\", we try to find quantities derived from our reaction extent having this property in Section \\ref{sec:what}.\nIt will turn out in many examples that the reaction extents do not tend to 1 in any sense. 
We show, however, that they contain quite relevant information about the time evolution of the reactions: they measure (or give) the number of occurrences of the individual reaction events.\nThese examples will also reflect the fact that the reaction events do not cease at equilibrium, and this can be seen without referring to fluctuations.\nTo close the paper, we show applications of the concept\nto more complicated cases: those with multiple stationary states, oscillation, and chaos. \n\n\nIn this part, first, we analyze the classical multi-stationary example\nby Horn and Jackson\\cite{hornjackson}. \nAs to oscillatory reactions, we start with the irreversible Lotka--Volterra reaction, and we also study the reversible Lotka--Volterra reaction in both the detailed balanced and the non-detailed balanced cases. \nOur next oscillatory example will be an experimental system studied by R\\'abai\\cite{rabai}. \nAs a chaotic example, we shall take a slightly modified version of that oscillatory system. 
\nDiscussion of the Conclusions and a list of Notations come last.\nThe proofs of the statements and Theorems are relegated to an Appendix so as to improve the logical flow of the manuscript without getting side-tracked.\nSupporting Information is given in a PDF file; upon request, the corresponding Wolfram Language notebook---the source of the PDF file---will be provided to the interested reader.\n\\section{The concept of reaction extent}\\label{sec:concept}\nStarting from the classical works\\cite{dedondervanrysselberghe,arisprolegomena1,croce}\nand relying on the consensus of the chemists' community as formulated by Laidler\\cite{laidlerglossary} our aim is to present a treatment more general than any of the definitions introduced and applied up to now.\n\\subsection{Motivation and fundamental definitions}\nWe are going to use the following concepts.\n\\subsubsection{Fundamental notations and definitions: The framework.}\nFollowing the books by Feinberg\\cite{feinbergbook} and by T\\'oth et al.\\cite{tothnagypapp} we consider a \\emp{complex chemical reaction}, simply \\emp{reaction}, or \\emp{reaction network} as a set consisting of \\emp{reaction steps} as follows: \n\\begin{equation}\\label{eq:ccr}\n\\left\\{\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) -> $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R)\\right\\}; \n\\end{equation}\nwhere \n\\begin{enumerate}\n\\item \nthe chemical \\emp{species} are \\ce{X($1$)}, \\ce{X($2$)}, \\dots, \\ce{X($M$)}---take note that their quantities \\(N_{\\ce{X($m$)}}\\) or \\(N_m\\) will be applied interchangeably;\n\\item\nthe \\emp{reaction steps} are numbered from 1 to \\(R;\\) \n\\item\nhere \\(M\\) and \\(R\\) are positive integers;\n\\item\n\\(\\boldsymbol{\\alpha}:=[\\alpha_{m,r}]\\) and \n\\(\\boldsymbol{\\beta}:=[\\beta_{m,r}]\\) are \\(M\\times R\\) matrices of non-negative integer components called \\emp{stoichiometric coefficients}, with the properties that all the species take part in at least one reaction 
step (\\(\\forall m \\exists r: \\beta_{m,r}\\neq\\alpha_{m,r}\\)), and all the reaction steps do have some effect (\\(\\forall r \\exists m: \\beta_{m,r}\\neq\\alpha_{m,r}\\)), and finally\n\\item\n\\(\\boldsymbol{\\gamma}:=\\boldsymbol{\\beta}-\\boldsymbol{\\alpha}\\) is the \\emp{stoichiometric matrix} of \\emp{stoichiometric numbers}.\n\\end{enumerate}\nInstead of Eq. \\eqref{eq:ccr} some authors prefer writing this:\n\\begin{equation}\\label{eq:ccrreduced}\n\\ce{$\\sum_{m=1}^M\\gamma_{m,r}$X($m$) = 0}\\quad (r=1,2,\\dots,R).\n\\end{equation}\nThis formulation immediately excludes reaction steps like \\ce{X + Y -> 2Y} (used e.g. to describe a step in the Lotka--Volterra reaction), or reduces it to \\ce{X -> Y}, \nchanging the stoichiometric coefficients used to formulate mass-action type kinetics. Similarly, an autocatalytic step that may be worth studying, see e.g. pp. 63 and 66 in the book by Aris\\cite{arisintro}, like \\ce{X -> 2X} appears oversimplified as \\ce{0 -> X}. \nAnother possibility is to exclude the \\emp{empty complex}, implying involuntarily that we get rid of the possibility to simply represent in- and outflow with \\ce{0 -> X} and \\ce{X -> 0}, respectively. \nThese last two examples evidently mean mass-creation and \nmass-destruction. \nIf one does not like these one should explicitly say that one is only interested in mass-conserving reactions. \nSometimes mass creation and mass destruction are slightly less obvious than above, \nsee the reaction network\n\\begin{equation*}\n\\ce{X -> Y + U},\\quad \\ce{Z -> X + U},\\quad \\ce{Z -> U},\n\\end{equation*}\nwhich is mass-producing.\n\nIt may happen that one would like to exclude reaction steps with more than two particles on the left side, such as \n\\[\n\\ce{2MnO4- + 6H+ + 5H2C2O4 = 2Mn^2+ + 8H2O + 10CO2}. \n\\]\nSuch steps do occur e.g. on page 1236 of Kov\\'acs et al.\\cite{kovacsvizvaririedeltoth} when dealing with \\emp{overall reactions}. 
The theory and applications of decomposition of overall reactions into elementary steps\\cite{kovacsvizvaririedeltoth,pappvizvari} would have been impossible without the framework of formal reaction kinetics. Someone may be interested in complex chemical reactions consisting of reversible steps only. Then, they have to write down all the forward and the corresponding backward reaction steps.\n\nTaking into consideration restrictions of the above kind usually \ndoes not make the mathematical treatment easier. \nSometimes it needs hard work to figure out how they can be checked, \nas it is in the case of mass conservation of models containing species without atomic structure,\\cite{deaktothvizvari,tothnagypapp} or in relation to the existence of oscillatory reactions.\\cite{potalotka}\nTo sum up: an author has the right to make any restriction \nthought to be chemically important, but these restrictions should be declared at the outset. \nFinally, we mention our main assumption: all the steps in Eq. \\eqref{eq:ccr} are really present, i.e. they proceed with a positive rate whenever the species on the left side are present.\n\n\nWe now provide a simple example to make the understanding easier.\n\\subsubsection{A simple example.}\nLet us take an example that may be deemed chemically oversimplified but not too trivial, still simple enough so as not to be lost in the details. \nAssume that water formation follows the reversible reaction step \n\\ce{2H2 + O2 <=> 2H2O}.\nThis means that we do not take into consideration either the atomic structure of the species, or the realistic details of water formation.\nLet the forward step be represented in a more abstract way: \\ce{2X + Y -> 2Z}.\n\nThe number of species, denoted as above by \\(M\\), is 3, and the number of (irreversible) reaction steps, denoted as above by \\(R\\), is 1. 
\nThe \\(3\\times1\\) stoichiometric matrix \\(\\boldsymbol{\\gamma}\\) consisting of the stoichiometric numbers is:\n\\(\\begin{bmatrix}-2\\\\-1\\\\2\\end{bmatrix}\\). \nIn case this step occurs five times, the vector of the \n\\emp{numbers of individual species}\nwill change as follows:\n\\begin{equation*}\n\\begin{bmatrix}\nN_{\\ce{X}}-N_{\\ce{X}}^0\\\\\nN_{\\ce{Y}}-N_{\\ce{Y}}^0\\\\\nN_{\\ce{Z}}-N_{\\ce{Z}}^0\n\\end{bmatrix}=\n5\\begin{bmatrix}\n-2\\\\\n-1\\\\\n2\n\\end{bmatrix},\n\\end{equation*}\nwhere \\(N_{\\ce{X}}^0\\) is the number of molecules of species \\ce{X} at the beginning, and \\(N_{\\ce{X}}\\) is the number of molecules of species \\ce{X} after five reaction events, and so on. \nIf one considers the reversible reaction \n\\ce{2X + Y <=> 2Z}, and assumes that the backward reaction step takes place three times, then the total change is\n\\begin{equation}\n\\begin{bmatrix}\nN_{\\ce{X}}\\\\\nN_{\\ce{Y}}\\\\\nN_{\\ce{Z}}\n\\end{bmatrix}-\n\\begin{bmatrix}\nN_{\\ce{X}}^0\\\\\nN_{\\ce{Y}}^0\\\\\nN_{\\ce{Z}}^0\n\\end{bmatrix}=\n5\\begin{bmatrix}\n-2\\\\-1\\\\2\n\\end{bmatrix}+\n3\\begin{bmatrix}\n2\\\\1\\\\-2\n\\end{bmatrix}.\\label{eq:spec}\n\\end{equation}\nNote that both the numbers of molecules and the numbers of occurrences of reaction events are positive integers. \n\nEq. (2.2) of Ref.\\cite{kurtz} is of the same form as our Eq. \\eqref{eq:spec}. Kurtz is interested mainly in reversible and detailed balanced complex chemical reactions, and, more importantly, in the relationship between their deterministic and stochastic models. This is the reason why he formulates his Eq. (2.2) for the slightly restricted case only. As to the relationship between discrete and continuous descriptions, we follow here more or less Kurtz\\cite{kurtz} and T\\'oth et al.\\cite{tothnagypapp} We cannot rely on a discrete-state deterministic model of reaction kinetics---that would be desirable---because such a model does not exist as far as we know. 
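The bookkeeping in Eq. \eqref{eq:spec} can be checked with a few lines of NumPy; the sketch below (the initial molecule counts are hypothetical) reproduces the total change after five forward and three backward reaction events:

```python
import numpy as np

# Columns of the stoichiometric matrix for 2X + Y <=> 2Z:
# forward step (2X + Y -> 2Z) and backward step (2Z -> 2X + Y).
gamma = np.array([[-2,  2],   # X
                  [-1,  1],   # Y
                  [ 2, -2]])  # Z
W = np.array([5, 3])          # five forward and three backward reaction events

N0 = np.array([20, 12, 0])    # hypothetical initial molecule counts of X, Y, Z
N = N0 + gamma @ W            # N - N0 = 5*(-2,-1,2) + 3*(2,1,-2)
print(N - N0)                 # -> [-4 -2  4]
```

The same matrix--vector product works unchanged for any number of species and reaction steps.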
\n\n\\subsubsection{The general case.}\nBefore providing general definitions, we mention that Dumon et al.\\cite{dumonlichanotpoquet} formulated a series of requirements that---according to them---should be obeyed by a well-defined reaction extent.\nUnfortunately, we are unable to accept most of these requirements. \nLet us mention only one: the reaction extent should be independent of the choice of stoichiometric coefficients (invariant under multiplication), i.e. it should have the same value for the reaction \\ce{2H2 + O2 -> 2 H2O} and for the reaction \\ce{H2 + $\\frac{1}{2}$O2 -> H2O}. Our point of view is that the reaction extent is strongly connected to kinetics, and it is not a tool to describe stoichiometry as some other authors\\cite{garst} also think. The only requirement that we accept will be mentioned later, in the discussion of Definition \\ref{def:extent}.\n\nWe assume throughout that the volume (\\(V\\)) is constant, and\none can write the generalized form of Eq. \\eqref{eq:spec} as\n\\begin{equation*}\n\\begin{bmatrix}\nN_{\\ce{1}}\\\\\nN_{\\ce{2}}\\\\\n\\dots\\\\\nN_{\\ce{M}}\n\\end{bmatrix}-\\begin{bmatrix}\nN_{\\ce{1}}^0\\\\\nN_{\\ce{2}}^0\\\\\n\\dots\\\\\nN_{\\ce{M}}^0\n\\end{bmatrix}=\n\\sum_{r=1}^{R}\n\\begin{bmatrix}\n\\gamma_{1,r}\\\\\n\\gamma_{2,r}\\\\\n\\dots\\\\\n\\gamma_{M,r}\n\\end{bmatrix}W_r, \n\\end{equation*}\nor shortly\n\\begin{equation*}\n\\mathbf{N}-\\mathbf{N}^0=\\boldsymbol{\\gamma} \\mathbf{W},\n\\end{equation*}\nwhere component \\(W_r\\) of the vector \\(\\mathbf{W}=\\begin{bmatrix}\nW_1&W_2&\\dots&W_R\n\\end{bmatrix}^{\\top}\\) gives the number of occurrences of the \\(r^{\\mathrm{th}}\\) reaction step. \nNote that we do not speak about \\emp{infinitesimal} changes. 
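As a numerical sketch of the general relation \(\mathbf{N}-\mathbf{N}^0=\boldsymbol{\gamma}\mathbf{W}\), consider the mass-producing three-step network \ce{X -> Y + U}, \ce{Z -> X + U}, \ce{Z -> U} mentioned earlier; the event counts and initial molecule numbers below are hypothetical:

```python
import numpy as np

# Species order: X, Y, Z, U.  gamma = beta - alpha, one column per step
# of the network X -> Y + U, Z -> X + U, Z -> U.
alpha = np.array([[1, 0, 0],
                  [0, 0, 0],
                  [0, 1, 1],
                  [0, 0, 0]])
beta = np.array([[0, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0],
                 [1, 1, 1]])
gamma = beta - alpha

W = np.array([2, 1, 3])        # hypothetical numbers of occurrences of each step
N0 = np.array([5, 0, 6, 0])    # hypothetical initial molecule counts
N = N0 + gamma @ W
print(N)                       # -> [4 2 2 6]
print(N.sum() - N0.sum())      # -> 3: steps 1 and 2 each create one net particle
```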
\n\nWith a slight abuse of notation, let \\(\\mathbf{W}(t)\\), the vector of the numbers of occurrences of reaction events, be a step function on the interval \\([0,t]\\).\nThen:\n\\begin{equation*}\n\\mathbf{N}(t)-\\mathbf{N}^0=\\boldsymbol{\\gamma} \\mathbf{W}(t),\n\\end{equation*}\nor, turning to moles,\n\\begin{equation}\\label{def:moles}\n\\mathbf{n}(t)-\\mathbf{n}^0=\\frac{\\mathbf{N}(t)-\\mathbf{N}^0}{L}=\\boldsymbol{\\gamma} \\frac{\\mathbf{W}(t)}{L}=\\boldsymbol{\\gamma}\\boldsymbol{\\xi}(t),\n\\end{equation}\nwhere \\(L\\) is the \\emp{Avogadro constant} having the unit \\({\\,\\mathrm{mol}}^{-1}\\), and\n\\begin{equation*}\n\\mathbf{n}(t):=\\frac{\\mathbf{N}(t)}{L},\\quad\n\\mathbf{n}^0:=\\frac{\\mathbf{N}^0}{L},\\quad\n\\boldsymbol{\\xi}(t):=\\frac{\\mathbf{W}(t)}{L}.\n\\end{equation*}\nHere we had to choose the less often used notation \\(L\\) (\\url{https:\/\/goldbook.iupac.org\/terms\/view\/A00543}) to avoid confusion with other notations.\n\nThe relationship \\eqref{def:moles} can be expressed in concentrations as\n\\begin{equation}\n\\mathbf{c}(t)-\\mathbf{c}^0=\\frac{\\mathbf{n}(t)-\\mathbf{n}^0}{V}=\\frac{\\boldsymbol{\\gamma}}{V}\\boldsymbol{\\xi}(t),\\label{eq:implicit}\n\\end{equation}\nwhere \\(V\\in\\mathbb{R}^+\\), the volume of the reaction vessel, is assumed to be constant, \\(\\mathbf{c}(t):=\\frac{\\mathbf{n}(t)}{V}\\) and\n\\(\\mathbf{c}^0:=\\frac{\\mathbf{n}^0}{V}\\). \nThe component \\(c_{\\ce{X($m$)}}\\) or \\(c_m\\) of \\(\\mathbf{c}\\) is traditionally denoted in chemical textbooks as \\([\\ce{X($m$)}],\\) see e.g. Section 1.2 of Ref.\\cite{pillingseakins}.\n\nThe concentration in Eq. \\eqref{eq:implicit} is again a step function; however, if the number of particles (molecules, radicals, electrons, etc.) is very large, as it very often is, it may be considered to be a continuous, even differentiable function. 
Remember that the components of \\(\\boldsymbol{\\xi}(t)\\) have the dimension of the amount of substance, measured in moles.\n\nLet us now give a general, formal, and \\emp{explicit} definition of reaction extent valid \\emp{for an arbitrary number of species and reaction steps, and not restricted to mass action type kinetics}. \n(Few qualitative---mainly technical---restrictions are usually made on the function \n\\(\\mathbf{rate}\\)\\cite{feinbergbook,tothnagypapp,volperthudyaev},\nbut we now mention the continuous differentiability only.)\nWe start with rewriting the induced kinetic differential equation \\begin{equation}\\label{eq:ikdegen}\n\\dot{\\mathbf{c}}(t)=\\boldsymbol{\\gamma}\\mathbf{rate}(\\mathbf{c}(t))\n\\end{equation} \ntogether with the initial condition \\(\\mathbf{c}(0)=\\mathbf{c}^0\\) into an (equivalent) integral equation:\n\\begin{equation}\\label{eq:int}\n\\mathbf{c}(t)-\\mathbf{c}^0=\\boldsymbol{\\gamma}\\int_0^t\\mathbf{rate}(\\mathbf{c}(\\overline{t}))\n{\\;\\mathrm{d}\\overline{t}}.\n\\end{equation}\n\nThe component \\(rate_r\\) of the vector \\(\\mathbf{rate}\\) provides the reaction rate of the \\(r^{\\mathrm{th}}\\) reaction step.\nNote that in the mass action case Eq. \\eqref{eq:ikdegen} specializes into\n\\begin{equation}\\label{eq:ikdemass}\n\\dot{\\mathbf{c}}=\\boldsymbol{\\gamma}\\mathbf{k}\\odot\\mathbf{c}^{\\boldsymbol{\\alpha}} \n\\end{equation} \nor, in coordinates\n\\begin{equation*\n\\dot{c}_m(t)=\\sum_{r=1}^{R}\\gamma_{mr}k_r\\prod_{p=1}^{M}c_p^{\\alpha_{p,r}}\\quad(m=1,2,\\cdots,M),\n\\end{equation*}\nwhere \\(\\mathbf{k}\\) is the vector of (positive) reaction rate coefficients \\(k_r.\\)\n(We prefer using the expression \\emp{reaction rate coefficients} to \\emp{reaction rate constants},\nas these numbers do depend on many factors---except species concentrations.)\nIn Eq. \\eqref{eq:ikdemass} we used the usual vectorial operations, see e.g. 
Section 13.2 in T\\'oth et al.\\cite{tothnagypapp}\nTheir use in formal reaction kinetics has been initiated by Horn and Jackson.\\cite{hornjackson}\n\nIn accordance with what has been said up to now, we can introduce the explicit definition of the reaction extent by combining Eqs. \\eqref{eq:implicit} and \\eqref{eq:int}.\n\n\\begin{definition}\\label{def:extent}\nThe \\emp{reaction extent}\nof a complex chemical reaction or reaction network defined by Eq. \\eqref{eq:ccr} is the vector-valued function of the scalar time variable given by the formula\n\\begin{equation}\\label{eq:specre}\n\\boxed{\n\\boldsymbol{\\xi}(t):=V\\int_0^t\n\\mathbf{rate}(\\mathbf{c}(\\overline{t}))\n{\\;\\mathrm{d}\\overline{t}}}.\n\\end{equation}\nIts time derivative \\(\\dot{\\boldsymbol{\\xi}}(t)=V\\mathbf{rate}(\\mathbf{c}(t))\\) is usually called the \\emp{rate of conversion} or \\emp{reaction flux}\\cite{polettiniesposito}.\n\\end{definition}\nNote that Eq. \\eqref{eq:specre} shows that the reaction extent, in general, depends on the whole history\n(past and present) of the vector of concentrations, as if it had a \\emp{memory}.\\label{page:memory}\n\nDefinition \\ref{def:extent} of the reaction extent has been derived from the number of reaction events in order to reveal its connection to changes in the concentrations. 
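For a single mass-action step \ce{X -> Y} the integral in Definition \ref{def:extent} can be evaluated both in closed form, \(\xi(t)=Vc^0_{\ce{X}}(1-e^{-kt})\), and numerically; a minimal sketch (the rate coefficient, volume, and initial concentration are hypothetical):

```python
import numpy as np

# Single step X -> Y with mass-action kinetics: rate(c) = k*c_X, so
# c_X(t) = c0*exp(-k*t) and xi(t) = V * integral_0^t rate = V*c0*(1 - exp(-k*t)).
k, V, c0 = 0.8, 2.0, 1.5                     # hypothetical parameter values
t = np.linspace(0.0, 10.0, 100_001)
rate = k * c0 * np.exp(-k * t)               # rate_1(c(t)) along the solution

slices = np.diff(t) * (rate[:-1] + rate[1:]) / 2.0      # trapezoidal rule
xi_numeric = V * np.concatenate(([0.0], np.cumsum(slices)))
xi_exact = V * c0 * (1.0 - np.exp(-k * t))

print(np.max(np.abs(xi_numeric - xi_exact)))  # tiny discretization error
# Consistency with c = c0 + gamma*xi/V (gamma = -1 for X):
print(np.allclose(c0 - xi_exact / V, c0 * np.exp(-k * t)))  # -> True
```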
\nAssuming here also that \\(V\\) is constant, \none can formulate the following trivial (equivalent) consequences of the definition:\n\\begin{equation}\\label{eq:trivicons}\n\\dot{\\mathbf{n}}=\\boldsymbol{\\gamma}\\dot{\\boldsymbol{\\xi}},\\quad\\dot{\\mathbf{c}}=\\frac{1}{V}\\boldsymbol{\\gamma}\\dot{\\boldsymbol{\\xi}},\\quad\n\\mathbf{c}=\\mathbf{c}^0+\\boldsymbol{\\gamma}\\frac{\\boldsymbol{\\xi}}{V}\n\\end{equation}\nmentioned also by Laidler\\cite{laidlerglossary},\nsometimes as definitions, sometimes as statements.\n\nNote that neither the rate of the reaction: \n\\(\\mathbf{rate}(\\mathbf{c}(t))=\\frac{\\dot{\\boldsymbol{\\xi}}(t)}{V},\\)\nnor the reaction extent \\({\\boldsymbol{\\xi}},\\) nor the rate of conversion \\(\\dot{\\boldsymbol{\\xi}}\\) depends on the stoichiometric matrix \\(\\boldsymbol{\\gamma}\\); thus this requirement, one of those formulated by Dumon et al.\\cite{dumonlichanotpoquet},\nis fulfilled.\n\nWhat is wrong with the almost ubiquitous implicit \"definition\" Eq. \\eqref{eq:implicit}? \nWe show an example to illustrate this.\n\\begin{example}\nConsider the reaction steps\n\\begin{equation*}\n\\ce{X ->[$k_1$] Y}, \\quad\\ce{X + Y ->[$k_2$] 2Y},\n\\end{equation*}\nexpressing the fact that \\ce{X} is transformed into \\ce{Y} directly and also via autocatalysis.\nAlthough the reaction steps \n\\begin{equation*}\n\\ce{X ->[$k_1$] Y + P},\\quad \\ce{X + Y ->[$k_2$] 2Y + P}\n\\end{equation*} with the external species \\ce{P} are a more realistic description of genuine chemical reactions,\ne.g. the acid autocatalysis in ester hydrolysis\\cite{ostwald,bansagitaylor}, they lead to the same kinetic differential equations for \\ce{X} and \\ce{Y}. Therefore, we shall analyze the simpler scheme.\nNow the stoichiometric matrix \\(\\boldsymbol{\\gamma}\\) is as follows:\n\\begin{equation*}\n\\boldsymbol{\\gamma}=\\begin{bmatrix}\n-1&-1\\\\\n1&1\n\\end{bmatrix}.\n\\end{equation*}\nThen Eq. 
\\eqref{eq:int} specializes into \n\\begin{align*}\nc_{\\ce{X}}(t)-c_{\\ce{X}}(0)&=\n-\\int_{0}^{t}k_1c_{\\ce{X}}(\\overline{t}){\\;\\mathrm{d}\\overline{t}}\n-\\int_{0}^{t}k_2c_{\\ce{X}}(\\overline{t})c_{\\ce{Y}}(\\overline{t}){\\;\\mathrm{d}\\overline{t}}\n=\n-\\frac{\\xi_1(t)+\\xi_2(t)}{V}\\\\\nc_{\\ce{Y}}(t)-c_{\\ce{Y}}(0)&=\n\\int_{0}^{t}k_1c_{\\ce{X}}(\\overline{t}){\\;\\mathrm{d}\\overline{t}}+\n\\int_{0}^{t}k_2c_{\\ce{X}}(\\overline{t})c_{\\ce{Y}}(\\overline{t}){\\;\\mathrm{d}\\overline{t}}\n=\n\\frac{\\xi_1(t)+\\xi_2(t)}{V}.\n\\end{align*}\nThese two relations do not determine \\(\\xi_1\\) and \\(\\xi_2\\) individually but only their sum (even if one utilizes \\(c_{\\ce{Y}}(t)=c_{\\ce{X}}(0)+c_{\\ce{Y}}(0)-c_{\\ce{X}}(t)\\)).\nThe problem originates from the fact that the reaction steps are not linearly independent as reflected in the singularity of the matrix \\(\\boldsymbol{\\gamma}.\\)\n\\end{example}\n\n\nIf the reaction steps of a complex chemical reaction are independent, the situation is better.\n\\begin{example}\\label{ex:indep}\nIn some special cases, there is a way of making the \"definition\" Eq. \\eqref{eq:implicit} into a real, explicit definition.\nAssume that \\(R\\le M\\), and that the stoichiometric matrix \\(\\boldsymbol{\\gamma}\\) is of the full rank, i.e. the reaction steps are independent. \nThen, one can rewrite Eq. \\eqref{eq:implicit} in two steps as follows:\n\\begin{align}\n\\boldsymbol{\\gamma}^{\\top}(\\mathbf{c}(t)-\\mathbf{c}^0)&=\\frac{1}{V}\\boldsymbol{\\gamma}^{\\top}\\boldsymbol{\\gamma}\\boldsymbol{\\xi}(t)\\nonumber\\\\\n\\boldsymbol{\\xi}(t)&=V(\\boldsymbol{\\gamma}^{\\top}\\boldsymbol{\\gamma})^{-1}\\boldsymbol{\\gamma}^{\\top}(\\mathbf{c}(t)-\\mathbf{c}^0).\\label{eq:memoryless}\n\\end{align}\nNow one can accept Eq. \\eqref{eq:memoryless} as a definition for the reaction extent.\nNevertheless, in this special case \nEq. 
\\eqref{eq:int} \nimplies\n\\begin{equation*}\n\\boldsymbol{\\gamma}^{\\top}(\\mathbf{c}(t)-\\mathbf{c}^0)=\\boldsymbol{\\gamma}^{\\top}\\boldsymbol{\\gamma}\\int_0^t\\mathbf{rate}(\\mathbf{c}(\\overline{t})){\\;\\mathrm{d}\\overline{t}}\n\\end{equation*}\nand\n\\begin{equation*}\n(\\boldsymbol{\\gamma}^{\\top}\\boldsymbol{\\gamma})^{-1}\\boldsymbol{\\gamma}^{\\top}(\\mathbf{c}(t)-\\mathbf{c}^0)=\n\\frac{1}{V}\\boldsymbol{\\xi}(t)=\n\\int_0^t\\mathbf{rate}(\\mathbf{c}(\\overline{t})){\\;\\mathrm{d}\\overline{t}},\n\\end{equation*}\nthus this definition is the same as the one in Eq. \\eqref{eq:specre}. \nThis derivation can always be done if \\(R=1\\), that is, in a not-so-interesting trivial case.\nUnfortunately, the case \\(R\\le M\\) does not happen very often. On the contrary, for example, in the\ncase of combustion reactions, Law's law\\cite{law} (see page 11) states that \\(R\\approx 5 M.\\)\n\nNote also that Eq. \\eqref{eq:memoryless} shows the following: in these cases, i.e. when the stoichiometric matrix is of full rank---as opposed to the general case, see page \\pageref{page:memory}---the reaction extents do not depend on the whole history of the concentration vector; they depend only on its instantaneous value.\n\\end{example}\nLet us make a trivial remark on the independence of reaction steps. \nIf the complex chemical reaction consists of a single irreversible step, then the reaction steps(!) are independent. \nIf any of the reaction steps are reversible, then the reaction steps are not independent. \n\\subsection{Properties of the reaction extent}\nThe usual assumptions \\label{pg:assump} on the vector-valued function \\(\\mathbf{rate}\\) are as follows, see Refs.\\cite{tothnagypapp,volperthudyaev}.\n\\begin{enumerate}\n\\item\nAll of its components are continuously differentiable functions defined on \\(\\mathbb{R}^M\\) taking only non-negative values. This is usual, e.g. 
in the case of mass action kinetics, but---with some restrictions---also in the case when the reaction rates are rational functions as in the case of Michaelis--Menten or Holling type kinetics, see e.g. Refs.\\cite{polczkulcsarszederkenyi,polczpeniszederkenyi,kisstoth,laidlerglossary}\n\\item\nThe value of \\(rate_r(\\mathbf{c})\\) is zero if and only if some of the species needed for the \\(r^{\\mathrm{th}}\\) reaction step is missing, i.e. for some \\(m: \\alpha_{m,r}>0\\) and \\(c_m=0\\) (see p. 613, Condition 1 in Ref.\\cite{volperthudyaev}). \nWe shall say in this case that reaction step \\(r\\) \\emp{cannot start} from the concentration vector \\(\\mathbf{c}.\\)\n\\end{enumerate}\nThe second assumption implies---even in the general case, i.e. without restriction to the mass action type kinetics---that \\(rate_r(\\mathbf{c})>0\\) if all the necessary species (\\emp{reactants}, see below) are present initially: \\(\\alpha_{m,r}>0\\Longrightarrow c_m>0.\\)\n\nLet us sum up the relevant qualitative characteristics of the reaction extent.\n(Remember that the proof can be found in the Appendix.)\n\\begin{theorem}\\label{thm:basics}\n\\begin{enumerate}\n\\item[\\nonumber]\n\\item\nThe domain of the function \\(t\\mapsto\\boldsymbol{\\xi}(t)\\) is the same as that of \\(\\mathbf{c}\\).\n\\item\nBoth \\(\\mathbf{c}\\) and \\(\\boldsymbol{\\xi}\\in\\mathcal{C}^2(J,\\mathbb{R}^R);\\) with some open interval \\(J\\subset\\mathbb{R}\\) such that \\(0\\in J.\\)\n \\item\n \\(\\boldsymbol{\\xi}\\) obeys the following initial value problem:\n\\begin{equation}\\label{eq:rmdiffegy}\n\\dot{\\boldsymbol{\\xi}}(t)=V\\mathbf{rate}(\\mathbf{c}^0+\\frac{1}{V}\\boldsymbol{\\gamma}\\boldsymbol{\\xi}(t)),\\quad \\boldsymbol{\\xi}(0)=\\boldsymbol{0}.\n\\end{equation}\n\\item\nAt the beginning, the velocity vector of the reaction extent (also called the \\emp{rate of conversion}) points into the closed first orthant, and this property is kept for all times in the domain of the function 
\\(t\\mapsto\\boldsymbol{\\xi}(t).\\) \n\\item\nThe components of \\(\\boldsymbol{\\xi}\\) are either positive, strictly monotonously increasing functions or constant zero. \nIf for some positive time \\(t\\) we find that \\(\\xi_r(t)=0\\) then, obviously, the reaction step \\(r\\) did not start at all at the beginning.\n\\end{enumerate}\n\\end{theorem}\n\n\n\nLet us make a few remarks:\n\\begin{itemize}\n\\item \nThe last property (positivity) mentioned in the Theorem can be realized with \\(\\lim_{t\\to+\\infty}\\xi(t)=+\\infty\\)\n(the simplest example for this being \\(\\ce{X -> 2X}, c_{\\ce{X}}^0>0\\)), or with a finite positive value of \\(\\lim_{t\\to+\\infty}\\xi(t),\\) see the example \\ce{X -> Y -> Z} below.\n\\item\nEq. \\eqref{eq:rmdiffegy} shows that we would have got simpler formulas if we used\n\\(\\frac{\\xi(t)}{V}\\) as proposed by Aris\\cite{arisintro} on p. 44, \nbut this form is valid only if \\(V\\) is constant. \n\\item\nIn the mass action case both \\(\\mathbf{c}\\) and \\(\\boldsymbol{\\xi}\\) are infinitely many times differentiable.\n\\item\nIf one uses a kinetics different from the mass action type\nnot fulfilling assumptions 1 and 2 on page \\pageref{pg:assump}, \nor---as P\\'ota\\cite{potarabai} has shown---if one applies an approximation, \nthen it may happen that some of the initially positive concentrations turn to zero.\n\\end{itemize}\n\nIn order to proceed, we need to make a technical remark on the figures shown hereinafter. We label the first axis (usually: horizontal) in the figures with \\(t\\mathrm{\/s}\\), where \\(\\mathrm{s}\\) is the time unit second. Labels of other axes are formed in a similar way. With this procedure we want to emphasize that the figures show the relationship between pure numbers and not between physical quantities. \n\nThe condition in part 5 of Theorem \\ref{thm:basics} is only necessary but not sufficient as the example below shows. 
\n\\begin{example}\\label{ex:convexconcave}\nLet us start the consecutive reaction \\ce{X ->[$k_1$] Y ->[$k_2$] Z} from the vector of the initial concentrations:\n\\(\\begin{bmatrix}\nc^0_{\\ce{X}}&0&0\n\\end{bmatrix}^{\\top}\\), and suppose \\(k_1\\neq k_2.\\) \nAlthough the second step cannot start at the beginning, the second reaction extent is positive for all positive times, as the solutions of the evolution equations\n\\begin{equation}\\label{eq:velo}\n\\dot{\\xi}_1=Vk_1\\left(c^0_{\\ce{X}}-\\frac{\\xi_1}{V}\\right),\\quad\n\\dot{\\xi}_2=Vk_2\\left(0+\\frac{\\xi_1-\\xi_2}{V}\\right)\n\\end{equation}\nare as follows:\n\\begin{align*}\n&\\xi_1(t)=Vc^0_{\\ce{X}}(1-e^{-k_1t}),\\\\\n&\\xi_2(t)=\\frac{Vc^0_{\\ce{X}}}{k_2-k_1}\\left(k_2(1-e^{-k_1t})-k_1(1-e^{-k_2t})\\right).\n\\end{align*}\nPositivity also follows without any calculations from the fact that the velocity vector of the differential equations in \\eqref{eq:velo} points inward, into the interior of the first quadrant, or from the fact that Eqs. \\eqref{eq:velo} are also kinetic type differential equations.\n\nNote that \\(\\lim_{t\\to+\\infty}\\xi_1(t)=\\lim_{t\\to+\\infty}\\xi_2(t)=Vc^0_{\\ce{X}}.\\)\nIt means that the numbers of occurrences of the reaction events for the two reactions, and thus the reaction extents, are exactly the same at the end of the whole process. \nMoreover, this common value does not depend on the reaction rate coefficients.\n\nEasy calculations show the following facts. \nThe function \\(\\xi_2\\,\/\\,\\mathrm{mol}\\) in Fig. \\ref{fig:convexconcave} has an inflection point, because its second derivative is zero at some positive time \\(t_{\\mathrm{infl}}\\) for all choices of the reaction rate coefficients, and the third derivative is not zero at \\(t_{\\mathrm{infl}}\\). \nThe function \\(\\xi_1\\,\/\\,\\mathrm{mol}\\) in Fig. \\ref{fig:convexconcave} is concave, no matter what the reaction rate coefficients are.
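The closed-form solutions above can be checked numerically. The following is a minimal sketch in plain Python; the concrete parameter values (k1 = 1, k2 = 2, V = 1, c0 = 1, as in the figure of this example) and the step sizes are our own choices. It compares a central-difference derivative of the closed forms with the right-hand sides of the evolution equations, and verifies the common limit and the inflection point of the second extent:

```python
import math

# Consecutive reaction X -> Y -> Z; concrete parameter choices.
k1, k2, V, c0 = 1.0, 2.0, 1.0, 1.0

def xi1(t):
    # xi_1(t) = V c0 (1 - exp(-k1 t))
    return V * c0 * (1.0 - math.exp(-k1 * t))

def xi2(t):
    # xi_2(t) = V c0 / (k2 - k1) * (k2 (1 - exp(-k1 t)) - k1 (1 - exp(-k2 t)))
    return V * c0 / (k2 - k1) * (
        k2 * (1.0 - math.exp(-k1 * t)) - k1 * (1.0 - math.exp(-k2 * t)))

def rhs(x1, x2):
    # Right-hand sides of the evolution equations for the two extents
    return V * k1 * (c0 - x1 / V), V * k2 * (x1 - x2) / V

h = 1e-6
for t in (0.1, 0.5, 1.0, 3.0):
    d1 = (xi1(t + h) - xi1(t - h)) / (2 * h)   # central differences
    d2 = (xi2(t + h) - xi2(t - h)) / (2 * h)
    r1, r2 = rhs(xi1(t), xi2(t))
    assert abs(d1 - r1) < 1e-6 and abs(d2 - r2) < 1e-6

# Both extents tend to V c0, independently of the rate coefficients
assert abs(xi1(50.0) - V * c0) < 1e-9
assert abs(xi2(50.0) - V * c0) < 1e-9

# xi_2 is convex before and concave after t_infl = ln(k2/k1)/(k2 - k1)
t_infl = math.log(k2 / k1) / (k2 - k1)
curv = lambda t: xi2(t + 1e-4) - 2 * xi2(t) + xi2(t - 1e-4)
assert curv(t_infl / 2) > 0 > curv(2 * t_infl)
```

The same kind of check works for any reaction extent whose initial value problem is known, by integrating the equations numerically instead of differentiating a known solution.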
\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{convexconcave}\n\\caption{Reaction extents in the consecutive reaction \\ce{X ->[$k_1$] Y ->[$k_2$] Z} when \n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\), \n\\(k_2=~2~{\\mathrm{s}}^{-1}\\), \n\\(c_{X}^0=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(c_{\\ce{Y}}^0=c_{\\ce{Z}}^0=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(V=~1~\\mathrm{dm}^3.\\) \nThe limit 1 here is only a consequence of the choice of the parameters. \nThe large dot is the inflection point of the curve \\(\\xi_2\\,\/\\,\\mathrm{mol}\\).}\n\\label{fig:convexconcave}\n\\end{figure}\n\\end{example}\n\nTo characterize the convexity of the reaction extents in the general case is an open problem. \nOne should take into consideration that although in the practically interesting cases the number of equations in \\eqref{eq:rmdiffegy} is larger than those in Eq.\n\\eqref{eq:ikdegen}, that is \\(R > M\\), the equations for the reaction extents are of a simpler structure. \n\nMonotonicity mentioned in Theorem \\ref{thm:basics} implies that all the components of the reaction extent do have a finite or infinite limit as \\(t\\) tends to \\(\\sup(J)=:t^*,\\) where \\(J:=\\Dom(\\mathbf{c}),\\)\nand \\(t^*\\) is a finite or infinite time. 
\nIt is an interesting open question: \nwhen does a coordinate of the reaction extent vector tend to infinity?\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{graphical1}\n\\caption{\n\\(c_{\\ce{X}}^0~=~3.0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Y}}^0~=~1.0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Z}}^0~=~1.0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(k_1~=~1.0~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\),\n\\(k_{-1}~=~1.0~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\)\nWhile the concentrations tend to (and become very close to) the equilibrium values, the reaction extents tend to infinity in a monotonously increasing way in the reaction: \n\\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}.}\n\\label{fig:dynamiceq}\n\\end{figure}\nAs the \\emp{emphatic} closure of this series of remarks, we mention that the strictly monotonous increase of the number of occurrences of reaction events shows that the reaction events never stop, see Fig. \\ref{fig:dynamiceq}. This important fact is independent of the form of kinetics, and it is a property of the deterministic models of reaction kinetics.\nThis sheds light on the meaning of \\emp{dynamic equilibrium} as generally taught. \nNote that \\emp{no reference to thermodynamics or statistical physics} has been invoked here;\nanalyzing the connections is left to the reader.\n\\begin{example}\\label{ex:blowup}\nIn the case when the domain of the function \\(t\\mapsto\\mathbf{c}(t)\\) is a proper subset of the non-negative real numbers, \\(\\boldsymbol{\\xi}\\) has the same property. \nLet us consider the induced kinetic differential equation\\ \\(\\dot{c}=kc^2\\) of the quadratic auto-catalytic reaction \\ce{2X ->[k] 3X} with the initial condition \\(c(0)=c^0>0.\\) Then, \n\\(\nc(t)=\\frac{c^0}{1-kc^0t}, \\left(t\\in\\left[0,\\frac{1}{kc^0}\\right[\n\\,\\subsetneq\\,[0,+\\infty[\\right).\n\\) \nNow Eq.
\\eqref{eq:rmdiffegy} specializes into\n\\(\\dot{\\xi}=Vk(c^0+\\xi\/V)^2,{\\ }\\xi(0)=0\\)\nhaving the solution\n\\(\n\\xi(t)=V(c^0)^2\\frac{kt}{1-kc^0t}, \\left(t\\in\\left[0,\\frac{1}{kc^0}\\right[\\right),\n\\)\nthus \\(\\xi\\) blows up at the same time (\\(t^*:=\\frac{1}{kc^0}\\)) when \\(c\\) does.\nUp to the blow-up, the reaction event occurs infinitely many times:\n\\(\\lim_{t\\to+t^*}\\xi(t)=+\\infty.\\)\nDefinitions and a few statements about blow-up in kinetic differential equations are given in the works by Csikja et al.\\cite{csikjapapptoth,csikjatoth}\n\\end{example}\n\n\n\\section{What is it that tends to 1?}\\label{sec:what}\nOur interest up to this point was the number of occurrences of reaction events. \nHowever, many authors think it is useful and visually attractive that the \"reaction extent tends to 1 when the reaction tends to its end,\" see e.g. Fig. 1 of Glasser\\cite{glasser}.\nBorge\\cite{borge} and Peckham\\cite{peckham} also argue for [0,1].\n\nAnother approach is given by Moretti\\cite{moretti}, Dumon et al.\\cite{dumonlichanotpoquet} and others via introducing the \\emp{reaction advancement ratio} \\(\\frac{\\xi}{\\xi_{\\max}}\\), and stating that this ratio is always between 0 and 1. \nPeckham\\cite{peckham} noticed that Atkins\\cite{atkins5} (pp. 272--276) shows a figure of the free energy \\(G\\) of the reacting system versus \\(\\xi,\\) where the first axis is labeled from zero to one. \nHowever, in the next edition\\cite{atkins6} (pp. 216--217), the graph has been changed, and it now shows the first axis without 1 as an upper bound.\nBeing loyal to the usual belief\\cite{peckham,treptow}, \nwe are looking for quantities (pure numbers) tending to 1 as e.g. \\(t\\to+\\infty\\). 
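In the blow-up example above there is no finite limiting value that could serve as such a scaling quantity. This can be checked numerically; a minimal sketch in plain Python (the values k = V = c0 = 1 and the step sizes are our own choices):

```python
import math

# Quadratic autocatalysis 2X -> 3X; k = V = c0 = 1 are concrete choices.
k, V, c0 = 1.0, 1.0, 1.0
t_star = 1.0 / (k * c0)                      # blow-up time t* = 1/(k c0)

def xi(t):
    # xi(t) = V (c0)^2 k t / (1 - k c0 t), valid for 0 <= t < t_star
    return V * c0 ** 2 * k * t / (1.0 - k * c0 * t)

# Central-difference check of the IVP xi' = V k (c0 + xi/V)^2
h = 1e-8
for t in (0.1, 0.5, 0.9):
    fd = (xi(t + h) - xi(t - h)) / (2 * h)
    assert math.isclose(fd, V * k * (c0 + xi(t) / V) ** 2, rel_tol=1e-5)

# The extent grows without bound as t approaches t_star from the left
assert xi(t_star - 1e-9) > 1e8
```

Since the extent has no finite supremum here, a ratio of the form xi(t)/xi_max is simply undefined for this reaction.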
\nScaling might help to find such quantities.\n\\subsection{Scaling by the initial concentration: One reaction step}\nNow we are descending from the height of generality by considering a single irreversible reaction step (\\(R=1\\)), assuming that the kinetics is of the mass action type. Thus, the reaction is\n\\begin{equation}\\label{eq:onestep}\n\\ce{$\\sum_{m=1}^M\\alpha_m$X($m$) ->[k] $\\sum_{m=1}^M\\beta_m$X($m$)}.\n\\end{equation}\nTherefore, one has the reaction extent\n\\begin{equation}\\label{eq:ximone}\n\\dot{\\xi}=Vk\\left(\\begin{bmatrix}\nc_1^0\\\\c_2^0\\\\\\dots\\\\c_M^0\n\\end{bmatrix}+\n\\begin{bmatrix}\n\\gamma_1\\\\\\gamma_2\\\\\\dots\\\\\\gamma_M\n\\end{bmatrix}\\frac{\\xi}{V}\\right)^{\\boldsymbol{\\alpha}}\n=\nVk\\prod_{m=1}^{M}\\left(c_m^0+\\gamma_m\\frac{\\xi}{V}\\right)^{\\alpha_m},\\quad \\xi(0)=0;\n\\end{equation}\nwith \\(\\gamma_m:=\\beta_m-\\alpha_m.\\)\n\n\\begin{theorem}\\label{thm:singlestep}\n\\begin{enumerate}\n\\item[\\nonumber]\n\\item \nIf the reaction in Eq. \\eqref{eq:onestep} cannot start, then\n\\(\\xi(t)=0\\) for all non-negative real times \\(t\\):\n\\begin{equation}\\label{eq:cannot}\n\\exists m:(\\alpha_m\\neq0\\ \\&\\ c_m^0=0)\\Longrightarrow\\forall t\\in\\mathbb{R}^+_0:\\xi(t)=0.\n\\end{equation}\n\\item\nIf the reaction in Eq. \\eqref{eq:onestep} does start and all the species are produced (i.e. 
for all \\(m: \\gamma_m>0\\)), then \\(\\xi(t)\\)\ntends to infinity (blow-up included):\n\\begin{equation}\\label{eq:produced}\n\\forall m:\\gamma_m:=\\beta_m-\\alpha_m>0\\Longrightarrow\\lim_{t\\to t^*}\\xi(t)=+\\infty,\n\\end{equation} \nwhere \\(t^*:=\\sup(J)\\) with \\(J:=\\Dom(\\xi)\\).\n\\item \nIf some species is consumed, that is \\(\\exists m: \\gamma_m<0\\), then \n\\begin{equation}\\label{eq:decreases}\n\\lim_{t\\to +\\infty}\\xi(t)=\\min\\left\\{-\\frac{Vc_m^0}{\\gamma_m};\\gamma_m<0\\right\\}.\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{example}\n\\begin{enumerate}\n\\item [\\nonumber]\n\\item \nReaction \\ce{X ->[$k$] Y} with \\(c^0_{\\ce{X}}=0\\) and \\(c^0_{\\ce{Y}}\\)\narbitrary illustrates the first case, as here \\(\\dot{\\xi}=-k\\xi, \\xi(0)=0\\)\nimplies \\(\\forall t\\in\\mathbb{R}:\\xi(t)=0.\\)\n\\item \nReaction \\ce{X ->[$k$] 2X} with \\(c^0_{\\ce{X}}>0\\) is an illustration for the second case with \n\\begin{equation*}\n\\forall t\\in\\mathbb{R}:\\xi(t)=V c^0_{\\ce{X}}(e^{k t}-1 )\n\\end{equation*}\nand \n\\(\n\\lim_{t\\to+\\infty}\\xi(t)=+\\infty. \n\\)\n\\item \nReaction \\ce{2X ->[$k$] 3X} with \\(c^0_{\\ce{X}}>0\\) (Example \\ref{ex:blowup}) is another illustration for the second case with \n\\begin{equation*}\n\\forall t\\in\\left[0,\\frac{1}{kc^0_{\\ce{X}}}\\right[ :\\xi(t)=\\frac{k t V (c^0_{\\ce{X}})^2}{1-ktc^0_{\\ce{X}}}\n\\end{equation*} and\n\\(\\lim_{t\\to\\frac{1}{kc^0_{\\ce{X}}}}\\xi(t)=+\\infty.\\)\n\\item \nReaction \\ce{X + Y ->[$k$] 2X}\nis an illustration for the third case with \n\\begin{equation*}\n\\forall t\\in\\mathbb{R}:\\xi(t)=\n\\frac{(-1 + e^{k t (c^0_{\\ce{X}} + c^0_{\\ce{Y}})}) V c^0_{\\ce{X}} c^0_{\\ce{Y}}}{(c^0_{\\ce{Y}} + e^{k t (c^0_{\\ce{X}} + c^0_{\\ce{Y}})} c^0_{\\ce{X}})}\n\\end{equation*} \nand \\(\\lim_{t\\to+\\infty}\\xi(t)=Vc^0_{\\ce{Y}},\\) if \\(c^0_{\\ce{X}},c^0_{\\ce{Y}}\\neq0\\). 
If either \\(c^0_{\\ce{X}}=0,\\) or \\(c^0_{\\ce{Y}}=0,\\)\nthen \\(\\forall t\\in \\mathbb{R}:\\xi(t)=0.\\)\n\\item\nThe example \\ce{X -> Y} with \\(c_{\\ce{X}}^0=0, c_{\\ce{Y}}^0>0\\) shows that\na species (here \\ce{Y}) can have positive concentration for all positive times in a reaction where \"none of the steps\" can start.\n\\end{enumerate}\n\\end{example}\n\nThe table below shows a series of examples illustrating different types of single irreversible reaction steps.\n\n\\begin{table}\n \\caption{Reaction extent for various reaction types}\n \\label{table:cases}\n\\begin{tabular}{lllll}\n\\hline\nStep &\\(\\mathbf{c}^0\\)&\\(\\dot{\\xi}=\\) &\\({\\xi}(t)=\\) &Case\\\\ \n\\hline\\ce{X ->[$k$] 2X} & 0 &\\(k\\xi\\) & 0 &\\eqref{eq:cannot}\\\\\n\\hline\\ce{X ->[$k$] 2X} & 1 &\\(Vk(1+\\xi\/V)\\)& \\(V(e^{kt}-1)\\)&\\eqref{eq:produced}\\\\\n\\hline\\ce{2X ->[$k$] 3X} & 1 &\\(Vk(1+\\xi\/V)^2\\) &\\(\\frac{Vkt}{1-kt}\\) &\\eqref{eq:produced}\\\\\n\\hline\\ce{2X + Y ->[$k$] 2Z}& &\\(Vk(c_{\\ce{X}}^0-2\\xi\/V)^2*\\)&\\\\\n&&\\((c_{\\ce{Y}}^0-\\xi\/V)\\)& &\\eqref{eq:decreases}\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{example}\nHere we analyze the last example of Table \\ref{table:cases}.\nIn the case of the reaction \\ce{2X + Y ->[$k$] 2Z} mimicking water formation one has the following quantities:\n\\(R:=1,M:=3,\\ce{X}:=\\ce{H2}, \\ce{Y}:=\\ce{O2},\\ce{Z}:=\\ce{H2O}.\\)\nFurthermore,\n\\[\n\\boldsymbol{\\alpha}=\n\\begin{bmatrix}\n2\\\\1\\\\0\n\\end{bmatrix}{\\!}{\\!},\\quad\n\\boldsymbol{\\beta}=\\begin{bmatrix}\n0\\\\0\\\\2\n\\end{bmatrix}{\\!}{\\!},\\quad \n\\boldsymbol{\\gamma}=\\begin{bmatrix}\n-2\\\\-1\\\\2\n\\end{bmatrix}{\\!}{\\!}.\n\\]\nThe initial value problem to describe the time evolution of reaction extent 
is\n\\begin{equation}\\label{eq:wateru}\n\\dot{\\xi}=Vk\\left(\n\\begin{bmatrix}\nc_{\\ce{X}}^0\\\\c_{\\ce{Y}}^0\\\\c_{\\ce{Z}}^0\n\\end{bmatrix}+\n\\begin{bmatrix}\n-2\\\\-1\\\\2\\\\\n\\end{bmatrix}\\frac{\\xi}{V}\\right)^{\\begin{bmatrix}\n2\\\\1\\\\0\n\\end{bmatrix}}\n=\nVk(c_{\\ce{X}}^0-2\\frac{\\xi}{V})^2(c_{\\ce{Y}}^0-\\frac{\\xi}{V}),\\quad \\xi(0)=0.\n\\end{equation}\nWe can provide only the inverse of the solution to Eq. \\eqref{eq:wateru}. However, one can state that the reaction extent tends strictly monotonously to its limit (independently of the value of the reaction rate coefficient): \\(\\lim_{t\\to+\\infty}\\xi(t)=\\min\\{\\frac{Vc_{\\ce{X}}^0}{2},Vc_{\\ce{Y}}^0\\}.\\)\nDifferent initial conditions lead to different results, see Figs. \\ref{fig:rm11}--\\ref{fig:rm13}. Obviously, the third point of Theorem \\ref{thm:singlestep} is of main practical use here. For this case one has the following statement.\n\\end{example}\n\n\\begin{corollary}\\label{corr:pure}\nIn the third case of Theorem \\ref{thm:singlestep}, \ndividing \\(\\xi\\) by the quantity \\(Vc_m^0\/(-\\gamma_m)\\), where \\(m\\) is a minimizing index in Eq. \\eqref{eq:decreases}, \nwe obtain a (pure) number tending to 1 as \\(t\\) tends to infinity:\n\\fbox{\\(\\lim_{t\\to +\\infty}\n\\frac{\\xi(t)}{Vc_m^0\/(-\\gamma_m)}=1.\\)}\n\\end{corollary}\n\\subsection{Stoichiometric initial condition, excess and deficit}\nBefore studying the above-mentioned figures, we need some definitions in order to avoid the sin of using a concept without having defined it. 
The concepts of \\emp{stoichiometric initial condition} and \\emp{initial stoichiometric excess} are often used elsewhere but never defined.\n\\begin{definition}\nConsider the induced kinetic differential equation \n\\eqref{eq:ikdegen} of the reaction \\eqref{eq:ccr} with the initial condition\n\\(\\mathbf{c}(0)=\\mathbf{c}^0\\neq\\boldsymbol{0}.\\)\nThis initial condition is said to be a \\emp{stoichiometric initial condition} (and \\(\\mathbf{c}^0\\) is a stoichiometric initial concentration), if for all such\n\\(m=1,2,\\dots,M;{\\ }r=1,2,\\dots,R\\) for which \\(\\gamma_{m,r}<0\\) the ratios \\(\\frac{c_m^0}{-\\gamma_{m,r}}\\)\nare independent of \\(m\\) and \\(r.\\) \nIf the ratios are independent of \\(r\\), but for some \n\\(p=1,2,\\dots,M\\) the ratio \\(\\frac{c_p^0}{-\\gamma_{p,r}}\\) is larger than the others, then \\ce{X(p)} is said to be in \\emp{initial stoichiometric excess}, or it is in excess initially.\nIf the ratios are independent of \\(r\\), but for some \n\\(p=1,2,\\dots,M\\) the ratio \\(\\frac{c_p^0}{-\\gamma_{p,r}}\\) is smaller than the others, then \\ce{X(p)} is said to be in \\emp{initial stoichiometric deficit}, or it is in deficit initially.\nThe last notion is mathematically valid, but in such cases, one prefers saying that all the other species are in excess.\nIn combustion theory the expressions \\emp{stoichiometric}, \\emp{fuel lean} and \\emp{fuel rich} are used in the same sense, see page 115 of the book by Tur\\'anyi and Tomlin\\cite{turanyitomlin}.\n\\end{definition}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p1}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=~0.7~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}<c_{\\ce{Y}}^0\\); \\ce{Y} is in excess initially in the reaction \\ce{2X + Y -> 2Z}. 
\nThe limiting value is denoted with the dashed line.} \n\\label{fig:rm11}\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p2}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=c_{\\ce{Y}}^0=~0.6~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Z}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(k~=~1.0~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\) Stoichiometric initial condition in the reaction \\ce{2X + Y -> 2Z}; slow convergence. The limiting value is denoted with the dashed line.}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p3}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=~0.6~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}>c_{\\ce{Y}}^0~=~0.5~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Z}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(k~=~1.0~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\)\n\\ce{X} is in stoichiometric excess in the reaction \\ce{2X + Y -> 2Z}. \nThe limiting value is denoted with the dashed line.}\n\\label{fig:rm13}\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.3\\linewidth]{q1}\\quad\n\\includegraphics[width=0.3\\linewidth]{q2}\\quad\n\\includegraphics[width=0.3\\linewidth]{q3}\n\\caption{Scaled reaction extents tend to 1. \nThe reaction rate coefficient and the initial data are the same as in Figs. 
\\ref{fig:rm11}--\\ref{fig:rm13}.}\n\\label{fig:rm1skalazott}\n\\end{figure*}\nWe suggest that instead of saying that the scaling factor is some initial concentration as in the third case of Theorem \\ref{thm:singlestep},\none can equally well say that the divisor is the limiting value of the reaction extent, as in the cases of Fig. \\ref{fig:rm1skalazott}: \\(Vc_{\\ce{X}}^0\/2, Vc_{\\ce{X}}^0\/2=Vc_{\\ce{Y}}^0,\nVc_{\\ce{Y}}^0,\\) respectively.\nThis result will come in handy below.\n\nUnder stoichiometric initial conditions Peckham\\cite{peckham} gives a definition of \\(\\xi_{\\max}\\) and proposes to use \\(\\frac{\\xi(t)}{\\xi_{\\max}}\\) in extremely special cases. The range of this ratio is \\([0,1].\\)\n\nAt this point it may not be obvious how to generalize Corollary \\ref{corr:pure}. In order to treat more complicated cases we shall choose another way.\n\\subsection{Scaling by the \"maximum\"}\nIf \\(\\xi^*:=\\lim_{t\\to+\\infty}\\xi(t)\\) is finite, then \n\\(\\xi^*=\\sup\\{\\xi(t);t\\in\\mathbb{R}\\}\\), thus \n \\(\\xi^*\\) may be identified (mathematically incorrectly) with \\(\\xi_{\\max},\\) and surely \\(\\lim_{t\\to+\\infty}\\frac{\\xi(t)}{\\xi^*}=1.\\)\n That is the procedure applied by most authors\\cite{dumonlichanotpoquet,vandezandevandergrienddekock,moretti}.\n\\subsection{Detailed balanced reactions}\n\\begin{definition}\nThe complex chemical reaction\n\\begin{equation}\\label{eq:revccr}\n\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) <=>[$k_r$][$k_{-r}$] $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R);\n\\end{equation}\nendowed with mass action kinetics is said to be \\emp{conditionally detailed balanced} at the positive stationary point \\(\\mathbf{c}^*\\) if\n\\begin{equation}\\label{eq:dbgeneralcond}\nk_r(\\mathbf{c}^*)^{\\boldsymbol{\\alpha}_{.,r}}=k_{-r}(\\mathbf{c}^*)^{\\boldsymbol{\\beta}_{.,r}}\n\\end{equation}\nholds. It is unconditionally \\emp{detailed balanced} if\nEq. 
\\eqref{eq:dbgeneralcond} holds for any choice of (positive) reaction rate coefficients.\n\\end{definition}\nNote that all the steps in \\eqref{eq:revccr} are reversible. Furthermore, in such cases the reaction steps are indexed\nby \\(r\\) and \\(-r\\). It is always our choice in which order the forward and backward steps are written, expressing the fact that \"forward\" and \"backward\" have no true physical meaning.\n\\subsubsection{Ratio of two reaction extents.}\nSuppose we have a reversible reaction\n\\begin{equation}\\label{eq:singledb}\n\\ce{$\\sum_{m=1}^M\\alpha_m$X($m$) <=>[$k_1$][$k_{-1}$] $\\sum_{m=1}^M\\beta_m$X($m$)}\n\\end{equation}\nwhich is unconditionally detailed balanced, because the number of the forward and backward reaction pairs is 1.\nThen the initial value problem for the reaction extents is as follows.\n\\begin{align*}\n&\\dot{\\xi}_1=Vk_1\\prod_{m=1}^{M}(c_m^0+\\gamma_m(\\xi_1-\\xi_{-1})\/V)^{\\alpha_m},&\\xi_1(0)=0,\\\\\n&\\dot{\\xi}_{-1}=Vk_{-1}\\prod_{m=1}^{M}(c_m^0+\\gamma_m(\\xi_1-\\xi_{-1})\/V)^{\\beta_m},&\\xi_{-1}(0)=0,\n\\end{align*}\nwhere \\(\\gamma_m:=\\beta_m-\\alpha_m.\\)\n\\begin{proposition}\\label{prop:specdb}\nUnder the above conditions, one has\n\\fbox{\\(\\lim_{t \\to +\\infty}\n\\frac{\\xi_1(t)}{\\xi_{-1}(t)}=1.\\)}\n\\end{proposition}\nNote that a priori one only knows that it is the \\emp{derivative}s of the reaction extents that have the same value at equilibrium.\n\n\\begin{example}\\label{ex:waterform}\nConsider the reversible reaction\n\\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z} for water formation with the data \n\\( k_1~=~1~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1},\\) \n\\(k_{-1}~=~1~\\mathrm{dm}^3\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1},\\)\n\\(c_{\\ce{Y}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, c_{\\ce{Z}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}.\\)\n\\begin{itemize}\n\\item \nIf \\(c_{\\ce{X}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) then \\ce{X} is in excess initially (a);\n\\item\nif
\\(c_{\\ce{X}}^0~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) then one has a stoichiometric initial condition (b);\n\\item\nif \\(c_{\\ce{X}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) or\n\\(c_{\\ce{X}}^0~=~1\/2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3},\\) then \\ce{Y} is in excess initially (c or d).\n\\end{itemize}\nNote that it is not the excess or deficit that is relevant, see Conjecture \\ref{conj:iniratio} below.\nThe initial rates of the forward and backward reactions are as follows:\n\\begin{itemize}\n\\item \n\\(1\\cdot9\\cdot1 > 1\\cdot1,\\)\n\\item \n\\(1\\cdot4\\cdot1 > 1\\cdot1,\\)\n\\item \n\\(1\\cdot1\\cdot1 = 1\\cdot1,\\)\n\\item \n\\(1\\cdot1\/4\\cdot1 <1\\cdot1.\\)\n\\end{itemize}\nThe results are in accordance with Conjecture \\ref{conj:iniratio} below and can be seen in Figs. \\ref{fig:rm21} and \\ref{fig:rm22}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{xexcess1}\\\\\n\\includegraphics[width=0.7\\linewidth]{xexcess2}\\\\\n\\caption{The ratio \\(\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}\\) is tending to 1 from above: \\ce{X} is in excess at the top figure (case a of Example \\ref{ex:waterform}) and the initial condition is stoichiometric at the bottom figure (case b) in case of the reaction \\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}. \\(V~=~1~\\mathrm{dm}^3,\\) and other data are given in the text.}\n\\label{fig:rm21}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{stoichio}\\\\\n\\includegraphics[width=0.7\\linewidth]{yexcess}\\\\\n\\caption{The ratio \\(\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}\\) is constant at the top figure (case c of Example \\ref{ex:waterform}) and is tending to 1 from below at the bottom figure (case d) in case of the reaction \\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}. \\ce{Y} is in excess in both cases. 
\\(V~=~1~\\mathrm{dm}^3,\\) and other data are given in the text.}\n\\label{fig:rm22}\n\\end{figure}\n\\end{example}\n\nNow we formulate a conjecture based on our experience collected on several models. \nConsider reaction \\eqref{eq:singledb}.\n\\begin{conjecture}\\label{conj:iniratio}\nThe sign of the difference \\(k_{-1}(\\mathbf{c}^0)^{\\beta}-k_1(\\mathbf{c}^0)^{\\alpha}\\)\nand the sign of \\(1-\\lim_{t\\to+0}\\frac{\\xi_1(t)}{\\xi_{-1}(t)}\\)\nare the same.\n\\end{conjecture}\n\n\nConvergence has been proved above.\nThe ratio at \\(t=0\\) is not defined, but the limit of the ratio when \\(t \\to +0\\)\ncan be calculated using the l'Hospital Rule as\n\\begin{equation*}\n\\lim_{t\\to+0}\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}=\n\\lim_{t\\to+0}\\frac{\\dot{\\xi}_{1}(t)}{\\dot{\\xi}_{-1}(t)}=\n\\frac{k_1}{k_{-1}}(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}-\\boldsymbol{\\beta}},\n\\end{equation*}\nand \\(\\frac{k_1}{k_{-1}}(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}-\\boldsymbol{\\beta}}<1\\) is equivalent to saying that \n\\(k_1(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}}<k_{-1}(\\mathbf{c}^0)^{\\boldsymbol{\\beta}}.\\)\n\\begin{comment}\nThe complex chemical reaction under study is\n\\begin{equation}\\label{eq:ccr}\n\\left\\{\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) ->[$k_r$] $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R)\\right\\} \n\\end{equation}\nwith the usual notations: \n\\(\\alpha_{m,r}\\) and \\(\\beta_{m,r}\\)\nare the stoichiometric coefficients, \\ce{X($m$)} are the species. \nThe induced kinetic differential equation describing the time evolution of the quantities of the species is in general \\begin{equation}\\label{eq:ikdegen}\n\\dot{\\mathbf{c}}(t)=\\boldsymbol{\\gamma}\\mathbf{rate}(\\mathbf{c}(t))\n\\end{equation} \ntogether with the initial condition \\(\\mathbf{c}(0)=\\mathbf{c}^0.\\) The component \\(rate_r\\) of the vector \\(\\mathbf{rate}\\) provides the reaction rate of the \\(r^{\\mathrm{th}}\\) reaction step.\nNote that in the mass action case Eq. 
\\eqref{eq:ikdegen} specializes into\n\\begin{equation}\\label{eq:ikdemass}\n\\dot{\\mathbf{c}}=\\boldsymbol{\\gamma}\\left(\\mathbf{k}\\odot\\mathbf{c}^{\\boldsymbol{\\alpha}}\\right) \n\\end{equation} \nor, in coordinates,\n\\begin{equation*}\n\\dot{c}_m(t)=\\sum_{r=1}^{R}\\gamma_{m,r}k_r\\prod_{p=1}^{M}c_p(t)^{\\alpha_{p,r}}\\quad(m=1,2,\\dots,M),\n\\end{equation*}\nwhere \\(\\mathbf{k}\\) is the vector of (positive) reaction rate coefficients \\(k_r.\\)\n\nThe \\emp{reaction extent}\nof a complex chemical reaction or reaction network defined by Eq. \\eqref{eq:ccr} is the vector-valued function of a scalar variable given by the formula\n\\begin{equation}\\label{eq:specre}\n\\boxed{\n\\boldsymbol{\\xi}(t):=V\\int_0^t\n\\mathbf{rate}(\\mathbf{c}(\\overline{t}))\n{\\;\\mathrm{d}\\overline{t}}}.\n\\end{equation}\nIts time derivative \\(\\dot{\\boldsymbol{\\xi}}(t)=V\\mathbf{rate}(\\mathbf{c}(t))\\) is usually called the \\emp{rate of conversion}.\nWe shall calculate the reaction extent for a few exotic reactions and see how they reflect the special properties of the reactions.\n\\end{comment}\n\\section{What if the conditions are not fulfilled?}\nIn the first part of the paper, we calculated the reaction extents for the reaction steps of simple reactions, for detailed balanced reactions, for reactions with a kinetic differential equation having an attracting stationary point, etc. \nOur main question in the present part is:\nWhat happens if one takes an exotic reaction that has multiple stationary points, or shows oscillations or even chaos?\n\\subsection{Multistationarity}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.85\\linewidth]{multist1}\n\\caption{The Horn--Jackson reaction network\nwith \\(k_1=k_3=1~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\) and \\(k_2=k_4=k\\). 
\nThe value of \\(k\\) is varied as described in the text, \nits unit is the same as that of the other rate coefficients.}\n\\label{fig:hj0}\n\\end{figure}\nHorn and Jackson\\cite{hornjackson} (see p. 110) have shown that the complex chemical reaction in Fig. \\ref{fig:hj0} has three (positive) stationary points in every stoichiometric compatibility class if \nthe numerical value of \\(k\\) lies between 0 and \\(\\frac{1}{6}.\\)\nTo be more specific, let us choose \n\\(\nk~=~\\frac{1}{10}~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\), and \\(c^0_{\\ce{X}}~=~c^0_{\\ce{Y}}~=~\\frac{1}{2}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}.\\) \nThen, easy calculation---neglecting the units for simplicity---shows that in the \\emp{stoichiometric compatibility} class\n\\(\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}]; c_{\\ce{X}}+c_{\\ce{Y}}=1\\}\\)\n(i.e. when the total concentration is unity) there are three stationary points:\n\\begin{enumerate}\n\\item \nthe stationary point\n\\(\n\\mathbf{c}^*_1:=\\begin{bmatrix}\n\\frac{1}{3 + \\sqrt{3}}&\\frac{3 + \\sqrt{3}}{6}\n\\end{bmatrix}\n\\) is asymptotically stable (i.e. attracting) with the attracting domain \n\\(\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}]: 0\\le c_{\\ce{X}}<\\frac{1}{3 + \\sqrt{3}}, c_{\\ce{X}}+c_{\\ce{Y}}=1\\},\n\\) and\n\\item \nthe stationary point \\(\\mathbf{c}^*_2:=[\\frac{1}{2}\\quad\\frac{1}{2}]\\)\nis unstable (i.e. non-attracting), and\n\\item \nthe stationary point\n\\(\n\\mathbf{c}^*_3:=\\begin{bmatrix}\n\\frac{1}{3 - \\sqrt{3}}&\\frac{3 - \\sqrt{3}}{6}\n\\end{bmatrix}\n\\) \nis asymptotically stable (i.e. attracting)\n with the attracting domain \n\\(\n\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}];\\frac{1}{3 - \\sqrt{3}}<c_{\\ce{X}}\\le 1, c_{\\ce{X}}+c_{\\ce{Y}}=1\\}.\n\\)\n\\end{enumerate}\n\\begin{figure}[!ht]\n\\centering\n\\caption{Reaction extents in the Horn--Jackson reaction network of Fig. \\ref{fig:hj0}. \nNote that the ranking of the reaction extents is different in the two cases.}\n\\label{fig:hj}\n\\end{figure}\n\\subsection{Oscillation}\nWe shall study here two oscillatory reactions. 
\nFirst comes the often used Lotka--Volterra reaction\\cite{lotka,volterra}, which is not only theoretically interesting: it can also be used to describe the oscillations in cold flames\\cite{frankkamenetskii}, see also Ref.\\cite{frankkamenetskiiPUP}.\nThe second is the experimentally based R\\'abai reaction\\cite{rabai}, aimed at describing pH oscillations. \nOne may say that the Brusselator model\\cite{prigoginelefever} would be a more realistic choice, as it results in limit cycle solutions. However, it has a third-order step that makes the calculations more tedious.\nThe type of calculations shown below would give almost the same kind of results with the Brusselator, too.\n\\subsubsection{The Lotka--Volterra reaction.}\nThe irreversible and reversible cases behave in qualitatively different ways.\n\n\\paragraph{Irreversible case:}\n\nIt is known\\cite{potalotka,schumantoth} that under some mild conditions the only two-species reaction to show oscillations is the (irreversible) Lotka--Volterra reaction \\ce{X ->[$k_1$] 2X}, \\ce{X + Y ->[$k_2$] 2Y}, \\ce{Y ->[$k_3$] 0}. (Cf. also the paper by T\\'oth and H\\'ars\\cite{tothhars} and that by Banaji and Boros\\cite{banajiboros}.) \nIt has a single positive stationary point that is stable but not attracting; therefore, one cannot apply Proposition \\ref{prop:main} above.\nNote that the individual reaction extents are not oscillating; they are \"pulsating\" while monotonously increasing to infinity. \nThey have an oscillatory derivative, and the zeros of their second derivative clearly show the endpoints of the periods, \nsee Fig. \\ref{fig:pulsating}. 
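This "pulsating" behavior is easy to reproduce numerically. The sketch below (plain Python, fixed-step fourth-order Runge--Kutta; the rate coefficients and initial concentrations are those listed in the caption of the figure referred to above, while the step size and time horizon are our own choices) integrates the two concentrations together with the three individual extents:

```python
# Irreversible Lotka--Volterra: X ->(k1) 2X, X + Y ->(k2) 2Y, Y ->(k3) 0.
# State s = (x, y, xi1, xi2, xi3): concentrations plus individual extents.
k1, k2, k3, V = 3.0, 4.0, 5.0, 1.0

def f(s):
    x, y = s[0], s[1]
    r1, r2, r3 = k1 * x, k2 * x * y, k3 * y      # the three reaction rates
    return (r1 - r2, r2 - r3, V * r1, V * r2, V * r3)

def rk4_step(s, h):
    a = f(s)
    b = f(tuple(si + 0.5 * h * ai for si, ai in zip(s, a)))
    c = f(tuple(si + 0.5 * h * bi for si, bi in zip(s, b)))
    d = f(tuple(si + h * ci for si, ci in zip(s, c)))
    return tuple(si + h / 6.0 * (ai + 2 * bi + 2 * ci + di)
                 for si, ai, bi, ci, di in zip(s, a, b, c, d))

s, h = (1.0, 2.0, 0.0, 0.0, 0.0), 1e-3           # c_X(0) = 1, c_Y(0) = 2
traj = [s]
for _ in range(10_000):                          # integrate on [0, 10]
    s = rk4_step(s, h)
    traj.append(s)

xs = [p[0] for p in traj]
xi1 = [p[2] for p in traj]
# The concentration of X oscillates (falls below and rises above its start)...
assert min(xs) < 1.0 < max(xs) and min(xs) > 0.0
# ...while each individual extent increases monotonously ("pulsating")
assert all(b >= a for a, b in zip(xi1, xi1[1:]))
```

Since the derivative of each extent is V times the corresponding (non-negative) reaction rate, the extents inherit the oscillation only in their derivatives, never decreasing themselves.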
\nIt may be a good idea to calculate any kind of reaction extent \\emp{for a period} in case of oscillatory reactions.\nWe are going to study this point later.\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkaxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkaxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkaxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the irreversible Lotka--Volterra reaction with \n\\(k_1~=~3~{\\mathrm{s}}^{-1}\\),\n\\(k_2~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\), \n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:pulsating}\n\\end{figure}\nIt is interesting to have a look at the ratios of the reaction extents, as they seem to tend to 1, see Fig. \\ref{fig:lotkaratios}.\nWe assume that this phenomenon is related to the fact that the oscillatory solution results in a closed curve in the phase plane of the irreversible Lotka--Volterra reaction.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.75\\linewidth]{lotkaratio12}\\\\\n\\includegraphics[width=0.75\\linewidth]{lotkaratio23}\\\\\n\\includegraphics[width=0.75\\linewidth]{lotkaratio31}\n\\caption{The ratios of the reaction extents in case of the irreversible Lotka--Volterra reaction with the parameters as in Fig. \\ref{fig:pulsating}.}\n\\label{fig:lotkaratios}\n\\end{figure}\n\n\\paragraph{Reversible case, detailed balanced:}\n\nThe reversible Lotka--Volterra reaction\n\\ce{X <=>[$k_1$][$k_{-1}$] 2X}, \n\\ce{X + Y <=>[$k_2$][$k_{-2}$] 2Y}, \n\\ce{Y <=>[$k_3$][$k_{-3}$] 0}\nis also worth studying.\nFirst, let us note that for all values of the reaction rate coefficients it has a single, positive stationary point because the reaction steps are reversible. \nTherefore, the system is permanent\\cite{simon,borosexistence}, i.e. 
the trajectories remain in a compact set. \nIf the trajectories remain in a compact set, then they are either tending to a limit cycle, or the stationary point is asymptotically stable. \nThe first possibility is excluded by the above-mentioned theorem by P\\'ota\\cite{potalotka}, thus it is only the second possibility that remains. \nFig. \\ref{fig:lvrevdb} shows the behavior of the individual reaction extents.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the reversible, detailed balanced Lotka--Volterra reaction with \n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\),\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~\\frac{15}{8}~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:lvrevdb}\n\\end{figure}\nLet us note that both the existence and uniqueness of the stationary state also follow from the Deficiency One Theorem (see p. 106 in Feinberg\\cite{feinbergbook}, or p. 176 in T\\'oth et al.\\cite{tothnagypapp}).\n\nIf the reaction is detailed balanced, which holds if and only if\n\\begin{equation}\\label{eq:lvdb}\nk_1k_2k_3=k_{-1}k_{-2}k_{-3}\n\\end{equation}\nis true, then our Proposition 2 of the previous paper implies that\nthe product of the ratios of the reaction extents tends to 1, see Fig. \\ref{fig:lotkarevdbratios}.
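The rate coefficients used in the two reversible examples make Condition \eqref{eq:lvdb} easy to check by hand; here is the arithmetic as a short illustrative snippet (our own check, not part of the paper):

```python
# Detailed balance check for the reversible Lotka--Volterra examples:
# k1*k2*k3 = k_{-1}*k_{-2}*k_{-3} holds with k_{-3} = 15/8 (detailed balanced
# figure), and fails with k_{-3} = 6 (the not detailed balanced example).
k1, k2, k3 = 1.0, 3.0, 5.0
km1, km2 = 2.0, 4.0
km3_db, km3_ndb = 15.0 / 8.0, 6.0

assert k1 * k2 * k3 == km1 * km2 * km3_db    # 15 = 15: detailed balanced
assert k1 * k2 * k3 != km1 * km2 * km3_ndb   # 15 != 48: not detailed balanced
```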
\nThis follows also from our Theorem 3 there.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkarevdbratios}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevdbprod}\n\\caption{\nAbove: Time evolution of the ratios of the individual reaction extents---blue for Reaction (1), orange for Reaction (2), and green for Reaction (3)---in case of the reversible, detailed balanced Lotka--Volterra reaction with\n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\),\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\), \n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~\\frac{15}{8}~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)\nBelow: Time evolution of the product of the ratios tending to 1.}\n\\label{fig:lotkarevdbratios}\n\\end{figure}\n\n\\paragraph{Reversible case, not detailed balanced:}\n\nIf Condition \\eqref{eq:lvdb} does not hold, the reaction still has an \\emp{attracting stationary point}; what is more, this stationary point is asymptotically stable. \nFig.
\\ref{fig:lotkanotdbxi} shows the behavior of the individual reaction extents.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the reversible, \\emp{not} detailed balanced Lotka--Volterra reaction with \\(k_1~=~1~{\\mathrm{s}}^{-1},\\)\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~6~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:lotkanotdbxi}\n\\end{figure}\n\n\\subsubsection{The R\\'abai reaction of pH oscillation.}\nHere we include a reaction proposed by R\\'abai\\cite{rabai} to describe pH oscillations. \nThis reaction has much more direct contact with chemical kinetic experiments, and it is much more challenging---from the point of view of numerical mathematics---than the celebrated Lotka--Volterra reaction.\n\nR\\'abai\\cite{rabai} starts with the steps\n\\begin{align*}\n&\\ce{A- + H+ <=>[$k_1$][$k_{-1}$] AH},\\\\\n&\\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-}\n\\end{align*}\nwhere \\ce{\\{B\\}} is an external species with a constant concentration.
\nThis reaction has a single stationary point\n\\begin{equation*}\nc^*_{\\ce{A-}}=0,\\quad\nc^*_{\\ce{H+}}=c^0_{\\ce{H+}}+c^0_{\\ce{AH}},\\quad\nc^*_{\\ce{AH}}=0,\\quad\nc^*_{\\ce{P}}=c^0_{\\ce{A-}}+c^0_{\\ce{AH}}+c^0_{\\ce{P}}\n\\end{equation*}\nspecializing into\n\\(\nc^*_{\\ce{A-}}=0,\nc^*_{\\ce{H+}}=c^0_{\\ce{H+}},\nc^*_{\\ce{AH}}=0,\nc^*_{\\ce{P}}=c^0_{\\ce{A-}}\n\\)\nwith the natural restriction on the initial condition\n\\(\nc^0_{\\ce{AH}}=0,\nc^0_{\\ce{P}}=0.\n\\)\n\nPutting the reaction into a CSTR (continuously stirred flow-through tank reactor) \nmeans in terms of formal reaction kinetics that\nall the species can flow out and some of the species may flow in, \nwhile the volume is kept constant.\nIn the present case, the following steps are added\n\\begin{align}\n&\\ce{A- ->[$k_0$] 0},\\label{rabaiout1}\\\\\n&\\ce{0 ->[$k_0c^0_{\\mathrm{A}^-}$]A-},\\label{rabaiin1}\\\\\n&\\ce{H+ ->[$k_0$] 0},\\label{rabaiout2}\\\\\n&\\ce{0 ->[$k_0c^0_{\\mathrm{H}^+}$] H+},\\label{rabaiin2}\\\\\n&\\ce{AH ->[$k_0$] 0},\\label{rabaiout3}\n\\end{align}\nwhere $k_0$ is the volumetric flow rate normalized to the volume of the reactor (often called the reciprocal of the residence time) measured in unit \\(\\mathrm{s}^{-1}\\). \nAs a result of adding these steps, multistability may occur with appropriately chosen values of the parameters. When the reaction step\n\\begin{equation}\\label{rabai3}\n\\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH}\n\\end{equation}\nis also added, one may obtain periodic solutions for appropriately chosen parameter values, see Fig. \\ref{fig:rabaiosc}.
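The stationary point stated above for the two core (batch) steps can be verified directly: with \(c^*_{\ce{A-}}=c^*_{\ce{AH}}=0\) every rate term vanishes. A small sketch (our own check, assuming simple mass-action rates with the constant concentration of \ce{\{B\}} absorbed into \(k_2\)):

```python
# Verify that the stated batch stationary point annihilates the right-hand side
# of the kinetic ODEs of the two core Rabai steps
#   A- + H+ <=> AH  (k1, k_{-1}),    AH + H+ + {B} -> 2 H+ + P-  (k2).
import numpy as np

k1, km1, k2 = 1e10, 1e3, 1e6   # rate coefficients from the oscillating example

def rhs(c):
    a, h, ah, p = c
    r1f, r1b, r2 = k1 * a * h, km1 * ah, k2 * ah * h
    return np.array([-r1f + r1b,        # d[A-]/dt
                     -r1f + r1b + r2,   # d[H+]/dt (step 2 nets +1 H+)
                     r1f - r1b - r2,    # d[AH]/dt
                     r2])               # d[P-]/dt

a0, h0, ah0, p0 = 5e-3, 1e-3, 0.0, 0.0
c_star = np.array([0.0, h0 + ah0, 0.0, a0 + ah0 + p0])
assert np.allclose(rhs(c_star), 0.0)   # all rates vanish at the stationary point
```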
Let us remark that neither the R\\'abai reaction nor the Lotka--Volterra reaction is mass-conserving.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscsol}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaiosctraj}\n\\caption{Time evolution of the pH and the projection of the negative logarithm of the first three coordinates of the trajectory in case of the oscillating R\\'abai reaction with \n\\(k_1~=~10^{10}~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-1}~=~10^3~{\\mathrm{s}}^{-1}\\),\n\\(k_2~=~10^6~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~1~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_0~=~10^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{A-}}~=~5~\\times~10^{-3}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{H+}}~=~10^{-3}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{AH}}~=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{P}}~=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:rabaiosc}\n\\end{figure}\n\nIt is instructive to cast a glance at the reaction extents in such a complex system.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscext01forward}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaioscext01backward}\n\\caption{Reaction extents of the forward (above) and backward (below) steps of the fast equilibrium reaction \\ce{A- + H+ <=>[$k_1$][$k_{-1}$] AH} of the oscillating R\\'abai reaction (in the same time window as that of Fig.
\\ref{fig:rabaiosc}.)}\n\\label{fig:rabaioscext12} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscext02}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaioscext03}\n\\caption{Reaction extents of the reaction steps\n\\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-} (above) and \n\\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH} (below) of the oscillating R\\'abai reaction (in the same time window as that of Fig. \\ref{fig:rabaiosc}.)}\n\\label{fig:rabaioscext23} \n\\end{figure}\nNote that the reaction extents of the fast equilibrium reaction \\ce{A- + H+ <=>[$k_1$][$k_{-1}$] AH} shown in Fig. \\ref{fig:rabaioscext12} are practically the same, or: their ratio tends to 1, as if no other steps were present! \nThey are also four to five orders of magnitude higher than those of the auto-catalytic production \\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-} and the slow pseudo first-order chemical removal \\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH} of \\ce{H+} ion shown in Fig. \\ref{fig:rabaioscext23}.\nNote also the step-wise increase of reaction extent \\(\\xi_3(t)\\).\n\\subsection{Chaos}\nHere we use a version of the R\\'abai reaction that \\emp{can numerically be shown} to exhibit chaotic behavior, see Fig. \\ref{fig:rabaichaos}. \nIt is a good model for experimental pH oscillators, which also show behavior that appears chaotic by the usual standards.\n\nWhen the reaction step \\eqref{rabai3} is made reversible \n\\begin{equation}\\label{rabai5}\n\\ce{H+ + \\{C$^-$\\} <=>[$k_3$][$k_{-3}$] CH},\n\\end{equation}\nand one also introduces both the chemical \"removal\" and the outflow of \\ce{CH} \n\\begin{equation}\\label{rabai6}\n\\ce{CH ->[$k_{4}$] Q},\n\\end{equation}\n\\begin{equation}\\label{rabai7}\n\\ce{CH ->[$k_{0}$] 0},\n\\end{equation}\nchaotic solutions are obtained by using appropriate parameters and favorable input concentrations. Fig. \\ref{fig:rabaichaos} illustrates this fact.
The reaction extents tend to \\(+\\infty\\) (see for example Fig. \\ref{fig:rabaichaosextent}) in such a way that their derivative is chaotically oscillating (not shown), as expected.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaichaossol}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaichaostraj}\n\\caption{Time evolution of the pH and the projection of the negative logarithm of the first two coordinates of the trajectory in case of the chaotically oscillating R\\'abai reaction with \nthe same parameters as in Fig. \\ref{fig:rabaiosc} and\n\\(k_{-3}~=~1.5~\\times~10^{-2}~{\\mathrm{s}}^{-1}\\),\n\\(k_{4}~=~5~\\times~10^{-2}~{\\mathrm{s}}^{-1}.\\) \n}\n\\label{fig:rabaichaos} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaichaosextent}\n\\caption{Time evolution of the reaction extent \nof the forward step of reaction \\eqref{rabai5}\nin case of the chaotic R\\'abai reaction with the same parameters as in Fig. \\ref{fig:rabaichaos}.}\n\\label{fig:rabaichaosextent}\n\\end{figure}\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nA generalized definition for the reaction extent has been given by Bowen\\cite{bowen} (included in Chapter 6 of the book edited and partially written by Truesdell\\cite{truesdell}).\nAnother definition, which turned out to fit much better into the framework of modern formal reaction kinetics of the last 50 years, was given by Aris\\cite{arisprolegomena1}. \nStill, neither of them became popular among chemists and chemical engineers. Our goal here is to further generalize the definition (essentially that of Aris) to make it compatible with the present theory of reaction kinetics.
The result will reveal that there existed a kind of sleeping definition with no use in chemical kinetics, and we show that this should not be the case.\n\nWe have introduced the concept of reaction extent for reaction networks of arbitrary complexity (any number of reaction steps and species) without assuming mass action kinetics. \nThe newly defined reaction extent gives the advancement of each individual irreversible reaction step; \nin case of reversible reactions, we have a pair of reaction extents. \nIn all the practically important cases, the fact that the reaction extent is strictly monotonically increasing implies that the reaction events never stop. \nThis observation sheds new light on the concept of \\emp{dynamic equilibrium},\nwithout alluding to either thermodynamics or statistical mechanics.\n\nAfter a few statements on the qualitative behaviour of the reaction extent we made efforts to connect the notion with the traditional ones. \nThus, we have shown that if the number of reaction steps is one, \nthe reaction extent in the long run (as \\(t\\to+\\infty\\))\ntends to 1 if appropriately scaled.\nWe have not used the expression \\emp{progress of reaction}, \nstill less the \\emp{reaction coordinate}. \nWe agree that it is convenient to accept the proposal by Aris\\cite{aris} to work with \\(\\frac{1}{V}\\boldsymbol{\\xi}\\), which is usually called the \\emp{degree of advancement}. \nWe have also shown that for an arbitrary number of reversible detailed balanced reaction steps the product of the ratios of the individual reaction extents also tends to 1 as \\(t\\to+\\infty.\\)\n\nOur most general statement holds for arbitrary reactions having an attracting stationary point\nand for a function not vanishing at the stationary point: in this case the value of the chosen function \nalong the time dependent concentrations divided by the value of the given function at the stationary concentration tends to 1.
\nThus, this statement is true not only for the reaction extent but also for any appropriate function.\n\nOne should take into consideration that although in the practically interesting cases the number of equations \\(R\\) in \\eqref{eq:rmdiffegy} for the reaction extents is larger than the number \\(M\\) of kinetic differential equations in \\eqref{eq:ikdegen} (\\(R > M\\)),\nthe equations for the reaction extents are of a much simpler structure, \nas the right hand side of the differential equations \\eqref{eq:rmdiffegy} describing them consists only of a single term. \nDuring calculations, we found that it was numerically less demanding to solve the system of differential equations of the reaction extents than that of the concentrations. \n\nThe main advantage of the new definition of reaction extent is that by knowing the kinetic model of a reacting system one can now calculate not only the time evolution of the concentration of each reacting species but also the number of occurrences of the individual reaction events.\n\nAs a by-product we have given an exact definition of the stoichiometric initial concentrations and the initial concentration in excess. \nOne can say that the concept of the newly defined reaction extent can be usefully applied to a larger class of reactions than usual, but in some (exotic) cases its use needs further investigations;\nthis we will start in the forthcoming paper.\nIt is for the reader to decide if we succeeded in avoiding all the traps mentioned in the Introduction. \nQuite a few authors treat the methodology of teaching the concept\\cite{garst,moretti,vandezandevandergrienddekock};\nwe think this approach will only have its \\textit{raison d'\\^{e}tre} once the scientific background has been clarified and agreed on. \n\n\nLet us mention a few limitations and future directions of research. We have assumed throughout that volume (together with temperature and pressure) is constant.
Tacitly, we assumed that we deal with homogeneous kinetics; heterogeneous systems are not taken into consideration. Also, we have not dealt with reaction-diffusion systems. \nWe mention that recently, Peka\\v{r}\\cite{pekar} and Rodrigues et al.\\cite{rodriguesbilleterbonvin} have applied the concept of reaction extent to the case when diffusion is also present. \nWe have also mentioned a few mathematical conjectures that are to be investigated later. \n\n\\section*{Supporting Information}\nThe file FiguresandCalculationGasparToth.pdf contains all the calculations and drawings made using the Wolfram language. \nInterested readers may request from the authors the .nb file usable for calculations.\n\n\\section*{Author Contributions}\nThe authors equally participated in all parts of the paper.\n\n\\section*{Conflicts of interest}\nThere are no conflicts to declare.\n\n\\section*{Acknowledgements}\nThe present work has been supported by the National Office for Research and Development (2018-2.1.11-T\\'ET-SI-2018-00007 and FK-134332). \nJT is grateful to Dr. J. Karsai (Bolyai Institute, Szeged University) and to Daniel Lichtblau (Wolfram Research) for their help. \nMembers of the Working Committee for Reaction Kinetics and Photochemistry, \nespecially Profs. T. Tur\\'anyi and G. Lente, as well as Drs. Gy. P\\'ota and T.
Nagy, made a number of useful critical remarks.\n\n\\section*{Notations}\nSome of the readers may appreciate that we have collected the used notations.\n\n\\begin{table*}\n\\caption{Notations}\n\\label{tbl:notations1}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}clcc}\n\\hline\nNotation & Meaning & Unit & Typical value \\\\\n\\hline\\([a, b[\\)&left-closed, right-open interval&&\\\\\n\\hline\\(\\Longrightarrow\\)&implies&&\\\\\n\\hline\\(\\in\\)&belongs to&&\\\\\n\\hline\\(\\forall\\)&universal quantifier&&\"for all\"\\\\\n\\hline\\(\\exists\\)&existential quantifier&&\"there is\"\\\\\n\\hline\\(\\odot\\)&coordinate-wise product of vectors&&\\\\\n\\hline\\(\\mathbf{A}^{\\top}\\)&transpose of the matrix \\(\\mathbf{A}\\)&&\\\\\n\\hline\\(c_m, c_{\\ce{X}}\\) & concentration of \\ce{X($m$)} and \\ce{X} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(\\mathbf{c}\\) &vector of concentrations & & \\\\\n\\hline\\(c_m^0\\) & initial concentration of \\ce{X($m$)} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(c_m^*\\) & stationary concentration of \\ce{X($m$)} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(\\mathcal{C}^i(A,B) \\) & \\(i\\) times continuously differentiable \\\\\n & functions from \\(A\\) into \\(B\\) & & \\\\\n\\hline\\(\\Dom(u)\\) & the domain of the function \\(t\\mapsto u(t)\\) & &\\\\\n\\hline \\(J\\subset\\mathbb{R}\\) & an open interval &&\\\\ \n\\hline \\(k, k_r,k_{-r}\\) & reaction rate coefficient &\\((\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3})^{1-\\sum_{m=1}^{M}\\alpha_{m,r}}~{\\mathrm{s}}^{-1}\\)&\\\\\n\\hline \\(k_{0}\\) & normalized volumetric flow rate& \\({\\mathrm{s}}^{-1}\\) &\\\\\n\\hline\\(M\\) & the number of chemical species & & \\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(n_m\\) & the quantity of species \\ce{X($m$)}&\\(\\,\\mathrm{mol}\\) & \\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(n_m^0\\) & the initial quantity of species \\ce{X($m$)} &\\(\\,\\mathrm{mol}\\) & 
\\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(\\mathbf{n}\\) & the vector of the quantity of species & &\\\\\n\\hline\\(\\mathbf{n}^0\\) & the vector of the initial quantity of species\\\\\n\\hline \\(\\mathbb{N}\\) & the set of positive integers &&\\\\\n\\hline \\(\\mathbb{N}_0\\) & the set of non-negative integers &&\\\\\n\\hline\\(rate_r\\) & rate of the \\(r^{\\mathrm{th}}\\) reaction step &\\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}~\\mathrm{s}^{-1}\\) & \\\\\n\\hline\\(\\mathbf{rate}\\) & vector of reaction rates & & \\\\\n\\hline \\(\\mathbb{R}\\) & the set of real numbers&&\\\\\n\\hline \\(\\mathbb{R}^+\\) & the set of positive real numbers &&\\\\\n\\hline \\(\\mathbb{R}^+_0\\) & the set of non-negative real numbers &&\\\\\n\\hline\\(t\\) & time & s & \\(\\in\\mathbb{R}\\)\\\\\n\\hline \\(V\\) & volume & \\(\\mathrm{dm}^3\\) & \\\\\n\\hline \\(W_r\\) & number of occurrences of the \\(r^{\\mathrm{th}}\\) step & &\\(\\in\\mathbb{N}_0\\) \\\\\n\\hline \\(\\mathbf{W}\\) & vector of number of occurrences &&\\\\\n\\hline \\(x_m\\) & \\(m^{\\mathrm{th}}\\) dependent variable in a & &\\\\\n& differential equation&&\\\\\n\\hline\\(\\mathbf{x}\\) & vector of variables in a & &\\\\\n& differential equation &&\\\\\n\\hline\\(\\ce{X},\\ce{Y},\\ce{X($m$)}\\) &chemical species & & \\\\\n\\hline\\(\\alpha_{m,r},\\alpha_{m}\\) & stoichiometric coefficient & 1 & \\(0,1,2,3\\) \\\\\n& in the reactant complex & & \\\\\n\\hline \\(\\boldsymbol{\\alpha}\\) & matrix of reactant complex vectors & & \\\\\n\\hline \\(\\beta_{m,r},\\beta_{m}\\) & stoichiometric coefficient & 1 & \\(0,1,2,3\\) \\\\\n& in the product complex & & \\\\\n\\hline \\(\\boldsymbol{\\beta}\\) & matrix of product complex vectors & & \\\\\n\\hline \\(\\gamma_{m,r},\\gamma_{m}\\) & stoichiometric number & 1 & \\(-3,-2,\\dots,2,3\\) \\\\\n\\hline \\(\\boldsymbol{\\gamma}\\) & stoichiometric matrix & &\\\\\n\\hline \\(\\xi_r\\) & reaction extent of the \\(r^{\\mathrm{th}}\\) step &\\(\\,\\mathrm{mol}\\)&\\\\\n\\hline 
\\(\\boldsymbol{\\xi}\\) &vector of reaction extents & & \\\\\n\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n\n\n\\renewcommand\\refname{References}\n\n\\section{Introduction}\nIn 1992 Sedykh \\cite{sedykh:originalvertex, sedykh:vertex} generalized the classical four vertex theorem of planar curves by showing that the torsion of any closed space curve vanishes at least $4$ times, if it lies on a convex surface, see Note \\ref{note:4vertex}. Recently the author \\cite{ghomi:rosenberg} extended Sedykh's theorem to curves which lie on a \\emph{locally} convex simply connected surface. In this work we prove another generalization of Sedykh's theorem which does not require the existence of any underlying surface for the curve. \n\nTo state our result, let us recall that \na set $X$ in Euclidean space $\\mathbf{R}^3$ is \\emph{star-shaped} with respect to a point $o$ if no ray emanating from $o$ intersects $X$ in more than one point. Further let us say that $X$ is \\emph{locally convex} with respect to $o$ if through every point $p$ of $X$ there passes a plane $H$, not containing $o$, such that a neighborhood of $p$ in $X$ lies on the same side of $H$ as does $o$. For instance, the boundary of any convex body in $\\mathbf{R}^3$ is both star-shaped and locally convex with respect to any of its interior points. In this paper we show:\n\n\\begin{thm}\\label{thm:main}\nLet $\\Gamma\\subset\\mathbf{R}^3$ be a simple closed $\\mathcal{C}^3$ immersed curve with nonvanishing curvature. Suppose that $\\Gamma$ is star-shaped and locally convex with respect to a point $o$ in the interior of its convex hull.
Then the torsion of $\\Gamma$ changes sign at least $4$ times.\n\\end{thm}\n\nIn particular, if $\\Gamma$ lies on the boundary of a convex body, then it immediately follows that $\\Gamma$ has at least $4$ points of vanishing torsion, which is Sedykh's result. The above theorem also generalizes a similar result of Thorbergsson and Umehara \\cite[Thm. 0.2]{thorbergsson&umehara}, see Note \\ref{note:TU2}.\n\n\nThe general strategy for proving Theorem \\ref{thm:main} hinges on the fact that at a point of nonvanishing torsion $p$, the torsion $\\tau$ of $\\Gamma$ is positive (resp. negative) if and only if $\\Gamma$ crosses its osculating plane at $p$ in the direction (resp. opposite direction) of the binormal vector $B(p)$. To exploit this phenomenon, we project $\\Gamma$ into a sphere $S$ centered at $o$ to obtain a simple closed curve $\\overline\\Gamma$ which contains $o$ in its convex hull. By a result of Segre \\cite{segre:tangents, weiner:inflection, ghomi:verticesC}, also known as Arnold's tennis ball theorem, $\\overline\\Gamma$ has at least $4$ inflections $\\overline p_i$. It turns out that the osculating planes $\\overline\\Pi_{\\overline p_i}$ of $\\overline\\Gamma$ coincide with the osculating planes $\\Pi_{p_i}$ of $\\Gamma$ at $p_i$, where $p_i$ are the preimages of $\\overline p_i$, see Figure \\ref{fig:projection}(a). Further, the local convexity assumption will ensure that the binormal vectors of these planes are parallel, i.e., $\\overline B(\\overline p_i)=B(p_i)$. So the local position of $\\Gamma$ with respect to $\\Pi_{p_i}$ mirrors that of $\\overline\\Gamma$ with respect to $\\overline\\Pi_{\\overline p_i}$. After a perturbation of $o$, we may assume that $\\tau(p_i)\\neq 0$ and $\\overline p_i$ are \\emph{genuine} inflections, i.e., the geodesic curvature of $\\overline\\Gamma$ changes sign at $\\overline p_i$. 
Further, it is easy to see that at every pair of consecutive genuine inflections, $\\overline\\Gamma$ crosses its osculating planes in opposite directions with respect to $\\overline B$, because $\\overline B$ can never be orthogonal to $S$, see Figure \\ref{fig:projection}(b).\nThus it follows that $\\tau$ changes sign at least once within each of the $4$ intervals of $\\Gamma$ determined by $p_i$.\n \\begin{figure}[h]\n \\centering\n \\begin{overpic}[height=1.75in]{projection.pdf}\n \\put(13.5,12.5){\\small $p_i$}\n \\put(13,20){\\small $B(p_i)$}\n \\put(22,16){\\small $\\Gamma$}\n \\put(34.5,19){\\small $\\overline p_i$}\n \\put(33,27){\\small $\\overline B(\\overline{p}_i)$}\n \\put(38.5,23){\\small $\\overline\\Gamma$}\n \\put(30,2){\\small (a)}\n \\put(81,2){\\small (b)}\n \\put(13.5,6){$\\Pi_{p_i}=\\overline\\Pi_{\\overline{p}_i}$}\n \\put(81,25.5){\\small $\\overline\\Gamma$}\n \\put(94,23){\\small $\\overline{B}$}\n \\put(68,14){\\small $\\overline{B}$}\n\n \\end{overpic}\n \\caption{}\\label{fig:projection}\n\\end{figure}\n\n\n\nThe above approach for studying the sign of torsion is due to Thorbergsson and Umehara \\cite[p. 240]{thorbergsson&umehara}, who in turn attribute the spherical projection technique to Segre \\cite[p. 258]{segre1968alcune}; however, the method of Thorbergsson and Umehara does not quite lead to a generalization of Sedykh's theorem to a class of nonconvex curves, as we describe in Note \\ref{note:TU} below. \n\nNext we discuss some examples which validate Theorem \\ref{thm:main}. Figure \\ref{fig:curls}(a) shows a curve which is star-shaped and locally convex, but is not convex. Thus Theorem \\ref{thm:main} is indeed a nontrivial generalization of Sedykh's result. Figure \\ref{fig:curls}(b), which shows a torus curve of type $(1, 7)$, demonstrates that the star-shaped assumption by itself would not be sufficient in Theorem \\ref{thm:main}. 
Indeed all torus curves of type $(1,n)$ are star-shaped with respect to any point on their axis of symmetry, and Costa \\cite{costa:twisted} has shown that, for $n\\geq 2$, they may be realized with nonvanishing torsion if the underlying torus is sufficiently thin. Finally Figure \\ref{fig:curls}(c) shows that the local convexity by itself is not sufficient either. This figure depicts a spherical curve with only two extrema of geodesic curvature and hence only two sign changes of torsion (the torsion of a spherical curve vanishes precisely when its geodesic curvature has a local extremum).\n\n \\begin{figure}[h]\n \\centering\n \\begin{overpic}[height=1in]{curls.jpg}\n \\put(10,-2){\\small (a)}\n \\put(45,-2){\\small (b)}\n \\put(78,-2){\\small (c)}\n \\end{overpic}\n \\caption{}\\label{fig:curls}\n\\end{figure}\n\nFour vertex theorems have had a multifaceted and interesting history, with unexpected applications, since Mukhopadhyaya proved the first version of this phenomenon in 1909. For extensive background and more references see \\cite{ghomi:rosenberg, gluck:notices, ovsienko&tabachnikov}. In particular see \\cite{bray&jauregui} for some recent applications in General Relativity, and \\cite{sedykh1996, sedykh1997} for discrete versions.\n\n\\begin{note}\\label{note:4vertex}\nThe classical four vertex theorem states that the curvature of any simple closed curve in $\\mathbf{R}^2$ has at least $4$ critical points, which are called vertices. A point of the curve is a vertex if and only if the osculating circle at that point has contact of order $3$ with the curve. Consequently, the geodesic curvature of simple closed curves on the sphere also satisfies the four vertex theorem, because the stereographic projection preserves circles. Further note that the plane which contains the osculating circle of a spherical curve is actually the osculating plane of the curve. 
Thus at a vertex, a spherical curve has contact of order $3$ with its osculating plane, which means that the torsion at that point vanishes. Hence all simple closed spherical curves have at least $4$ points of vanishing torsion. Sedykh generalized this result to curves lying on any convex surface. It is in this sense that Sedykh's theorem, and more generally our Theorem \\ref{thm:main}, are extensions of the classical four vertex theorem.\n\\end{note}\n\n\n\n\\begin{note}\\label{note:TU}\nFor convex space curves, a refinement of Sedykh's four vertex theorem is proved by Thorbergsson and Umehara in \\cite[Thm. 0.1]{thorbergsson&umehara}. Furthermore, these authors \\cite[Thm. 0.2]{thorbergsson&umehara} obtain a $4$ vertex result for space curves $\\Gamma\\subset \\mathbf{R}^3$ which are star-shaped with respect to a point $o$ in the interior of their convex hull, have no tangent line passing through $o$, and further satisfy the property that for all points $p\\in\\Gamma$ the angle between the principal normal $N(p)$ of $\\Gamma$ and the position vector $p-o$ is obtuse, i.e., \n\\begin{equation}\\label{eq:N}\n\\langle p-o, N(p) \\rangle <0.\n\\end{equation}\nThey claim that this result implies Sedykh's theorem, because a ``convex space curve $\\gamma$ satisfies the conditions in Theorem 0.2'' \\cite[p. 230]{thorbergsson&umehara}; however, this is not the case. Indeed there exists a simple closed curve which lies on the boundary of a convex body, but does not satisfy the condition \\eqref{eq:N} for any point $o$; see Figure \\ref{fig:apple}.\n \\begin{figure}[h]\n \\centering\n \\begin{overpic}[height=1.1in]{apple.jpg}\n \\put(71.5,26){\\small$p_1$}\n \\put(80,26){\\small$p_2$}\n \\put(57,1.5){\\small$H_1^+$}\n \\put(94,1.5){\\small$H_2^+$}\n \\end{overpic}\n \\caption{}\\label{fig:apple}\n\\end{figure}\nThe left diagram here depicts the curve, and the right diagram shows its projection into its plane of symmetry.
Let $p_i$, $i=1$, $2$ be the intersections of the curve with its plane of symmetry. Note that at these points the principal normals $N(p_i)$ are antiparallel. Let $H_i$ be the planes orthogonal to $N(p_i)$ which pass through $p_i$, and $H_i^+$ be the corresponding (closed) half-spaces into which $N(p_i)$ point. Note that each $H_i^+$ consists of the set of all points $o$ such that \n$\\langle p_i-o, N(p_i) \\rangle \\leq 0$. But $H_1$ and $H_2$ are disjoint. So there exists no point $o$ with respect to which $\\Gamma$ can satisfy condition \\eqref{eq:N}. Thus Theorem 0.2 in \\cite{thorbergsson&umehara} does not imply Sedykh's theorem.\n\\end{note}\n\n\\begin{note}\\label{note:TU2}\nThe result of Thorbergsson and Umehara \\cite[Thm. 0.2]{thorbergsson&umehara} mentioned in the previous note is a special case of Theorem \\ref{thm:main}. Indeed let $H_p$ be the rectifying plane of $\\Gamma$ at $p$, i.e., the plane which passes through $p$ and is orthogonal to the principal normal $N(p)$. Then \\eqref{eq:N} implies that $N(p)$ points to the side of $H_p$ which contains $o$. Consequently a neighborhood of $p$ in $\\Gamma$ lies on the same side as well, which establishes the local convexity of $\\Gamma$.\n\\begin{comment}\n (let $\\gamma\\colon (-\\epsilon,\\epsilon)\\to\\Gamma$ be a local unit speed parametrization with $\\gamma(0)=p$, and $f(t):=\\langle \\gamma(t)-p,N(p)\\rangle$; by Taylor's theorem, $f(t)=\\|\\gamma''(s)\\|\\langle N(s), N(0)\\rangle t^2\/2$ for some $s\\in(0,t)$, which yields $f\\geq 0$ for small $t$)\n\\end{comment}\n\\end{note}\n\n\n\n\n\n\\section{Basic Notation and Terminology}\nThroughout this work we assume that $\\Gamma$, $o$ are as in Theorem \\ref{thm:main}. In particular $\\Gamma$ has nonvanishing curvature (so that its torsion is well defined). For convenience we also assume that $o$ is the origin of $\\mathbf{R}^3$. Further we let $\\overline\\Gamma$ denote the radial projection of $\\Gamma$ into the unit sphere $\\S^2$ centered at $o$.
For every point $p\\in\\Gamma$, $\\overline p:=p\/\\|p\\|$ denotes the corresponding point of $\\overline\\Gamma$. We assume that $\\Gamma$ is oriented, and let $T$, $N$, $B:=T\\times N$, denote the corresponding unit tangent, principal normal, and the binormal vectors of $\\Gamma$ respectively. For every point $p\\in\\Gamma$ there exists a $(\\mathcal{C}^3)$ unit speed parametrization $\\gamma\\colon (-\\epsilon,\\epsilon)\\to \\Gamma$ with $\\gamma(0)=p$ such that $\\gamma'(0)=T(p)$. Then $N(p):=\\gamma''(0)\/\\|\\gamma''(0)\\|$, and the torsion of $\\Gamma$ is given by \n$$\n\\tau(p):=\\frac{\\langle \\gamma'''(0),B(p)\\rangle}{\\|\\gamma''(0)\\|}.\n$$\n The osculating plane $\\Pi_p$ of $\\Gamma$ at $p$ is the plane which passes through $p$ and is orthogonal to $B(p)$. We let $\\overline\\gamma$, $\\overline T$, $\\overline N$, $\\overline B$, $\\overline \\Pi$ denote the corresponding quantities for $\\overline\\Gamma$. More specifically, $\\overline\\gamma:=\\gamma\/\\|\\gamma\\|$, $\\overline T:=\\overline\\gamma'\/\\|\\overline\\gamma'\\|$, $\\overline B:=\\overline\\gamma'\\times\\overline\\gamma''\/\\|\\overline\\gamma'\\times\\overline\\gamma''\\|$, and $\\overline N:=\\overline B\\times \\overline T$. In particular note that these quantities are well defined, since by assumption through each point of $\\Gamma$ there passes a local support plane $H$ of $\\Gamma$ not containing $o$. Consequently the tangent lines of $\\Gamma$ do not pass through $o$, since they lie in $H$. So $\\|\\overline\\gamma'\\|\\neq 0$.\n Further note that $\\overline\\Gamma$ inherits its orientation from $\\Gamma$.\nAn \\emph{inflection point} of $\\overline\\Gamma$ is a point where the geodesic curvature $\\overline k$ of $\\overline\\Gamma$ in $\\S^2$ vanishes. 
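As a side check (ours, not part of the paper's argument), the Frenet apparatus and the torsion formula above can be verified symbolically on a circular helix, whose torsion is classically $b\/(a^2+b^2)$:

```python
import sympy as sp

a, b, s = sp.symbols('a b s', positive=True)
c = sp.sqrt(a**2 + b**2)

# unit-speed parametrization of a circular helix
gamma = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])
g1 = gamma.diff(s)           # gamma'
g2 = gamma.diff(s, 2)        # gamma''
g3 = gamma.diff(s, 3)        # gamma'''

norm = lambda v: sp.sqrt(v.dot(v))
T = g1                       # unit tangent (the parametrization is unit speed)
N = g2 / norm(g2)            # principal normal N = gamma''/||gamma''||
B = T.cross(N)               # binormal B = T x N
tau = g3.dot(B) / norm(g2)   # torsion as defined in the text

assert sp.simplify(norm(g1) - 1) == 0            # parametrization is unit speed
assert sp.simplify(tau - b/(a**2 + b**2)) == 0   # classical helix torsion
```

The same computation applies verbatim to any sufficiently smooth unit-speed curve.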
Here we define $\\overline k$ with respect to the conormal vector $\\overline n(\\overline p):=\\overline p\\times \\overline T(\\overline p)$:\n$$\n\\overline k(\\overline p):=\\langle \\overline N(\\overline p),\\overline n(\\overline p)\\rangle.\n$$\nWe say that an inflection point $\\overline p$ is genuine if $\\overline k$ changes sign at $\\overline p$.\n\n\\section{Osculating Planes and Inflections}\nA key part of the proof of Theorem \\ref{thm:main}, which facilitates its reduction to Segre's tennis ball theorem, is the following observation. The first part of this lemma is trivial, the second part goes back to Segre, and the third part is a consequence of our local convexity assumption. \n\\begin{lem}\\label{lem:osculate}\nLet $\\overline p$ be an inflection point of $\\overline\\Gamma$. Then \n\\begin{enumerate}\n\\item[(i)] $o\\in\\overline\\Pi_{\\overline p}$, \n\\item[(ii)] $\\overline\\Pi_{\\overline p}=\\Pi_p$, \n\\item[(iii)] $B(p)= \\overline B(\\overline p)$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nThe argument is presented in three parts corresponding to each of the items enumerated above. Here for any pair of vectors $v$, $w$, we use the notation $v\\parallel w$ to indicate that $v=\\lambda w$ for some $\\lambda>0$.\n\n(\\emph{i}) \nThe point $\\overline p$ is an inflection if and only if $\\overline N(\\overline p)$ is orthogonal to $\\S^2$, or\n\\begin{equation}\\label{eq:olN}\n\\overline N(\\overline p)=-\\overline p.\n\\end{equation}\n Thus $o=\\overline p+\\overline N(\\overline p)\\in\\overline \\Pi_{\\overline p}$, as claimed. \n\n(\\emph{ii}) \nSince $\\overline\\Pi_{\\overline p}$ passes through $\\overline p$ and $o$, it contains $p$ as well. So it suffices to check that $\\overline\\Pi_{\\overline p}$ and $\\Pi_p$ are parallel, or that $\\overline B(\\overline p)$ is orthogonal to $\\Pi_p$. 
To this end let $\\gamma\\colon (-\\epsilon, \\epsilon)\\to\\Gamma$ be a local parametrization with $\\gamma(0)=p$ and $\\gamma'(0)\\parallel T(p)$. Then $\\overline\\gamma:=\\gamma\/\\|\\gamma\\|$ yields a local parametrization for $\\overline\\Gamma$ with $\\overline\\gamma(0)=\\overline p$ and $\\overline\\gamma'(0)\\parallel \\overline T(\\overline p)$. We need to check that $\\overline B(\\overline p)$ is orthogonal to both $\\gamma'(0)$ and $\\gamma''(0)$. Simple computations show that\n\\begin{equation}\\label{eq:olgamma1}\n\\overline\\gamma\\times \\overline\\gamma'=\\frac{\\gamma\\times \\gamma'}{\\|\\gamma\\|^2},\n\\end{equation}\n\\begin{equation}\\label{eq:olgamma2}\n\\overline\\gamma\\times\\overline\\gamma''=\\frac{(\\gamma\\times\\gamma'')\\|\\gamma\\|^2-2(\\gamma\\times\\gamma')\\langle\\gamma,\\gamma'\\rangle}{\\|\\gamma\\|^4}.\n\\end{equation}\nUsing \\eqref{eq:olN} and \\eqref{eq:olgamma1}, we have\n\\begin{equation}\\label{eq:olB}\n\\overline B(\\overline p)=\\overline T(\\overline p)\\times \\overline N(\\overline p)=\\overline p\\times \\overline T(\\overline p)=\\overline\\gamma(0)\\times \\overline\\gamma'(0)\n=\\frac{\\gamma(0)\\times \\gamma'(0)}{\\|\\gamma(0)\\|^2}.\n\\end{equation}\nThus\n$\n\\langle \\gamma'(0), \\overline B(\\overline p)\\rangle=0.\n$\nNext, to compute $\\langle\\gamma''(0), \\overline B(\\overline p)\\rangle$, we may assume that $\\overline\\gamma$ has unit speed. Then $\\overline N(\\overline p)\\parallel \\overline\\gamma''(0)$ and \\eqref{eq:olN} yields that \n$$\n\\overline\\gamma(0)\\times \\overline\\gamma''(0)\\parallel \\overline p\\times \\overline N(\\overline p)\\parallel \\overline p\\times(-\\overline p)=0.\n$$\nConsequently, \\eqref{eq:olgamma2} yields that $\\gamma(0)\\times\\gamma''(0)=\\alpha\\; \\gamma(0)\\times\\gamma'(0)$, for some constant $\\alpha$. 
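As an aside (ours, not part of the proof), the identities \eqref{eq:olgamma1} and \eqref{eq:olgamma2} can be verified symbolically for an arbitrary curve $\gamma$, with $\overline\gamma = \gamma\/\|\gamma\|$ as above:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y, z = (sp.Function(name)(t) for name in ('x', 'y', 'z'))
g = sp.Matrix([x, y, z])      # an arbitrary curve gamma(t)
r2 = g.dot(g)                 # ||gamma||^2
gb = g / sp.sqrt(r2)          # the radial projection gamma-bar

# identity (eq:olgamma1): gb x gb' = (g x g') / ||g||^2
lhs1 = gb.cross(gb.diff(t))
rhs1 = g.cross(g.diff(t)) / r2
assert all(sp.simplify(e) == 0 for e in sp.expand(lhs1 - rhs1))

# identity (eq:olgamma2): gb x gb'' = ((g x g'')||g||^2 - 2(g x g')<g,g'>) / ||g||^4
lhs2 = gb.cross(gb.diff(t, 2))
rhs2 = (g.cross(g.diff(t, 2)) * r2
        - 2 * g.cross(g.diff(t)) * g.dot(g.diff(t))) / r2**2
assert all(sp.simplify(e) == 0 for e in sp.expand(lhs2 - rhs2))
```

Both identities hold componentwise for any curve avoiding the origin, which is all the proof uses.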
Now\nusing \\eqref{eq:olB} we have\n$$\n\\|\\gamma(0)\\|^2\\langle\\gamma''(0), \\overline B(\\overline p)\\rangle=\\langle\\gamma'(0),\\gamma''(0)\\times\\gamma(0)\\rangle=\n\\alpha\\langle\\gamma'(0),\\gamma'(0)\\times\\gamma(0)\\rangle=0,\n$$\nas desired. \n\n(\\emph{iii})\nWe may assume that $\\gamma$ has unit speed. Then $N(p)\\parallel \\gamma''(0)$.\nConsequently,\n$$\nB(p)=T(p)\\times N(p)\\parallel \\gamma'(0)\\times \\gamma''(0).\n$$\nThis together with \\eqref{eq:olB} shows that $B(p)\\parallel\\overline B(\\overline p)$ if and only if\n$$\n\\gamma'(0)\\times\\gamma''(0)\\parallel \\gamma(0)\\times \\gamma'(0).\n$$\nBy assumption there exists a local support plane $H$ of $\\Gamma$ passing through $p$. Let $L$ be the line given by $H\\cap \\Pi_p$ ($L$ is well defined because $o\\in \\Pi_p$ by parts $(i)$ and $(ii)$ above, but $o\\not\\in H$ by assumption; thus $H\\neq\\Pi_p$). Let $L^+$ denote the side of $L$ in $\\Pi_p$ which contains $o$. Then $N(p)$ points into $L^+$, see Figure \\ref{fig:planes}, because by assumption $\\Gamma$ lies locally on the side of $H$ which contains $o$, and thus $N(p)$ must point into this side as well. 
\n \\begin{figure}[h]\n \\centering\n \\begin{overpic}[height=1.8in]{planes.pdf}\n \\put(52.5,16){\\small$o$}\n \\put(42,14){\\small$\\overline{p}$}\n \\put(21,14){\\small$p$}\n \\put(11,3){\\small$L$}\n \\put(6,23){\\small$H$}\n \\put(63,11){\\small$L^+$}\n \\put(76,29){\\small$L$}\n \\put(19,23.5){\\small$T(p)$}\n \\put(43,24){\\small$\\overline T(\\overline p)$}\n \\put(27,11.5){\\small$N(p)$}\n \\put(50,13){\\small$\\overline N(\\overline p)$}\n \\put(0,11){\\small$\\Pi_{p}=\\overline\\Pi_{\\overline{p}}$}\n \\put(89, 6.5){\\small$o$}\n \\put(89, 16.5){\\small$\\overline p$}\n \\put(89, 28.5){\\small$p$}\n \\put(79, 1){\\small$L^+$}\n \\put(93.5, 26.5){\\small$T(p)$}\n \\put(94, 16){\\small$\\overline T(p)$}\n \\put(80.5, 22){\\small$N(p)$}\n \\put(82, 9){\\small$\\overline N(p)$}\n \\end{overpic}\n \\caption{}\\label{fig:planes}\n\\end{figure}\n Further, $L$ is tangent to $\\Gamma$ since $H$ is tangent to $\\Gamma$. Thus $N(p)\\perp L$. Consequently \n $$\n L^+=\\{ x\\in \\Pi_p\\mid \\langle x-p,N(p)\\rangle\\geq 0\\}.\n $$ \n Since $o$ lies in the interior of $L^+$, $\\langle o-p, N(p)\\rangle>0$, or\n $\n\\langle N(p), p\\rangle <0,\n$\n which yields\n\\begin{equation}\\label{eq:a}\n \\langle \\gamma''(0), \\gamma(0)\\rangle <0.\n\\end{equation}\n Since $\\Pi_p$ passes through $o$, $\\{\\gamma(0),\\gamma'(0),\\gamma''(0)\\}$ is linearly dependent.\n Further, since we are assuming that $\\gamma$ has unit speed, $\\gamma'\\perp\\gamma''$. Thus \n $$\n \\gamma(0)=a \\gamma'(0)+b\\gamma''(0),\n $$\n where $b=\\langle \\gamma''(0), \\gamma(0)\\rangle\/\\|\\gamma''(0)\\|^2<0$ according to \\eqref{eq:a}.\n So\n $$\n \\gamma(0)\\times\\gamma'(0)=-b\\;\\gamma'(0)\\times\\gamma''(0),\n $$\n which completes the proof.\n\\end{proof}\n\n\\section{Maximum Principles for Torsion and Geodesic Curvature}\n\nHere we collect the facts we need concerning the relation between the sign of torsion of a space curve, and its relative position with respect to its osculating plane. 
Further we discuss the corresponding facts for the geodesic curvature of spherical curves, which will be used in the proof of our main result in the next section.\n\n\nWe start with torsion. Here by the region \\emph{above} the osculating plane we mean the (closed) half-space into which the binormal vector $B$ points, and the region \\emph{below} will be the other half-space. Since we assume that $\\Gamma$ is oriented, for every pair of points $p$, $q$ of $\\Gamma$, there is a unique choice of a segment with initial point $p$ and final point $q$, which we denote by $[p,q]$. The interior of this segment will be denoted by $(p,q)$.\n\n\\begin{lem}[Lem. 6.12, \\cite{ghomi:rosenberg}]\\label{lem:maxprincipletorsion}\nSuppose that $\\tau\\geq 0$ (resp. $\\tau\\leq 0$) on a segment $[p, q]$ of $\\Gamma$. Then, near $p$, the segment lies above (resp. below) the osculating plane of $\\Gamma$ at $p$.\\qed\n\\end{lem}\n\nThe above lemma quickly yields the following converse:\n\n\\begin{cor}\\label{cor:maxprincipletorsion}\nSuppose that a segment $[p,q]$ of $\\Gamma$ lies above its osculating plane at $p$ (resp. $q$) and does not lie completely in the osculating plane. Then $\\tau >0$ (resp. $\\tau<0$) at some point of $[p,q]$.\n\\end{cor}\n\\begin{proof}\nFirst suppose that $[p,q]$ lies above its osculating plane at $p$ and assume, towards a contradiction, that $\\tau\\leq 0$ on $[p,q]$. Then $[p,q]$ lies below $\\Pi_p$ by Lemma \\ref{lem:maxprincipletorsion}. Thus $[p,q]$ must lie entirely in $\\Pi_p$, which is a contradiction. The case where $[p,q]$ lies above its osculating plane at $q$ also follows from Lemma \\ref{lem:maxprincipletorsion}, once we switch the orientation of $[p,q]$ and observe that this switches the direction of $B$, but does not affect the sign of $\\tau$.\n\\end{proof}\n\nSimilarly, here are the facts concerning geodesic curvature which we need. 
For every $\\overline p\\in\\overline\\Gamma$, let $C_{\\overline p}$ denote the great circle in $\\S^2$ which is tangent to $\\overline\\Gamma$ at $\\overline p$. By the region \\emph{above} $C_{\\overline p}$ we mean the (closed) hemisphere into which the conormal vector $\\overline n(\\overline p):=\\overline p\\times \\overline T(\\overline p)$ points, and the other hemisphere will be referred to as the region \\emph{below} $C_{\\overline p}$.\n\n\\begin{lem}[Lem. 2.1, \\cite{ghomi:verticesC}]\\label{lem:maxprinciplecurvature}\nSuppose that a segment $[\\overline p, \\overline q]$ of $\\overline\\Gamma$ lies above (resp. below) the tangent great circle $C_{\\overline p}$. Then either $[\\overline p, \\overline q]$ is a part of $C_{\\overline p}$, or else $\\overline k > 0$ (resp. $< 0$) at some point of $[\\overline p, \\overline q]$.\\qed\n\\end{lem}\n\nNow we quickly obtain:\n\n\\begin{cor}\\label{cor:maxprinciplecurvature}\nSuppose that $\\overline k>0$ on the interior of a segment $[\\overline p,\\overline q]$ of $\\overline\\Gamma$. Then, near $\\overline p$ and $\\overline q$, $[\\overline p, \\overline q]$ lies above $C_{\\overline p}$ and $C_{\\overline q}$ respectively, and does not coincide with them.\n\\end{cor}\n\\begin{proof}\nIf $[\\overline p,\\overline q]$ lies locally below $C_{\\overline p}$, then $[\\overline p, \\overline q]$ coincides with $C_{\\overline p}$ near $\\overline p$, by Lemma \\ref{lem:maxprinciplecurvature}. So $[\\overline p, \\overline q]$ lies locally above $C_{\\overline p}$ as claimed. The same argument may also be applied to $\\overline q$ after switching the orientation of $\\overline\\Gamma$. 
Finally, since $\\overline k\\neq 0$ on $(\\overline p,\\overline q)$, these circles cannot coincide with $\\overline\\Gamma$ near $\\overline p$ and $\\overline q$.\n\\end{proof}\n\nNext, we link the last two corollaries together by recording that if $\\overline p$ is an inflection point of $\\overline\\Gamma$, then the region above $C_{\\overline p}$ corresponds to the region above $\\Pi_{p}$. Indeed recall that if $\\overline p$ is an inflection, then $\\overline N(\\overline p)=-\\overline p$ as we discussed in the proof of Lemma \\ref{lem:osculate}. Thus\n$$\n\\overline n(\\overline p)=\\overline p\\times \\overline T(\\overline p)=-\\overline N(\\overline p)\\times \\overline T(\\overline p)=\\overline T(\\overline p)\\times \\overline N(\\overline p)=\\overline B(\\overline p).\n$$\n\nSo Lemma \\ref{lem:osculate} quickly yields:\n\n\n\\begin{lem}\\label{lem:above}\nAt an inflection point $\\overline p$ of $\\overline\\Gamma$, the region above the great circle $C_{\\overline p}$ in $\\S^2$ coincides with the hemisphere which lies above the osculating plane $\\Pi_p$.\\qed\n\\end{lem}\n\n\n\\section{Proof of the Main Result}\n\nBefore proving our main theorem, we require the following technical fact which shows that the local convexity and star-shaped properties of $\\Gamma$ are stable under small perturbations of $o$.\n\n\\begin{lem}\\label{lem:oprime}\nThere exists an open neighborhood $U$ of $o$ such that $\\Gamma$ is star-shaped and locally convex with respect to every point $o'$ in $U$.\nFurther, we may choose $o'$ so that only finitely many osculating planes of $\\Gamma$ pass through $o'$.\n \\end{lem}\n\\begin{proof}\nThe local convexity assumption ensures that the tangent lines of $\\Gamma$ do not pass through $o$. Thus $\\overline\\Gamma$ is a $\\mathcal{C}^3$ immersed curve. Further, the star-shaped assumption ensures that $\\overline\\Gamma$ is embedded. 
Now since embeddings of a compact manifold are open in the space of $\\mathcal{C}^1$ mappings, it follows that projections of $\\Gamma$ into unit spheres centered at $o'\\in U$ are embedded as well, for some open neighborhood $U$ of $o$. So the star-shaped property is preserved under small perturbations of $o$. Next we check that the local convexity assumption is preserved as well. To this end it suffices to show that the local support planes of $\\Gamma$ may be chosen continuously, for then the support planes cannot get arbitrarily close to $o$, and hence $\\Gamma$ remains locally convex with respect to all points of $U$, assuming $U$ is sufficiently small. To see that the local support planes may be chosen continuously, see \\cite[Sec 3.1]{ghomi:stconvex}.\nFinally, using Sard's theorem, we may choose a point $o'$ in $U$ such that only finitely many osculating planes of $\\Gamma$ pass through $o'$: consider $f\\colon\\Gamma\\times\\mathbf{R}^2\\to\\mathbf{R}^3$ given by $f(p, t, s)= p+t\\, T(p) +s\\, N(p)$, and let $o'$ be a regular value of $f$. \n\\end{proof}\n\nWe also need the following refinement of the tennis ball theorem. Recall that an inflection point $\\overline p$ of $\\overline\\Gamma$ is \\emph{genuine}, provided that the geodesic curvature $\\overline k$ of $\\overline\\Gamma$ changes sign at $\\overline p$.\n\n\\begin{lem}[Thm. 1.2, \\cite{ghomi:verticesC}]\\label{lem:tennisball}\nSuppose that $\\overline\\Gamma$ has at most finitely many inflections. Then at least $4$ of these inflections must be genuine.\\qed\n\\end{lem}\n \n\nFinally we are ready to establish our main result:\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main}]\nBy Lemma \\ref{lem:oprime}, after replacing $o$ by a nearby point, we may assume that at most finitely many osculating planes of $\\Gamma$ pass through $o$. By Lemma \\ref{lem:osculate}, osculating planes of $\\overline\\Gamma$ at its inflections pass through $o$ and coincide with the osculating planes of $\\Gamma$. 
Thus $\\overline\\Gamma$ now has at most finitely many inflections. \nConsequently, by the refinement of the tennis ball theorem, Lemma \\ref{lem:tennisball}, $\\overline\\Gamma$ has at least $4$ genuine inflections. \n\nLet $\\overline p_0$, $\\overline p_1$ be a pair of genuine inflections of $\\overline\\Gamma$ such that the oriented segment \n$[\\overline p_0, \\overline p_1]$ has no genuine inflections in its interior $(\\overline p_0, \\overline p_1)$. There are at least $4$ such segments with pairwise disjoint interiors. Thus, to complete the proof, it suffices to show that $\\tau$ changes sign on $(p_0, p_1)$. \n\nWe may assume that $\\overline k\\geq 0$ on $(\\overline p_0,\\overline p_1)$, after switching the orientations of $\\Gamma$ and $\\overline\\Gamma$ if necessary. Then $\\overline k>0$ near $\\overline p_i$ on $(\\overline p_0,\\overline p_1)$, since $\\overline\\Gamma$ has only finitely many inflections. Consequently, by the maximum principle for geodesic curvature, Corollary \\ref{cor:maxprinciplecurvature}, $[\\overline p_0, \\overline p_1]$ lies locally above its tangent great circles at $\\overline p_i$ and does not coincide with them near $\\overline p_i$.\nThis in turn implies, by Lemma \\ref{lem:above},\n that \n$[p_0, p_1]$ lies locally above its osculating planes at $p_i$, and does not coincide with them near $p_i$. Thus, by the maximum principle for torsion, Corollary \\ref{cor:maxprincipletorsion}, $\\tau$ changes sign on $(p_0, p_1)$ as desired.\n\\end{proof}\n\n\\section*{Acknowledgment}\nThe author is grateful to Slava Sedykh for correcting some computations in the proof of Lemma \\ref{lem:osculate}, and for several other comments that improved the exposition of this work. 
Thanks also to Masaaki Umehara for helpful communications.\n\\bibliographystyle{abbrv}\n\n\n\\section{Introduction}\nCurrently there is a large body of data coming from cosmological and astrophysical observations that is mostly consistent with the existence of dark matter. Such observations also suggest that the hypothesized particles that constitute dark matter have very small cross section and travel at speeds much lower than light. These properties lead to the cold dark matter framework, which is one of the pillars of the current standard cosmological model $\\Lambda$CDM.\n\n\nIt is not only tempting, but mandatory to check if such dark matter particles exist (by detecting them in laboratory-based experiments, for instance) and also to check if the gravitational effects that lead to the dark matter hypothesis could follow from a more detailed and complete approach to gravity. The effects of pure classical General Relativity in galaxies have been studied for a long time and, considering galaxy rotation curves, the differences between General Relativity and Newtonian gravity are negligible, since in a galaxy matter moves at speeds much lower than that of light and is typically subject to weak gravitational fields ($\\Phi \\ll c^2$), which leads to the Newtonian limit of General Relativity \\footnote{There are some proposals that consider General Relativity in the context of galaxies which do not lead to Newtonian gravity, see e.g. \\cite{Cooperstock:2006dt, Vogt:2005hn, Cooperstock:2007sc, Vogt:2005va, Vogt:2005hv}. It is not impossible that a reasonable explanation for galaxy rotation curves may rely on similar approaches, nevertheless up to now none of these proposals has found a baryonic mass distribution that is in conformity with astrophysical expectations.}. \n\nThere is, however, a newer approach that may change considerably the role of dark matter, while following standard physical principles. 
Namely, the investigation of the running of the gravitational coupling parameter $G$ on large scales as induced by the renormalization group framework.\n\nThe running of coupling constants is a well-known phenomenon within \nQuantum Field Theory. It is well-known that the renormalization group \nmethod can be extended to quantum field theory on curved space-time\nand to some models of quantum gravity (see, e.g., \\cite{Buchbinder:1992rb}), such that the beta functions can be interpreted in this framework. Concerning \nthe high energy (UV) behavior, there is hope that the running of $G$ \nin quantum gravity may converge to a non-Gaussian fixed point \n(asymptotic safety) \\cite{Niedermaier:2006wt,Weinberg:2009bg}. Our present concern is, however, not with \nthe UV completeness, but with the behavior of $G$ in the far infrared \nregime (IR). In the electromagnetic case the IR behavior corresponds \nto the Appelquist-Carazzone decoupling \\cite{Appelquist:1974tg} (see e.g., \\cite{Goncalves:2009sk} for a recent derivation of this theorem). In the case of gravity the same effect of decoupling has been obtained in \\cite{Gorbar:2002pw,Gorbar:2003yt},\nbut only for the higher derivative terms in the gravitational action. \nIt remains unclear whether such decoupling takes place for the other terms. This possibility was studied on phenomenological grounds a number of times before, e.g. \\cite{Shapiro:2004ch,Reuter:2004nx}.\n\nIn \\cite{Rodrigues:2009vf} we presented new results on the application of renormalization group (RG) corrections to General Relativity (GR) in the astrophysical domain. Previous attempts to apply this picture to galaxies have considered, for simplicity, point-like galaxies. We extended previous considerations by identifying the proper renormalization group energy scale $\\mu$ and by evaluating the consequences considering the observational data of disk galaxies. 
We also propose a natural choice for the identification of $\\mu$, linking it to the local value of the Newtonian gravitational potential. With this choice, the renormalization group-based approach is capable of mimicking dark matter effects with great precision. This picture induces a very small variation of the gravitational coupling parameter $G$, namely a variation of about $10^{-7}$ of its value across $10^5$ light-years. We call our model RGGR, in reference to renormalization group effects in General Relativity.\n\nIn order to evaluate the observational consequences of the RGGR model and to compare it to other proposals, \nrecent high quality observational data \\cite{2008AJ....136.2648D,Gentile:2004tb} from nine regular spiral galaxies were mass-modelled using the standard procedures for the baryonic part, and four different models for the ``dark'' component: i) the RGGR model; ii) one of the most phenomenologically successful dark matter profiles, the Isothermal profile \\cite{1991MNRAS.249..523B}; and two alternative models which were built to avoid the need for dark matter: iii) the Modified Newtonian Dynamics (MOND) \\cite{1983ApJ...270..365M,1983ApJ...270..371M} and iv) the Scalar-Tensor-Vector Gravity (STVG) \\cite{Moffat:2005si}. The latter is a recent relativistic proposal that is capable of dealing with galaxy rotation curves and other phenomena usually attributed to dark matter. For galaxy rotation curve phenomena, STVG becomes equivalent to a similar proposal called MSTG \\cite{Moffat:2004bm, Brownstein:2005zz, Brownstein:2009gz}. The model parameters that we use to fit galaxies in the STVG framework can be found in ref. \\cite{Brownstein:2005zz}. 
For MOND, we use the $a_0$ value given in \\cite{Sanders:2002pf}.\n\nThe quality of the rotation curve fits and total stellar mass as inferred from the RGGR model is perfectly satisfactory considering both the general behavior of the model and its results when applied to nine particular galaxies, as analyzed in \\cite{Rodrigues:2009vf}. It is about the same as that of the Isothermal profile, while it seems significantly better than the quality of MOND and STVG. In the case of MOND, we did numerical experiments with $a_0$ as a free parameter and found that, although the concordance with the shape of the rotation curve can increase considerably in this case, the concordance with the expected stellar mass-to-light ratios remains unsatisfactory (similar conclusions have also appeared in some recent papers, e.g. \\cite{Gentile:2004tb,Gentile:2010xt,Famaey:2005fd}, and it seems that the concordance can only be improved by adjusting MOND's $\\mu(x)$ function in an {\\it ad-hoc} way).\n\n\\section{The running of $G$}\n\n\nThe $\\beta$-function for the gravitational coupling parameter $G$ has been discussed in the framework of different approaches to Quantum Gravity and Quantum Field Theory in curved space-time. In \\cite{Rodrigues:2009vf} we followed the derivation used previously in \\cite{Shapiro:2004ch}. It was argued in \\cite{Shapiro:2004ch} (and recently in more detail in \\cite{Farina:2011me}) that if $G$ does not behave as a constant in the far IR limit, then the logarithmic running of $G$ is a direct consequence of covariance and must hold in all loop orders. 
Since a direct derivation of the physical running of $G$ is not available, it is worthwhile to explore the possibility of a logarithmically running $G$ at the phenomenological level.\n\n\nConsider the following infrared $\\beta$-function for General Relativity,\n\\begin{equation}\n\t \\beta_{G^{-1}} \\equiv \\mu \\frac{dG^{-1}}{d \\mu} = 2 \\nu \\, \\frac{M_{\\mbox{\\tiny Planck}}^{2}}{c \\, \\hbar} = 2 \\nu G_0^{-1}.\n\t\\label{betaG}\n\\end{equation}\n\nEquation (\\ref{betaG}) leads to the logarithmically varying $G(\\mu)$ function,\n\\begin{equation}\n\t\\label{gmu}\n\tG(\\mu) = \\frac {G_0}{ 1 + \\nu \\,\\ln (\\mu^2\/\\mu_0^2)},\n\\end{equation}\nwhere $\\mu_0$ is a reference scale introduced such that $G(\\mu_0) =G_0 $. The constant $G_0$ is the gravitational constant as measured in the Solar System (actually, there is no need to be very precise on where $G$ assumes the value of $G_0$, due to the smallness of the variation of $G$). The dimensionless constant $\\nu$ is a phenomenological parameter which depends on the details of the quantum theory leading to eq. (\\ref{gmu}). Since we have no means to compute the latter from first principles, its value should be fixed from observations. It will be shown that even a very small $\\nu$ can lead to observational consequences at galactic scales.\n\n\nThe action for this model is simply the Einstein-Hilbert one in which $G$ appears inside the integral, namely,\n\\begin{equation}\n\tS_{\\mbox{\\tiny RGGR}}[g] = \\frac {c^3}{16 \\pi }\\int \\frac {R } G \\, \\sqrt{-g} \\, d^4x.\n\t\\label{rggraction}\n\\end{equation}\nIn the above, $G$ should be understood as an external scalar field that satisfies (\\ref{gmu}). Since for the problem of galaxy rotation curves the cosmological constant effects are negligible, we have not written the $\\Lambda$ term above. 
Nevertheless, for a complete cosmological picture, $\\Lambda$ is necessary and it also runs covariantly with the RG flow of $G$ (see e.g.,\\cite{Shapiro:2004ch}). \n\nThere is a simple procedure to map solutions from the Einstein equations with the gravitational constant $G_0$ into RGGR solutions. One need not follow this route: one may find all the dynamics from the RGGR equations of motion, which can be found by a direct variation of the action (\\ref{rggraction}) with respect to the metric, leading to equations of motion that have the same form as those of a scalar-tensor gravity\\footnote{We stress that it is only the form, since RGGR is not a type of scalar-tensor gravity, and $G$ is not a fundamental field of the model.}. In this review, we will proceed to find RGGR solutions via a conformal transformation of the Einstein-Hilbert action, and to this end first we write \n\\begin{equation}\n\tG = G_0 + \\delta G,\n\\end{equation} \nand we assume $\\delta G \/ G_0 \\ll 1$, which will be justified later. Introducing the conformally related metric\n\\begin{equation}\n\t\\bar g_{\\mu \\nu} \\equiv \\frac {G_0}{G} g_{\\mu \\nu}, \n\t\\label{ct}\n\\end{equation}\nthe RGGR action can be written as\n\\begin{equation}\n\tS_{\\mbox{\\tiny RGGR}}[g] = S_{\\mbox{\\tiny EH}}[\\bar g] + O(\\delta G^2),\n\\end{equation}\nwhere $S_{\\mbox{\\tiny EH}}$ is the Einstein-Hilbert action with $G_0$ as the gravitational constant. The above suggests that the RGGR solutions can be generated from the solutions of the Einstein equations via the conformal transformation (\\ref{ct}). Indeed, within a good approximation, one can check that this relation persists when comparing the RGGR equations of motion to the Einstein equations even in the presence of matter \\cite{Rodrigues:2009vf}.\n\nIn the context of rotation curves of galaxies, standard General Relativity gives essentially the same predictions as Newtonian gravity. 
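As a numerical aside (ours, with purely illustrative values of $\nu$ and $\mu\/\mu_0$, not fitted quantities), the running (\ref{gmu}) indeed keeps $\delta G\/G_0$ tiny, consistent with the expansion above:

```python
import math

G0 = 6.674e-11       # m^3 kg^-1 s^-2 (Solar System value)
nu = 1e-7            # illustrative magnitude of the dimensionless parameter nu
mu_ratio = 1.0e3     # illustrative ratio mu/mu_0

# eq. (gmu): G(mu) = G0 / (1 + nu * ln(mu^2/mu_0^2))
G = G0 / (1.0 + nu * math.log(mu_ratio**2))
delta = (G - G0) / G0        # relative variation delta G / G0

assert delta < 0             # G decreases for mu > mu_0 when nu > 0
assert abs(delta) < 1e-5     # |delta G / G0| << 1, as assumed
# to first order, delta G / G0 ~ -nu * ln(mu^2/mu_0^2)
assert math.isclose(delta, -nu * math.log(mu_ratio**2), rel_tol=1e-3)
```

Even a change of $\mu$ by several orders of magnitude only shifts $G$ at the $10^{-6}$ level for $\nu\sim 10^{-7}$, which is why $\delta G\/G_0 \ll 1$ is safe throughout.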
The Newtonian potential $ \\Phi_{\\mbox{\\tiny Newt}}$ is related to the metric by \n\\begin{equation}\n\t\\bar g_{00} = - \\left ( 1 + \\frac {2 \\Phi_{\\mbox{\\tiny Newt}}}{c^2} \\right ).\n\\end{equation}\nHence, using eq. (\\ref{ct}), the effective RGGR potential $\\Phi$ in the non-relativistic limit is given by\n\\begin{equation}\n\t\\Phi = \\Phi_{\\mbox{\\tiny Newt}} + \\frac {c^2}2 \\frac{\\delta G}{G_0}.\n\\end{equation}\nAn equivalent result can also be found from the evaluation of test particle geodesics \\cite{Rodrigues:2009vf}. In the context of weak gravitational fields $\\Phi_{\\mbox{\\tiny Newt}}\/ c^2 \\ll 1$ (with $\\Phi_{\\mbox{\\tiny Newt}} = 0$ at spatial infinity) holds, and hence the term $c^2 \\delta G\/(2G_0)$, although small, can be comparable to $\\Phi_{\\mbox{\\tiny Newt}}$ and should not be neglected.\n\nIn order to derive a test particle acceleration, we have to specify the proper energy scale $\\mu$ for the problem setting in question, which is a time-independent gravitational phenomenon in the weak field limit. This is a recent area of exploration for renormalization group applications, where the usual procedures for high energy scattering of particles cannot be applied straightforwardly. Prior to \\cite{Rodrigues:2009vf}, the identification $\\mu \\propto 1\/r$, where $r$ is the distance from a massive point, was repeatedly used, e.g. \\cite{Reuter:2004nv,Dalvit:1994gf,Bertolami:1993mh,Goldman:1992qs, Shapiro:2004ch}. This identification adds a constant velocity proportional to $\\nu$ to any rotation curve. Although this was pointed out as an advantage due to the generation of ``flat rotation curves'' for galaxies, it introduced difficulties with the Tully-Fisher law, the Newtonian limit, and the behavior of the galaxy rotation curve close to the galactic center, since there the behavior is closer to the expected one without dark matter. In \\cite{Rodrigues:2009vf} we introduced a $\\mu$ identification that seems better justified both from the theoretical and observational points of view. 
The characteristic weak-field gravitational energy does not come from the geometric scaling $1\/r$, but from the Newtonian potential $\\Phi_{\\mbox{\\tiny Newt}}$. However, the direct relation $ \\mu \\propto \\Phi_{\\mbox{\\tiny Newt}}$ leads to $\\mu \\propto 1\/r$ in the large $r$ limit, which is unsatisfactory on observational grounds (a poor Newtonian limit and poor correspondence with the Tully-Fisher law). One way to recover the Newtonian limit is to impose a suitable cut-off, but this does not solve the Tully-Fisher issues \\cite{Shapiro:2004ch}. Another is to use \\cite{Rodrigues:2009vf}\n\\begin{equation}\n\t\\frac {\\mu}{\\mu_0} =\\left( \\frac{\\Phi_{\\mbox{\\tiny Newt}}}{\\Phi_0} \\right)^\\alpha,\n\t\\label{muphi}\n\\end{equation}\nwhere $\\Phi_0$ and $\\alpha$ are constants. Apart from the condition $0 < \\Phi_0 < c^2$ (i.e., essentially $\\Phi_0$ is a reference Newtonian potential), the precise value of $\\Phi_0$ is largely irrelevant for the problem of rotation curves. The relevant parameter is $\\alpha$. It is a phenomenological parameter that depends on the mass of the system, and it must go to zero when the mass of the system goes to zero. This is necessary to have a good Newtonian limit. From the Tully-Fisher law, it is expected to increase monotonically with the increase of the mass. Such behavior is indeed found from the galaxy fits done in \\cite{Rodrigues:2009vf}. In a recent paper, an upper bound on $\\nu \\alpha$ in the Solar System was derived \\cite{Farina:2011me}. In galaxy systems, $\\nu \\alpha|_{\\mbox{\\tiny Galaxy}} \\sim 10^{-7}$, while for the Solar System, whose mass is about $10^{-10}$ of that of a galaxy, $\\nu \\alpha|_{\\mbox{\\tiny Solar System}} \\lesssim 10^{-17}$. 
This shows that a linear decrease of $\\alpha$ with the mass is sufficient to satisfy both the current upper bound from the Solar System and the results from galaxies.\n\nWe also point out that the above energy scale setting (\\ref{muphi}) was recently re-obtained from a more theoretical perspective \\cite{Domazet:2010bk}.\n\nOnce the $\\mu$ identification is set, it is straightforward to find the rotation velocity for a static gravitational system sustained by its centripetal acceleration,\n\\begin{equation}\n\tV^2_{\\mbox{\\tiny RGGR}} \\approx V^2_{\\mbox{\\tiny Newt}} \\left ( 1 - \\frac {\\nu \\, \\alpha \\, c^2} {\\Phi_{\\mbox{\\tiny Newt}}} \\right ).\n\t\\label{v2rggr}\n\\end{equation}\n\nContrary to Newtonian gravity, the value of the Newtonian potential at a given point does play a significant role in this approach. This sounds odd from the perspective of Newtonian gravity, but not from the General Relativity viewpoint, since the latter has no free zero point of energy. In particular, the Schwarzschild solution is not invariant under a constant shift of the potential.\n\nIn the following, we comment on the effect of the relation (\\ref{v2rggr}) on galaxy rotation curves, first from a more general perspective, and then through the modeling of individual galaxies.\n\n\\section{Galaxy rotation curves}\n\nBefore proceeding to the modeling of specific galaxy rotation curves, it is instructive to analyze general features of the relation (\\ref{v2rggr}), and to compare it to the standard approach. In \\cite{Rodrigues:2009vf} we analyzed some general aspects and scaling laws of the RGGR model, with no dark matter, in comparison to the isothermal profile; both of them, at this step, without gas and with an exponential stellar disk. 
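To make the general behavior of the relation (\\ref{v2rggr}) concrete, the following Python sketch (a toy illustration, not part of the fitting pipeline of \\cite{Rodrigues:2009vf}) evaluates the RGGR circular velocity for a hypothetical point mass, for which $\\Phi_{\\mbox{\\tiny Newt}} = -GM\/r$; the correction then contributes an asymptotically constant term $V^2_\\infty = \\nu \\alpha c^2$, i.e., a flat curve at large radii. The mass and radius values are assumptions chosen only for illustration.

```python
import math

# Toy evaluation of eq. (v2rggr) for a hypothetical point mass M:
# Phi_Newt = -G M / r and V^2_Newt = G M / r, so
# V^2_RGGR = V^2_Newt (1 - nu alpha c^2 / Phi_Newt) = G M / r + nu alpha c^2,
# an asymptotically flat rotation curve with V_inf = sqrt(nu alpha) c.
G = 4.302e-6        # gravitational constant in kpc (km/s)^2 / Msun
c = 2.998e5         # speed of light in km/s
M = 1.0e10          # assumed point mass in Msun (illustrative only)
nu_alpha = 1.66e-7  # of the order of the galaxy best-fit values

def v_rggr(r_kpc):
    phi_newt = -G * M / r_kpc   # Newtonian potential, zero at spatial infinity
    v2_newt = G * M / r_kpc     # Newtonian circular velocity squared
    return math.sqrt(v2_newt * (1.0 - nu_alpha * c**2 / phi_newt))

v_inf = math.sqrt(nu_alpha) * c  # asymptotic velocity, roughly 120 km/s here
print(v_inf, v_rggr(300.0))      # at large radii the curve flattens to ~v_inf
```

For an extended baryonic distribution, eq. (\\ref{v2rggr}) is instead evaluated with the full Newtonian potential of the baryonic matter.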
In particular, it was pointed out that the RGGR rotation curves have a reasonable shape to fit galaxies (i.e., no clear problems such as increasing or decreasing too fast, or oscillations), and that they effectively behave similarly to cored dark matter profiles at inner radii, whose effective core radius scales with the galaxy disk scale length. Further details can be found in our paper.\n\nWe have also extended the previous analysis by adding a gas-like contribution (a re-scaled version of the NGC 3198 gaseous part). In particular, this numerically evaluates how the model behaves in the presence of density perturbations at large radii. The first plot of fig. (\\ref{mus}) displays the result for RGGR, which is remarkably good (a similar plot can be found in our original paper), while the other plots in fig. (\\ref{mus}) (presented at the Conference, but not in \\cite{Rodrigues:2009vf}) show the results for the same mass distribution but different choices for the energy scale\\footnote{These other choices are also unsatisfactory from the theoretical perspective, since they have no direct relation to the local energy of the gravitational field in the weak field regime ($\\Phi_{\\mbox{\\tiny Newt}} \\ll c^2 $). } $\\mu$. \n\n\\begin{figure}[ht]\n\t \\includegraphics[width=150mm]{IC2Cosmo3mus2.pdf}\n\\caption{The additional circular velocity squared induced by different choices of $\\mu$. The first plot refers to RGGR, the two others to different identifications of $\\mu$: one depends on the Newtonian acceleration (a variation inspired by MOND) and the other on the (baryonic) matter density. All the plots display the additional squared velocity of each model divided by $V^2_\\infty \\equiv \\nu \\alpha c^2$, as a function of $R\/R_D$, where $R$ is the radial cylindrical coordinate and $R_D$ is the stellar disk scale length. 
Black lines depict the additional velocity due to a pure exponential stellar disk, while the gray solid lines take into account the gas mass $M_{\\mbox{gas}}$ for different values of $f \\equiv M_{\\mbox{gas}}\/M_{\\mbox{stars}}$, with $f = 0.2, 0.7, 1.2,..., 9.7$ (i.e., the black lines stand for $f=0$). See \\cite{Rodrigues:2009vf} for further details.}\n\\label{mus}\n\\end{figure}\n\nFrom fig. (\\ref{mus}), the two other proposals, different from RGGR, are seen to be unsuited as replacements for dark matter. In particular, both are too sensitive to the presence of gas, and both eventually add negative contributions to the total circular velocity at large radii.\n\nIn \\cite{Rodrigues:2009vf} we used a sample of nine high quality and regular rotation curves of disk galaxies from \\cite{2008AJ....136.2648D, Gentile:2004tb}. In figs. (\\ref{ngc2403}, \\ref{masstolightplot}) we show one of our results (see \\cite{Rodrigues:2009vf} for the complete set and further details) in comparison to the results of three other models: a cored dark matter profile (Isothermal profile), Modified Newtonian Dynamics (MOND) and the recently proposed Scalar-Tensor-Vector Gravity (STVG). \n\n\\begin{figure}[ht]\n\t \\includegraphics[width=100mm]{NGC2403PlotsIC2Models.pdf}\n\\caption{NGC 2403 rotation curve fits. The red dots and their error bars are the rotation curve observational data; the gray ones close to the abscissa are the residuals of the fit. The solid black line for each model is its best fit rotation curve, the dashed yellow curves are the stellar rotation curves from the bulge and disk components, the dotted purple curve is the gas rotation curve, and the dot-dashed green curve is the resulting Newtonian (no dark matter) rotation curve. }\n\\label{ngc2403}\n\\end{figure}\n\n\\begin{figure}[t]\n\t \\includegraphics[width=100mm]{ColorVsYIC2Models}\n\\caption{Stellar disk mass-to-light ratio ($Y_{*D}$) in the $3.6 \\mu m$ band as a function of the color $J - K$. 
Each galactic disk is represented above by an open circle, with a {\\it reference} error bar of 50$\\%$ of the $Y_{*D}$ value. The black open squares display the $Y_{*D}$ values and their associated 1$\\sigma$ errors for each galaxy as inferred from the rotation curve fits for each model. The highlighted square and circle correspond to the NGC 2403 galactic disk mass-to-light ratios. See \\cite{Rodrigues:2009vf} for further details.}\n\\label{masstolightplot}\n\\end{figure}\n\nDue to the considerably large uncertainty in the total stellar mass of each stellar component (disk and bulge), we first use the total stellar mass as a free parameter for the fits (obtained from a $\\chi^2$ minimization that takes the errors into account). At a second stage, we compare the resulting value with stellar population expectations, following the standard approach.\n\nRegarding the free parameters of each model, we remark that besides the total stellar mass, the Isothermal profile has two additional free parameters, the RGGR model has a single free parameter ($\\alpha$), while MOND and STVG have no free parameters that can vary from galaxy to galaxy. On the other hand, both of the latter depend on constants whose values are calibrated to their best fit in a large sample of galaxies. We remark that the $\\nu$ parameter in RGGR cannot vary from galaxy to galaxy, but $\\alpha $ can, and galaxy rotation curves are sensitive to the combination $\\nu \\alpha$, whose value is of the order of $10^{-7}$. The best fit for NGC 2403 yields $ \\nu \\alpha = (1.66 \\pm0.01) \\times 10^{-7}$.\n\n\\section{Conclusions}\n\nWe presented a model, motivated by renormalization group corrections to the Einstein-Hilbert action, that introduces small inhomogeneities in the gravitational coupling across a galaxy (of about 1 part in $10^7$) and can generate galaxy rotation curves in agreement with the observational data, without the introduction of dark matter as a new kind of matter. 
Both High and Low Surface Brightness galaxies were tested \\cite{Rodrigues:2009vf}. Considering the samples of galaxies evaluated in \\cite{Rodrigues:2009vf}, the quality of the RGGR rotation curves, together with the corresponding mass-to-light ratios, is about the same as that of the Isothermal profile, but with one less free parameter. We expect that similar results would hold in regard to other cored dark matter profiles, while our results seem better than those achieved by the NFW profile \\cite{Rodrigues:2009vf}. We also compared the results of our model with MOND and STVG, and at face value our model yielded clearly better results. \n\nOur results can be seen as a next step compared to the previous models motivated by renormalization group effects in gravity, e.g. \\cite{Shapiro:2004ch, Reuter:2007de}. Their original analyses could only yield a rough estimate of the galaxy rotation curves, since they were restricted to modeling a galaxy as a single point. Extending this approach to real galaxies, we have shown that the proper scale for the renormalization group phenomenology is not of a geometric type, like the inverse of the distance, but is related to the Newtonian potential with null boundary condition at infinity. \n\nThe essential ingredient of the RGGR rotation curve fits is the formula (\\ref{v2rggr}), which by itself provides a very efficient description of galaxy rotation curves. \n\nThere are several tests and implications of this model yet to be evaluated. In particular, we are working on applying the RGGR framework to a larger sample of galaxies (including elliptical galaxies) \\cite{FabrisGalaxies} and to galaxy-galaxy strong lensing \\cite{rodrigueslentes}. Related work on CMB, BAO and LSS, in search of a new cosmological concordance model, is also in progress \\cite{toribioCMB}. 
\n\n\n\n\\bigskip\n\n\\noindent\n{ \\it \\bf Acknowledgements}\n\n\\noindent\nDCR thanks FAPESP and PRPPG-UFES for partial financial support. PSL thanks CNPq and FAPESP for partial financial support. The work of I.Sh. was partially supported by CNPq, FAPEMIG, FAPES and ICTP. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper we establish the structured ring spectra analogs of Goodwillie's widely exploited and powerful cubical diagram results \\cite{Goodwillie_calc2} for spaces. These cubical diagram results are a key ingredient in the authors' homotopic descent results \\cite{Ching_Harper} on a structured ring spectra analog of Quillen-Sullivan theory \\cite{Quillen_rational, Sullivan_MIT_notes, Sullivan_genetics}. They also establish an important part of the foundations for the theory of Goodwillie calculus in the context of structured ring spectra; see, for instance, Arone and Ching \\cite{Arone_Ching}, Bauer, Johnson, and McCarthy \\cite{Bauer_Johnson_McCarthy}, Ching \\cite{Ching_duality}, Harper and Hess \\cite[1.14]{Harper_Hess}, Kuhn \\cite{Kuhn_survey}, and Pereira \\cite{Pereira_general_context, Pereira_spectral_operad}. For example, it follows from our results that the identity functor on a category of structured ring spectra is analytic in the sense of Goodwillie \\cite{Goodwillie_calc2}.\n\n\\begin{assumption}\n\\label{assumption:commutative_ring_spectrum}\nFrom now on in this paper, we assume that ${ \\mathcal{R} }$ is any commutative ring spectrum; i.e., we assume that ${ \\mathcal{R} }$ is any commutative monoid object in the category $({ \\mathsf{Sp}^\\Sigma },{ \\otimes }_S,S)$ of symmetric spectra \\cite{Hovey_Shipley_Smith, Schwede_book_project}. 
We work mostly in the category of ${ \\mathcal{R} }$-modules which we denote by ${ \\mathsf{Mod}_{ \\mathcal{R} } }$.\n\\end{assumption}\n\n\\begin{rem}\nOur results apply to many different types of algebraic structures on spectra including (i) associative ring spectra, which we simply call ring spectra, (ii) commutative ring spectra, (iii) all of the $E_n$ ring spectra for $1\\leq n\\leq \\infty$ that interpolate between these two extremes of non-commutativity and commutativity. These structures, and many others, are examples of algebras over operads. We therefore work in the following general context: throughout this paper, ${ \\mathcal{O} }$ is an operad in the category of ${ \\mathcal{R} }$-modules (unless otherwise stated), ${ \\Alg_{ \\mathcal{O} } }$ is the category of ${ \\mathcal{O} }$-algebras, and ${ \\Lt_{ \\mathcal{O} } }$ is the category of left ${ \\mathcal{O} }$-modules. \n\nWhile ${ \\mathcal{O} }$-algebras are the main objects of interest for most readers, our results also apply in the more general case of left modules over the operad ${ \\mathcal{O} }$; that generalization will be needed elsewhere.\n\\end{rem}\n\n\\begin{rem}\nIn this paper, we say that a symmetric sequence $X$ of ${ \\mathcal{R} }$-modules is $n$-connected if each ${ \\mathcal{R} }$-module $X[\\mathbf{t}]$ is an $n$-connected spectrum. We say that an algebra (resp. left module) over an operad is $n$-connected if the underlying ${ \\mathcal{R} }$-module (resp. symmetric sequence of ${ \\mathcal{R} }$-modules) is $n$-connected, and similarly for operads. Similarly, we say that a map $X { \\rightarrow } Y$ of symmetric sequences is $n$-connected if each map $X[\\mathbf{t}] { \\rightarrow } Y[\\mathbf{t}]$ is an $n$-connected map of ${ \\mathcal{R} }$-modules, and a map of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules) is $n$-connected if the underlying map of spectra (resp. 
symmetric sequences) is $n$-connected.\n\\end{rem}\n\nThe main results of this paper are Theorems~\\ref{thm:higher_blakers_massey} and \\ref{thm:higher_dual_blakers_massey}, which are the analogs of Goodwillie's higher Blakers-Massey theorems \\cite[2.4 and 2.6]{Goodwillie_calc2}. These results include various interesting special cases which we now highlight.\n\nOne such case is given by the homotopy excision result of Theorem~\\ref{thm:homotopy_excision} below. Goerss and Hopkins \\cite[2.3.13]{Goerss_Hopkins_moduli} prove a closely related homotopy excision result in the special case of simplicial algebras over an $E_\\infty$ operad, and remark that it is true more generally for any simplicial operad \\cite[2.3.14]{Goerss_Hopkins_moduli}. In a closely related setting, Baues \\cite[I.C.4]{Baues_combinatorial} proves a homotopy excision result in an algebraic setting that includes simplicial associative algebras, and closely related is a result of Schwede \\cite[3.6]{Schwede_algebraic} that is very nearly a homotopy excision result in the context of algebras over a simplicial theory. Our result also recovers Dugger and Shipley's \\cite[2.3]{Dugger_Shipley} homotopy excision result for associative ring spectra as a very special case.\n\n\\begin{thm}[Homotopy excision for structured ring spectra]\n\\label{thm:homotopy_excision}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. Let ${ \\mathcal{X} }$ be a homotopy pushout square of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules) of the form\n\\begin{align*}\n\\xymatrix{\n { \\mathcal{X} }_\\emptyset\\ar[r]\\ar[d] & { \\mathcal{X} }_{\\{1\\}}\\ar[d]\\\\\n { \\mathcal{X} }_{\\{2\\}}\\ar[r] & { \\mathcal{X} }_{\\{1,2\\}}\n}\n\\end{align*}\nAssume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Consider any $k_1,k_2\\geq -1$. 
If each ${ \\mathcal{X} }_\\emptyset{ \\rightarrow } { \\mathcal{X} }_{\\{i\\}}$ is $k_i$-connected ($i=1,2$), then\n\\begin{itemize}\n\\item[(a)] ${ \\mathcal{X} }$ is $l$-cocartesian in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$) with $l=k_1+k_2 +1$,\n\\item[(b)] ${ \\mathcal{X} }$ is $k$-cartesian with $k=k_1+k_2$.\n\\end{itemize}\n\\end{thm}\n\nRelaxing the assumption in Theorem~\\ref{thm:homotopy_excision} that ${ \\mathcal{X} }$ is a homotopy pushout square, we obtain the following result which is the direct analog for structured ring spectra of the original Blakers-Massey Theorem for spaces.\n\n\\begin{thm}[Blakers-Massey theorem for structured ring spectra]\n\\label{thm:blakers_massey}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. Let ${ \\mathcal{X} }$ be a commutative square of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules) of the form\n\\begin{align*}\n\\xymatrix{\n { \\mathcal{X} }_\\emptyset\\ar[r]\\ar[d] & { \\mathcal{X} }_{\\{1\\}}\\ar[d]\\\\\n { \\mathcal{X} }_{\\{2\\}}\\ar[r] & { \\mathcal{X} }_{\\{1,2\\}}\n}\n\\end{align*}\nAssume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Consider any $k_1,k_2\\geq -1$, and $k_{12}\\in{ \\mathbb{Z} }$. If each ${ \\mathcal{X} }_\\emptyset{ \\rightarrow } { \\mathcal{X} }_{\\{i\\}}$ is $k_i$-connected ($i=1,2$) and ${ \\mathcal{X} }$ is $k_{12}$-cocartesian, then ${ \\mathcal{X} }$ is $k$-cartesian, where $k$ is the minimum of $k_{12}-1$ and $k_{1}+k_{2}$.\n\\end{thm}\n\nThe following higher homotopy excision result lies at the heart of this paper. It can be thought of as a structured ring spectra analog of higher homotopy excision (see Goodwillie \\cite[2.3]{Goodwillie_calc2}) in the context of spaces. 
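The cartesian-ness estimates above are all partition minima of the kind appearing in Theorem~\\ref{thm:higher_blakers_massey} below: $k$ is the minimum of $-|W|+\\sum_{V\\in\\lambda}(k_V+1)$ over partitions $\\lambda$ of $W$ by nonempty sets. As a sanity check, the following Python sketch (a hypothetical helper, not part of the paper) computes this minimum for small cubes.

```python
from itertools import combinations

def partitions(s):
    # yield all partitions of the finite set s into nonempty blocks
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            block = frozenset((first,) + extra)
            remaining = [x for x in rest if x not in extra]
            for p in partitions(remaining):
                yield [block] + p

def cartesian_estimate(k, W):
    # minimum of -|W| + sum_{V in lambda} (k_V + 1) over partitions lambda of W
    return min(-len(W) + sum(k[V] + 1 for V in lam) for lam in partitions(W))

# For |W| = 2 this recovers min(k_{12} - 1, k_1 + k_2), the estimate of the
# Blakers-Massey theorem above:
k = {frozenset({1}): 1, frozenset({2}): 2, frozenset({1, 2}): 3}
print(cartesian_estimate(k, {1, 2}))  # min(3 - 1, 1 + 2)
```

For $|W|=3$ the same function reproduces the five-term minimum displayed after Theorem~\\ref{thm:higher_blakers_massey}.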
This result also implies that the identity functors for ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$ are $0$-analytic in the sense of \\cite[4.2]{Goodwillie_calc2}.\n\n\\begin{thm}[Higher homotopy excision for structured ring spectra]\n\\label{thm:higher_homotopy_excision}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a nonempty finite set. Let ${ \\mathcal{X} }$ be a strongly $\\infty$-cocartesian $W$-cube of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules). Assume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Let $k_i\\geq -1$ for each $i\\in W$. If each ${ \\mathcal{X} }_\\emptyset{ \\rightarrow }{ \\mathcal{X} }_{\\{i\\}}$ is $k_i$-connected ($i\\in W$), then\n\\begin{itemize}\n\\item[(a)] ${ \\mathcal{X} }$ is $l$-cocartesian in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$) with $l=|W|-1+\\sum_{i\\in W}k_i$,\n\\item[(b)] ${ \\mathcal{X} }$ is $k$-cartesian with $k=\\sum_{i\\in W}k_i$.\n\\end{itemize}\n\\end{thm}\n\nThe preceding results are all special cases of the following theorem which relaxes the assumption in Theorem \\ref{thm:higher_homotopy_excision} that ${ \\mathcal{X} }$ is strongly $\\infty$-cocartesian. This result is a structured ring spectra analog of Goodwillie's higher Blakers-Massey theorem for spaces \\cite[2.4]{Goodwillie_calc2}. \n\n\\begin{thm}[Higher Blakers-Massey theorem for structured ring spectra]\n\\label{thm:higher_blakers_massey}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a nonempty finite set. Let ${ \\mathcal{X} }$ be a $W$-cube of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules). 
Assume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected, and suppose that\n\\begin{itemize}\n\\item[(i)] for each nonempty subset $V\\subset W$, the $V$-cube $\\partial_\\emptyset^V{ \\mathcal{X} }$ (formed by all maps in ${ \\mathcal{X} }$ between ${ \\mathcal{X} }_\\emptyset$ and ${ \\mathcal{X} }_V$) is $k_V$-cocartesian,\n\\item[(ii)] $-1\\leq k_{U}\\leq k_V$ for each $U\\subset V$.\n\\end{itemize}\nThen ${ \\mathcal{X} }$ is $k$-cartesian, where $k$ is the minimum of $-|W|+\\sum_{V\\in\\lambda}(k_V+1)$ over all partitions $\\lambda$ of $W$ by nonempty sets.\n\\end{thm}\n\nFor instance, when $W=\\{1,2,3\\}$, $k$ is the minimum of\n\\begin{align*}\n k_{\\{1,2,3\\}}-2,\\quad\n &k_{\\{1,2\\}}+k_{\\{3\\}}-1,\\\\\n &k_{\\{1,3\\}}+k_{\\{2\\}}-1,\\\\\n &k_{\\{2,3\\}}+k_{\\{1\\}}-1,\\quad\n k_{\\{1\\}}+k_{\\{2\\}}+k_{\\{3\\}}.\n\\end{align*}\n\nOur other results are dual versions of Theorems~\\ref{thm:homotopy_excision}, \\ref{thm:blakers_massey}, \\ref{thm:higher_homotopy_excision} and \\ref{thm:higher_blakers_massey}.\n\n\\begin{thm}[Dual homotopy excision for structured ring spectra]\n\\label{thm:dual_homotopy_excision}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. Let ${ \\mathcal{X} }$ be a homotopy pullback square of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules) of the form\n\\begin{align*}\n\\xymatrix{\n { \\mathcal{X} }_\\emptyset\\ar[r]\\ar[d] & { \\mathcal{X} }_{\\{1\\}}\\ar[d]\\\\\n { \\mathcal{X} }_{\\{2\\}}\\ar[r] & { \\mathcal{X} }_{\\{1,2\\}}\n}\n\\end{align*}\nAssume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Consider any $k_1,k_2\\geq -1$. 
If ${ \\mathcal{X} }_{\\{2\\}}{ \\rightarrow } { \\mathcal{X} }_{\\{1,2\\}}$ is $k_1$-connected and ${ \\mathcal{X} }_{\\{1\\}}{ \\rightarrow } { \\mathcal{X} }_{\\{1,2\\}}$ is $k_2$-connected, then ${ \\mathcal{X} }$ is $k$-cocartesian with $k=k_{1}+k_{2}+2$.\n\\end{thm}\n\nThe following result relaxes the assumption that ${ \\mathcal{X} }$ is a homotopy pullback square.\n\n\\begin{thm}[Dual Blakers-Massey theorem for structured ring spectra]\n\\label{thm:dual_blakers_massey}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. Let ${ \\mathcal{X} }$ be a commutative square of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules) of the form\n\\begin{align*}\n\\xymatrix{\n { \\mathcal{X} }_\\emptyset\\ar[r]\\ar[d] & { \\mathcal{X} }_{\\{1\\}}\\ar[d]\\\\\n { \\mathcal{X} }_{\\{2\\}}\\ar[r] & { \\mathcal{X} }_{\\{1,2\\}}\n}\n\\end{align*}\nAssume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Consider any $k_1,k_2,k_{12}\\geq -1$ with $k_1\\leq k_{12}$ and $k_2\\leq k_{12}$. If ${ \\mathcal{X} }_{\\{2\\}}{ \\rightarrow } { \\mathcal{X} }_{\\{1,2\\}}$ is $k_1$-connected, ${ \\mathcal{X} }_{\\{1\\}}{ \\rightarrow } { \\mathcal{X} }_{\\{1,2\\}}$ is $k_2$-connected, and ${ \\mathcal{X} }$ is $k_{12}$-cartesian, then ${ \\mathcal{X} }$ is $k$-cocartesian, where $k$ is the minimum of $k_{12}+1$ and $k_{1}+k_{2}+2$.\n\\end{thm}\n\n\\begin{thm}[Higher dual homotopy excision for structured ring spectra]\n\\label{thm:higher_dual_homotopy_excision}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a finite set with $|W|\\geq 2$. Let ${ \\mathcal{X} }$ be a strongly $\\infty$-cartesian $W$-cube of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules). Assume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected. Let $k_i\\geq -1$ for each $i\\in W$. 
If each ${ \\mathcal{X} }_{W-\\{i\\}}{ \\rightarrow }{ \\mathcal{X} }_W$ is $k_i$-connected ($i\\in W$), then ${ \\mathcal{X} }$ is $k$-cocartesian with $k=|W|+\\sum_{i\\in W}k_i$.\n\\end{thm}\n\nThe last three results are all special cases of the following theorem which is a structured ring spectra analog of Goodwillie's higher dual Blakers-Massey theorem for spaces \\cite[2.6]{Goodwillie_calc2}. This specializes to the higher dual homotopy excision result (Theorem \\ref{thm:higher_dual_homotopy_excision}) in the special case that ${ \\mathcal{X} }$ is strongly $\\infty$-cartesian, and to Theorem~\\ref{thm:dual_blakers_massey} in the case $|W| = 2$.\n\n\\begin{thm}[Higher dual Blakers-Massey theorem for structured ring spectra]\n\\label{thm:higher_dual_blakers_massey}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a nonempty finite set. Let ${ \\mathcal{X} }$ be a $W$-cube of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules). Assume that ${ \\mathcal{R} },{ \\mathcal{O} },{ \\mathcal{X} }_\\emptyset$ are $(-1)$-connected, and suppose that\n\\begin{itemize}\n\\item[(i)] for each nonempty subset $V\\subset W$, the $V$-cube $\\partial_{W-V}^W{ \\mathcal{X} }$ (formed by all maps in ${ \\mathcal{X} }$ between ${ \\mathcal{X} }_{W-V}$ and ${ \\mathcal{X} }_W$) is $k_V$-cartesian,\n\\item[(ii)] $-1\\leq k_{U}\\leq k_V$ for each $U\\subset V$.\n\\end{itemize}\nThen ${ \\mathcal{X} }$ is $k$-cocartesian, where $k$ is the minimum of $k_W+|W|-1$ and $|W|+\\sum_{V\\in\\lambda}k_V$ over all partitions $\\lambda$ of $W$ by nonempty sets not equal to $W$.\n\\end{thm}\n\n\n\\subsection{Organization of the paper}\n\nIn Section \\ref{sec:preliminaries} we recall some preliminaries on algebras and modules over operads. In Section \\ref{sec:cubical_diagrams} we prove our main results. 
Much of the work is concerned with proving higher homotopy excision (Theorem~\\ref{thm:higher_homotopy_excision}) which we obtain as a special case of a more general result, Theorem \\ref{thm:pushout_cofibration_cube_homotopical_analysis}. We then use an induction argument due to Goodwillie to pass from this to the higher Blakers-Massey result (Theorem~\\ref{thm:higher_blakers_massey}). We can then use higher Blakers-Massey to deduce, first, higher dual homotopy excision (Theorem~\\ref{thm:higher_dual_homotopy_excision}) and then higher dual Blakers-Massey (Theorem~\\ref{thm:higher_dual_blakers_massey}). Finally, in Section \\ref{sec:chain_complexes_over_a_commutative_ring}, we observe that the analogs of the main theorems stated above remain true in the context of unbounded chain complexes over a commutative ring.\n\n\n\\subsection*{Acknowledgments}\n\nThe second author would like to thank Greg Arone, Kristine Bauer, Bjorn Dundas, Bill Dwyer, Brenda Johnson, Nick Kuhn, Ib Madsen, Jim McClure, and Donald Yau for useful remarks. The second author is grateful to Dmitri Pavlov and Jakob Scholbach for helpful comments that directly led to \\cite{Harper_Spectra_Correction}, and to Mark Behrens and Haynes Miller for a stimulating and enjoyable visit to the Department of Mathematics at the Massachusetts Institute of Technology in summer 2011, and for their invitation which made this possible. The first author was partially supported by National Science Foundation Grant DMS-1144149.\n\n\n\\section{Preliminaries}\n\\label{sec:preliminaries}\n\nThe purpose of this section is to recall various preliminaries on algebras and left modules over operads needed in this paper. Define the sets $\\mathbf{n}:=\\{1,\\dots,n\\}$ for each $n\\geq 0$, where $\\mathbf{0}:=\\emptyset$ denotes the empty set. If $W$ is a finite set, we denote by $|W|$ the number of elements in $W$. 
For a more detailed development of the material in this section, see \\cite{Harper_Spectra, Harper_Modules}.\n\n\\begin{defn}\n\\label{defn:symmetric_sequences}\nLet ${ \\mathsf{M} }$ be a category and $n\\geq 0$.\n\\begin{itemize}\n\\item $\\Sigma$ is the category of finite sets and their bijections.\n\\item $({ \\mathsf{Mod}_{ \\mathcal{R} } },{ \\,\\wedge\\, },{ \\mathcal{R} })$ is the closed symmetric monoidal category of ${ \\mathcal{R} }$-modules.\n\\item A \\emph{symmetric sequence} in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{M} }$) is a functor $\\functor{A}{\\Sigma^{{ \\mathrm{op} }}}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}$ (resp. $\\functor{A}{\\Sigma^{{ \\mathrm{op} }}}{{ \\mathsf{M} }}$). Denote by ${ \\mathsf{SymSeq} }$ the category of symmetric sequences in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ and their natural transformations.\n\\item A symmetric sequence $A$ is \\emph{concentrated at $n$} if $A[\\mathbf{r}]=\\emptyset$ for all $r\\neq n$.\n\\end{itemize}\n\\end{defn}\n\n\\begin{defn} Let $A_1,\\dotsc,A_t\\in{ \\mathsf{SymSeq} }$. 
Their \\emph{tensor product} $A_1{ \\check{\\tensor} }\\dotsb{ \\check{\\tensor} } A_t\\in{ \\mathsf{SymSeq} }$ is the left Kan extension of objectwise smash along coproduct of sets\n\\begin{align*}\n\\xymatrix{\n (\\Sigma^{{ \\mathrm{op} }})^{\\times t}\n \\ar[rr]^-{A_1\\times\\dotsb\\times A_t}\\ar[d]^{\\coprod} & &\n ({ \\mathsf{Mod}_{ \\mathcal{R} } })^{\\times t}\\ar[r]^-{{ \\,\\wedge\\, }} & { \\mathsf{Mod}_{ \\mathcal{R} } } \\\\\n \\Sigma^{{ \\mathrm{op} }}\\ar[rrr]^{A_1{ \\check{\\tensor} }\\dotsb{ \\check{\\tensor} }\n A_t}_{\\text{left Kan extension}} & & & { \\mathsf{Mod}_{ \\mathcal{R} } }\n}\n\\end{align*}\n\\end{defn}\n\nIf $X$ is a finite set and $A$ is an object in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$, we use the usual dot notation $A\\cdot X$ (see Mac Lane \\cite{MacLane_categories} or \\cite[2.3]{Harper_Modules}) to denote the copower $A\\cdot X$ defined by\n$\n A\\cdot X := \\coprod_X A\n$,\nthe coproduct in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ of $|X|$ copies of $A$. Recall the following useful calculations for tensor products.\n\n\\begin{prop}\nLet $A_1,\\dotsc,A_t\\in{ \\mathsf{SymSeq} }$ and $R\\in\\Sigma$, with $r:=|R|$. 
There are natural isomorphisms\n\\begin{align}\n \\notag\n (A_1{ \\check{\\tensor} }\\dotsb{ \\check{\\tensor} } A_t)[R]&{ \\ \\cong \\ }\\\n \\coprod_{\\substack{\\function{\\pi}{R}{\\mathbf{t}}\\\\ \\text{in ${ \\mathsf{Set} }$}}}\n A_1[\\pi^{-1}(1)]{ \\,\\wedge\\, }\\dotsb{ \\,\\wedge\\, }\n A_t[\\pi^{-1}(t)],\\\\\n \\label{eq:tensor_check_calc}\n &{ \\ \\cong \\ }\n \\coprod_{r_1+\\dotsb +r_t=r}A_1[\\mathbf{r_1}]{ \\,\\wedge\\, }\\dotsb{ \\,\\wedge\\, }\n A_t[\\mathbf{r_t}]\\underset{{\\Sigma_{r_1}\\times\\dotsb\\times\n \\Sigma_{r_t}}}{\\cdot}\\Sigma_{r}\n\\end{align}\n\\end{prop}\n\nHere, ${ \\mathsf{Set} }$ is the category of sets and their maps, and \\eqref{eq:tensor_check_calc} displays the tensor product $(A_1{ \\check{\\tensor} }\\dotsb{ \\check{\\tensor} } A_t)[R]$ as a coproduct of $\\Sigma_{r_1}\\times\\dotsb\\times\\Sigma_{r_t}$-orbits. It will be conceptually useful to extend the definition of tensor powers $A^{{ \\check{\\tensor} } t}$ to situations in which the integers $t$ are replaced by a finite set $T$.\n\n\\begin{defn}\nLet $A\\in{ \\mathsf{SymSeq} }$ and $R,T\\in\\Sigma$. The \\emph{tensor powers} $A^{{ \\check{\\tensor} } T}\\in{ \\mathsf{SymSeq} }$ are defined objectwise by\n\\begin{align*}\n (A^{{ \\check{\\tensor} }\\emptyset})[R]:=\n \\coprod_{\\substack{\\function{\\pi}{R}{\\emptyset}\\\\ \\text{in ${ \\mathsf{Set} }$}}}\n S,\\quad\\quad\n &(A^{{ \\check{\\tensor} } T})[R]:=\n \\coprod_{\\substack{\\function{\\pi}{R}{T}\\\\ \\text{in ${ \\mathsf{Set} }$}}}\n \\bigwedge_{t\\in T} A[\\pi^{-1}(t)]\\quad(T\\neq\\emptyset).\n\\end{align*}\nNote that there are no functions $\\function{\\pi}{R}{\\emptyset}$ in ${ \\mathsf{Set} }$ unless $R=\\emptyset$. We will use the abbreviation $A^{{ \\check{\\tensor} } 0}:=A^{{ \\check{\\tensor} }\\emptyset}$.\n\\end{defn}\n\n\\begin{defn}\\label{defn:circle_product}\nLet $A,B,C\\in{ \\mathsf{SymSeq} }$, and $r,t\\geq 0$. 
The \\emph{circle product} (or composition product) $A\\circ B\\in{ \\mathsf{SymSeq} }$ is defined objectwise by the coend\n\\begin{align}\n \\label{eq:circle_product_calc}\n (A\\circ B)[\\mathbf{r}] := A{ \\,\\wedge\\, }_\\Sigma (B^{{ \\check{\\tensor} }-})[\\mathbf{r}]\n &{ \\ \\cong \\ }\n \\coprod_{t\\geq 0}A[\\mathbf{t}]{ \\,\\wedge\\, }_{\\Sigma_t}\n (B^{{ \\check{\\tensor} } t})[\\mathbf{r}].\n\\end{align}\n\\end{defn}\n\n\n\\begin{prop}\n\\label{prop:closed_monoidal_on_symmetric_sequences}\n\\\n\\begin{itemize}\n\\item [(a)] $({ \\mathsf{SymSeq} },{ \\check{\\tensor} },1)$ has the structure of a closed symmetric monoidal category with all small limits and colimits. The unit for ${ \\check{\\tensor} }$ denoted ``$1$'' is the symmetric sequence concentrated at $0$ with value ${ \\mathcal{R} }$.\n\\item [(b)] $({ \\mathsf{SymSeq} },\\circ,I)$ has the structure of a closed monoidal category with all small limits and colimits. The unit for $\\circ$ denoted ``$I$'' is the symmetric sequence concentrated at $1$ with value ${ \\mathcal{R} }$. Circle product is not symmetric.\n\\end{itemize}\n\\end{prop}\n\n\\begin{defn}\n\\label{defn:hat_construction_embed_at_zero}\nLet $Z\\in{ \\mathsf{Mod}_{ \\mathcal{R} } }$. Define $\\hat{Z}\\in{ \\mathsf{SymSeq} }$ to be the symmetric sequence concentrated at $0$ with value $Z$.\n\\end{defn}\n\nThe functor $\\function{\\hat{-}}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}{{ \\mathsf{SymSeq} }}$ fits into the adjunction\n\\begin{align*}\n\\xymatrix{\n { \\mathsf{Mod}_{ \\mathcal{R} } }\\ar@<0.5ex>[r]^-{\\hat{-}} &\n { \\mathsf{SymSeq} }\\ar@<0.5ex>[l]^-{{ \\mathrm{Ev} }_0}\n}\n\\end{align*}\nwith left adjoint on top and ${ \\mathrm{Ev} }_0$ the \\emph{evaluation} functor defined objectwise by ${ \\mathrm{Ev} }_0(B):=B[\\mathbf{0}]$. 
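As an aside (a toy check of ours, not from the paper), the two coproduct decompositions of $(A_1{ \\check{\\tensor} }\\dotsb{ \\check{\\tensor} } A_t)[R]$ in \\eqref{eq:tensor_check_calc} can be compared at the level of indexing sets: the functions $\\function{\\pi}{R}{\\mathbf{t}}$ with fiber sizes $(r_1,\\dotsc,r_t)$ are counted by the index of $\\Sigma_{r_1}\\times\\dotsb\\times\\Sigma_{r_t}$ in $\\Sigma_r$. The following Python sketch verifies this for a few small cases.

```python
from itertools import product
from math import factorial

def count_functions_with_fibers(fiber_sizes):
    # number of functions pi: {1,...,r} -> {1,...,t} whose fiber over i
    # has size fiber_sizes[i-1], counted by brute force
    r, t = sum(fiber_sizes), len(fiber_sizes)
    return sum(
        1
        for pi in product(range(t), repeat=r)
        if all(pi.count(i) == fiber_sizes[i] for i in range(t))
    )

def coset_count(fiber_sizes):
    # index of Sigma_{r_1} x ... x Sigma_{r_t} in Sigma_r, i.e. the number
    # of copies appearing in the induced summand of the second decomposition
    n = factorial(sum(fiber_sizes))
    for ri in fiber_sizes:
        n //= factorial(ri)
    return n

# the two decompositions match summand-by-summand:
for fibers in [(2, 1, 1), (3, 0, 1), (2, 2)]:
    assert count_functions_with_fibers(fibers) == coset_count(fibers)
print("ok")
```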
Note that $\\hat{-}$ embeds ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ in ${ \\mathsf{SymSeq} }$ as the full subcategory of symmetric sequences concentrated at $0$.\n\n\\begin{defn}\\label{defn:corresponding_functor}\nLet ${ \\mathcal{O} }$ be a symmetric sequence and $Z\\in{ \\mathsf{Mod}_{ \\mathcal{R} } }$. The corresponding functor $\\functor{{ \\mathcal{O} }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}$ is defined objectwise by\n$\n { \\mathcal{O} }(Z):={ \\mathcal{O} }\\circ(Z):=\\amalg_{t\\geq 0}{ \\mathcal{O} }[\\mathbf{t}]\n { \\,\\wedge\\, }_{\\Sigma_t}Z^{\\wedge t}.\n$\n\\end{defn}\n\n\\begin{prop}\nLet ${ \\mathcal{O} },A\\in{ \\mathsf{SymSeq} }$ and $Z\\in{ \\mathsf{Mod}_{ \\mathcal{R} } }$. There are natural isomorphisms\n\\begin{align}\n\\label{eq:circ_product_and_evaluate_at_zero}\n \\widehat{{ \\mathcal{O} }(Z)}=\n \\widehat{{ \\mathcal{O} }\\circ(Z)}{ \\ \\cong \\ }{ \\mathcal{O} }\\circ\\hat{Z},\\quad\\quad\n { \\mathrm{Ev} }_0({ \\mathcal{O} }\\circ A){ \\ \\cong \\ } { \\mathcal{O} }\\circ\\bigl({ \\mathrm{Ev} }_0(A)\\bigr).\n\\end{align}\n\\end{prop}\n\n\\begin{proof}\nThis follows from \\eqref{eq:circle_product_calc} and \\eqref{eq:tensor_check_calc}.\n\\end{proof}\n\n\\begin{defn}\n\\label{defn:operad}\nAn \\emph{operad} in ${ \\mathcal{R} }$-modules is a monoid object in $({ \\mathsf{SymSeq} },\\circ,I)$ and a \\emph{morphism of operads} is a morphism of monoid objects in $({ \\mathsf{SymSeq} },\\circ,I)$.\n\\end{defn}\n\n\\begin{rem} If ${ \\mathcal{O} }$ is an operad, then the associated functor $\\function{{ \\mathcal{O} }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}$ is a monad.\n\\end{rem}\n\n\\begin{defn}\n\\label{defn:algebras_and_modules}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules.\n\\begin{itemize}\n\\item A \\emph{left ${ \\mathcal{O} }$-module} is an object in $({ \\mathsf{SymSeq} },\\circ,I)$ with a left action of ${ \\mathcal{O} }$ and a \\emph{morphism 
of left ${ \\mathcal{O} }$-modules} is a map that respects the left ${ \\mathcal{O} }$-module structure. Denote by ${ \\Lt_{ \\mathcal{O} } }$ the category of left ${ \\mathcal{O} }$-modules and their morphisms.\n\\item An \\emph{${ \\mathcal{O} }$-algebra} is an algebra for the monad $\\functor{{ \\mathcal{O} }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }}$ and a \\emph{morphism of ${ \\mathcal{O} }$-algebras} is a map in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ that respects the ${ \\mathcal{O} }$-algebra structure. Denote by ${ \\Alg_{ \\mathcal{O} } }$ the category of ${ \\mathcal{O} }$-algebras and their morphisms.\n\\end{itemize}\n\\end{defn}\n\nIt follows easily from \\eqref{eq:circ_product_and_evaluate_at_zero} that an ${ \\mathcal{O} }$-algebra is the same as an ${ \\mathcal{R} }$-module $Z$ with a left ${ \\mathcal{O} }$-module structure on $\\hat{Z}$, and if $Z$ and $Z'$ are ${ \\mathcal{O} }$-algebras, then a morphism of ${ \\mathcal{O} }$-algebras is the same as a map $\\function{f}{Z}{Z'}$ in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ such that $\\function{\\hat{f}}{\\hat{Z}}{\\hat{Z'}}$ is a morphism of left ${ \\mathcal{O} }$-modules. In other words, an algebra over an operad ${ \\mathcal{O} }$ is the same as a left ${ \\mathcal{O} }$-module that is concentrated at $0$, and ${ \\Alg_{ \\mathcal{O} } }$ embeds in ${ \\Lt_{ \\mathcal{O} } }$ as the full subcategory of left ${ \\mathcal{O} }$-modules concentrated at $0$, via the functor $\\function{\\hat{-}}{{ \\Alg_{ \\mathcal{O} } }}{{ \\Lt_{ \\mathcal{O} } }}$, $Z\\longmapsto \\hat{Z}$. Define the \\emph{evaluation} functor $\\function{{ \\mathrm{Ev} }_0}{{ \\Lt_{ \\mathcal{O} } }}{{ \\Alg_{ \\mathcal{O} } }}$ objectwise by ${ \\mathrm{Ev} }_0(B):=B[\\mathbf{0}]$.\n\n\\begin{prop}\n\\label{prop:basic_properties_LTO}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. 
There are adjunctions\n\\begin{align}\n\\label{eq:free_forgetful_adjunction}\n\\xymatrix{\n { \\mathsf{Mod}_{ \\mathcal{R} } }\\ar@<0.5ex>[r]^-{{ \\mathcal{O} }\\circ(-)} & { \\Alg_{ \\mathcal{O} } },\\ar@<0.5ex>[l]^-{U}\n}\\quad\\quad\n\\xymatrix{\n { \\mathsf{SymSeq} }\\ar@<0.5ex>[r]^-{{ \\mathcal{O} }\\circ-} & { \\Lt_{ \\mathcal{O} } },\\ar@<0.5ex>[l]^-{U}\n}\\quad\\quad\n\\xymatrix{\n { \\Alg_{ \\mathcal{O} } }\\ar@<0.5ex>[r]^-{\\hat{-}} & { \\Lt_{ \\mathcal{O} } },\\ar@<0.5ex>[l]^-{{ \\mathrm{Ev} }_0}\n}\n\\end{align}\nwith left adjoints on top and $U$ the forgetful functor. All small colimits exist in ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$, and both reflexive coequalizers and filtered colimits are preserved (and created) by the forgetful functors. All small limits exist in ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$, and are preserved (and created) by the forgetful functors.\n\\end{prop}\n\nThroughout this paper, we use the following model structures on the categories of ${ \\mathcal{O} }$-algebras and left ${ \\mathcal{O} }$-modules.\n\n\\begin{defn}\n\\label{defn:stable_flat_positive_model_structures}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules. The \\emph{positive flat stable model structure} on ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) has as weak equivalences the stable equivalences (resp. objectwise stable equivalences) and as fibrations the positive flat stable fibrations (resp. objectwise positive flat stable fibrations).\n\\end{defn}\n\nThe model structures in Definition \\ref{defn:stable_flat_positive_model_structures} are established in \\cite{Harper_Spectra, Harper_Spectra_Correction, Harper_Hess}. For a description of the cofibrations, see \\cite[Section 4]{Harper_Spectra} and \\cite[Section 7]{Harper_Hess}. 
For ease of notation, we have followed Schwede \\cite{Schwede_book_project} in using the term \\emph{flat} (e.g., flat stable model structure) for what is called $S$ (e.g., stable $S$-model structure) in \\cite{Hovey_Shipley_Smith, Schwede, Shipley_comm_ring}. For some of the good properties of the flat stable model structure, see \\cite[5.3.7 and 5.3.10]{Hovey_Shipley_Smith}.\n\n\n\\section{Homotopical Analysis of Cubical Diagrams}\n\\label{sec:cubical_diagrams}\n\nIn this section we prove the main results of the paper. The following definitions and constructions appear in Goodwillie \\cite{Goodwillie_calc2} in the context of spaces, and will also be useful in our context of structured ring spectra.\n\n\\begin{defn}[Indexing categories for cubical diagrams]\nLet $W$ be a finite set and ${ \\mathsf{M} }$ a category.\n\\begin{itemize}\n\\item Denote by $\\powerset(W)$ the poset of all subsets of $W$, ordered by inclusion $\\subset$ of sets. We will often regard $\\powerset(W)$ as the category associated to this partial order in the usual way; the objects are the elements of $\\powerset(W)$, and there is a morphism $U{ \\rightarrow } V$ if and only if $U\\subset V$.\n\\item Denote by $\\powerset_0(W)\\subset\\powerset(W)$ the poset of all nonempty subsets of $W$; it is the full subcategory of $\\powerset(W)$ containing all objects except the initial object $\\emptyset$.\n\\item Denote by $\\powerset_1(W)\\subset\\powerset(W)$ the poset of all subsets of $W$ not equal to $W$; it is the full subcategory of $\\powerset(W)$ containing all objects except the terminal object $W$.\n\\item A \\emph{$W$-cube} ${ \\mathcal{X} }$ in ${ \\mathsf{M} }$ is a $\\powerset(W)$-shaped diagram ${ \\mathcal{X} }$ in ${ \\mathsf{M} }$; in other words, a functor $\\function{{ \\mathcal{X} }}{\\powerset(W)}{{ \\mathsf{M} }}$.\n\\end{itemize}\n\\end{defn}\n\n\\begin{rem}\nIf $n=|W|$ and ${ \\mathcal{X} }$ is a $W$-cube in ${ \\mathsf{M} }$, we will sometimes refer to ${ \\mathcal{X} }$ 
simply as an \\emph{$n$-cube} in ${ \\mathsf{M} }$. In particular, a $0$-cube is an object in ${ \\mathsf{M} }$ and a $1$-cube is a morphism in ${ \\mathsf{M} }$.\n\\end{rem}\n\n\\begin{defn}[Faces of cubical diagrams]\nLet $W$ be a finite set and ${ \\mathsf{M} }$ a category. Let ${ \\mathcal{X} }$ be a $W$-cube in ${ \\mathsf{M} }$ and consider any subsets $U\\subset V\\subset W$. Denote by $\\partial_U^V{ \\mathcal{X} }$ the $(V-U)$-cube defined objectwise by\n\\begin{align*}\n T\\mapsto(\\partial_U^V{ \\mathcal{X} })_T:={ \\mathcal{X} }_{T\\cup U},\\quad\\quad T\\subset V-U.\n\\end{align*}\nIn other words, $\\partial_U^V{ \\mathcal{X} }$ is the $(V-U)$-cube formed by all maps in ${ \\mathcal{X} }$ between ${ \\mathcal{X} }_U$ and ${ \\mathcal{X} }_V$. We say that $\\partial_U^V{ \\mathcal{X} }$ is a \\emph{face} of ${ \\mathcal{X} }$ of \\emph{dimension} $|V-U|$.\n\\end{defn}\n\n\\begin{defn}\n\\label{defn:cofibration_cubes_etc}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a finite set. Let ${ \\mathcal{X} }$ be a $W$-cube in ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) or ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. 
${ \\mathsf{SymSeq} }$) and $k\\in{ \\mathbb{Z} }$.\n\\begin{itemize}\n\\item ${ \\mathcal{X} }$ is a \\emph{cofibration cube} if the map\n$\\colim_{\\powerset_1(V)}{ \\mathcal{X} }{ \\rightarrow }\\colim_{\\powerset(V)}{ \\mathcal{X} }{ \\ \\cong \\ }{ \\mathcal{X} }_V$ is a cofibration for each $V\\subset W$; in particular, each ${ \\mathcal{X} }_V$ is cofibrant.\n\\item ${ \\mathcal{X} }$ is \\emph{$k$-cocartesian} if the map\n$\\hocolim_{\\powerset_1(W)}{ \\mathcal{X} }{ \\rightarrow }\\hocolim_{\\powerset(W)}{ \\mathcal{X} }{ \\ \\simeq \\ }{ \\mathcal{X} }_W$ is $k$-connected.\n\\item ${ \\mathcal{X} }$ is \\emph{$\\infty$-cocartesian} if the map\n$\\hocolim_{\\powerset_1(W)}{ \\mathcal{X} }{ \\rightarrow }\\hocolim_{\\powerset(W)}{ \\mathcal{X} }{ \\ \\simeq \\ }{ \\mathcal{X} }_W$ is a weak equivalence.\n\\item ${ \\mathcal{X} }$ is \\emph{strongly $\\infty$-cocartesian} if each face of dimension $\\geq 2$ is $\\infty$-cocartesian.\n\\item ${ \\mathcal{X} }$ is a \\emph{pushout cube} if the map\n$\\colim_{\\powerset_1(V)}{ \\mathcal{X} }{ \\rightarrow }\\colim_{\\powerset(V)}{ \\mathcal{X} }{ \\ \\cong \\ }{ \\mathcal{X} }_V$ is an isomorphism for each $V\\subset W$ with $|V| \\geq 2$; i.e., if it is built by colimits in the usual way out of the maps ${ \\mathcal{X} }_\\emptyset{ \\rightarrow }{ \\mathcal{X} }_V$, $V\\subset W$, $|V|=1$.\n\\end{itemize}\n\\end{defn}\n\nThese definitions and constructions dualize as follows. Note that when looking for the appropriate dual construction, it is useful to observe that ${ \\mathcal{X} }=\\partial_\\emptyset^V{ \\mathcal{X} }$ when restricted to $\\powerset(V)$; for instance, $\\colim_{\\powerset_1(V)}{ \\mathcal{X} }=\\colim_{\\powerset_1(V)}\\partial_\\emptyset^V{ \\mathcal{X} }$.\n\n\\begin{defn}\n\\label{defn:fibration_cubes_etc}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a finite set. Let ${ \\mathcal{X} }$ be a $W$-cube in ${ \\Alg_{ \\mathcal{O} } }$ (resp. 
${ \\Lt_{ \\mathcal{O} } }$) or ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$) and $k\\in{ \\mathbb{Z} }$.\n\\begin{itemize}\n\\item ${ \\mathcal{X} }$ is a \\emph{fibration cube} if the map ${ \\mathcal{X} }_V{ \\ \\cong \\ }\\lim_{\\powerset(W-V)}\\partial_V^W{ \\mathcal{X} }{ \\rightarrow }\\lim_{\\powerset_0(W-V)}\\partial_V^W{ \\mathcal{X} }$ is a fibration for each $V\\subset W$; in particular, each ${ \\mathcal{X} }_V$ is fibrant.\n\\item ${ \\mathcal{X} }$ is \\emph{$k$-cartesian} if the map ${ \\mathcal{X} }_\\emptyset{ \\ \\simeq \\ }\\holim_{\\powerset(W)}{ \\mathcal{X} }{ \\rightarrow }\\holim_{\\powerset_0(W)}{ \\mathcal{X} }$ is $k$-connected.\n\\item ${ \\mathcal{X} }$ is \\emph{$\\infty$-cartesian} if the map ${ \\mathcal{X} }_\\emptyset{ \\ \\simeq \\ }\\holim_{\\powerset(W)}{ \\mathcal{X} }{ \\rightarrow }\\holim_{\\powerset_0(W)}{ \\mathcal{X} }$ is a weak equivalence.\n\\item ${ \\mathcal{X} }$ is \\emph{strongly $\\infty$-cartesian} if each face of dimension $\\geq 2$ is $\\infty$-cartesian.\n\\item ${ \\mathcal{X} }$ is a \\emph{pullback cube} if the map ${ \\mathcal{X} }_V{ \\ \\cong \\ }\\lim_{\\powerset(W-V)}\\partial_V^W{ \\mathcal{X} }{ \\rightarrow }\\lim_{\\powerset_0(W-V)}\\partial_V^W{ \\mathcal{X} }$ is an isomorphism for each $V\\subset W$ with $|W-V| \\geq 2$; i.e., if it is built by limits in the usual way out of the maps ${ \\mathcal{X} }_V{ \\rightarrow }{ \\mathcal{X} }_W$, $V\\subset W$, $|W-V|=1$.\n\\end{itemize}\n\\end{defn}\n\n\\begin{rem}\nIt is important to note that every $1$-cube in ${ \\Alg_{ \\mathcal{O} } }$, ${ \\Lt_{ \\mathcal{O} } }$, ${ \\mathsf{Mod}_{ \\mathcal{R} } }$, or ${ \\mathsf{SymSeq} }$ is strongly $\\infty$-cocartesian (resp. strongly $\\infty$-cartesian), since there are no faces of dimension $\\geq 2$, but only the $1$-cubes that are weak equivalences are $\\infty$-cocartesian (resp. 
$\\infty$-cartesian).\n\\end{rem}\n\nThe following is an exercise left to the reader.\n\\begin{prop}\n\\label{prop:connectivity_estimates_for_composition_of_maps}\nLet $k\\in{ \\mathbb{Z} }$. Consider any maps $X{ \\rightarrow } Y{ \\rightarrow } Z$ in ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) or ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$).\n\\begin{itemize}\n\\item[(a)] If $X{ \\rightarrow } Y$ and $Y{ \\rightarrow } Z$ are $k$-connected, then $X{ \\rightarrow } Z$ is $k$-connected.\n\\item[(b)] If $X{ \\rightarrow } Y$ is $(k-1)$-connected and $X{ \\rightarrow } Z$ is $k$-connected, then $Y{ \\rightarrow } Z$ is $k$-connected.\n\\item[(c)] If $X{ \\rightarrow } Z$ is $k$-connected and $Y{ \\rightarrow } Z$ is $(k+1)$-connected, then $X{ \\rightarrow } Y$ is $k$-connected.\n\n\\end{itemize}\n\\end{prop}\n\nVersions of the following connectivity estimates are proved in Goodwillie \\cite[1.6--1.8]{Goodwillie_calc2} in the context of spaces, and exactly the same arguments give a proof of Propositions \\ref{prop:map_of_cubical_diagrams} and \\ref{prop:composed_map_of_cubical_diagrams} below in the context of structured ring spectra; this is an exercise left to the reader.\n\n\\begin{prop}\n\\label{prop:map_of_cubical_diagrams}\nLet $W$ be a finite set and $k\\in{ \\mathbb{Z} }$. Consider any map ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ of $W$-cubes in ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) or ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. 
${ \\mathsf{SymSeq} }$).\n\\begin{itemize}\n\\item[(a)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ and ${ \\mathcal{X} }$ are $k$-cocartesian, then ${ \\mathcal{Y} }$ is $k$-cocartesian.\n\\item[(b)] If ${ \\mathcal{X} }$ is $(k-1)$-cocartesian and ${ \\mathcal{Y} }$ is $k$-cocartesian, then ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ is $k$-cocartesian.\n\\item[(c)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ and ${ \\mathcal{Y} }$ are $k$-cartesian, then ${ \\mathcal{X} }$ is $k$-cartesian.\n\\item[(d)] If ${ \\mathcal{X} }$ is $k$-cartesian and ${ \\mathcal{Y} }$ is $(k+1)$-cartesian, then ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ is $k$-cartesian.\n\n\\end{itemize}\n\\end{prop}\n\n\\begin{prop}\n\\label{prop:composed_map_of_cubical_diagrams}\nLet $W$ be a finite set and $k\\in{ \\mathbb{Z} }$. Consider any map ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }{ \\rightarrow }{ \\mathcal{Z} }$ of $W$-cubes in ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) or ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. 
${ \\mathsf{SymSeq} }$).\n\\begin{itemize}\n\\item[(a)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ and ${ \\mathcal{Y} }{ \\rightarrow }{ \\mathcal{Z} }$ are $k$-cocartesian, then ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Z} }$ is $k$-cocartesian.\n\\item[(b)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ is $(k-1)$-cocartesian and ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Z} }$ is $k$-cocartesian, then ${ \\mathcal{Y} }{ \\rightarrow }{ \\mathcal{Z} }$ is $k$-cocartesian.\n\\item[(c)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ and ${ \\mathcal{Y} }{ \\rightarrow }{ \\mathcal{Z} }$ are $k$-cartesian, then ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Z} }$ is $k$-cartesian.\n\\item[(d)] If ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Z} }$ is $k$-cartesian and ${ \\mathcal{Y} }{ \\rightarrow }{ \\mathcal{Z} }$ is $(k+1)$-cartesian, then ${ \\mathcal{X} }{ \\rightarrow }{ \\mathcal{Y} }$ is $k$-cartesian.\n\\end{itemize}\n\\end{prop}\n\nThe following results depend on the fact that the model structures on ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ and ${ \\mathsf{SymSeq} }$ are stable, so that fibration and cofibration sequences coincide. Note that these do not hold, in general, for ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$.\n\n\\begin{prop}\n\\label{prop:comparing_cocartesian_and_cartesian_estimates_in_ModR}\nLet $W$ be a finite set and $k\\in{ \\mathbb{Z} }$. Let ${ \\mathcal{X} }$ be a $W$-cube in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. 
${ \\mathsf{SymSeq} }$).\n\\begin{itemize}\n\\item[(a)] ${ \\mathcal{X} }$ is $k$-cocartesian if and only if ${ \\mathcal{X} }$ is $(k-|W|+1)$-cartesian.\n\\item[(b)] ${ \\mathcal{X} }$ is $k$-cartesian if and only if ${ \\mathcal{X} }$ is $(k+|W|-1)$-cocartesian.\n\\end{itemize}\n\\end{prop}\n\n\\begin{proof}\nThis is because the total homotopy cofiber of ${ \\mathcal{X} }$ (see Goodwillie \\cite[1.4]{Goodwillie_calc2}) is weakly equivalent to the $|W|$-th suspension, usually denoted $\\Sigma^{|W|}$, of the total homotopy fiber of ${ \\mathcal{X} }$ (see \\cite[1.1a]{Goodwillie_calc2}).\n\\end{proof}\n\n\n\\subsection{Proof of higher homotopy excision for ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$}\n\nThe purpose of this section is to prove Theorem~\\ref{thm:higher_homotopy_excision}. At the heart of our proof is a homotopical analysis of the construction ${ \\mathcal{O} }_A$ described in Proposition~\\ref{prop:coproduct_modules}. We deduce Theorem~\\ref{thm:higher_homotopy_excision} from a more general result about the effect of the construction $A \\mapsto { \\mathcal{O} }_A$ on strongly $\\infty$-cocartesian cubes. \n\n\\begin{defn}\\label{def:symmetric_array}\nConsider symmetric sequences in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$. A \\emph{symmetric array} in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ is a symmetric sequence in ${ \\mathsf{SymSeq} }$; i.e., a functor $\\functor{A}{\\Sigma^{ \\mathrm{op} }}{{ \\mathsf{SymSeq} }}$. 
Denote by ${ \\mathsf{SymArray} }:={ \\mathsf{SymSeq} }^{\\Sigma^{ \\mathrm{op} }}$ the category of symmetric arrays in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ and their natural transformations.\n\\end{defn}\n\nThe following ${ \\mathcal{O} }_A$ construction is crucial to our arguments; a proof of the following proposition is given in \\cite[4.7]{Harper_Spectra}.\n\n\\begin{prop}\n\\label{prop:coproduct_modules}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$, $A\\in{ \\Alg_{ \\mathcal{O} } }$ (resp. $A\\in{ \\Lt_{ \\mathcal{O} } }$), and $Y\\in{ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. $Y\\in{ \\mathsf{SymSeq} }$). Consider any coproduct in ${ \\Alg_{ \\mathcal{O} } }$ (resp. ${ \\Lt_{ \\mathcal{O} } }$) of the form $A\\amalg{ \\mathcal{O} }\\circ(Y)$ (resp. $A\\amalg({ \\mathcal{O} }\\circ Y)$). There exists a symmetric sequence ${ \\mathcal{O} }_A$ (resp. symmetric array ${ \\mathcal{O} }_A$) and natural isomorphisms\n\\begin{align*}\n A\\amalg{ \\mathcal{O} }\\circ(Y) { \\ \\cong \\ }\n \\coprod\\limits_{q\\geq 0}{ \\mathcal{O} }_A[\\mathbf{q}]\n { \\,\\wedge\\, }_{\\Sigma_q}Y^{\\wedge q}\\quad\n \\Bigl(\\text{resp.}\\quad\n A\\amalg({ \\mathcal{O} }\\circ Y) { \\ \\cong \\ }\n \\coprod\\limits_{q\\geq 0}{ \\mathcal{O} }_A[\\mathbf{q}]\n { \\check{\\tensor} }_{\\Sigma_q}Y^{{ \\check{\\tensor} } q}\n \\Bigr)\n\\end{align*}\nin the underlying category ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$). 
If $q\\geq 0$, then ${ \\mathcal{O} }_A[\\mathbf{q}]$ is naturally isomorphic to a colimit of the form\n\\begin{align*}\n { \\mathcal{O} }_A[\\mathbf{q}]&{ \\ \\cong \\ }\n \\colim\\biggl(\n \\xymatrix{\n \\coprod\\limits_{p\\geq 0}{ \\mathcal{O} }[\\mathbf{p}\\boldsymbol{+}\\mathbf{q}]\n { \\,\\wedge\\, }_{\\Sigma_p}A^{\\wedge p} &\n \\coprod\\limits_{p\\geq 0}{ \\mathcal{O} }[\\mathbf{p}\\boldsymbol{+}\\mathbf{q}]\n { \\,\\wedge\\, }_{\\Sigma_p}({ \\mathcal{O} }\\circ (A))^{\\wedge p}\\ar@<-0.5ex>[l]^-{d_1}\n \\ar@<-1.5ex>[l]_-{d_0}\n }\n \\biggl),\\\\\n \\text{resp.}\\quad\n { \\mathcal{O} }_A[\\mathbf{q}]&{ \\ \\cong \\ }\n \\colim\\biggl(\n \\xymatrix{\n \\coprod\\limits_{p\\geq 0}{ \\mathcal{O} }[\\mathbf{p}\\boldsymbol{+}\\mathbf{q}]\n { \\,\\wedge\\, }_{\\Sigma_p}A^{{ \\check{\\tensor} } p} &\n \\coprod\\limits_{p\\geq 0}{ \\mathcal{O} }[\\mathbf{p}\\boldsymbol{+}\\mathbf{q}]\n { \\,\\wedge\\, }_{\\Sigma_p}({ \\mathcal{O} }\\circ A)^{{ \\check{\\tensor} } p}\\ar@<-0.5ex>[l]^-{d_1}\n \\ar@<-1.5ex>[l]_-{d_0}\n }\n \\biggl),\n\\end{align*}\nin ${ \\mathsf{Mod}_{ \\mathcal{R} } }^{\\Sigma_q^{ \\mathrm{op} }}$ (resp. ${ \\mathsf{SymSeq} }^{\\Sigma_q^{ \\mathrm{op} }}$), with $d_0$ induced by operad multiplication and $d_1$ induced by the left ${ \\mathcal{O} }$-action map $\\function{m}{{ \\mathcal{O} }\\circ (A)}{A}$ (resp. $\\function{m}{{ \\mathcal{O} }\\circ A}{A}$).\n\\end{prop}\n\nRecall from \\cite{Harper_Hess} the following proposition.\n\n\\begin{prop}\n\\label{prop:OA_commutes_with_certain_colimits}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ and let $q\\geq 0$. 
Then the functor\n$\n \\function{{ \\mathcal{O} }_{(-)}[\\mathbf{q}]}{{ \\Alg_{ \\mathcal{O} } }}{{ \\mathsf{Mod}_{ \\mathcal{R} } }^{\\Sigma_q^{ \\mathrm{op} }}}\n$ (resp.\n$\n \\function{{ \\mathcal{O} }_{(-)}[\\mathbf{q}]}{{ \\Lt_{ \\mathcal{O} } }}{{ \\mathsf{SymSeq} }^{\\Sigma_q^{ \\mathrm{op} }}}\n$) preserves reflexive coequalizers and filtered colimits.\n\\end{prop}\n\n\\begin{defn}\\label{def:filtration_setup_modules}\nLet $\\function{i}{X}{Y}$ be a morphism in ${ \\mathsf{Mod}_{ \\mathcal{R} } }$ (resp. ${ \\mathsf{SymSeq} }$) and $t\\geq 1$. Define $Q_0^t:=X^{\\wedge t}$ (resp. $Q_0^t:=X^{{ \\check{\\tensor} } t}$) and $Q_t^t:=Y^{\\wedge t}$ (resp. $Q_t^t:=Y^{{ \\check{\\tensor} } t}$). For $0<q<t$, define $Q_q^t$ inductively by pushout diagrams as constructed in \\cite{Harper_Hess}.\n\\end{defn}\n\nConsider the commutative diagram\n\\begin{align}\n\\label{eq:induced_filtration_diagram_for_studying_induced_map_higher_cubes}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^0[\\mathbf{r}]\\ar[dd]\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^1[\\mathbf{r}]\\ar@{.>}[d]^{\\xi_1}\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^2[\\mathbf{r}]\\ar@{.>}[d]^{\\xi_2}\\ar[r] &\n \\dots\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^\\infty[\\mathbf{r}]\\ar[d]^{\\xi_\\infty}\\\\\n &\n \\cdot\\ar@{.>}[d]^{(*)_1}\\ar[r] &\n \\cdot\\ar@{.>}[d]^{(*)_2}\\ar[r] &\n \\dots\\ar[r] &\n \\cdot\\ar[d]^{(*)_\\infty}\\\\\n { \\mathcal{O} }_{\\tilde{A}}^0[\\mathbf{r}]\\ar[r]\\ar@\/_0.5pc\/[ur] &\n { \\mathcal{O} }_{\\tilde{A}}^1[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{A}}^2[\\mathbf{r}]\\ar[r] &\n \\dots\\ar[r] &\n { \\mathcal{O} }_{\\tilde{A}}^\\infty[\\mathbf{r}]\n}\n\\end{align}\ntogether with induced maps $\\xi_t$ and $(*)_t$ ($t\\geq 1$) that make the diagram in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$ commute; here, the upper diagrams are pushout diagrams and $\\xi_\\infty:=\\colim_t\\xi_t$, the maps $(*)_t$ are the obvious induced maps and $(*)_\\infty:=\\colim_t(*)_t$, the left-hand vertical map is naturally isomorphic to\n\\begin{align*}\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_{Z_0}[\\mathbf{r}]\\longrightarrow\n { \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}],\n\\end{align*}\nand the right-hand vertical maps are naturally 
isomorphic to the diagram\n\\begin{align}\n\\label{eq:right_hand_vertical_maps_higher_cubes}\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_{Z_1}[\\mathbf{r}]\\longrightarrow\n \\bigl(\\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_{Z_1}[\\mathbf{r}]\\bigr)\\cup\n { \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\longrightarrow\n { \\mathcal{O} }_{\\tilde{Z}_1}[\\mathbf{r}];\n\\end{align}\nhere, $\\tilde{Z}_0:={Z_0}_{\\{1,\\dots,n\\}}$ and $\\tilde{Z}_1:={Z_1}_{\\{1,\\dots,n\\}}$. We want to show that the right-hand map in \\eqref{eq:right_hand_vertical_maps_higher_cubes} is $(k_1+\\dots+k_{n+1}+n)$-connected; since the horizontal maps in \\eqref{eq:induced_filtration_diagram_for_studying_induced_map_higher_cubes} are monomorphisms, it suffices to verify each map $(*)_t$ is $(k_1+\\dots+k_{n+1}+n)$-connected. The argument is by induction on $t$. The map $\\xi_0$ factors as\n\\begin{align*}\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^0[\\mathbf{r}]\\longrightarrow\n \\bigl(\\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A^0[\\mathbf{r}]\\bigr)\\cup{ \\mathcal{O} }_{\\tilde{A}}^0[\\mathbf{r}]\\xrightarrow[{ \\ \\cong \\ }]{(*)_0}\n { \\mathcal{O} }_{\\tilde{A}}^0[\\mathbf{r}]\n\\end{align*}\nand since the right-hand map $(*)_0$ is an isomorphism, it is $(k_1+\\dots+k_{n+1}+n)$-connected. 
Consider the commutative diagram\n\\begin{align}\n\\label{eq:filtration_quotients_diagram_for_analyzing_connectivity_higher_cubes}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_A^{t-1}[\\mathbf{r}]\\ar[d]^{\\xi_{t-1}}\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_A^t[\\mathbf{r}]\\ar[d]^{\\xi_t}\\ar[r] &\n \\bigl(\\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_A[\\mathbf{t+r}]\\bigr){ \\check{\\tensor} }_{\\Sigma_t}(Y\/X)^{{ \\check{\\tensor} } t}\\ar[d]_{{ \\ \\cong \\ }}\\ar@\/^2pc\/[dd]^{(\\#)}\\\\\n \\cdot\\ar[d]^{(*)_{t-1}}\\ar[r] &\n \\cdot\\ar[d]^{(*)_t}\\ar[r] &\n \\cdot\\ar[d]_{(\\#\\#)}\\\\\n { \\mathcal{O} }_{\\tilde{A}}^{t-1}[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{A}}^t[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{A}}[\\mathbf{t+r}]{ \\check{\\tensor} }_{\\Sigma_t}(Y\/X)^{{ \\check{\\tensor} } t}\n}\n\\end{align}\nwith rows cofiber sequences. Since we know $(Y\/X)^{{ \\check{\\tensor} } t}$ is at least $k_{n+1}$-connected and\n$\n \\colim_{\\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_A[\\mathbf{t+r}]\\longrightarrow\n { \\mathcal{O} }_{\\tilde{A}}[\\mathbf{t+r}]\n$\nis $(k_1+\\dots+k_{n}+n-1)$-connected by the induction hypothesis, it follows that $(\\#)$ is $(k_1+\\dots+k_{n+1}+n)$-connected, and hence $(\\#\\#)$ is also. Since the rows in \\eqref{eq:filtration_quotients_diagram_for_analyzing_connectivity_higher_cubes} are cofiber sequences, it follows by induction on $t$ that $(*)_t$ is $(k_1+\\dots+k_{n+1}+n)$-connected for each $t\\geq 1$. 
This finishes the argument that the right-hand maps of $n$-cubes $(r\\geq 0)$ in \\eqref{eq:gluing_on_cells_higher_cube}, each regarded as an $(n+1)$-cube in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$, are $(k_1+\\dots+k_{n+1}+n)$-cocartesian.\n\nConsider a sequence $Z_0{ \\rightarrow } Z_1{ \\rightarrow } Z_2{ \\rightarrow }\\cdots$ of pushout $n$-cubes in ${ \\Lt_{ \\mathcal{O} } }$ as in \\eqref{eq:gluing_on_cells_higher_cube}, define $\\tilde{Z}_t:={Z_t}_{\\{1,\\dots,n\\}}$ for $t\\geq 0$, $Z_\\infty:=\\colim_tZ_t$, and $\\tilde{Z}_\\infty:=\\colim_t\\tilde{Z}_t$, and consider the naturally occurring map $Z_0{ \\rightarrow } Z_\\infty$ of pushout $n$-cubes, regarded as a pushout $(n+1)$-cube in ${ \\Lt_{ \\mathcal{O} } }$. Consider the associated left-hand diagram of the form\n\\begin{align}\n\\label{eq:gluing_on_cells_filtration_sequence_O_construction_higher_cubes}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}Z_0\\ar[d]\\ar[r] & \\tilde{Z}_0\\ar[d]\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}Z_\\infty\\ar[r] & \\tilde{Z}_\\infty\n}\\quad\\quad\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_{Z_0}[\\mathbf{r}]\\ar[d]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[d]\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}{ \\mathcal{O} }_{Z_\\infty}[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_\\infty}[\\mathbf{r}]\n}\n\\end{align}\nin the underlying category ${ \\mathsf{SymSeq} }$ and the associated right-hand diagrams ($r\\geq 0$) in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$. Assume each ${Z_0}_\\emptyset{ \\rightarrow } {Z_0}_{\\{i\\}}$ is a $k_i$-connected cofibration between cofibrant objects in ${ \\Lt_{ \\mathcal{O} } }$ $(1\\leq i\\leq n)$ and ${ \\mathcal{O} }_{{Z_0}_\\emptyset}$ is $(-1)$-connected. 
We want to show that the right-hand diagrams in \\eqref{eq:gluing_on_cells_filtration_sequence_O_construction_higher_cubes} are $(k_1+\\dots + k_{n+1}+n)$-cocartesian. Consider the associated commutative diagram\n\\begin{align}\n\\label{eq:filtration_quotients_diagram_for_analyzing_connectivity_for_Z_higher_cubes}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_0}[\\mathbf{r}]\\ar[dd]\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_1}[\\mathbf{r}]\\ar@{.>}[d]^{\\eta_1}\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_2}[\\mathbf{r}]\\ar@{.>}[d]^{\\eta_2}\\ar[r] &\n \\cdots\\ar[r] &\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_\\infty}[\\mathbf{r}]\\ar[d]^{\\eta_\\infty}\\\\\n &\n \\cdot\n \\ar@{.>}[d]^{(\\#)_1}\\ar[r] &\n \\cdot\n \\ar@{.>}[d]^{(\\#)_2}\\ar[r] &\n \\cdots\\ar[r] &\n \\cdot\\ar[d]^{(\\#)_\\infty}\\\\\n { \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[r]\\ar@\/_0.5pc\/[ur] &\n { \\mathcal{O} }_{\\tilde{Z}_1}[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_2}[\\mathbf{r}]\\ar[r] &\n \\cdots\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_\\infty}[\\mathbf{r}]\n}\n\\end{align}\nin ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$ and induced maps $\\eta_t$ and $(\\#)_t$ $(t\\geq 1)$; here, the upper diagrams are pushout diagrams and $\\eta_\\infty:=\\colim_t\\eta_t$, the maps $(\\#)_t$ are the obvious induced maps and $(\\#)_\\infty:=\\colim_t(\\#)_t$, and the right-hand vertical maps are naturally isomorphic to the diagram\n\\begin{align}\n\\label{eq:right_hand_vertical_maps_sequential_diagram_higher_cubes}\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_\\infty}[\\mathbf{r}]\n \\longrightarrow\n \\bigl(\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_\\infty}[\\mathbf{r}]\n \\bigr)\n \\cup{ \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\n \\longrightarrow\n { 
\\mathcal{O} }_{\\tilde{Z}_\\infty}[\\mathbf{r}]\n\\end{align}\nWe want to show that the right-hand map in \\eqref{eq:right_hand_vertical_maps_sequential_diagram_higher_cubes} is $(k_1+\\dots + k_{n+1}+n)$-connected; since the horizontal maps in \\eqref{eq:filtration_quotients_diagram_for_analyzing_connectivity_for_Z_higher_cubes} are monomorphisms, it suffices to verify each map $(\\#)_t$ is $(k_1+\\dots + k_{n+1}+n)$-connected. The argument is by induction on $t$. The map $(\\#)_t$ factors as\n\\begin{align}\n\\label{eq:factorization_of_maps_to_analyze_higher_cubes}\n\\xymatrix{\n \\bigl(\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_t}[\\mathbf{r}]\n \\bigr)\n \\cup{ \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[r] &\n \\bigl(\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_t}[\\mathbf{r}]\n \\bigr)\n \\cup{ \\mathcal{O} }_{\\tilde{Z}_{t-1}}[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_t}[\\mathbf{r}]\n}\n\\end{align}\nWe know from above that $(\\#)_1$ is $(k_1+\\dots + k_{n+1}+n)$-connected and that the right-hand map in \\eqref{eq:factorization_of_maps_to_analyze_higher_cubes} is $(k_1+\\dots + k_{n+1}+n)$-connected for each $t\\geq 1$; hence it follows by induction on $t$ that $(\\#)_t$ is $(k_1+\\dots + k_{n+1}+n)$-connected for each $t\\geq 1$. 
This finishes the argument that the right-hand diagrams $(r\\geq 0)$ in \\eqref{eq:gluing_on_cells_filtration_sequence_O_construction_higher_cubes} are $(k_1+\\dots + k_{n+1}+n)$-cocartesian in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$.\n\nIt follows from Proposition \\ref{prop:factorization_into_n_connected_cofibration} that the pushout $(n+1)$-cube $A{ \\rightarrow } B$ factors as $Z_0\\xrightarrow{i_\\lambda} Z_\\lambda \\xrightarrow{p} B$, a composition of pushout $(n+1)$-cubes in ${ \\Lt_{ \\mathcal{O} } }$, starting with $Z_0=A$, where $i_\\lambda$ is a (possibly transfinite) composition of pushout $n$-cubes as in \\eqref{eq:gluing_on_cells_higher_cube} and $p$ is an objectwise weak equivalence. Consider the associated diagram\n\\begin{align*}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_0}[\\mathbf{r}]\\ar[d]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[d]\\ar@\/^0.5pc\/[dr]\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_\\lambda}[\\mathbf{r}]\\ar[d]^{{ \\ \\simeq \\ }}\\ar[r] &\n \\bigl(\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_{Z_\\lambda}[\\mathbf{r}]\n \\bigr)\n \\cup{ \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[d]^{{ \\ \\simeq \\ }}\\ar[r]^-{(**)} &\n { \\mathcal{O} }_{\\tilde{Z}_\\lambda}[\\mathbf{r}]\\ar[d]^{{ \\ \\simeq \\ }}\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_B[\\mathbf{r}]\\ar[r] &\n \\bigl(\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_B[\\mathbf{r}]\n \\bigr)\n \\cup{ \\mathcal{O} }_{\\tilde{Z}_0}[\\mathbf{r}]\\ar[r]^-{(*)} &\n { \\mathcal{O} }_{\\tilde{B}}[\\mathbf{r}]\n}\n\\end{align*}\nNoting that the bottom vertical arrows are weak equivalences, it follows that $(*)$ has the same connectivity as $(**)$, which finishes the proof of part (a) that the right-hand diagrams $(r\\geq 0)$ in \\eqref{eq:colim_punctured_cube_associated_diagrams} are 
$(k_1+\\dots + k_{n+1}+n)$-cocartesian in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$. In particular, taking $r=0$ verifies that the left-hand diagram in \\eqref{eq:colim_punctured_cube_associated_diagrams} is $(k_1+\\dots + k_{n+1}+n)$-cocartesian in ${ \\mathsf{SymSeq} }$.\n\nConsider part (b). The map $B_\\emptyset{ \\rightarrow } *$ in ${ \\Lt_{ \\mathcal{O} } }$ factors as $B_\\emptyset{ \\rightarrow } C_\\emptyset{ \\rightarrow } *$, an acyclic cofibration followed by a fibration. Consider the associated pushout $(n+1)$-cube $B{ \\rightarrow } C$ in ${ \\Lt_{ \\mathcal{O} } }$ and the associated diagram of pushout squares of the form\n\\begin{align*}\n\\xymatrix{\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_A[\\mathbf{r}]\\ar[d]\\ar[r] & { \\mathcal{O} }_{\\tilde{A}}[\\mathbf{r}]\\ar[d]\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_B[\\mathbf{r}]\\ar[d]^{{ \\ \\simeq \\ }}\\ar[r] &\n { \\mathcal{O} }_{\\tilde{B}}[\\mathbf{r}]\\ar[d]^{{ \\ \\simeq \\ }}\\\\\n \\colim\\limits_{\\ \\ \\powerset_1(\\mathbf{n})}\n { \\mathcal{O} }_C[\\mathbf{r}]\\ar[r] &\n { \\mathcal{O} }_{\\tilde{C}}[\\mathbf{r}]\n}\n\\end{align*}\nin ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$. Since the map $B_\\emptyset{ \\rightarrow } C_\\emptyset$ is an acyclic cofibration and the composite map $A_\\emptyset{ \\rightarrow } C_\\emptyset$ is a $k_{n+1}$-connected cofibration, we know from part (a) that the outer diagram is $(k_1+\\dots+k_{n+1}+n)$-cocartesian in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$. Since the vertical maps in the bottom square are weak equivalences, it follows that the upper square is $(k_1+\\dots+k_{n+1}+n)$-cocartesian in ${ \\mathsf{SymSeq} }^{\\Sigma_r^{ \\mathrm{op} }}$, which finishes the proof of part (b).\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:higher_homotopy_excision}]\nIt suffices to consider the case of left ${ \\mathcal{O} }$-modules. 
It is enough to treat the special case where ${ \\mathcal{X} }$ is a pushout cofibration $W$-cube in ${ \\Lt_{ \\mathcal{O} } }$. The case $|W|=1$ is trivial and the case $|W|\\geq 2$ follows from Theorem \\ref{thm:pushout_cofibration_cube_homotopical_analysis}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:homotopy_excision}]\nThis is the special case $|W| = 2$ of Theorem \\ref{thm:higher_homotopy_excision}.\n\\end{proof}\n\n\n\\subsection{Proof of the higher Blakers-Massey theorem for ${ \\Alg_{ \\mathcal{O} } }$ and ${ \\Lt_{ \\mathcal{O} } }$}\n\nThe purpose of this section is to prove the Blakers-Massey theorems \\ref{thm:blakers_massey} and \\ref{thm:higher_blakers_massey}. We first show that Blakers-Massey for square diagrams (Theorem \\ref{thm:blakers_massey}) follows fairly easily from the higher homotopy excision result proved in the previous section.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:blakers_massey}]\nIt suffices to consider the case of left ${ \\mathcal{O} }$-modules. Let $W:=\\{1,2\\}$. It is enough to consider the special case where ${ \\mathcal{X} }$ is a cofibration $W$-cube in ${ \\Lt_{ \\mathcal{O} } }$. Consider the induced maps\n\\begin{align*}\n\\xymatrix{\n  \\colim\\nolimits^{{ \\mathsf{SymSeq} }}_{\\powerset_1(W)}{ \\mathcal{X} }\\ar[r]^-{(*)} &\n  \\colim\\nolimits^{{ \\Lt_{ \\mathcal{O} } }}_{\\powerset_1(W)}{ \\mathcal{X} }\\ar[r]^-{(**)} &\n  \\colim\\nolimits^{{ \\Lt_{ \\mathcal{O} } }}_{\\powerset(W)}{ \\mathcal{X} }{ \\ \\cong \\ }{ \\mathcal{X} }_W\n}\n\\end{align*}\nWe know that $(*)$ is $(k_1+k_2+1)$-connected by homotopy excision (Theorem \\ref{thm:homotopy_excision}) and $(**)$ is $k_{12}$-connected by assumption. 
Hence by Proposition \\ref{prop:connectivity_estimates_for_composition_of_maps}(a) the composition is $l$-connected, where $l$ is the minimum of $k_1+k_2+1$ and $k_{12}$; in other words, we have verified that ${ \\mathcal{X} }$ is $l$-cocartesian in ${ \\mathsf{SymSeq} }$, and Proposition \\ref{prop:comparing_cocartesian_and_cartesian_estimates_in_ModR}(a) finishes the proof.\n\n\\end{proof}\n\nWe now turn to the proof of the higher Blakers-Massey result (Theorem~\\ref{thm:higher_blakers_massey}). Our approach follows that used by Goodwillie at the corresponding point in \\cite{Goodwillie_calc2}.\n\nThe following is an important warm-up calculation for Proposition \\ref{prop:cocartesian_estimates_for_induction_argument}.\n\n\\begin{prop}\n\\label{prop:warmup_cocartesian_estimates_for_induction_argument}\nLet ${ \\mathcal{O} }$ be an operad in ${ \\mathcal{R} }$-modules and $W$ a nonempty finite set. Let ${ \\mathcal{X} }$ be a cofibration $W$-cube of ${ \\mathcal{O} }$-algebras (resp. left ${ \\mathcal{O} }$-modules). Assume that\n\\begin{itemize}\n\\item[(i)] for each nonempty subset $V\\subset W$, the $V$-cube $\\partial_\\emptyset^V{ \\mathcal{X} }$ (formed by all maps in ${ \\mathcal{X} }$ between ${ \\mathcal{X} }_\\emptyset$ and ${ \\mathcal{X} }_V$) is $k_V$-cocartesian,\n\\item[(ii)] $k_{U}\\leq k_V$ for each $U\\subset V$.\n\\end{itemize}\nThen, for every $U\\subsetneqq V\\subset W$, the $(V-U)$-cube $\\partial_U^V{ \\mathcal{X} }$ is $k_{V-U}$-cocartesian.\n\\end{prop}\n\n\\begin{proof}\nThe argument is by induction on $|U|$. The case $|U|=0$ is true by assumption. Let $n\\geq 1$ and assume the proposition is true for each $0\\leq|U|<n$.\n\nWhen $U > 16\\,V$ it reaches saturation. The resonance peak width grows considerably with $U$, yet the shift from the resonance angles ${\\theta _1} = {43^0}$ and ${\\theta _2} = {56.3^0}$ does not occur. 
The reflection coefficient changes significantly: when $U$ approaches $20 V$, it decreases fourfold near the first resonance angle and increases fivefold near the second SPR angle. In Figure~\\ref{fig:fig02} we can distinguish three groups of curves: the first group corresponds to $U \\in [0,\\;6V]$, the second to $U \\in [8 V,\\;16 V]$, and the third to $U \\in [18 V,\\;32 V]$. The grouping will become clear in the next paragraph, where we examine the voltage dependence of the optical parameters. In addition, there are two points where the reflection coefficient is independent of the applied voltage: $R \\approx 11.6\\% $ at ${\\theta} = {53.95^0}$ and $R \\approx 11.8\\% $ at ${\\theta} = {57.39^0}$ for any $U$. It is hard to explain the existence of these points; however, they can be used for testing the results: the theoretical curves should intersect at these points.\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{fig02}\n\\caption{\\label{fig:fig02}The reflection spectrum versus the voltage $U$ across the MDM structure. The upper curves correspond to $U$ changing from $0 V$ to $28 V$ in $4\\,V$ steps. The lowest curve corresponds to the near-breakdown voltage $U = 30 V$.}\n\\end{figure}\n\n\tThe curves of the reflection coefficient $R$ versus the angle of incidence $\\theta $ allowed us to compute the optical parameters of all layers of the MDM structure (optical thickness, refraction coefficients $n$ and absorption coefficients $k$) as functions of the voltage $U \\in [0,\\;30 V]$. The effective optical parameters of the layers were determined by the best agreement between experimental and theoretical curves.\n\n\tConventional methods and software were used to compute the theoretical angular spectrum (the method is described in \\cite{Palagushkin1,Palagushkin2} in detail). 
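Such an angular spectrum can also be reproduced with the textbook transfer-matrix (characteristic-matrix) method for $p$-polarized light. The sketch below is only an illustration of that general method, not the software cited above; it assumes the Kretschmann geometry with incidence from the substrate side, the He-Ne wavelength $\lambda = 632.8\,nm$, and the zero-field layer parameters of Table~\ref{table:table1}.

```python
# Minimal transfer-matrix sketch (characteristic matrices, p-polarization) for
# the angular SPR spectrum of the MDM stack.  Illustration only -- NOT the
# software cited in the text; layer values are the U = 0 parameters of Table I.
import numpy as np

WAVELENGTH = 632.8e-9          # He-Ne wavelength used in the experiment, m
N_PRISM = 1.525                # substrate/prism refractive index (Table I)
EPS_AG = (0.136 + 4.011j)**2   # silver permittivity, eps = (n + i*k)^2
EPS_AL2O3 = 1.659**2
# layers from the prism side: Ag cathode, Al2O3 spacer, Ag anode, protective Al2O3
LAYERS = [(EPS_AG, 36.46e-9), (EPS_AL2O3, 177.36e-9),
          (EPS_AG, 49.20e-9), (EPS_AL2O3, 12.02e-9)]

def reflectance(theta_deg):
    """|r_p|^2 for light incident from the prism at angle theta_deg."""
    k0 = 2 * np.pi / WAVELENGTH
    kx = k0 * N_PRISM * np.sin(np.radians(theta_deg))
    def q_kz(eps):
        kz = np.sqrt(eps * k0**2 - kx**2 + 0j)   # principal branch, Im >= 0
        return kz / eps, kz                       # p-polarization admittance
    m = np.eye(2, dtype=complex)
    for eps, d in LAYERS:
        q, kz = q_kz(eps)
        c, s = np.cos(kz * d), np.sin(kz * d)
        m = m @ np.array([[c, -1j * s / q], [-1j * q * s, c]])
    q_in, _ = q_kz(N_PRISM**2)
    q_out, _ = q_kz(1.0)                          # exit medium: air
    num = q_in * (m[0, 0] + m[0, 1] * q_out) - (m[1, 0] + m[1, 1] * q_out)
    den = q_in * (m[0, 0] + m[0, 1] * q_out) + (m[1, 0] + m[1, 1] * q_out)
    return abs(num / den)**2

angles = np.linspace(40.0, 60.0, 801)
curve = np.array([reflectance(a) for a in angles])
```

Scanning `reflectance` over $40^0$--$60^0$ gives a strongly angle-dependent curve with reflectivity minima at the plasmon resonances; the exact dip positions and depths depend on the assumed layer values.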
The optical constants of silver and corundum from the SOPRA database and the layer thicknesses measured during the deposition process were taken as initial values for the theoretical model in the absence of the external electric field. Then the theoretical curves $R = R(\\theta )$ were fitted to the experimental ones to correct the layer thicknesses, which were then used in further computations. The results of the simulation are shown in Figure~\\ref{fig:fig03} and in Table~\\ref{table:table1} for $U = 0$.\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{fig03}\n\\caption{\\label{fig:fig03}SPR spectrum when there is no electric field (the solid line corresponds to the theory, marks to the experiments).}\n\\end{figure}\n\n\\begin{table}\n\\caption{\\label{table:table1} The initial optical parameters of the layers ($U=0$)}\n\\begin{ruledtabular}\n\\begin{tabular}{ccccc}\nNo & material \t\t& $n$ \t& $k$ \t& thickness, ($nm$) \\\\\n1 & ${Al_2}{O_3}$\t& 1.659\t& 0\t\t& 12.02\\\\\n2 & $Ag$\t\t\t& 0.136\t& 4.011\t& 49.20\\\\\n3 & ${Al_2}{O_3}$\t& 1.659\t& 0\t\t& 177.36\\\\\n4 & $Ag$\t\t\t& 0.136\t& 4.011\t& 36.46\\\\\n5 & substrate\t\t& 1.525\t& 0\t\t& -\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\tFor a non-zero external electric field ($U > 0$), the thicknesses of the layers were kept fixed (Table~\\ref{table:table1}) in the computations, and the effective optical parameters of the $Ag$ and $A{l_2}{O_3}$ layers (refraction $n$ and absorption $k$ coefficients) were varied to reach the best agreement with the experimental spectrum. All parameters of the outer protective layer ($d = 12nm$) were also fixed according to Table~\\ref{table:table1} because there should be no electric field in it.\n\n\tComparing the experimental and theoretical curves allows the following conclusions. 
With no external field (Fig.~\\ref{fig:fig03}) the model is nearly perfect: the original values of the optical parameters of the layers are identical to those given in the SOPRA database. If there is an external field, the simulation needs closer consideration. If we assume that the optical properties of both $Ag$ layers remain the same in the presence of the electric field, the difference between theoretical and experimental data increases with the field strength, and we cannot eliminate it by varying $n$ and $k$. Indeed, the theoretical curves in Fig.~\\ref{fig:fig04} built on this assumption agree poorly with the experiment. This is clearly caused by the fact that the computations do not take into account nonlinear effects which can take place when the electric field strength is high. Surface charges caused by dielectric polarization and changes of the numbers of electrons in the metal layers should also be taken into consideration. The theory and experiment agree better if we assume that the properties of the cathode and anode $Ag$ layers differ dramatically. Validation of this approach and the corresponding simulation results are given in the next section.\n\n\\begin{figure*}\n\\includegraphics{fig04}\n\\caption{\\label{fig:fig04}SPR spectra for $U = 16 V$ and $U = 30 V$ (the marks are the experimental data, the solid lines are the computation results). The theoretical curves are built under the assumption that the optical parameters of both $Ag$ layers are the same. It is seen that these curves do not agree well with the experiment.}\n\\end{figure*}\n\n\\section{The results of the simulation}\nThe difference between the theory and experiment can be eliminated if we suppose that the optical parameters of the upper (anode) and lower (cathode) $Ag$ layers can be different and must be optimized separately. 
The separate optimization is possible under the assumption that charges accumulate at the dielectric-metal interface and the electron plasma density changes across the metal layers in a strong electric field.\n\n\tIndeed, when the dielectric layer is about $d_0\\sim200\\,nm$ thick and the applied voltage is nearly $30\\,V$, the surface charge density is $\\sigma = U\/(4\\pi \\varepsilon {d_0}) \\sim 40\\;CGSE$ (the permittivity of corundum is $\\varepsilon \\sim 10$). If the layer area is $1\\;c{m^2}$, the number of excess electrons on one of the $Ag$ layers (and deficient ones on the other) is $\\sim{10^{11}}$, while the full number of conduction electrons is $\\sim{10^{18}}$.\n\n\tLet us ask ourselves what the deficiency of electrons in the layer can result in. It is clear that the depth of the potential well for all electrons of the anode increases by $30\\;eV$. At the same time, we should take into account the changed band structure. The point is that the additional attraction force from the uncompensated positive charge brings about a shift of the conduction band and, of course, of the Fermi level ${E_F}$ down to a lower energy. Correspondingly, this results in a lowering of the interband absorption threshold ($\\sim4\\,eV$). However, the shift of the d-electron band is much smaller because of stronger binding to the nuclei.\n\nThe change of the interband absorption level changes the dielectric permittivity of the metal and, therefore, the parameters $n$ and $k$. For instance, a model function $\\varepsilon (\\omega )$ has been built for gold (see \\cite{Maier}) which allows for two interband transitions. Its eleven fitting parameters (resonance frequencies, resonance widths, complex amplitudes, asymptotic parameter, etc.) made it possible to reach good agreement with the data \\cite{Etchegoin}. In the case of silver a similar function has not been built so far, so we limit ourselves to a rough estimate using the formulae for gold \\cite{Maier}. 
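These order-of-magnitude numbers are easy to verify; the short sketch below simply evaluates the charge-density formula quoted above in CGS units (the voltage conversion and the electron charge are standard constants; the area of $1\;c{m^2}$ is the one assumed in the text).

```python
# Rough numeric check of the surface-charge estimate quoted in the text (CGS units).
import math

STATVOLT_PER_VOLT = 1.0 / 299.79   # 1 V expressed in statvolts
E_CHARGE = 4.803e-10               # electron charge, esu

U = 30.0 * STATVOLT_PER_VOLT       # applied voltage, statvolt
d0 = 200e-7                        # dielectric thickness 200 nm, in cm
eps = 10.0                         # permittivity of corundum, as in the text

sigma = U / (4 * math.pi * eps * d0)   # charge density from the quoted formula, CGSE
n_excess = sigma * 1.0 / E_CHARGE      # excess electrons on a 1 cm^2 layer
# sigma comes out ~40 CGSE and n_excess ~10^11, as stated in the text
```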
A $0.1\\,eV$ change of the threshold $\\Delta $ results in a change of the dielectric permittivity of about unity, which can be detected experimentally. Note that the formula presented in \\cite{Etchegoin2} describes the permittivity of $Ag$ well for photon energies below $3\\,eV$, which does not allow one to analyze the influence of the shift of the interband transition threshold on the permittivity value at $\\lambda = 632.8nm$ ($1.96\\,eV$).\n\n\tAt the same time, the excess of electrons on the cathode raises the interband absorption level. Excess electrons do not leave the metal as long as the attraction caused by image charges exceeds the repulsion caused by excess electrons. The condition of equilibrium of these forces can be written as ${{e}^{2}}\/{{l}^{2}}=eU\/(2d_0)$, where $U\/(2d_0)$ is the field strength on the cathode (the field inside the capacitor itself is twice as large) and $l$ is the electron-cathode distance. Then the saturation voltage is ${U_s} = 6 \\times {10^{ - 12}}\/{l^2}$. Let us suppose that slow electrons are scattered by defects whose concentration is about ${10^{18}}\\;c{m^{ - 3}}$. The mean free path is then about $10nm$, and the saturation voltage is a few volts in this case. A further increase of $U$ leads to a dynamic equilibrium, that is, the number of electrons arriving at the cathode per unit time equals the number of electrons leaking through the dielectric per unit time.\n\n\tThe above considerations suggest the necessity to optimize the optical parameters of the upper (anode) and lower (cathode) $Ag$ layers independently. Indeed, the modelling using independent optimization of the anode and cathode layers gives good agreement between theoretical and experimental curves. Figure~\\ref{fig:fig05} gives the results of the simulation of an SPR angular spectrum for $U = 16\\,V$ and $U = 30\\,V$. Comparing Figure~\\ref{fig:fig04} and Figure~\\ref{fig:fig05} shows that the independent optimization allows a significantly better theory-experiment agreement. 
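The image-charge balance used above can be checked the same way: solving ${e^2}/{l^2} = eU/(2d_0)$ for $U$ and converting statvolts to volts reproduces the quoted prefactor ${U_s} \approx 6 \times 10^{-12}/l^2$ and the few-volt estimate for $l \approx 10\,nm$ (a sketch with the values assumed from the text).

```python
# Check of the saturation-voltage estimate: e^2/l^2 = e*U/(2*d0)  =>  U_s = 2*d0*e/l^2.
E_CHARGE = 4.803e-10        # electron charge, esu
VOLT_PER_STATVOLT = 299.79
d0 = 200e-7                 # dielectric thickness, cm

# prefactor such that U_s[volt] = prefactor / l[cm]^2, to compare with 6e-12 in the text
prefactor = 2 * d0 * E_CHARGE * VOLT_PER_STATVOLT
l = 10e-7                   # mean free path ~10 nm, in cm
U_s = prefactor / l**2      # saturation voltage in volts -- "a few volts"
```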
The computed values of the optical properties for this kind of modelling are given in Table~\\ref{table:table2}. The first thing that draws attention is that the refractive index of the anode layer proves to be smaller than that of the cathode. This is an expected observation which agrees with measurements made during the deposition process and is explained by the fact that deposition on a smooth substrate always gives a better-quality $Ag$ layer than deposition on a relatively loose intermediate layer. Secondly, with growing voltage the $A{l_2}{O_3}$ layer starts exhibiting slight absorption and its refractive index decreases. Probably, this is caused by electrons arriving at this layer from the cathode. To avoid misunderstanding, we should point out that in the modelling it is impossible to take into account surface irregularities, interpenetration of layers and non-linear effects caused by a strong electric field. That is why the values of $n$ and $k$ given in Table~\\ref{table:table2} should be regarded as effective values that partially compensate for the limitations of this theoretical approach.\n\n\\begin{figure*}\n\\includegraphics{fig05}\n\\caption{\\label{fig:fig05}SPR spectra for $U = 16 V$ and $U = 30 V$ (marks \u2013 experiment, line \u2013 theory). 
The theoretical curves are built with separate optimizations of the $Ag$ parameters on the anode and cathode.}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{\\label{table:table2} The optical parameters of the MDM structure}\n\\begin{ruledtabular}\n\\begin{tabular}{ccccccc}\n & \n\\multicolumn{2}{c}{$Ag$ (anode), $d=49 nm$} &\n\\multicolumn{2}{c}{${Al_2}{O_3}$, $d=177 nm$} &\n\\multicolumn{2}{c}{$Ag$ (cathode), $d=36 nm$} \\\\\n$U(V)$ \t& $n$ \t\t& $k$ \t& $n$ \t& $k$ \t& $n$ \t& $k$ \\\\\n0 \t\t& 0.1360\t&4.0110\t&1.6908\t&0.0000\t&0.1341\t&4.0100\\\\\t\n4 \t\t& 0.1341\t&4.0109\t&1.6592\t&0.0022\t&0.1340\t&4.0100\\\\\n8 \t\t& 0.5928\t&4.2967\t&1.6593\t&0.0087\t&0.0881\t&3.4853\\\\\n12 \t& 0.5931\t&4.2969\t&1.6533\t&0.0116\t&0.0882\t&3.4852\\\\\n16 \t& 0.5932\t&4.2970\t&1.6531\t&0.0113\t&0.0882\t&3.4852\\\\\n20 \t& 0.3752\t&4.6909\t&1.5059\t&0.0583\t&0.0000\t&1.0263\\\\\n24 \t& 0.3717\t&4.6840\t&1.5064\t&0.0589\t&0.0000\t&1.0308\\\\\n28 \t& 0.3828\t&4.6885\t&1.5122\t&0.0622\t&0.0000\t&1.0619\\\\\n30 \t& 0.3778\t&4.7022\t&1.5161\t&0.0642\t&0.0000\t&1.0816\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\tThe voltage dependence of the optical parameters of the MDM layers is presented in Figure~\\ref{fig:fig06}. It is seen that all the optical parameters of the $Ag$ layers experience noticeable changes at $U\\sim8V$ and $U\\sim16V$. Similar irregular behaviour of the parameters is observed for the $A{l_2}{O_3}$ layers. Between these two voltage points the parameters $n$ and $k$ remain fairly stable for all layers. The exception is the absorption coefficient of the $A{l_2}{O_3}$ layer, which grows slightly on the interval $U =20-30V$. 
It is interesting that the refractive index of the cathode $Ag$ layer falls to zero when $U > 16V$, which means that the dielectric permittivity becomes strictly negative and loses its imaginary component.\n\n\\begin{figure*}\n\\includegraphics{fig06}\n\\caption{\\label{fig:fig06}The refractive indices and absorption coefficients versus the voltage $U$. The top panels show the behaviour of the parameters of the cathode and anode $Ag$ layers, the bottom ones refer to the $A{l_2}{O_3}$ layers.}\n\\end{figure*}\n\n\tFollowing the changes of the parameters $n$ and $k$, the reflection coefficient also changes sharply. The most pronounced changes in the reflection of the MDM structure occur when the voltage exceeds $16V$. An experiment on this MDM structure showed that a voltage increase up to $U =20-30V$ results in a $4.2$-fold decrease of the reflection coefficient at ${\\theta} = {42.9^0}$ and a $5.4$-fold increase at ${\\theta} = {55.8^0}$. Calculations show that the contrast of the reflectivity can be increased dramatically by varying the layer thicknesses slightly. For instance, if we increase the anode layer thickness to $54\\,nm$, we get a structure which at the fixed angle ${\\theta} = {60.5^0}$ may be considered an optical switch: zero reflection at $U=0$ and $20\\% $ reflectivity when $U$ increases to $30V$.\n\n\\section{Conclusions}\nWe have discovered significant changes of the dielectric permittivity of silver when a constant voltage of up to $30V$ is applied to an MDM-structure waveguide. It is shown that the reflection coefficient can be modulated over a wide range by varying the voltage. Looking forward, the effect may be used in the development of electrically controlled optical valves.\n\n\tWe have not yet found an acceptable theory that can explain the abrupt change of the optical parameters of the MDM structure. 
This will be the goal of future research.\n\n\\begin{acknowledgments} \nThe work was supported by the project 2.1 of the RAS Presidium and RFBR grant 11-07-92470.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\pagestyle{plain}\n\nDuring the last two years, new connections between integrable\nmodels and quantum groups have been established. The representation\ntheory of quantum groups at roots of unity provides new solutions\nto the star--triangle equation, enlarging in this way the\nfamily of lattice integrable models. Most of these new solutions\ncan be obtained by applying the ``descendent''--procedure [1] to\nwell-known models such as the 6--vertex model.\n\nGiven an $R$ matrix satisfying the Yang--Baxter equation, the first\nstep in the descendent procedure is to find all possible\nsolutions to:\n\n\\begin{equation}\nR (\\lambda, \\mu) \\left[ L (\\lambda) \\otimes L (\\mu) \\right] =\n\\left[ L (\\mu) \\otimes L (\\lambda) \\right] R (\\lambda, \\mu)\n\\label{1}\n\\end{equation}\n\nIf the $R$--matrix is, for instance, the quantum $R$ matrix of\n$\\widehat{SU (2)}_q$ in the 1\/2--representation, then we should\nexpect the solutions of (1) to be in one-to-one correspondence with the\ndifferent irreps of the Hopf algebra $\\widehat{SU (2)}_q$.\nIt is in this way that the representation theory plays an\nimportant role in the discovery of new solutions to the star--triangle equations\nby the descendent procedure [2]. 
The second\nstep of this method is to find, for two given solutions\n$L^{\\rho_1}, L^{\\rho_2}$ of (1), a new $R$--matrix which is a solution to\nthe Yang--Baxter equation and satisfies:\n\n\\begin{equation}\n{\\cal R}^{\\rho_1 \\rho_2} \\left( L^{\\rho_1} \\otimes L^{\\rho_2}\n\\right) = \\left( L^{\\rho_2} \\otimes L^{\\rho_1} \\right) {\\cal\nR}^{\\rho_1 \\rho_2}\n\\label{2}\n\\end{equation}\n\nIf we interpret $L^{\\rho_1}, L^{\\rho_2}$ as associated with two\ndifferent irreps $(\\rho_1, \\rho_2)$ of $\\widehat{SU (2)}_q$, the\nnew ${\\cal R}$--matrix will define the intertwiner between these\ntwo representations. The two steps of the descendent--procedure\nare graphically represented in fig. 1.\n\n\\vspace{4cm}\n\\begin{center}\nFig. 1. \\underline{The descendent--equations}\n\\end{center}\n\n\\vspace{0.3cm}\nThe simplest example of this procedure is the one known as\nfusion, where starting, for instance, with the $R^{1\/2,\n1\/2}$--matrix of $\\widehat{SU (2)}_q$ we obtain the intertwiner\nfor regular representations with higher spins [3].\n\nThe case of quantum groups at roots of unity is a very\ninteresting example for the application of this technique. In\nthis case, the number of different finite dimensional irreps is\nmuch bigger as a consequence of the existence of new central\nelements in the Hopf algebra [4]. Hence for the $SU(2)_q$ case, the\ncentral Hopf subalgebra for $q = \\varepsilon$ an Nth--root of unity\ncontains, in addition to the standard Casimir, the new elements\n$E^N, F^N, K^N$. The different irreps can be divided into two\nmain sets: i) regular representations and ii)\ngeneric representations. The latter are classified into\ncyclic, with non-vanishing eigenvalues for $E^N$ and $F^N$;\nsemicyclic, with vanishing eigenvalue for either $E^N$ or $F^N$ but\nnot both; and nilpotent, for which $E^N = F^N = 0$ but $K^N$ takes\ngeneric values. 
Starting with the $R$--matrix of $\\widehat{SU\n(2)}_q$ it is possible to get solutions to the descendent\nequations (1) and (2) associated with cyclic, semicyclic and\nnilpotent representations. The first case, i.e. the cyclic one, was first\ndiscovered by Bazhanov and Stroganov [1] and by the Kyoto group [5]. Surprisingly\nenough, the solution obtained coincides with the one\ncorresponding to the chiral Potts model [6]. These kinds of models are\nespecially important from a mathematical point of view. Their\nspectral manifold is a higher-genus curve:\n\n\\begin{equation}\n\\Gamma_k \\; : \\; x^N + y^N = k (1 + x^N y^N)\n\\label{3}\\end{equation}\n\n\\noindent\nwith $k$ a parameter of the model (for instance $k=0$\ncorresponds to the Fateev--Zamolodchikov model). Given two irreps\n$(\\xi_1, \\xi_2)$ of $\\widehat{SU (2)}_q$ with $q^N =1$ and\n$\\xi \\in Spec Z_q$, where $Z_q$ is the central Hopf subalgebra,\nthe intertwining condition defines a submanifold in $Spec Z_q$\nwhich can be shown to be isomorphic to the product of two copies\nof the spectral curve $\\Gamma_k$ of the chiral Potts model.\nMoreover, the solution $R(\\xi_1 \\xi_2)$ to equation (2)\nfactorizes into four pieces which can be represented in terms of\nthe Boltzmann weights $W, \\bar{W}$ of the chiral Potts model [5].\n\nThe new solution which we would like to discuss in this lecture\ncorresponds to the case of semicyclic and nilpotent\nrepresentations. The model defined in this way shares some\nsimilarities with chiral Potts and with the Heisenberg--Ising models with\nhigher spin. 
However, it presents some new features which we now move\non to present.\n\n\\section{The semi--cyclic descendent}\n\nIn a previous paper [7] we have presented the $R$ matrix solution\nto the Yang--Baxter and the intertwiner conditions:\n\n$$\n(1 \\otimes R (\\xi_1 \\xi_2)) (R (\\xi_1 \\xi_3) \\otimes 1) (1\n\\otimes R (\\xi_2 \\xi_3)) =\n\\eqno{(3.a)}\n$$\n\n\\[\n= (R (\\xi_2 \\xi_3) \\otimes 1) (1 \\otimes R(\\xi_1 \\xi_3)) (R\n(\\xi_1 \\xi_2) \\otimes 1)\n\\]\n\n$$\nR (\\xi_1 \\xi_2) \\Delta_{\\xi_1 \\xi_2} (a) = \\Delta_{\\xi_2 \\xi_1}\n(a) R (\\xi_1 \\xi_2)\n\\eqno{(3.b)}\n$$\n\n\\[\na \\in SU (2)_q \\; \\; \\; q^N = 1\n\\]\n\n\\noindent\nfor $\\xi_1, \\xi_2$ semicyclic representations of $SU(2)_q$.\nDenoting by $x, y, \\lambda^N$ the eigenvalues of $E^N, F^N$ and\n$K^N$ respectively the semicyclic representations are\nparametrized by $(y_1, \\lambda_1)$, $(y_2, \\lambda_2)$. Using (3.b)\nfor $a$ in the center, we get the intertwining constraints on\nthe values of $\\xi_1, \\xi_2$:\n\\begin{equation}\n\\frac{y_1}{1 - \\lambda^N_1} = \\frac{y_2}{1 - \\lambda^N_2} = \\chi\n\\label{4}\n\\end{equation}\n\n\\noindent\nwith $\\chi$ an arbitrary complex number. The solution $R (\\xi_1\n\\xi_2)$ satisfies:\n\n\\begin{equation}\n\\begin{array}{rl}\ni) & R (\\xi \\xi) = 1 \\\\\nii) & R (\\xi_1 \\xi_2) R (\\xi_2 \\xi_1) = 1 \\otimes 1 \\\\\niii) & R (\\xi_1 \\xi_2) = P R (\\xi_2 \\xi_1) P\n\\end{array}\n\\label{5}\n\\end{equation}\n\n\\noindent\nwhere $P$ is the permutation operator.\n\nFor more details on this solution see reference [7].\n\nFollowing the spirit of the descendent--technology we proceed\nnow to find solutions to equation (1), for $R$ the quantum\n$R$--matrix of $\\widehat{SU(2)}_q$ in the 1\/2--representation\ni.e. the six vertex model, which can be associated with\nsemicyclic representations of $\\widehat{SU(2)}_q$ at $q =\n\\varepsilon$ an Nth--root of unit (we shall consider $N$ to be\nan odd integer). 
We will first consider the simplest\nnontrivial case: $x = y = 0$ and $\\lambda$ generic (the\nregular representations correspond to $\\lambda =\n\\varepsilon^{2s}$ with $s$ integer or half-integer spin).\n\nThe 6--vertex $R$--matrix is given by [3]:\n\n\\begin{equation}\nR^{1\/2, 1\/2} (u) = sh \\left[ u + i\\gamma \\frac{1}{2} (1 + \\sigma^3\n\\otimes \\sigma^3) \\right] + i \\; {\\rm sin\\gamma} \\; (\\sigma^+ \\otimes\n\\sigma^- + \\sigma^- \\otimes \\sigma^+)\n\\label{6}\n\\end{equation}\n\nEquation (1) now reads:\n\n\\begin{equation}\nR^{1\/2, 1\/2} (u-v) (L (u) \\otimes L(v)) = (L (v) \\otimes L(u))\nR^{1\/2, 1\/2} (u-v)\n\\label{7}\n\\end{equation}\n\nThe solutions we are interested in are $L^{(\\lambda)} (u)$--matrices\nacting on $V^{1\/2} \\otimes V^\\lambda$ such that:\n\n\\begin{equation}\nL^{(\\lambda)} (u) : V^{1\/2} \\otimes V^\\lambda \\rightarrow V^{1\/2} \\otimes\nV^{\\lambda}\n\\label{8}\n\\end{equation}\n\n\\noindent\nwhere $V^\\lambda$ is the irrep of $\\widehat{SU\n(2)}_\\varepsilon$ with eigenvalue of $K^N = \\lambda^N$ and where\nthe tensor product in (7) is defined with respect to the\n$V^{1\/2}$--indices. Notice that the $L^{(\\lambda)} (u)$--matrices we\nare looking for define the intertwiners between the irreps 1\/2\nand $\\lambda$. 
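It is a quick numeric exercise to confirm that the matrix (6) indeed satisfies the Yang--Baxter equation $R_{12}(u-v) R_{13}(u) R_{23}(v) = R_{23}(v) R_{13}(u) R_{12}(u-v)$. The sketch below does this for arbitrary sample values of $u$, $v$ and $\gamma$; the only identification assumed is $sh(i\gamma) = i\,\sin\gamma$ for the off-diagonal weight.

```python
# Numeric check that the 6-vertex R-matrix (6) satisfies the Yang-Baxter
# equation R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v) on C^2 x C^2 x C^2.
import numpy as np
from cmath import sinh

def R(u, gamma):
    a, b, c = sinh(u + 1j * gamma), sinh(u), 1j * np.sin(gamma)
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

I2 = np.eye(2)
P23 = np.kron(I2, np.eye(4)[[0, 2, 1, 3]])   # swap of the 2nd and 3rd factors

def R12(m): return np.kron(m, I2)
def R23(m): return np.kron(I2, m)
def R13(m): return P23 @ np.kron(m, I2) @ P23

u, v, gamma = 0.37, -0.58, 0.91              # arbitrary sample values
lhs = R12(R(u - v, gamma)) @ R13(R(u, gamma)) @ R23(R(v, gamma))
rhs = R23(R(v, gamma)) @ R13(R(u, gamma)) @ R12(R(u - v, gamma))
```

Equality holds for any $u$, $v$, $\gamma$, in particular for the root-of-unity values $\gamma = 2\pi/N$ relevant here.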
In matrix notation the solution is given by:\n\n$$\nL^{(\\lambda)} (u) = \\frac{1}{e^u \\varepsilon^{1\/2} \\lambda^{1\/2}\n- e^{-u} \\varepsilon^{-1\/2} \\lambda^{-1\/2}} \\times\n$$\n\n\\begin{equation}\n\\times \\left( \\begin{array}{cc}\ne^u \\varepsilon^{1\/2} K^{1\/2}\n- e^{-u} \\varepsilon^{-1\/2} K^{-1\/2} & (\\varepsilon -\n\\varepsilon^{-1}) \\varepsilon^{-1\/2} FK^{1\/2} \\\\\n(\\varepsilon- \\varepsilon^{-1}) \\varepsilon^{-1\/2} E K^{-1\/2} &\ne^u \\varepsilon^{1\/2} K^{-1\/2}\n- e^{-u} \\varepsilon^{-1\/2} K^{1\/2} \\end{array} \\right)\n\\label{9}\n\\end{equation}\n\n\n\\noindent\nwith $E, K, F$ the generators of $SU(2)_q$, which satisfy the relations:\n\n\\[\nKE = \\varepsilon^2 E K \\; \\; , \\; \\; KF = \\varepsilon^{-2} FK\n\\]\n\n\\begin{equation}\n[E, F] = \\frac{K - K^{-1}}{\\varepsilon - \\varepsilon^{-1}}\n\\label{10}\n\\end{equation}\n\nThe representation $V^\\lambda$ is defined in the basis $\\{ |r> \\}_{r=0}^{N-1}$ by\n\n\\begin{equation}\n\\begin{array}{l}\nE | r> = d_{r-1} | r-1> \\\\\nF | r> = d_{r} | r+1> \\\\\nK | r> = \\lambda \\varepsilon^{-2r} | r>\n\\end{array}\n\\label{11}\n\\end{equation}\n\n\\[\nd^2_j (\\lambda) = [j+1] \\frac{\\lambda \\varepsilon^{-j} -\n\\lambda^{-1} \\varepsilon^j}{\\varepsilon - \\varepsilon^{-1}}\n\\]\n\\[\n[ x] = \\frac{\\varepsilon^x - \\varepsilon^{-x}}{\\varepsilon - \\varepsilon^{-1}}\n\\]\n\nFor a graphic representation of the matrix $L^{(\\lambda)}(u)$ see\nFig. 2. Notice that each entry in the $2 \\times 2$ matrix (9)\nrepresents an operator acting on the space $V^\\lambda$.\n\n\\vspace{5cm}\n\\begin{center}\nFig. 2. 
Graphic representation of the $L^{(\\lambda)}(u)$--matrix\n\\end{center}\n\n\\vspace{1cm}\nThe second step in the descendent procedure corresponds to\nfinding the $R$--matrix solution to the equation (2):\n\n\\begin{equation}\n{\\cal R}^{\\lambda_1 \\lambda_2} (u-v) \\left( L^{(\\lambda_1)} (u)\n\\otimes L^{(\\lambda_2)} (v) \\right) = \\left( L^{(\\lambda_2)} (v)\n\\otimes L^{(\\lambda_1)} (u) \\right) R^{\\lambda_1 \\lambda_2} (u-v)\n\\label{12}\n\\end{equation}\n\n\\noindent\nwhere now the tensor product is defined with respect\nto the \\underline{$V^{\\lambda_1}$ - $V^{\\lambda_2}$} \\underline{indices} of the\n$L^{(\\lambda)}$ (u) - matrices (see fig. 1 B). The solution to\n(12) defines the intertwiner between the representations\n$\\lambda_1, \\lambda_2$ and it is given by:\n\n\\[\nR^{\\lambda_1 \\lambda_2} (u)^{l, r_1 + r_2 - l}_{r_1 r_2} =\n\\frac{\\epsilon^{(r_1 + r_2 - l) l - r_1\nr_2}}{\\prod^{r_1+r_2-1}_{j=0} (e^u \\lambda_1 \\lambda_2\n\\epsilon^{-j} - e^{-u} \\epsilon^j)} \\times\n\\]\n\n\\[\n\\times {\\sum^{r_1}_{l_1 = 0}\n\\sum^{r_2}_{l_2 = 0}} \\left[ \\begin{array}{c} r_1\n\\\\ l_1 \\end{array} \\right] \\left[ \\begin{array}{c} r_2 \\\\ l_2\n\\end{array} \\right] \\frac{[l] ! 
[r_2 - l_2] !}{[r_1 + l_2] !\n[r_2]!} (\\epsilon - \\epsilon^{-1})^{r_1 - l_1 +l_2}\n\\]\n\n\\[\n\\times \\prod^{r_1 + l_2 -1}_{j= r_1} d_j (\\lambda_1) \\prod^{r_1\n+l_2-1}_{j=l_1 +l_2} d_j (\\lambda_1) \\prod^{r_2\n- 1}_{j=r_2 - l_2} d_j (\\lambda_2) \\prod^{r_1+r_2\n-l-1}_{j=r_2 -l_2}\n\\]\n\n\\begin{equation}\\times \\lambda^{l_2}_1 \\lambda^{r_1-l_1}_2 \\prod^{r_2 -l_2\n-1}_{j=0} (e^u \\lambda_2 \\epsilon^{-j} - e^{-u} \\lambda_1\n\\epsilon^j) \\prod^{l_1 -1}_{j=0} (e^u \\lambda_1 \\epsilon^{-j+r_2-l_2}\n-e^{-u} \\lambda_2\n\\epsilon^{j+l_2 -r_2})\n\\label{13}\n\\end{equation}\n\n\\noindent\nwith the following conventions: a) whenever in the above products the upper index\nis less than the lower index the result is one;\nb) the constraint $l_1 + l_2 = l$ must be used to carry out the summation.\n\nIt is easy to check that this solution coincides, in the case $u=0$, with the $R^{\\xi_1 \\xi_2}$--matrix of reference [7] for\n$\\xi_1, \\xi_2$ two semi--cyclic irreps with $x=y=0$ and $K^N =\n\\lambda^N_1, \\lambda^N_2$. Moreover, for $\\lambda_1 = \\lambda_2\n= \\epsilon^{2s}$ eqn. (13) gives us the spin-$s$ $R$--matrix.\n\nSummarizing, starting with the $R$--matrix of\nthe six--vertex model for $q$ an Nth--root of unity, we have obtained a class of\ndescendent models characterized by the quantum $R$--matrix (13).\nThe transfer matrix of these models can be defined for periodic\nboundary conditions as follows (see fig. 3):\n\n\\begin{equation}\nT_{\\lambda_0} (u, \\lambda) : \\otimes^L V^{\\lambda_0}\n\\rightarrow\n\\otimes^L V^{\\lambda_0}\n\\label{14}\n\\end{equation}\n\n\\begin{equation}\n\\langle r'_1 \\cdots r'_N | T_{\\lambda_0} (u, \\lambda) | r_1,\nr_2 \\cdots r_N \\rangle = \\left( \\sum_{l's} R^{\\lambda,\n\\lambda_0} (u)_{l_1 r_1}^{l_2 r'_1} R^{\\lambda \\lambda_0}\n(u)^{l_3 r'_2}_{l_2 r_2} \\cdots \\right)\n\\label{15}\n\\end{equation}\n\n\\noindent\nwhere $\\lambda_0$ appears as a parameter characterizing the model\nand $u$ and $\\lambda$ as spectral variables. 
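As a sanity check on the building blocks entering this transfer matrix, the nilpotent representation (11) can be generated as $N \times N$ matrices and tested numerically: one verifies directly that they satisfy $KE = \varepsilon^2 EK$, $KF = \varepsilon^{-2} FK$ and $[E,F] = (K - K^{-1})/(\varepsilon - \varepsilon^{-1})$, together with $E^N = F^N = 0$ and $K^N = \lambda^N \cdot 1$. A sketch, with $N$ and $\lambda$ arbitrary sample choices and $\varepsilon = e^{2\pi i/N}$:

```python
# Numeric construction of the nilpotent representation (11) of SU(2)_eps,
# eps an N-th root of unity.  N and lambda below are arbitrary sample choices.
import numpy as np

N = 5                                   # odd, as assumed in the text
eps = np.exp(2j * np.pi / N)
lam = 0.7 + 0.4j                        # generic lambda

def qint(x):                            # quantum integer [x]
    return (eps**x - eps**(-x)) / (eps - eps**(-1))

# d_j^2 = [j+1](lam eps^{-j} - lam^{-1} eps^j)/(eps - eps^{-1}); note d_{N-1} = 0
d = np.array([np.sqrt(qint(j + 1) * (lam * eps**(-j) - eps**j / lam)
                      / (eps - eps**(-1))) for j in range(N)])

E = np.diag(d[:N - 1], k=1)             # E|r> = d_{r-1}|r-1>
F = np.diag(d[:N - 1], k=-1)            # F|r> = d_r|r+1>
K = np.diag(lam * eps**(-2 * np.arange(N)))
Kinv = np.diag(1.0 / np.diag(K))
```

Since $[N] = 0$ at a root of unity, $d_{N-1}$ vanishes automatically, which is what truncates the representation to dimension $N$.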
The Yang-Baxter\nrelation for the $R$--matrix (eq. 13) implies the integrability equation:\n\n\\begin{equation}\n\\left[ T_{\\lambda_0} (u, \\lambda'), T_{\\lambda_0} (u', \\lambda'') \\right]\n= 0\n\\label{16}\n\\end{equation}\n\nEquation (16) is the standard integrability condition for\nsoluble models, with the important peculiarity that the spectral\nvariables are now living on a manifold of complex dimension\nequal to 2. Notice that for the kind of irreps we are considering\n(i.e. with vanishing eigenvalues of $E^N$ and $F^N$ and generic\neigenvalue of $K$) the intertwining condition does not impose any\nconstraint on the allowed values of $\\lambda$.\n\n\\vspace{4cm}\n\\begin{center}\n$L = \\#$ sites in the row.\n\nFig. 3: The transfer matrix $T_{\\lambda_0} (\\lambda, u)$\n\\end{center}\n\n\\section*{3. The associated 1D--chain}\n\nGiven the transfer matrix (14-15) we can define a local 1D\nhamiltonian as follows:\n\n\\begin{equation}\n\\left. H^{(\\lambda_0)} = i \\frac{\\partial}{\\partial u} \\ln\nT_{\\lambda_0} (u, \\lambda_0) \\right|_{u= 0}\n\\label{17}\n\\end{equation}\n\n\\noindent\nwhich will be Hermitian for certain allowed regions of\n$\\lambda_0$. This hamiltonian defines a one-dimensional\nspin chain with the Hilbert space given by:\n\n\\begin{equation}\n{\\cal H} = \\otimes^L V^{\\lambda_0}\n\\label{18}\n\\end{equation}\n\nThe most natural interpretation of this spin--chain is as a\ngeneralization to continuous ``spin'' of the higher spin quantum\nchains defined in references [8]. 
In fact, as we have\nalready mentioned, for the special case $\\lambda_0 =\n\\epsilon^{2S_{max}} (2S_{max} +1 = N)$ the hamiltonian (17)\ncoincides with the hamiltonian $H^{S_{max}}_\\epsilon$ of spin\n$S_{max}$ and anisotropy depending on $\\epsilon$.\n\nThe two main new features of the model defined by the transfer\nmatrix (14) are:\n\n\\begin{itemize}\n\\item[i)] All the $V^{\\lambda_0}$ representations are of the\nsame dimension $N$. This is specific to working with the\nquantum group at roots of unity.\n\\item[ii)] The integrability condition (16) with two independent\nspectral parameters.\n\\end{itemize}\n\nThe main consequence of (16) for the spin chain defined by (17)\nis the existence of a local operator $Q^{(\\lambda_0)}$ defined by:\n\n\n\\begin{equation}\nQ^{(\\lambda_0)} = \\left. 2 \\lambda \\frac{\\partial}{\\partial \\lambda} \\ln\nT_{\\lambda_0} (u = 0,\\lambda) \\right|_{\\lambda = \\lambda_0}\n\\label{19}\n\\end{equation}\n\n\\noindent\nsuch that:\n\n\\begin{equation}\n\\left[ H^{\\lambda_0}, Q^{\\lambda_0} \\right] = 0\n\\label{20}\n\\end{equation}\n\nDenoting $\\xi \\equiv (u, \\lambda)$, the transfer matrix (14) can\nbe written as:\n\n\\begin{equation}\nT_{\\xi, \\xi'} = T_{(u, \\lambda)|(u',\\lambda_0)} \\equiv\nT_{\\lambda_0} (u - u', \\lambda)\n\\label{21}\n\\end{equation}\n\n\\noindent\nwhere we have used the difference property with respect to the\nspectral variable $u$. The most general local hamiltonian we can\ndefine is:\n\n\\begin{equation}\n{\\cal H} = \\frac{d}{dt} \\left. \\ln T_{\\xi, \\xi'(t)}\n\\right|_{t= 0}\n\\label{22}\n\\end{equation}\n\n\\noindent\nwith $\\lim_{t\n\\rightarrow 0} \\xi'(t) = \\xi$. This definition of the hamiltonian would\nbe, in spirit, very close to the way a 1D hamiltonian is defined\nfor the chiral Potts model. However, it is clear that in our case\nany Hamiltonian obtained as indicated by eqn. 
(22) will be a\nlinear combination of $H^{(\\lambda_0)}$ and $Q^{(\\lambda_0)}$.\nBefore entering into the detailed study of the 1D hamiltonians, it\nis worthwhile to discuss the differences between the models we\nare defining and the chiral Potts model. The model with which we can compare\nours is the Fateev--Zamolodchikov model. The way the cyclicity\nof the irreps, entering into the definition of this model, is reflected\nin the physics is through the $Z(N)$--invariance. In fact, the spectrum of\nthe hamiltonian decomposes into different sectors with well\ndefined $Z(N)$--charge.\n\nIn the noncyclic case we are presenting here, there exists a well\ndefined ``reference state\" $| \\Omega_0 > \\in \\otimes^L V^{\\lambda_0}$:\n\n\\begin{equation}\n| \\Omega_0 > = | 0,0 \\cdots 0 >\n\\label{23}\n\\end{equation}\n\nRecall that the states in $\\otimes^L V^\\lambda$ are given by $|r_1\nr_2 \\cdots r_L>$ with $r_i = 0,\\cdots,N-1$. The reference state\nis the natural generalization of the ferromagnetic state with\nall the spins up. The conserved number, as in the case of the\nHeisenberg chains, is the total number of spins ``down'', which in our\ncase is given by:\n\n\\begin{equation}\n\\sum^L_{i=1} r_i = \\rho.\n\\label{24}\n\\end{equation}\n\nDifferent sectors correspond to different values of $\\rho$. In the\nchiral Potts model, i.e. the cyclic case, there is no good\ndefinition of a ``reference state\" and the conserved quantum\nnumber is given by the $Z(N)$ charge defined by:\n\n\\begin{equation}\nX = e^{2 i\\pi Q\/N}.\n\\label{25}\n\\end{equation}\n\n\\noindent\nwhere\n\n\\begin{equation}\nX | r_1 \\cdots r_L >= |r_1 +1 \\cdots r_L +1>.\n\\label{26}\n\\end{equation}\n\nIn the next section, using the fact that we have a good\nreference state, we will proceed to diagonalize the hamiltonian\n$H^{\\lambda_0}$ by using the Bethe Ansatz technique.\n\n\\section*{4. 
Bethe Ansatz Equations}\n\nIn terms of the $L^{\\lambda_0} (u)$--matrices we define the\nmonodromy matrix by:\n\n\\begin{equation}\nt_{\\lambda_0} (u) = L^{\\lambda_0}_L (u) \\cdots L^{\\lambda_0}_2 (u)\nL^{\\lambda_0}_1 (u)\\label{27}\n\\end{equation}\n\n\\noindent\nwith $L$ the length of the row in lattice units. The monodromy\nmatrix $t_{\\lambda_0}(u)$ satisfies:\n\n\\begin{equation}\nR^{1\/2, 1\/2} (u - v) t_{\\lambda_0} (u) t_{\\lambda_0} (v)\n= t_{\\lambda_0} (v) t_{\\lambda_0} (u) R^{1\/2, 1\/2} (u - v)\n\\label{28}\n\\end{equation}\n\nRepresenting it as a $2 \\times 2$ matrix:\n\n\\begin{equation}\nt_{\\lambda_0} (u) = \\left[ \\begin{array}{cc} A_{\\lambda_0} (u) &\nB_{\\lambda_0} (u) \\\\\nC_{\\lambda_0} (u) & D_{\\lambda_0} (u) \\end{array} \\right]\n\\label{29}\n\\end{equation}\n\n\\noindent\nwe obtain the operators $B_{\\lambda_0} (u) : \\otimes^L\nV^{\\lambda_0} \\rightarrow \\otimes^L V^{\\lambda_0}$ which can be\ninterpreted as creating on the reference state (23) an\n``elementary excitation\". 
Notice that:\n\n\\begin{equation}\nB_{\\lambda_0} (u_1) B_{\\lambda_0} (u_2) = B_{\\lambda_0} (u_2)\nB_{\\lambda_0} (u_1)\n\\label{30}\n\\end{equation}\n\n$$\nC_{\\lambda_0} (u) | \\Omega_0 > = 0.\n\\eqno{(30;b)}\n$$\n\nTo diagonalize the transfer matrix (14-15) we use the standard\nalgebraic Bethe Ansatz:\n\n\\begin{equation}\n|\\psi> = \\prod^M_{i=1} B_{\\lambda_0} (u_i) | \\Omega_0 >\n\\label{31}\n\\end{equation}\n\nThe number $M$ of ``excitations\" is, for our model, a conserved\nquantum number and, therefore, we can diagonalize the transfer\nmatrix in each sector.\n\nUsing equation (9) it is easy to find the eigenvalues $\\Lambda\n(\\lambda_0, \\lambda, u; \\{ u_i \\}_{i=1, \\cdots, M})$ of the\ntransfer matrix $T$ in each sector:\n\n\\begin{equation}T_{\\lambda_0} (u, \\lambda) |\\psi> = \\Lambda (\\lambda_0,\n\\lambda,\nu; \\{u_i \\}) |\\psi > +\\;unwanted\\;terms.\n\\label{32}\n\\end{equation}\n\n\\noindent\nwith\n\n\\begin{equation}\n\\Lambda (\\lambda_0, \\lambda,\nu; \\{u_i \\}) = \\sum^{N-1}_{r=0} (R^{r0}_{r0} (\\lambda,\n\\lambda', u))^L \\prod^M_{i=1} {\\cal L}^{\\lambda'}_r (u - u_i)\n\\label{33}\n\\end{equation}\n\n\\noindent\nwhere the $R$ matrix is the one given in (12) and\n${\\cal L}^{\\lambda'}_r (u-u_i)$ is:\n\n\\begin{equation}\n{\\cal L}^{\\lambda'}_r (u)= \\frac{(e^u \\epsilon^{1\/2}\n\\lambda^{\\prime 1\/2} - e^{-u} \\epsilon^{-1\/2} \\lambda'^{-1\/2}) (e^u\n\\epsilon^{-1\/2} \\lambda'^{-1\/2} - e^{-u} \\epsilon^{1\/2}\n\\lambda'^{1\/2})}{(e^u \\epsilon^{1\/2 - r} \\lambda'^{1\/2} - e^{-u}\n\\epsilon^{r-1\/2} \\lambda'^{-1\/2}) (e^u \\epsilon^{-r-1\/2}\n\\lambda'^{1\/2} - e^{-u} \\epsilon^{r+1\/2} \\lambda'^{-1\/2})}\n\\label{34}\n\\end{equation}\n\nTo fix the values of the ``rapidities'' $u_i$ we must eliminate\nthe unwanted terms in (32). This can be done using equation (28)\nor, more easily, by imposing that the residues at the poles both in\n$u$ and $\\lambda$ of (33) vanish. 
The result is:\n\n\\begin{equation}\n\\left( \\frac{e^{u_j} \\epsilon^{1\/2} \\lambda^{1\/2}_0 - e^{-u_j}\n\\epsilon^{-1\/2} \\lambda^{-1\/2}_0}{e^{u_j} \\epsilon^{1\/2}\n\\lambda^{-1\/2}_0 - e^{-u_j} \\epsilon^{-1\/2} \\lambda^{1\/2}_0}\n\\right)^L = \\prod^{M}_{\\begin{array}{c} k=1 \\\\ k \\neq j\n\\end{array}} \\frac{sh (u_j - u_k + i \\gamma)}{sh (u_j - u_k - i \\gamma)}\n\\label{35}\n\\end{equation}\n\n\\noindent\nwhere $\\epsilon = e^{i \\gamma}$. In the case $\\lambda_0 =\n\\epsilon^{2s}$ with $2s$ an integer, i.e. regular representations of spin\n$s$, equations (35) become the Bethe Ansatz equations for the higher spin\nHeisenberg--Ising chains of references [8].\n\n{}From (33) we can find the eigenvalues of the operators\n$H^{\\lambda_0}$ and $Q^{\\lambda_0}$ defined in the previous section:\n\n\\begin{equation}\nE^{\\lambda_0} = 2i \\sum^M_{j=1} \\frac{\\lambda_0 -\n\\lambda^{-1}_0}{(e^{u_j} \\varepsilon^{1\/2} \\lambda^{1\/2}_0 -e^{-u_j}\n\\varepsilon^{-1\/2} \\lambda^{-1\/2}_0 ) (e^{u_j}\n\\varepsilon^{1\/2} \\lambda^{-1\/2}_0 - e^{-u_j} \\varepsilon^{-1\/2}\n\\lambda^{1\/2}_0)}\n\\label{36}\n\\end{equation}\n\n\\begin{equation}\nQ^{\\lambda_0} = 2 \\sum^M_{j=1}\n\\frac{e^{2u_j} \\varepsilon - e^{-2u_j} \\varepsilon^{-1}}{(e^{u_j}\n\\varepsilon^{1\/2} \\lambda^{1\/2}_0 -\ne^{-u_j} \\varepsilon^{-1\/2} \\lambda^{-1\/2}_0 ) (e^{u_j}\n\\varepsilon^{1\/2} \\lambda^{-1\/2}_0 - e^{-u_j} \\varepsilon^{-1\/2}\n\\lambda^{1\/2}_0)}\n\\label{37}\n\\end{equation}\n\nFor the intermediate case corresponding to $\\lambda_0 =\n\\varepsilon^{2s}$ with arbitrary $s$, the equations (36) and (37) become:\n\n\\begin{equation}\nE^s = - \\sum^M_{j=1} \\frac{\\sin (2 \\gamma s)}{sh [\n\\frac{\\gamma}{2} (\\alpha_j + 2 is)] sh [ \\frac{\\gamma}{2} (\\alpha_j -\n2 is)]}\n\\label{38}\n\\end{equation}\n\n\\begin{equation}\nQ^s = \\sum^M_{j=1} \\frac{sh ( \\gamma \\alpha_j)}{sh [\n\\frac{\\gamma}{2} (\\alpha_j + 2 is)] sh [ \\frac{\\gamma}{2} (\\alpha_j -\n2 
is)]}\n\\label{39}\n\\end{equation}\n\n\\noindent\nwhere we have introduced the new variable $\\alpha$ defined by\n\n\\begin{equation}\nu + \\frac{i}{2} \\gamma = \\frac{\\gamma}{2} \\alpha\n\\label{40}\n\\end{equation}\n\nNow we briefly discuss the case of the nonvanishing $\\chi$, defined by eqn.(4).\nThe first observation to be made is that the number of excitations\n``$M$'', introduced in (31), is no longer ``good quantum number\".\nWhat replaces it is an eigenvalue of $\\Delta K$ (which is $\\lambda^L_0\ne^{i\\frac{2 \\pi Q}{N}}$ for $Q=0, 1, \\cdots N-1)$ since this\noperator commutes with both transfer matrices, namely\n\n\\[\n\\Delta K = K^{\\lambda_0}_L \\cdots K_2^{\\lambda_0} K_1^{\\lambda_0}\n\\]\n\n\\begin{equation}[ \\Delta K, tr t_{\\lambda_0} (u)] = [\\Delta K, T_{\\lambda_0}\n(u,\n\\lambda) ] =0\n\\label{41}\n\\end{equation}\n\nSurprisingly enough, some nice features of $\\chi =0$ case\nsurvive. In particular inspecting formula (13), one concludes\nthat R--matrix preserves its low--triangular form while acting\non the ``reference state\" (23). Therefore, this state remains an\neigenvector of $T_{\\lambda_0} (u_1 \\lambda)$. More precisely\n\n\\begin{equation}\nT_{\\lambda_0} (u, \\lambda) | \\Omega_0 > = \\sum^{N-1}_{r=0} (R^{r0}_{r0}\n(\\lambda_0, \\lambda, u))^L | \\Omega_0 >\n\\label{42}\n\\end{equation}\n\n\\noindent\nWhat makes $\\chi \\neq 0$ situation somewhat more complicated is\nthe fact that the commutation relations for $A_{\\lambda_0},\nB_{\\lambda_0}, C_{\\lambda_0}, D_{\\lambda_0}$ and the elements of\n$T_{\\lambda_0}(u,\\lambda)$ no longer have simple form and\ntherefore, one does not expect for the state (31) to be an\neigenvector of $T_{\\lambda_0} (u, \\lambda)$. 
Consequently,\nthe appropriate generalization of the Algebraic Bethe Ansatz is\ncalled for in this case.\n\nTo get a feeling of what transpires, one should note that the\nstate (31) is still an eigenvector of $tr t_{\\lambda_0} (u)$\n\n\\begin{equation}\ntr t_{\\lambda_0} (u) | \\psi_M > = \\sum^1_{r=0} \\left( L^{\\lambda_0}\n(u)^{r0}_{r0} \\right)^L\n\\prod^M_{i=1} R^{1\/2, 1\/2} ( (-1)^r (u_i - u) )^{01}_{10}\n| \\psi_M >\n\\label{43}\n\\end{equation}\n\nIndeed, in deriving eqn.(43) above, one only uses the 6--vertex\ncommutation relations (28) and the degeneracy eqn.(30;b) for the\noperator $C_{\\lambda_0} (u)$. Since neither eqn.(28) nor eqn.(30;b)\nis affected by turning on nonzero $\\chi$, one could infer that\neqn.(43) should hold true. Thus, it appears that going from the $\\chi\n=0$ to the $\\chi \\neq 0$ case produces no visible effect on the\nspectrum of the transfer matrix $tr t_{\\lambda_0} (u)$. This\nconclusion, however, may be a bit premature. To see that, let us\nrecall that not all the solutions of (35), generally speaking,\ncorrespond to non-zero vectors of the form (31).\n\nQuite frequently an additional investigation (involving delicate\nlimiting procedures) is required to determine the fate of a\nparticular solution. In general, the answer to the {\\it ``To Be Or\nNot To Be}\" question may depend on whether or not $\\chi$ takes\non the zero value.\nLet us now generate the \\underline{finite--dimensional} vector space\n$V_M (Q = M - int [ \\frac{M}{N}])$ by premultiplying the non-zero\nvector $| \\psi_M >$ by any sum of products of operators\n$T_{\\lambda_0} (u, \\lambda), T_{\\lambda_0} (u', \\lambda')\n\\cdots$ for any $u, \\lambda; u', \\lambda'; \\cdots$ (but all\nwith the same $\\lambda_0$ and $\\chi$). 
It is clear that in this\nspace one can diagonalize simultaneously the whole family of\ncommuting transfer--matrices $T_{\\lambda_0} (u, \\lambda)$.\nRecalling that $tr t_{\\lambda_0} (u)$ commutes with\n$T_{\\lambda_0} (u, \\lambda)$, one immediately arrives at the conclusion\nthat the eigenvectors of the family $T_{\\lambda_0} (u, \\lambda)$\ncan be constructed as linear combinations of vectors (31), all\nof the same length (modulo $N$) and eigenvalue of the transfer--matrix\n$tr t_{\\lambda_0} (u)$. Apparently, what seems to be going on is\nas follows: The eigenvalues of the transfer matrices $T_{\\lambda_0}\n(u, \\lambda)$ and $tr t_{\\lambda_0} (u)$ are highly degenerate\nwhen $\\chi = 0$. Turning on finite $\\chi$ results in lifting this\ndegeneracy for $T_{\\lambda_0} (u, \\lambda)$ and changing the\nmultiplicities of the eigenvalues of $tr t_{\\lambda_0} (u)$. This is a\nvery intriguing phenomenon and certainly warrants further\ninvestigation. To summarize, we can still characterize the\neigenstates of the commuting family $T_{\\lambda_0} (u, \\lambda)$\nby the set of Bethe Ansatz roots (35), abandoning, however, the\nsimple representation for the Bethe vector given by formula (31). The\ngeneralization of the algebraic Bethe Ansatz, briefly sketched\nabove, is similar in spirit to Tarasov's proposal for the\nsuperintegrable chiral Potts model [9].\n\nThis similarity is, by no means, accidental. Indeed, the chiral Potts\nmodel on the superintegrable line shares an important property of the\nsemi--cyclic case: the L--matrix which intertwines the spin-1\/2 and\ncyclic representations has a ``top\" reference state but no bottom state.\nWhether this similitude has a deeper implication remains to be seen.\n\nWhen the algebraic Bethe Ansatz does not work (or does not provide\nthe shortest route), one may resort to an alternative procedure\nin order to solve for the eigenvalues of the Hamiltonian and transfer\nmatrix. 
This procedure, known as the ``Functional Relations\"\nmethod, was exploited quite successfully in recent years to\nfind the spectrum of RSOS [10] and chiral Potts models [11].\nThe gist of this approach can be described in a few words as\nfollows: One keeps on fusing various $R$--matrices (related to the\nmodel under investigation) throwing in some obvious (and not so\nobvious) symmetries along the way, until one comes back to where\nthe journey began. The result is a system of functional equations\nfor the transfer matrices. In a sense, one can regard this\nprocedure as a generalization of the Zamolodchikov--Karowski\nBootstrap program [12] to determine the $S$-matrices of exactly\nintegrable models.\n\nIn order to apply this technique to our model, let us recall\nthat the tensor product of a spin $j$ representation with a\nsemi--cyclic one is completely reducible. In particular,\nfor $j= \\frac{1}{2}$, we have (see reference [14])\n\\begin{equation}\n( \\frac{1}{2} ) \\otimes (y, \\lambda) = ( y, \\epsilon \\lambda)_+\n+ (y, \\epsilon^{-1} \\lambda)_-\n\\label{44}\n\\end{equation}\n\nIn each subspace one can find the highest weight vectors $V^+_0$\nand $V^-_0$ such that\n\n\\begin{equation}\n\\Delta(E) V^{\\pm}_0 = 0 \\;\\;and\\;\\;\\Delta (K) V^\\pm_0 =\n\\epsilon^{\\pm1} \\lambda V^\\pm_0\n\\label{45}\n\\end{equation}\n\nMaking use of (11) we obtain for each sign a semi--cyclic\nrepresentation with the basis $[ V^\\pm_0 , \\cdots, V^\\pm_{N-1} ].$\n\nLet us now define two projection points $u_\\pm$ as follows:\n\n\\begin{equation}\nL^{\\lambda_0} (u_\\mp) V^\\pm_i = 0\n\\label{46}\n\\end{equation}\n\nAt these points the operator $L^{\\lambda_0}$ becomes\nessentially the projector $P^\\pm$ onto the corresponding $\\pm$ subspace.\n\n\\begin{equation}\nL^{\\lambda_0} (u_{\\pm}) \\sim P^\\pm;\\;\\;P^+ + P^- = 1\\;\\;and\\;\\;P^+P^-\n= P^- P^+ = 0\n\\label{47}\n\\end{equation}\n\nIt follows then from eqn. 
(12) that\n\n\\[\nP^- [ L^{\\lambda_0} (u) \\otimes R^{\\lambda_1, \\lambda_0} (u\n- u_-)] = \\cdots [ R^{\\lambda_1, \\lambda_0} (u - u_-) \\otimes\nL^{\\lambda_0} (u)] \\cdots P^-\n\\]\n\n\\noindent\nand\n\n\\begin{equation}\nP^- [ L^{\\lambda_0} (u) \\otimes R^{\\lambda_1, \\lambda_0} (u - u_-)\n] P^+ = 0\n\\label{48}\n\\end{equation}\n\nThe relation (48) above reveals a block--triangular structure of\nthe product\n\n\\[O^{-1} [ L^{\\lambda_0} (u) \\otimes R^{\\lambda_1, \\lambda_0} (u -\nu_-) ] O =\n\\left(\n\\begin{array}{cc}\nP^+ LRP^+ & * \\\\\n0 & P^-LRP^-\n\\end{array}\n\\right)\n\\]\n\nRemarkably, $P^+LRP^+$ and $P^-LRP^-$ turn out to be\n\n\\[\nP^+ L^{\\lambda_0} (u) \\otimes R^{\\lambda_1, \\lambda_0} (u - u_-)\nP^+ =\n\\]\n\n\\begin{equation}\n= L^{\\lambda_0} (u)^{00}_{00} R^{\\epsilon \\lambda_1, \\lambda_0}\n(u - u_- + i \\frac{2\\pi}{N})\n\\label{49}\n\\end{equation}\n\n\\[\nP^- L^{\\lambda_0} (u) \\otimes R^{\\lambda_1, \\lambda_0} (u - u_-)\nP^- =\n\\]\n\\begin{equation}\n= L^{\\lambda_0} (u)^{10}_{10} R^{\\epsilon^{-1} \\lambda_1,\n\\lambda_0} (u - u_- - i \\frac{2\\pi}{N})\n\\label{50}\n\\end{equation}\n\nIn the product of block--triangular matrices the diagonal blocks\nare multiplied independently. Thus, we have for the transfer--matrices\n\n\\[\ntr t_{\\lambda_0} (u) T_{\\lambda_0} (u - u_-, \\lambda_1) =\n[L^{\\lambda_0} (u)^{00}_{00}]^L T_{\\lambda_0} (u-u_- + i\n\\frac{2\\pi}{N}, \\epsilon \\lambda_1) +\n\\]\n\n\\begin{equation}\n+ (L^{\\lambda_0} (u)^{10}_{10} )^L T_{\\lambda_0} (u - u_- - i\n\\frac{2\\pi}{N}, \\epsilon^{-1} \\lambda_1)\n\\label{51}\n\\end{equation}\n\n\nThe use of the second projection point $u_+$ leads to a\nsimilar equation. Note that $T_{\\lambda_0} (u, \\lambda)$\ndepends essentially on two parameters $u$ and $\\lambda$.\nTherefore, another functional relation is needed in order to\ndetermine the eigenvalues of $T_{\\lambda_0} (u, \\lambda)$ in terms of the known\neigenvalues of $tr t_{\\lambda_0} (u)$. 
This\nadditional relation (along with eigenvalues of $T_{\\lambda_0}\n(u, \\lambda))$ will be presented in the forthcoming publication [13].\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzgytp b/data_all_eng_slimpj/shuffled/split2/finalzzgytp new file mode 100644 index 0000000000000000000000000000000000000000..0fc9a2ebd13916e98fdcd52b175400e75b7ba96c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzgytp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\nIt is well established that Type Ia supernovae (SNe Ia) originate from \nexplosive thermonuclear runaways in a C-O white dwarf \\citep[WD;][]{hf60},\nusually attributed to mass accretion from a non-degenerate binary companion\nor a merger\/collision of two WDs. \nThe former is known as the single-degenerate \\citep[e.g.,][]{wi73}\nscenario; the latter as the double-degenerate scenario \\citep[e.g.,][]{it84,web84}.\nAlthough the overall similarities in observed light curves of \\snia\\\nare indicative of the presence of shared underlying core mechanisms,\nthere exists substantial diversity in other observed properties,\nincluding uncertainties in the inferred accretion processes, explosion mechanisms, \nnature of progenitors as well as host galaxy environment. \nAccording to \\citet{liet01}, for example, about one third of nearby \\snia\\ are peculiar. \n\\moon{The cause of this diversity is not well understood and under considerable debate\n\\citep[see][for a recent review on peculiar \\snia]{tau17}.}\n\nAmong these stands out the long-lasting question about the origin of \nthe group of \\snia\\ exhibiting rapid post-peak evolution\nidentified with a large decline rate in light curves. 
\nThis group of \\snia\\ is called ``91bg-like\" following its archetype SN 1991bg \\citep{fet92,leiet93}, \nwhile the main population is called ``Branch Normal.\"\nWhen viewed on the Phillips relation \\citep{phi93,phiet99},\nwhich compares the peak luminosity and the decline rate of \\snia\\\nusing the parameter \\dm15\\ (= Phillips parameter) that \nmeasures the change in $B$-band magnitude during the first 15 days after peak,\nthe 91bg-like rapid decliners appear as\na distinctive group with \\dm15\\ $\\simgt$ 1.6 mag \\citep[e.g.,][]{tauet08, buret14}.\nIt has been reported that a significant portion, i.e., 15--20~\\%, of the entire \n\\snia\\ population belongs to the group of rapid decliners \\citep[e.g.,][]{let11,sriet17},\naccompanied by a low peak luminosity in most cases \\citep[e.g.,][]{howell01,tauet08}.\nWhile normal \\snia\\ typically show double-peaked light curves in the $I$ \nand near-infrared bands,\nwhere the primary first peak precedes that of the $B$ band,\nthe rapid decliners show a single $I$-band peak after they reach \nthe maximum brightness in the $B$ band \\citep[see][and references therein]{tauet08,dhaet17}. \nThis behaviour is known to be related to \\ni56\\ production and\nthe evolution of temperature and opacity in \\snia,\nwith the 91bg-like group being driven by a smaller amount of \\ni56.\nThese faint and rapidly-declining \\snia\\ have mostly \nbeen found in large elliptical galaxies \nand are therefore thought to be associated with old stellar populations\n\\citep{howell01,sulet06,get11}. 
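The Phillips parameter \dm15\ described above is straightforward to estimate from a sampled $B$-band light curve by interpolation. The following is a minimal sketch (the function name and the toy parabolic light curve are our own, not from the paper):

```python
import numpy as np

def delta_m15(t, m_B, t_peak=None):
    """Estimate the Phillips parameter dm15(B): the B-band decline
    (in mag) during the first 15 days after maximum light.
    t: epochs in days; m_B: B magnitudes (assumed well sampled)."""
    # locate B maximum (minimum magnitude) on a fine interpolated grid
    grid = np.linspace(t.min(), t.max(), 2001)
    m_grid = np.interp(grid, t, m_B)
    if t_peak is None:
        t_peak = grid[np.argmin(m_grid)]
    m_peak = np.interp(t_peak, t, m_B)
    m_15 = np.interp(t_peak + 15.0, t, m_B)
    return m_15 - m_peak

# toy light curve: parabola around a peak at t = 0 days
t = np.linspace(-10, 30, 81)
m = 18.6 + 0.004 * t**2           # fades by 0.004*15^2 = 0.9 mag in 15 d
print(round(delta_m15(t, m), 2))  # -> 0.9
```

For real, unevenly sampled photometry one would replace the linear interpolation with a smooth light-curve fit, but the measurement itself is just this two-point difference.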
\n\nThe distinction of this group of rapid decliners from the normal \\snia\\ \nin the Phillips relation, however, is less apparent when the decline rate\nis represented by the color-stretch parameter (= \\sbv) \ndefined as the time span between the $B$-band maximum\nand that of the $B-V$ color normalized by 30 days \\citep{buret14}.\nThe rapid decliners tend to transition to the post-peak decline phase \ndominated by \\co56\\ decay in less than 15 days,\nwhich leads to a discontinuity in the slope of the Phillips relation near\n\\dm15\\ $\\simeq$ 1.6 mag and makes it\ninapplicable for \\dm15\\ $\\simgt$ 1.6 mag.\nThe disappearance of the clear distinction between Branch Normal and 91bg-like \nbased on the \\sbv\\ parameterization of the decline rate\nsuggests that the \\snia\\ in the two groups \nmay in fact be similar kinds with a continuous distribution of observables\nthat appear to be separated by a gap in the number of observed samples between them. \n\nThere is no clear consensus yet on whether or not \nthe rapidly-declining \\snia, or at least part of them, \nindeed have different origins from the rest.\nSome favor different origins for the rapid decliners,\nusually relying on the double-degenerate sub-Chandrasekhar (sub-Ch) mass explosions\nor the deflagrations in rotating WDs \\citep[see][and references therein]{tauet08,dhaet17},\nwhile others have argued that Chandrasekhar-mass explosions can explain at least part of them,\noften associated with delayed detonations \\citep[e.g.,][]{mrbh07, ashet18}.\nBased on the results of extensive radiative transport simulations,\n\\citet{gk18} reported that \n\\snia\\ with a large \\dm15, i.e., $\\simgt$ 1.55 mag, can only be produced by sub-Ch-mass explosions.\nThis is more or less consistent with the slightly \nbroader criterion \\dm15\\ $\\simgt$ 1.4 mag \nobtained by using non-local thermodynamic equilibrium calculations of light curves \\citep{bet17}.\nIn contrast, \\citet{het17} reported that a uniform 
explosion based on Chandrasekhar-mass spherical delayed\ndetonations can explain most of the observed light curves of \\snia\\\nwithout the need for a separate mechanism for (extreme) rapid decliners.\n\nA key clue to understanding the origin of the division between\nthe two groups of Branch Normal and 91bg-like \\snia\\ lies \nin the nature of the so-called transitional \\snia\\ straddling them. \nThe identification of a statistically significant number of \nsuch transitional SNe with a continuous distribution of observed properties\ncould support the common-origin hypothesis.\nThe light curves of the transitional \\snia\\ are usually featured with a large Phillips parameter\nand a $B$-band peak between two $I$-band peaks.\nTheir prototypical case is SN~1986G \\citep{phiet87} observed \nwith \\dm15\\ $\\simeq$ 1.81 mag, \nwhich is substantially larger than \\dm15\\ $\\sim$ 1.1 mag of Branch Normal,\nand a low explosion energy \\citep{phiet87,ashet16},\nshowing a very rapid decay in post-peak evolution.\nSpectroscopically, however, it appears to be very similar to normal \\snia,\nthereby revealing its intermediate nature.\nThe Phillips parameter of the transitional types lies roughly within the range of\n\\dm15\\ $\\simeq$ 1.5--1.8 mag \\citep{priet06, ashet16, sriet17, galet18}.\nHowever, transitional types have also been identified in \\snia\\ beyond this range,\nwhile some \\snia\\ within the range have been observed with no clear signs \nof intermediate properties \\citep[e.g.,][]{tauet08,hsiet15}. 
\nAccording to \\citet{tauet08}, \\snia\\ with \\dm15\\ in the range of 1.75--1.85 mag \nshow an inconspicuous secondary $I$-band peak and can be considered as the intermediate \ngroup between Branch Normal and 91bg-like, \nwhereas those in the range of \\dm15\\ = 1.5--1.75 mag \nfeatured with a double-peaked $I$-band light curve, \na relatively small luminosity and a rapid decay\nare similar to normal types.\n\\citet{hsiet15} suggested that \\rapd\\ \\snia\\ are \nof transitional nature if they show two long-waveband (i.e., $iYJHK$) peaks in the light curves\nwith the first and primary peak preceding the $B$-band peak. \nThe 10 transitional \\snia\\ compiled in their work show\n\\dm15\\ $\\simeq$ 1.30--1.80 mag and \\sbv\\ $\\simeq$ 0.46--0.86\nwith average values of 1.62 mag (\\dm15) and 0.70 (\\sbv).\nThese transitional \\snia\\ have been observed mostly \nin non-star-forming galaxies \\citep{ashet16, mrbh07},\nsimilar to 91bg-like events (see above).\n\nIn this paper, we present multi-color, high-cadence observations of \nthe \\sni\\ \\kspn\\ (or \\iaun) which appears to be a \\rapd\\ \\sni\\ \nof transitional nature that is more similar to Branch Normal than 91bg-like,\nproviding a rare opportunity to investigate\nthe early photometric evolution of rapid decliners.\nIn addition, the absence of any apparent host galaxy for this SN\nin our deep stack images reaching $\\mu_{BVI}$ $\\sim$ 28 \\mpas\\ implies that \nsuch \\snia\\ can occur in host galaxy environments that are quite different \nfrom what have been previously reported.\nIn \\S\\ref{sec:obs} below we provide the details of our discovery and monitoring observations of \\kspn,\nfollowed by the analyses of the observed light curves and color evolution \nas well as template fitting in \\S\\ref{sec:lc}.\nWe compare the observations and model predictions in \\S\\ref{sec:bol} and \\S\\ref{sec:pro}:\ncomparisons of the bolometric light curve with model predictions \nbased on different types of \\ni56\\ 
distribution (\\S\\ref{sec:bol}) and\nthe observed early light curves with model light curves\nexpected from the interactions between ejecta and companion in \\snia\\ (\\S\\ref{sec:pro}).\nWe show that there is no apparent host galaxy of \\kspn\\ in \\S\\ref{sec:host}\nand provide the summary and conclusion in \\S\\ref{sec:sum}. \n\n\\section{Observations}\\label{sec:obs}\n\nWe conducted high-cadence, multi-color ($BVI$) monitoring observations of \na 2\\degr\\ $\\times$ 2\\degr\\ field\ncontaining the galaxy NGC~300 in the direction of the South Galactic pole \nusing the Korea Microlensing Telescope Network \\citep[KMTNet,][]{ket16} as part of its\nKMTNet Supernova Program \\citep[KSP,][]{met16,afset19}.\nThe initial phase of the monitoring started in 2015 July during the commissioning \ntest period of the KMTNet and continued until 2017 August 9.\nThe KMTNet operates three wide-field 1.6-m telescopes located in \nthe Cerro-Tololo Inter-American Observatory (CTIO; Chile), \nthe South African Astronomical Observatory (SAAO) and the Siding Spring Observatory (SSO; Australia).\nAll of the three telescopes are equipped with a CCD camera of \n2\\degr\\ $\\times$ 2\\degr\\ field-of-view (FoV) at 0\\farcs4 pixel sampling with Johnson-Cousins $BVRI$ filters \\citep{ket16}.\nWe obtained about 2460 images of the field with 60-s exposure time for \neach of the $BVI$ bands,\nand the mean cadence of our observations within the monitoring period\nwas approximately 3.5 hours for each band.\nThe typical limiting magnitude for a point source detected with a signal-to-noise (S\/N) ratio\ngreater than 2 in the individual images \nwas $\\sim$ 22 mag\nwhen seeing was better than 2\\arcsec.\n\nA new point source was detected in our $B$-band image obtained \nwith the SAAO telescope at 21.56 h on 2015 September 24 (UT), \nor MJD 57289.89825,\nwith a magnitude of 21.29 $\\pm$ 0.37 mag at the coordinate\n(RA, decl.) 
= ($\\rm 00^h57^m03.19^s, -37\\degr02\\arcmin23\\farcs6$) (J2000).\nThe source was subsequently detected in the $V$ band two minutes later with 21.45 $\\pm$ 0.33 mag\nand in the $I$ band 28 minutes later with 21.59 $\\pm$ 0.32 mag, \nwhile it was not detected in a $B$-band image obtained 7.18 hours before the first detection.\nWe name this source \\kspn.\nIt remained above the detection limit over the ensuing period of four months at the same location,\nreaching observed peak magnitudes of 18.59 $\\pm$ 0.02 mag ($B$), \n18.49 $\\pm$ 0.02 mag ($V$) and 18.91 $\\pm$ 0.05 mag ($I$) over $\\sim$ 18 days.\nFigure~\\ref{fig:det} presents images of the field centered on the location of \\kspn.\nIn the figure, (a) is a deep $I$-band image, \nwhich reaches a sensitivity limit $\\mu_{I}$ $\\simeq$ 27.8 \\mpas, \ncreated by stacking 1167 individual images obtained either \nbefore the first detection of the source or after its disappearance. \n\\moon{The image stacking is performed using the SWARP package \\citep{bet02}.\nThe background of each individual frame, estimated with a \nbackground mesh size of 512 pixels, is first subtracted; the frame is then resampled and median combined\nwith the other frames to create the final stack image.\nOnly the individual frames whose seeing is better than 2\\arcsec\\\nare used, which can help mitigate the effects of the confusion limit \nin image stacking \\citep[e.g.][]{ashcet18}.\nThere exists no underlying source at the location of \\kspn\\ in the stack image.}\nThe other three panels (b)--(d) are the individual 60-s $B$-band images for\nthe last non-detection exposure obtained 7.18 hours prior to the first detection (b),\nthe first detection (c), \nand the image from MJD 57304.04163 when the source reached the peak brightness (d). 
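The stacking procedure sketched in the note above (per-frame background subtraction on a coarse mesh, a seeing cut, then a median combine) can be illustrated with a toy NumPy stand-in for SWarp; the array sizes, noise level, and seeing values below are invented for illustration, and the real pipeline also resamples the frames astrometrically:

```python
import numpy as np

def stack_frames(frames, seeings, max_seeing=2.0, mesh=512):
    """Median-combine frames after per-frame background subtraction,
    keeping only frames with seeing better than `max_seeing` arcsec.
    The background is estimated as the median in coarse meshes."""
    kept = []
    for img, fwhm in zip(frames, seeings):
        if fwhm >= max_seeing:
            continue  # seeing cut, as in the text
        bkg = np.zeros_like(img)
        for y in range(0, img.shape[0], mesh):
            for x in range(0, img.shape[1], mesh):
                tile = img[y:y+mesh, x:x+mesh]
                bkg[y:y+mesh, x:x+mesh] = np.median(tile)
        kept.append(img - bkg)   # background-subtracted frame
    return np.median(kept, axis=0)

rng = np.random.default_rng(0)
frames = [100.0 + rng.normal(0, 5, (64, 64)) for _ in range(9)]
seeings = [1.2, 1.5, 2.5, 1.8, 1.1, 3.0, 1.6, 1.9, 1.4]
stack = stack_frames(frames, seeings)   # two frames rejected by seeing
print(stack.shape)  # (64, 64)
```

The median combine is what suppresses transient sources (such as the SN itself) in the deep reference image.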
\n\nThe photometric calibration is carried out using more than 10 nearby \nstandard reference stars within 15\\arcmin\\ distance from \\kspn\\ available in the AAVSO Photometric All-Sky \nSurvey\\footnote{The AAVSO Photometric All-Sky Survey: Data Release 9, https:\/\/www.aavso.org\/apass} database. \nOnly the reference stars whose apparent magnitudes are in the range of 15--16 mag are used \nto secure high S\/N ratios in their flux measurement while avoiding CCD saturation and non-linearity effects.\nA local point spread function (PSF), which is obtained by fitting \na Moffat function \\citep{m69, tet01} and sky background emission \nto the reference stars, is fitted for the measurements of fluxes of \\kspn\\ and the reference stars. \nThe AAVSO photometric system consists of the standard Johnson $BV$ and the Sloan $i$ band, i.e., $BVi$.\nThe calibration of the KMTNet $B$-band data against \nthe Johnson $B$-band magnitudes of the AAVSO system\nrequires a color correction given by\n$\\Delta B$ $\\simeq$ 0.27 ($B-V$) + offset \\citep{pet17, pet19}, \nwhere $\\Delta B$ is the $B$-band magnitude differences\nbetween the magnitudes obtained in KMTNet images before color correction\nand the standard magnitudes of AAVSO reference stars in the database. \nThe same $B$-band color dependence in our data is identified, and\nthe $B$-band magnitudes of the AAVSO reference stars used in our photometric calibration\nare corrected with their known $B-V$ colors.\nThe final $B$-band magnitudes of \\kspn\\ are obtained after applying $S$-correction for SNe\nby following the procedure described in \\citet{stret02}\nusing template spectra from the \\snp\\ package (see \\S\\ref{subsec:temp}). 
\nNo such color dependence has been identified in the KMTNet $V$ and $I$ bands\nwhen calibrated against the AAVSO $V$ and $i$ bands, respectively \\citep{pet17, pet19},\nwhich is also confirmed in the data for \\kspn.\nThus, the photometric calibration of our KMTNet $VI$-band data is made \nagainst the AAVSO $Vi$ bands without any color correction. \nAs a result, the $BV$-band magnitudes of \\kspn\\ presented in this paper are\nin the Vega magnitude system, while its $I$-band magnitudes are in the AB system.\nTable~\\ref{tab:mag} contains a sample of the observed magnitudes of \\kspn.\n\nAccording to the dust map of \\citet{sfd98}, the total Galactic reddening in\nthe direction of KSP-OT-201509b is E(\\bv) $\\simeq$ 0.013 mag. \nUsing the updated $R_V$ = 3.1 dust model of \\citet{sf11},\nwe find corresponding extinctions of 0.046, 0.034 and 0.021 mag in\nthe $B$, $V$ and $i$ band, respectively. \nThe small extinction is compatible with the location of \\kspn\\ near the Galactic pole.\nSince no host galaxy is identified for the source (see below and \\S\\ref{sec:host}), \nonly this small Galactic extinction is taken into account in our extinction correction.\n\nOne notable feature in Figure~\\ref{fig:det}(a) is the absence of \nany underlying host galaxy candidate for \\kspn\\\nat its location or in the immediate vicinity of the source in our deep $I$-band image.\nThe absence is also confirmed in the $B$ and $V$ bands.\nThe most conspicuous extended object near \\kspn\\ is\nlocated at (RA, decl.) $\\simeq$ ($\\rm 00^h57^m01.23^s, -37\\degr02\\arcmin37\\farcs5$) (J2000),\n$\\sim$ 27\\arcsec\\ away in the southwest.
\nWe hereafter call this object \\galgg, as denoted in the figure.\nIn order to measure the redshift and understand the nature of \\galgg, we conducted \nspectroscopic observations of the source using\nthe Low Dispersion Survey Spectrograph 3 (LDSS3)\\footnote{http:\/\/www.lco.cl\/telescopes-information\/magellan\/instruments\/ldss-3}\non the 6.5-m Magellan Clay telescope on 2016 August 11.\nWe used the VPH-All grism coupled with a 1\\arcsec\\ slit for dispersion\nand obtained a single 600~s exposure.\nSee \\S\\ref{sec:host} for the spectrum of \\galgg\\ and its analysis.\n\n\\medskip\n\\section{Light Curves and Classification of \\kspn\\ as a Rapidly-Declining Transitional \\sni}\\label{sec:lc}\n\n\n\\subsection{Light Curves and Epoch of the First Light}\\label{subsec:lc}\n\nFigure \\ref{fig:lc} shows $BVI$-band light curves of \\kspn\\ \nobtained with the KMTNet for about two months.\nOver approximately the first two weeks, \nthe source gradually ascends to its peak brightness, more than three \nmagnitudes brighter than that of the first detection.\nThe observed peak brightnesses are 18.59, 18.49 and 18.91 mag for\nthe $B$, $V$, and $I$ band, respectively.\n(See Table~\\ref{tab:par} for the observed and estimated parameters of \\kspn.)\nThere exist clear band-dependent differences \nin how the post-peak decay evolves in Figure~\\ref{fig:lc}.\nThe $B$-band decay is much faster than those of the other bands, \nwhile the $I$-band light curve features a secondary peak about two \nweeks after the first and primary peak, which precedes the $B$-band peak.\nWe estimate the average strength of the secondary $I$-band peak \nwithin 20--40 days after the epoch of the $B$-band peak \nto be $\\langle I \\rangle_{\\rm 20-40}$ = 0.309 $\\pm$ 0.002\nwith respect to the maximum brightness of the first and primary\npeak of the $I$ band \\citep[][]{kriet01}.
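The secondary-peak statistic used above can be sketched as a mean flux in the 20--40 day window relative to the flux at the primary $I$-band maximum. Interpreting the statistic as a flux ratio is an assumption here, and the epochs and magnitudes in the usage example are invented; this is an illustration of the kind of measurement, not the pipeline used for \kspn.

```python
def secondary_peak_strength(times, mags, t_bmax):
    """Mean I-band flux within 20--40 days after B maximum, relative to
    the flux at the primary I-band maximum.

    ``times`` are observation epochs (days), ``mags`` the I-band
    magnitudes, and ``t_bmax`` the epoch of B-band maximum.  Magnitudes
    are converted to relative fluxes via 10**(-0.4*m).
    """
    peak_flux = max(10 ** (-0.4 * m) for m in mags)
    window = [10 ** (-0.4 * m)
              for t, m in zip(times, mags) if 20.0 <= t - t_bmax <= 40.0]
    return sum(window) / len(window) / peak_flux
```

For a toy light curve peaking at magnitude 18.0 with window magnitudes near 19.0--19.4, the statistic evaluates to roughly a third of the peak flux, comparable in scale to the 0.309 quoted for \kspn.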
\nThe overall evolution of the observed light curves of \\kspn\\ -- \nwhich includes \nthe initial ascent in brightness,\nthe presence of a secondary peak in the $I$ band,\nthe absence of any apparent post-peak plateau or linear phase, and\nthe slow decay -- \nidentifies \\kspn\\ as a \\sni\\ powered by radioactive decay \nof \\ni56\\ and \\co56\\ \\citep[e.g.,][]{het96}. \nWe compare in Figure~\\ref{fig:comp} the $I$-band light curve (black) of \\kspn\\\nwith those of four well-sampled \\snia: \nSN2011fe (red), SN1994D (blue), SN2005ke (cyan) and SN2005bl (green).\nThe first two are Branch Normal with \\dm15\\ $\\simeq$ 1.1 and 1.4 mag, respectively, whereas \nthe last two are 91bg-like with more rapid post-peak decline rates of\n\\dm15\\ $\\simeq$ 1.7 (SN2005ke) and 1.9 (SN2005bl) mag.\nThe post-peak decline rate of \\kspn\\ in Figure~\\ref{fig:comp}\nappears to be intermediate between the two groups. \n\nFigure~\\ref{fig:epo} presents the observed early light curves of \\kspn\\ \nprior to 8 days before the peak, normalized by its peak brightness.\nPower laws have been adopted to describe the evolution of early light curves of \\snia\\\nwhen intensities are substantially \nlower \\citep[e.g., $<$ 40~\\%, see][]{ollet15} than\nthe peak intensity, since the brightness of a \\ni56-powered expanding homologous sphere\nmediated by the photon diffusion process is expected to be $\\propto$ $t^2$ where $t$ is time\n\\citep[e.g.,][]{nuget11}.\nWe fit the early light curves of \\kspn\\ during the first $\\sim$ 10 days \nof its detection in Figure~\\ref{fig:epo} \nwith a power law $F(t)$ $\\propto$ $(t-t_0)^{\\alpha}$,\nwhere $F$ and $t_0$ represent the observed brightness \nand the epoch of the first light, respectively.\nThe best-fit power-law indices are \n$\\alpha$ = 2.0 $\\pm$ 0.2 ($B$), 1.9 $\\pm$ 0.2 ($V$) \nand 2.1 $\\pm$ 0.2 ($I$) \nand the epoch of first light is \\t0\\ = --18.4 $\\pm$ 0.6 days \nsince the peak in the observer frame, \nor --17.2 $\\pm$ 0.5 days in the
rest frame.\n(See \\S\\ref{subsec:temp} for the estimation of the epoch of the peak brightness \nand the redshift of the source.)\nThe fitted power-law indices are $\\simeq$ 2 as expected for a homologous expansion, \nsimilar to what has been found\nin other \\snia\\ \\citep[e.g.,][]{nuget11,ollet15,dimitriadis19}.\n\\moon{ We also estimate the epoch of the first light using\na Gaussian process extrapolation which can provide \na less model-dependent estimate of the epoch.\nThe Gaussian process extrapolation-based epoch of the first light \nis $-$16.4 $\\pm$ 0.5 days (rest frame) for the $BVI$ light curves,\nwhich is slightly smaller than but consistent with $-$17.2 $\\pm$ 0.5 days \nfrom our power-law fitting. We adopt $-$17.2 $\\pm$ 0.5 days as the\nepoch of the explosion of \\kspn\\ in its rest frame.}\n\n\n\n\\subsection{Color Evolution}\\label{subsec:color}\n\nFigure~\\ref{fig:color} shows the evolution of the $B-V$ and $V-I$ colors of\n\\kspn\\ aligned with that of the $I$-band light curve.\nThe three vertical dotted lines mark three color epochs over which\nthe colors show a notable phase transition in their evolution.\nThe first and third color epochs are --4.5 and +18 days, respectively, from\nthe epoch of the $B$-band peak brightness estimated using SN template fitting of the \nobserved $BVI$-band light curves (see below for the details).\nThese two color epochs roughly correspond to those of the primary and secondary $I$-band peaks, respectively.
\nThe second color epoch is +7 days, which is close to the midpoint\nbetween the first and third epochs,\nnear the onset of the $I$-band secondary rise.\n\nWe summarize the evolution of the \\bv\\ and \\vi\\ colors,\nwhich is largely synchronous with that of the $I$ band, as follows.\n\\begin{itemize}\n\\itemsep0em \n\\item Prior to the first color epoch of --4.5 days,\nthe \\bv\\ color starts at $\\simeq$ 0.2 mag around --14 days,\nor $\\sim$ 3.3 days in the rest frame since the epoch of first light (Table~\\ref{tab:par}), \nand slowly becomes bluer, reaching $\\simeq$ 0.0 mag in about 10 days\nnear the first color epoch. \nThe blueward evolution of the \\bv\\ color during this early phase of \\kspn\\ \nappears to be consistent with that of the early red population of \\snia,\nwhich comprises predominantly Branch Normal types \\citep{stret18}.\nIt is thought to be due mainly to the heating from \\ni56\\ radioactive decay as \nthe deeper layer of the ejecta is revealed \\citep[see][]{het17}.\nIndeed, we show in \\S\\ref{sec:bol} that the observed early light curves of \\kspn\\ \ncan be explained by a shallow \\ni56\\ distribution \\citep[e.g.,][]{pm16}\nleading to an early increase in color temperature.\nThe \\vi\\ color of \\kspn\\ during this phase, \non the other hand, shows a redward evolution\nfrom approximately --0.5 to --0.1 mag within the first two days \nbefore it flattens until the first color epoch.\nThe \\vi\\ colors of SNe have rarely been observed at such early epochs, \nand the origin of the early redward evolution of \\kspn\\ is unclear.
\nIt is conceivable, however, that the evolution of spectral features \n-- e.g., Ca II features in the $I$ band \\citep[][see Figure~1 therein]{paret12} --\nis responsible for it.\n\\item Across the first color epoch of --4.5 days, \nwhen the $I$-band light curve nears its primary peak, \nboth colors change the direction of their evolution, \nin that \\bv\\ becomes redder while \\vi\\ becomes bluer.\nWe attribute the redward evolution in \\bv\\ to absorption \nby iron peak elements \\citep[e.g.][]{het17},\nand the blueward evolution in \\vi\\ \nto a temperature increase.\n\\item The \\bv\\ color evolves redward relatively monotonically between\nthe first (i.e., --4.5 days) and third (i.e., +18 days) color epochs,\nwhich is largely equivalent to the period between the two $I$-band peaks,\nby $\\simeq$ 1.2 mag,\nwhereas the \\vi\\ color changes its direction of evolution again around the \nsecond color epoch (i.e., +7 days) from blueward to redward until\nit reaches the third color epoch.
\nThis is due to the recombination process of Fe III leading to \na rebrightening in the $I$ band by redistribution of blue\/UV radiation\n\\citep[e.g.,][see also \\S\\ref{sec:intro}]{kas06}.\n\\moon{We note that the Fe III recombination also plays a critical role\nin determining the color evolution of \\snia\\ in the near-infrared wavebands\nwith respect to their light curves \\citep{dhaet15}, similar to what\nwe report here.}\n\n\\item After the third color epoch, both colors \ngradually evolve blueward as the SN enters the Lira law phase known\nto have a linear blueward evolution for an extended period \\citep{phiet99}.\n\\item The \\bv\\ and \\vi\\ colors at the peak epoch\nin Figure~\\ref{fig:color} are 0.056 and --0.34 mag, respectively.\nAccording to \\citet{tauet08},\nthe observed \\bv\\ color of \\kspn\\ at the peak epoch \nis significantly bluer than \nthe 0.4--0.7 mag range known for 91bg-like,\nwhile it is consistent with the value\nexpected for a \\sni\\ with \\dm15\\ $\\simlt$ 1.75 mag.\n\\item \\snia\\ with \\dm15\\ in the range of 1.5--1.75 mag\nare known to show an initial blueward evolution between --10 and 10 days \nin \\vi\\ followed by a rapid redward evolution \\citep{tauet08} as observed in \\kspn,\nwhile this initial blueward evolution is absent in 91bg-like.\nIt is quite noteworthy, however, that the \\vi\\ color evolution of \\kspn\\ \nin Figure~\\ref{fig:color} also reveals that the initial blueward evolution \n(between --10 and 10 days) is in fact preceded by an earlier \nredward evolution at $<$ --10 days as explained above.\nA similar \\vi\\ color pattern at comparably early epochs has been observed \nin SN~1994D which is a normal but relatively over-luminous \\sni\\ \\citep{patet96}.\n\\item Overall, the observed color evolutions of \\kspn\\ are \ncompatible with what can be expected from a \\sni\\ \nwith \\dm15\\ = 1.62 mag as measured for the source (see \\S\\ref{subsec:temp})\nand appear to be more similar to Branch Normal than 
91bg-like.\nAlso, the similarity of its color evolutions to those observed\nin other \\snia\\ indicates that there is insignificant extinction \nby its potential host galaxy, as we assumed (\\S\\ref{sec:obs}).\n\\end{itemize}\n\n\\subsection{Template Fitting and Parameters of \\kspn}\\label{subsec:temp}\n\nIn order to estimate key physical parameters of \\kspn\\ as a \\sni, \nwe conduct template fitting of its observed light curves (Figure~\\ref{fig:lc}) \nusing two \\sni\\ light curve templates from the SN template fitting package \\snp\\ \\citep[v181212;][]{buret11},\none for Branch Normal and the other for 91bg-like.\nBecause no spectroscopic observations were made for \\kspn\\ \n(due to its detection during the commissioning test period of the KMTNet)\nand its host galaxy is unknown (see \\S\\ref{sec:host}),\nwe estimate its distance using an iterative template model fitting technique.\nFor this, we adopt a set of 100 trial redshifts in the range $z$ = 0.03--0.1\nfor potential redshifts of the source.\nThis corresponds to the distance modulus (DM) of $\\simeq$ 35.6--38.6 mag\nbased on the cosmological model of \\citet{rieet16} with parameters \n$H_0$ $\\simeq$ 73.24 \\kms\\ Mpc$^{-1}$, \n$\\Omega_{\\rm M}$ $\\simeq$ 0.27 and $\\Omega_{\\rm \\Lambda}$ $\\simeq$ 0.73.\n\\moon{Note that these cosmological parameters are more recently updated values \nthat are slightly different from those adopted internally in the \\snp\\ package, \nwhich are $H_0$ $\\simeq$ 72 \\kms\\ Mpc$^{-1}$, $\\Omega_{\\rm M}$ $\\simeq$ 0.28 \nand $\\Omega_{\\rm \\Lambda}$ $\\simeq$ 0.73 \\citep{speet07}.} \nGiven the $B$ = 18.59 $\\pm$ 0.02 mag peak apparent magnitude of \\kspn\\ (Table~\\ref{tab:par}),\nthis redshift range enables us to investigate the full range of \npeak absolute magnitudes typically displayed by \\snia,\nwhich is between --17 mag and --20 mag as shown in Figure~\\ref{fig:phil},\nfor conceivable peculiar motion of the source (see below).\nWe use \\snp\\ \nto conduct SN
template fitting and $K$-corrections with the input redshifts, assuming no host galaxy extinction.\n\nFigures~\\ref{fig:LCtemplate} and \\ref{fig:LCzfitVi} show the results\nof our template fitting to the light curves of \\kspn. \nIn Figure~\\ref{fig:LCtemplate} we compare the observed $V$-band (top panel)\nand $I$-band (bottom panel) light curves of the source\nwith the best-fit template light curves for Branch Normal (solid line) and 91bg-like (dashed line).\nWe exclude the $B$-band light curve in this initial template fitting\nstage to avoid uncertainties involved in the \n$S$-correction process (see \\S\\ref{sec:obs}).\nWhile the best-fit $V$-band light curve from the Branch Normal template \nin the figure appears to give a good match to the observed light curve, \nthat from the 91bg-like template slightly overpredicts the decline rate\nimmediately after the peak. \nThis discrepancy is more obvious in the $I$ band where \nthe double-peaked light curve of \\kspn\\ is apparently \nincompatible with the best-fit 91bg-like template,\nshowing that the source is more similar to Branch Normal than 91bg-like.\n\nFigure~\\ref{fig:LCzfitVi} presents the distribution of the fitted \nparameters from the template\nfitting as a function of the trial redshifts (bottom abscissa),\nor corresponding cosmological DM (top abscissa).\nThe top panel shows $\\Delta$(DM) (left ordinate), \nwhich is the offset between the \\snp-fitted DM at a given redshift and the cosmological DM from the top abscissa, \nrepresenting the peculiar motion (as calculated in the right ordinate) of \\kspn\\\nif the fitted DM and assumed redshift are indeed the real values for the source.\nAs in the figure, no peculiar motion of \\kspn\\ is required if the source is \nlocated at $z$ = 0.072 for Branch Normal (solid line) or 0.077 for 91bg-like (dashed line).\n\\moon{At redshifts smaller than 0.06 or larger than 0.08, \nin order for the input redshift and the cosmological redshift \nfrom the distance modulus 
obtained in the SNooPy fitting to be comparable,\nthe required peculiar motions\nbecome significantly larger, i.e., $>$ 1000 \\kms, than what can be expected from \\kspn\\ (see below)\nfor the Branch Normal template fitting. This also applies to the 91bg-like template fitting,\nbut at slightly larger redshift values.}\n\nIn the bottom panel of Figure~\\ref{fig:LCzfitVi},\nthe distribution of the best-fit \\csp\\ \\sbv\\ (left ordinate) \nincreases along the trial redshifts as the quality of the fitting becomes poorer.\nThe growing incompatibility between the template and observed light curves at a higher redshift,\ncaused mainly by the secondary $I$-band peak,\nresults in larger best-fit values of \\csp\\ for higher trial redshifts. \nThe Phillips parameters (\\dm15, right ordinate) in the panel \nare obtained using the relation between \\sbv\\ and \\dm15\\ in \\citet{buret14}.\nAt $z$ = 0.072, the best-fit \\sbv\\ is 0.75 (or \\dm15\\ = 1.54 mag) for Branch Normal,\nwhile it is 0.82 (or \\dm15\\ = 1.39 mag) for 91bg-like at $z$ = 0.077.\nThe \\chisqr\\ values of the best-fit templates at these redshifts are\n$\\simeq$ 2 for Branch Normal ($z$ = 0.072) \nand $\\simeq$ 3 for 91bg-like ($z$ = 0.077).\nOverall, the best-fit Branch Normal templates give \nsystematically, although slightly,\nsmaller \\chisqr\\ values of \\simlt\\ 3\nfor all the trial redshifts.\nAt higher redshifts $z$ \\simgt\\ 0.08, however, \n\\chisqr\\ values of the best-fit 91bg-like templates\nbecome increasingly greater than 3.\n\nBased on the results shown in Figures~\\ref{fig:LCtemplate} \nand \\ref{fig:LCzfitVi} from the \\snp\\ template fitting,\nwe estimate the redshift of \\kspn\\ to be $z$ = 0.072 $\\pm$ 0.003 and \nconclude that the source is closer to Branch Normal than 91bg-like as follows. 
\nFirst of all, the comparison between the observed and best-fit templates (Figure~\\ref{fig:LCtemplate}),\nespecially the $I$-band comparison (bottom panel),\ngives clear preference for Branch Normal for \\kspn.\nAlso, the best-fit color stretch parameter \\sbv\\ $\\simeq$ 0.82 for \nthe 91bg-like template is unacceptably large compared with the values found in 91bg-like SNe,\nwhile the best-fit \\sbv\\ $\\simeq$ 0.75 for Branch Normal is largely acceptable, \nalthough somewhat small, for a Branch Normal SN.\nThis shows that the 91bg-like template is apparently incompatible with \nthe observed light curves of \\kspn\\ at an admissible redshift range. \nThe uncertainty of the estimated redshift of \\kspn\\ is largely derived from the conceivable range \nof reasonable peculiar motions for its progenitor. \nAs explained in \\S\\ref{sec:host}, \nit is highly likely that the host of \\kspn\\ is an unidentified \ndwarf galaxy whose brightness is fainter than $\\sim$ 25 mag in $BVI$.\nIn the local environment around the Milky Way, the peculiar velocities of dwarf galaxies are at the level of 200 \\kms\\ \\citep{let07}, \nwith a few occasional outliers showing higher velocities\nin special environments \\citep[e.g.,][]{dgc01}.
\nFor a galaxy like the Milky Way, $\\sim$ 500 \\kms\\ is the escape velocity,\nwhile rare objects with faster velocities of up to $\\sim$ 1000 \\kms\\ are considered \nhypervelocity stars \\citep{brown15, beei17}.\n\\moon{Adopting 500 \\kms\\ as a conservative, or upper-limit, assessment of \nthe potential peculiar motion of \\kspn,\nwe obtain $\\Delta z$ = 0.003 as the uncertainty of the best-fit redshift \nfrom the Branch Normal template fitting \nafter taking into account the uncertainty in the DM measurement\nfrom the \\snp\\ fitting, which includes\nthe uncertainty contributions from the \\snp\\ template calibration\nas well as from adopting the more recently updated cosmological\nparameters than those internal to \\snp\\ (see above),\ntogether with the uncertainty corresponding to a 500 \\kms\\ peculiar motion.\nThe $\\sim$ 4\\% level of the photometric redshift uncertainty is \ncomparable with what has been found in some other \nphotometric studies of \\snia\\ \\citep[e.g.,][]{palet10}.}\nThis gives $z$ = 0.072 $\\pm$ 0.003 and DM = 37.47 $\\pm$ 0.10 mag, \nor a luminosity distance of 311.9 Mpc,\nfor the source (Figure~\\ref{fig:LCzfitVi}).\nThe small difference between the best-fit redshifts of Branch Normal and 91bg-like\nindicates that any potential systematic uncertainty resulting from applying the\nBranch Normal template to KSP-OT-201509b, which shows a transitional nature (\\S\\ref{subsec:tran}), \nfor its redshift determination is small.
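The mapping between a distance-modulus offset, a peculiar velocity, and the resulting redshift uncertainty can be sketched numerically as follows. This is a simplified cross-check, not the actual error budget of the fit: it assumes the low-redshift linear Hubble law $d = cz/H_0$ and a quadrature combination of the two error terms, with $H_0$ = 73.24 km/s/Mpc as quoted above.

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def peculiar_velocity(delta_dm, z, H0=73.24):
    """Peculiar velocity (km/s) implied by an offset ``delta_dm`` (mag)
    between the fitted and cosmological distance moduli at redshift z,
    using the low-z linear Hubble law d = c*z/H0 (a simplification of
    the right ordinate of the Delta(DM) figure discussed in the text)."""
    d_cosmo = C_KMS * z / H0                       # Mpc
    return H0 * d_cosmo * (10 ** (delta_dm / 5.0) - 1.0)

def redshift_uncertainty(z, sigma_dm, v_pec=500.0):
    """Quadrature combination of the redshift error from a peculiar
    velocity ``v_pec`` (km/s) with the error propagated from a
    distance-modulus uncertainty ``sigma_dm`` (mag), using the low-z
    scaling sigma_z ~ z * ln(10)/5 * sigma_dm."""
    return math.hypot(v_pec / C_KMS, z * math.log(10.0) / 5.0 * sigma_dm)
```

At $z$ = 0.072 with a 0.10 mag DM uncertainty and a 500 km/s peculiar motion, this toy budget yields a redshift uncertainty of a few times $10^{-3}$, the same order as the $\Delta z$ = 0.003 adopted in the text.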
we derive $K$-corrections. \nWe adopt the epoch of the $B$-band peak brightness \\tp\\ = 57305.08 $\\pm$ 0.03 (MJD) of the best-fit template,\nwhich is 15.18 $\\pm$ 0.03 days after the epoch of the first detection\nand $\\sim$18.4 days after the epoch of first light, \nas the peak epoch of \\kspn\\ from the template.\nUsing the $S$- and $K$-corrected $BVI$ curves of the SN in the rest frame, \nwe estimate absolute peak magnitudes of \n--18.94 $\\pm$ 0.01 ($B$), --18.93 $\\pm$ 0.01($V$) and --18.38 $\\pm$ 0.01 ($I$) mag \nwith their respective epochs. \nWe finally estimate the \\csp\\ and \\pp\\ of \\kspn\\ to be \n\\sbv\\ = 0.54 $\\pm$ 0.05 using the \n$B$-band light curve and \n$B-V$ color evolution of the source as defined in \\citet[][]{buret14} and\n\\dm15\\ = 1.62 $\\pm$ 0.03 mag by conducting polynomial fitting \nof the $B$-band light curve in the rest frame of $z$ = 0.072.\nThese final \\sbv\\ and \\dm15\\ values are different from those obtained \nfrom the $VI$-band template fitting above\n(i.e., \\sbv\\ $\\simeq$ 0.75 and \\dm15\\ $\\simeq$ 1.54 mag for Branch Normal;\n\\sbv\\ $\\simeq$ 0.82 and \\dm15\\ $\\simeq$ 1.39 mag for 91bg-like),\nand we attribute the discrepancy to the transitional nature of \\kspn\\ \n(see below)\nsince neither template is an intrinsically good match to the observed light curves of the source.\nSee Table~\\ref{tab:par} for the compilation of the physical parameters of \\kspn.\n\n\\moon{As described above, the redshift of \\kspn\\ is estimated using its observed peak brightness\nby allowing reasonable peculiar motion of the source in the frame of the \\snp\\ template fitting, rather than purely based on the goodness of the light-curve fitting.\nThe minimum \\chisqr\\ values of the \\snp\\ template fittings change only slightly, \ni.e., mostly between 2 and 3,\nover the trial redshift range of 0.01--0.1, making the parameter \ninadequate to discern the redshift of the source alone.\nIn order to examine the validity of our redshift 
estimation from the \\snp-based analyses,\nwe apply two other \\snia\\ light curve fitting packages to the observed light curves of \\kspn.\nWe first use the SALT2 fitting package \\citep{guyet07} with the same trial redshift range\nof 0.01--0.1, which results in only a slight variation, i.e., between 4 and 5, of the minimum \\chisqr\\ values,\nas we find in the \\snp\\ fitting.\nBoth the best-fit redshifts from the SALT2 fitting of the $B$- and $V$-band \nlight curves determined by the minimum \\chisqr\\ value are consistently $\\sim$ 0.07, \nsimilar to what we obtain for \\kspn\\ from the \\snp-based analyses above, \nwhile it is $\\sim$ 0.02 for the $I$ band.\nThis significant discrepancy between the $BV$ and $I$ band might be due to \nincompatibility between the SALT2 package and the transitional nature of \\kspn,\nwhich is expected to affect the $I$-band fitting more than the other bands,\nalthough it is important to note that these best-fit redshifts are \npoorly constrained given the small variations in the minimum \\chisqr\\ values.\nFigure~\\ref{fig:SLAT2}, which shows the distribution of the best-fit shape parameter (= \\x1)\nfrom our SALT2 fitting of the \\kspn\\ light curves over the trial redshift range of 0.01--0.1,\nprovides a consistency check between the \\snp- and SALT2-based fitting results.\nIn the figure, the two shaded areas represent the ranges of the redshift and \\x1\\ \nexpected from the \\snp\\ fitting for \\kspn: the vertically shaded area is $z$ = 0.072 $\\pm$ 0.003\nthat we estimate as the redshift of the source,\nwhile the horizontally shaded area is \\x1\\ = $-$1.89 $\\pm$ 0.05 converted from\nthe fitted \\sbv\\ values in Figure~\\ref{fig:LCzfitVi} (bottom panel) for the redshift\nusing the known relationship between \\x1\\ and \\sbv\\ in \\citet{buret14}.\nThe SALT2-based \\x1\\ parameter intersects with the two shaded areas, \nshowing that the results of the \\snp\\ fitting are consistent with \nthose of the SALT2 fitting on the 
known relation between \\sbv\\ and \\x1.\nIn addition to the SALT2 fitting, we also apply the SiFTO SN fitting package \\citep{conet08} \nto the observed light curves of \\kspn\\ over the trial redshift range of 0.01--0.1. \nAlthough the best-fit redshift from the SiFTO fitting is poorly constrained again \ndue to the small variation in the minimum \\chisqr\\ values, we obtain $\\sim$ 0.07 \nfor the best-fit redshift, compatible with \\snp\\ and SALT2 fitting results.}\n\n\n\\subsection{Transitional Nature of \\kspn}\\label{subsec:tran}\n\nThe observed characteristics and measured parameters \nof \\kspn\\ identify the source as a \\rapd\\ \\sni\\ of transitional nature\nthat is more similar to Branch Normal than 91bg-like as follows.\nThe \\rapd\\ nature of the source is easily confirmed in \nFigure~\\ref{fig:phil} (left panel) where we compare the $B$-band peak \nabsolute magnitude and \\sbv\\ of \\kspn\\ (filled yellow star)\nwith those of the group of \\snia\\ (crosses) from \\citet{buret18}.\nThe number of SNe gradually decreases as the \\sbv\\ \nvalues decrease (or the peak $B$-band luminosities become smaller) in the figure.\nAfter the location of \\kspn\\ at \\sbv\\ $\\simeq$ 0.54,\nthe gradual distribution of the SNe identified with larger \\sbv\\ values\ndisappears, and there exists only a small number of SNe as\n\\sbv\\ decreases in this range.\nThe right panel of Figure~\\ref{fig:phil}, \nwhere \\dm15\\ is used instead of \\sbv\\ for the post-peak decline rate\nfor the same \\snia\\ in the left panel,\nshows the mean \\dm15\\ values of the \nfour major subtypes of \\snia\\ \\citep[see][]{pfp14}:\n91T-like (filled red circle),\nCore-Normal (filled blue circle), Broad-Line (filled orange circle), \nand 91bg-like (filled green circle) \nas well as that (\\dm15\\ $\\simeq$ 1.62 mag) of \\kspn\\ (yellow star).\nNote that \\snia\\ in the 91T-like group are slowly-evolving and overluminous,\nwhereas Core-Normal and Broad-Line constitute Branch Normal. 
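The \dm15\ values discussed here are measured, as described in \S\ref{subsec:temp}, by polynomial fitting of the $B$-band light curve. A generic sketch of such a measurement is given below; the polynomial degree, grid resolution, and the toy light curve in the usage example are illustrative choices, not the values used for \kspn.

```python
import numpy as np

def dm15_from_polyfit(times, mags, degree=4):
    """Fit a polynomial to a B-band light curve (``times`` in rest-frame
    days), locate the magnitude minimum (the peak) on a fine grid, and
    return m(t_peak + 15) - m(t_peak), i.e. the Phillips dm15."""
    p = np.poly1d(np.polyfit(times, mags, degree))
    grid = np.linspace(min(times), max(times) - 15.0, 2001)
    t_peak = grid[np.argmin(p(grid))]
    return float(p(t_peak + 15.0) - p(t_peak))
```

For a toy parabolic light curve $m(t) = 18.5 + 0.005\,t^2$, the routine recovers the analytic value dm15 = 1.125 mag.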
\n\\kspn\\ clearly appears to be associated with a small number of \\snia\\ \nbridging the gap between Branch Normal and 91bg-like in the figure, \nshowing its transitional nature. \n\n\\moon{In \\citet{buret14}, \\snia\\ with \\sbv\\ $\\simlt$ 0.5 show an $I$-band peak \n$\\sim$3--4 days later than the $B$-band peak, while those with \\sbv\\ $\\simgt$ 0.7 peak\n$\\sim$2--3 days earlier. The $I$-band peak of \\kspn\\ precedes that of the $B$ band by\nabout 1.5 days, which, together with its \\sbv\\ $\\simeq$ 0.54, places\nthe source in the gap connecting the two populations.\nThe relative strength of the secondary $I$-band peak is \n$\\langle I \\rangle_{20-40}$ = 0.309 $\\pm$ 0.002 for \\kspn\\ (\\S\\ref{subsec:lc}). \nThe estimated values of \\sbv\\ and $\\langle I \\rangle_{20-40}$ of \\kspn\\ \nfollow the overall correlation between the two parameters compiled for \\snia\\ in the paper.\nThe value $\\simeq$ 0.309 is low for Branch Normal but high for 91bg-like \\citep{kriet01,buret14,sriet17}, \nconsistent with the interpretation that \\kspn\\ is of transitional nature.
\nComparison of the source with other \\snia\\ in terms of \\sbv, $\\langle I \\rangle_{20-40}$ and the time\ndelay between the $B$- and $I$-band peaks shows that it overlaps with ``Cool'' (CL) objects\n\\citep[][see Figure~6 therein]{buret14}, which are mostly rapid decliners \nbut close to the boundary with the group of slow decliners, \nsupporting the rapidly-evolving and transitional nature of \\kspn.}\n\nThe \\bv\\ color of \\kspn\\ at the peak epoch is $\\simeq$ 0.08 mag, \nwhich is consistent with what has been found in transitional \n\\snia\\ \\citep{galet18}.\nWhile the source is transitional, \nthe measured values of \\dm15\\ and \\sbv, the presence of the \nsecondary $I$-band peak, the overall color evolution, \nand the results of the template fitting all show\nthat it is more similar to Branch Normal than 91bg-like.\nAccording to \\citet{dhaet17}, fast-declining \\snia\\ more luminous than\n5 $\\times$ 10$^{42}$ \\ergs\\ make a smooth connection to normal \\snia, \nas we find in \\kspn, whose peak bolometric luminosity is\n(9.0 $\\pm$ 0.3) $\\times$ 10$^{42}$ \\ergs\\ (Table~\\ref{tab:par} and see \\S\\ref{sec:bol}).\n\n\\section{Bolometric Light Curve and \\ni56\\ Distribution}\\label{sec:bol}\n\nThe radioactive decay of \\ni56\\ followed by that of \\co56\\ \ndrives the light curves of \\snia\\ after explosion.
\nWe compare here the bolometric light curve of \n\\kspn\\ with those expected from two different types of model \\ni56\\ distribution\n-- centrally concentrated and stratified -- in order to investigate\nhow \\ni56\\ was distributed in the progenitor of the source as well \nas its explosion parameters of ejecta mass and kinetic energy.\nDuring the photospheric phase of a \\sni\\ when the expansion is \ndriven by homologous spherical shocks within the first $\\sim$ 30 \ndays after the explosion,\nthe majority of its emission falls within \nthe UVOIR (ultraviolet-optical-infrared) waveband \\citep[][]{conet00}.\nWe integrate the best-fit \\sni\\ template of \\kspn\\ that we obtained\nfrom the \\snp\\ template fitting (\\S\\ref{subsec:temp})\nover the rest-frame UVOIR waveband, or $\\lambda$ = 3075--23763 \\AA, \nto construct its bolometric light curve.\\footnote{\\moon{We note that a bolometric light curve obtained this way is sometimes \ncalled quasi-bolometric light curve \\citep[e.g.][]{bet17} given the limited wavelength\nrange over which the luminosity is calculated. 
Since the difference is expected\nto be very small, we will call it bolometric luminosity light curve\nthroughout this paper for simplicity.}}\nFigure~\\ref{fig:arn} shows the bolometric light curve of \\kspn\\ (filled black circles),\nwherein the peak bolometric luminosity \nand epoch are (9.0 $\\pm$ 0.3) $\\times$ 10$^{42}$ erg s$^{-1}$ \nand --0.5 $\\pm$ 0.6 days, respectively.\n\n\\subsection{Centrally Concentrated \\ni56\\ Distribution}\\label{sec:cenni}\n\n\nIn the case that the distribution of \\ni56\\ is strongly peaked \ntowards the center of the ejected mass and that\nthe ejecta opacity is constant during SN explosion, \nthe radioactively-powered luminosity of a \\sni\\ \nduring the photospheric phase can be described as \\citep{arn82,arn96,valet08} \n\\begin{equation}\n L(x) = M_{\\rm Ni} \\; e^{-x^{2}} \\times [(\\epsilon_{\\rm Ni}-\\epsilon_{\\rm Co}) \\; C(x) + \\epsilon_{\\rm Co} \\; D(x)]\n\\label{eq:lum}\n\\end{equation}\n\\noindent\nwhere $M_{\\rm Ni}$ is the total mass of \\ni56.\nThe parameter $x$ represents a scaled time dimension of SN explosion as \n\\begin{equation}\n x = \\frac{t}{\\tau_m}\\ \\ , \\ \\ \\ \\ \\tau_m = \\left(\\frac{\\kappa }{\\beta_{\\rm A} c}\\right)^{1\/2}\\left(\\frac{6 M_{\\rm ej}^3}{5 E_{\\rm ej}}\\right)^{1\/4}\n\\label{eq:time}\n\\end{equation}\n\\noindent\nwhere $t$ is time since explosion, $\\tau_m$ is the geometric mean of diffusion and expansion time scales, \n$\\kappa$ is the opacity, $\\beta_{\\rm A}=13.8$ is a model constant for SN density distribution, \n$c$ is the speed of light, and \\mej\\ and \\eej\\ are mass and kinetic energy of the ejecta, respectively.\nWe adopt $\\kappa$ = 0.1 cm$^2$~g$^{-1}$ dominated by the line transitions \nof \\ni56\\ during the photospheric phase \\citep{pe00, pn14}.\nIn Eqn.~\\ref{eq:lum}, $\\epsilon_{\\rm Ni} = 3.90\\times10^{10}$ erg~s$^{-1}$~g$^{-1}$ and\n$\\epsilon_{\\rm Co} = 6.78\\times10^{9}$ erg~s$^{-1}$~g$^{-1}$\nare the energy production rates per gram of \\ni56\\ 
and \\co56, respectively.\nThe term $C(x)$ is associated with the luminosity produced by the radioactive decay of \\ni56,\nwhile $D(x)$ is the corresponding term for \\co56. \nThe equation shows that the luminosity of a \\sni\\ in this model is mainly determined by two parameters:\nthe mass of \\ni56\\ ($M_{\\rm Ni}$) and the mean time scale ($\\tau_m$). \n\n\\moon{If we simply assume that the onset of the model \nis the same as that of the first light obtained from \nthe power-law fit (\\S\\ref{subsec:lc}),\nwhich gives 16.7 days as the time interval between the epochs of \nthe first light and peak bolometric luminosity (or ``rise time''),\nwe obtain $M_{\\rm Ni}$ = 0.44 $\\pm$ 0.01 \\msol\\ \nand $\\tau_m$ = 14.07 $\\pm$ 0.67 days for the source\nby applying Eqn.~\\ref{eq:lum} at the bolometric peak.\nThis is equivalent to fixing Eqn.~\\ref{eq:lum} using only the two parameters: the peak bolometric luminosity and the rise time,\nregardless of the shape of the light curve.}\nFigure~\\ref{fig:arn} compares the model light curve (blue dotted line)\npredicted by the two obtained parameters, i.e., $M_{\\rm Ni}$ and $\\tau_m$,\nwith the bolometric light curve (black circles) of \\kspn,\nwhere we can clearly identify over-prediction of the bolometric luminosity \nby the model across the peak, \nshowing that this method is inappropriate.\nThis incompatibility is also confirmed by the inferred ejecta mass \nand kinetic energy of the source,\nwhich are \\mej\\ = 1.86 $\\pm$ 0.24 \\msol\\ and \\eej\\ = (1.35 $\\pm$ 0.30) $\\times$ 10$^{51}$ erg, respectively,\nobtained under the assumption of \na typical ejecta velocity \\vej\\ = 11000 $\\pm$ 1000 km~s$^{-1}$ \\citep{Scalet19}\nand $E_{\\rm ej}$ = $\\frac{3}{10}M_{\\rm ej}\\varv_{\\rm ej}^2$ for a \\sni.\nThe inferred ejecta mass and kinetic energy are \nunacceptably large for a fast-evolving \\sni\\ \\citep{Scalet19}.\nThis discrepancy is mainly due to the long rise time (16.7 days) and \nthe fast post-peak decline rate (\\dm15\\ 
$\\simeq$ 1.62 mag) of the source\nthat are largely incompatible with the model of Eqn.~\\ref{eq:lum}.\nIn other words, while this model may accurately estimate\nthe total \\ni56\\ mass based on the peak luminosity of \\kspn\\ (see below), \nit is likely insufficient to reproduce the detailed evolution of \nthe ascending and\/or declining phases.\n\nIn order to obtain more reliable SN explosion parameters\nthat match the evolution of the luminosity around the peak of \\kspn,\nwe fit Eqn.~\\ref{eq:lum} to the \nbolometric light curve of \\kspn\\ (Figure~\\ref{fig:arn}), \nlimiting the fit to the period between --10 and +10 days across the peak. \nThis method, which excludes the early and late parts of the light curve, \nshould provide more reliable explosion parameters \nsince it is less affected by the assumed \\ni56\\ \ndistribution in Eqn.~\\ref{eq:lum}\nwhile more dependent on the bulk properties of the ejecta. \nThe best-fit parameters obtained this way are\n$M_{\\rm Ni}$ = 0.32 $\\pm$ 0.01 \\msol\\ and $\\tau_m$ = 9.45 $\\pm$ 0.52 days, \nwhich provide a significantly improved match to the observed bolometric\nlight curve around the peak with \\chisqr\\ $\\simeq$ 0.42\n(Figure~\\ref{fig:arn}, black solid line).\n\\moon{The inferred ejecta mass and kinetic energy are \\mej\\ = 0.84 $\\pm$ 0.12 \\msol\\ \nand \\eej\\ = (0.61 $\\pm$ 0.14) $\\times$ 10$^{51}$ erg, respectively, \nconsistent with what has been observed in other fast-evolving SNe \\citep{dhaet18,Scalet19,wyget19}.}\nWe, therefore, adopt these values as the explosion parameters of \\kspn.\nIn this model fit, we also note that the onset of the model light curve \npowered by the centrally concentrated \\ni56\\ distribution is --13.5 $\\pm$ 0.4 days in the SN rest frame,\nwhich is approximately 3.7 days after the epoch of first light (\\t0, see \\S\\ref{subsec:lc}).\nThe difference of 3.7 days implies again that the centrally concentrated \\ni56\\ distribution \nin Eqn.~\\ref{eq:lum} is 
inadequate to properly model the observed early light curves of \\kspn,\nalthough it is capable of providing reliable SN explosion parameters \nwhen the fitting is limited to the light curves around the peak.\nIn Figure~\\ref{fig:arn}, the black dotted line shows the\nextrapolated model prediction of Eqn.~\\ref{eq:lum} \nfor early ($<$ --10 days) and late ($>$ 10 days) epochs\nusing the explosion parameters obtained in the fitting above.\nThe best-fit model apparently underpredicts bolometric luminosities at early epochs.\nWe attribute this underprediction and the aforementioned difference between \\t0\\\nand the onset of the model light curves \nto a shallow \\ni56\\ distribution in \\kspn\\ \nas we detail in \\S\\ref{sec:strni} below.\n\nThe ejecta mass \\mej\\ = 0.84 $\\pm$ 0.12 \\msol\\ places \\kspn\\ \nin the group of sub-Ch-mass \\snia,\nconsistent with the results from previous studies of \\snia\\ that\nrapid decliners are from sub-Ch-mass explosions \\citep[e.g.,][]{Scalet19}. \nNote that, \nby assuming a smaller opacity $\\kappa$ = 0.08 cm$^2$ g$^{-1}$ \\citep[e.g.,][]{arn82, Li19},\nwe obtain ejecta mass and kinetic energy \n\\mej\\ = 1.05 $\\pm$ 0.15 \\msol\\ and \\eej\\ = (0.76 $\\pm$ 0.18) $\\times$ 10$^{51}$ erg, respectively,\nstill within the limit of a sub-Ch-mass explosion.\nRecent results from extensive radiation transport simulations also \nindicate that rapid decliners are highly unlikely to be from \nChandrasekhar or super-Chandrasekhar mass explosions \\citep{gk18}.\n\n\\subsection{Stratified \\ni56\\ Distribution}\\label{sec:strni}\n\nAs shown above, the presence of early excess emission relative to what is expected \nfrom Eqn.~\\ref{eq:lum} (Figure~\\ref{fig:arn}) and the difference of 3.7 days \nbetween the epoch of the first light \\t0\\ \nand the onset of the model light curves of Eqn.~\\ref{eq:lum} \nindicate that the real \\ni56\\ distribution \nof \\kspn\\ is different from the simple central concentration \nassumed in \\citet{arn82}.\nIt is 
conceivable, as recently suggested \\citep{pn14,pm16,conet18,maget20,mm20}, \nthat there exists \\ni56\\ distributed close to the progenitor surface responsible \nfor the early excess and the difference of 3.7 days. \nWe investigate this possibility below based on these models.\n\n\\subsubsection{Analytic Model}\\label{subsec:analytic}\nAccording to \\citet[][PN14, hereafter]{pn14}, \na stratified \\ni56\\ distribution extended more towards the surface\ncan adequately model the bolometric light curves of other \\snia\\ \nsuch as SN~2009ig, SN~2011fe, and SN~2012cg.\nPN14 modelled the local mass fraction of \\ni56\\ in the ejecta \nfollowing the spherically-symmetric logistic distribution of \n\\begin{equation}\n X_{56}(x) \\propto \\frac{1}{1+\\exp{[-\\beta(x-x_{1\/2})]}}\n\\label{eq:ni}\n\\end{equation}\n\\noindent\nwhere $x = t\/t_{\\rm diff}$ is a scaled depth, measured from the surface to the center, \nin units of the diffusion time ($t_{\\rm diff}$) through the ejecta optical depth, while\n$\\beta$ and $x_{1\/2}$ are shape parameters representing \nthe radial decline rate of the \\ni56\\ distribution \nand the scaled depth at which\nthe distribution reaches half maximum, respectively.\nDuring a SN explosion, a diffusion wave travels backwards \ninto the expanding ejecta and SN luminosities are determined by \nthe amount of \\ni56\\ probed by the diffusion wave, which reaches the \ncenter of the ejecta, or $x$ = 1, some time after the SN reaches the peak luminosity.\n\nFigure~\\ref{fig:SNpnmod} (top panel) shows the early bolometric \nlight curve (solid line) of \\kspn\\\npredicted by the best-fit stratified \\ni56\\ distribution (Eqn.~\\ref{eq:ni}),\nconfirming that the stratified \\ni56\\ distribution can match \nthe observed early bolometric luminosities (black circles) almost perfectly\nwith the best-fit parameters \n$x_{1\/2}$ = 1.0, $\\beta$ = 2.4, and $t_{\\rm diff}$ = 23.1 days \n(see Figure~\\ref{fig:SNpnchi} \nbelow for the details of the fit).\nIn contrast, the 
extrapolation of the model light curve (dotted line) \nbased on the centrally concentrated \\ni56\\ distribution from Eqn.~\\ref{eq:lum}\nshows a clear underprediction.\nThe bottom panel of the figure presents the early evolution \nof the estimated mass of ejecta (M$_{\\rm diff}$ for solid line)\nand \\ni56\\ (M$_{56}$ for dashed line) above the diffusion wave\n(or close to the progenitor surface)\nwhere we can identify the presence of about 0.0075 \\msol\\ of \\ni56,\nwhich corresponds to 3.4~\\% of the ejecta,\nlying above the diffusion depth at 4 days.\nThis tells us that a small amount of excess \\ni56\\ distributed shallowly\ncan account for the early excess emission in \\kspn,\nrevealing that the \\ni56\\ distribution in the \nSN ejecta is likely more stratified towards the outer layers.\n\nThe distribution of \\chisqr\\ values from our model fitting \nof the stratified \\ni56\\ distribution, \nwhich is presented in Figure~\\ref{fig:SNpnchi} \nas a function of $x_{1\/2}$ and $\\beta$,\nshows that the model is reasonably well fitted within a curved strip of\n$x_{1\/2}$ $\\simeq$ 0.4--1.0 and $\\beta$ $\\simeq$ 2--5\nwhere large $x_{1\/2}$ values are paired with small $\\beta$ values,\nor vice versa.\nThis distribution pattern of $x_{1\/2}$ and $\\beta$ describes\neither a more gradually extended \\ni56\\ distribution\nthat drops off closer to the center (= large $x_{1\/2}$ and small $\\beta$) or a more \nrapidly decaying distribution that drops off closer to the surface\n(= small $x_{1\/2}$ and large $\\beta$),\nboth of which are consistent with a shallow \\ni56\\ distribution.\nThe bottom panel of the figure shows the distribution of the local mass fraction of \\ni56\\ \n(= $X_{56}$) in two extreme cases marked in the top panel --\ni.e., circle for $x_{1\/2}$ = 1.0 and $\\beta$ = 2.4; \ndiamond for $x_{1\/2}$ = 0.4 and $\\beta$ = 4.6 -- \nwhere we can confirm that both distributions require nearly the same \\ni56\\ \nfraction at $\\sim$4--6 days after 
explosion.\n\n\\subsubsection{Radiative Transfer Model}\\label{subsec:radiative}\n\n\\moon{\n\\citet{maget20} recently provided radiative transfer-based \nmodel light curves of \\snia\\ with a logistic \\ni56\\ distribution \nusing the following function\n\\begin{equation}\n X_{56}(m) = \\frac{1}{1+\\exp{[-s \\; (m-M_{\\rm Ni})\/ {\\rm M}_\\odot ]}}\n\\label{eq:ni_mm}\n\\end{equation}\n\\noindent\nfor the \\ni56\\ distribution, which is very similar to what is adopted in PN14 \n(or Eqn.~\\ref{eq:ni} in \\S~\\ref{subsec:analytic}). \nIn this distribution, $m$ is the mass coordinate from the ejecta surface,\n$M_{\\rm Ni}$ is the total \\ni56\\ mass, \nand $s$ describes how fast the \\ni56\\ distribution declines. \nWe compare in Figure~\\ref{fig:mm_Ni56} the observed colors (black circles) of \\kspn,\nwhich are binned to 1-day intervals to increase the S\/N ratio,\nduring the first 10 days post-explosion to\nthe best-fit radiative transfer models (black dashed curves,\n$M_{\\rm Ni}$ = 0.4 \\msol, $s$ = 4.4, and $E_{\\rm ej}$ = 0.78 $\\times$ 10$^{51}$ ergs)\nof \\snia\\ from \\citet{maget20} with Chandrasekhar-mass ejecta and exponential density distributions.\nThe shaded grey regions in the figure represent \nthe range of predicted colors for a set of radiative transfer models \nwith kinetic energies in the range of (0.5--2.2) $\\times$ 10$^{51}$ ergs,\nwhile fixing $M_{\\rm Ni}$ and $s$ to those from the best fit, \nwhich are 0.4 \\msol\\ and 4.4, respectively.\nWe identify in the figure that the observed early color evolution of \\kspn\\\nis largely consistent with radiative transfer-based predictions of \na centrally concentrated and monotonically stratified \\ni56\\ distribution.}\n\n\\moon{\n\\citet{mm20} extended the work of \\citet{maget20} to compute \nradiative transfer model light curves of \nChandrasekhar-mass \\snia\\ for a limited set of logistic \\ni56\\ distributions\nwith an external shell component in the outer layers of the ejecta. 
\nIn Figure~\\ref{fig:mm_clump}, we compare the observed colors of \\kspn\\\n(as in Figure~\\ref{fig:mm_Ni56}) to the predicted light curves of\nthe shell-added \\ni56\\ distributions of \\citet{mm20}.\nThe black curve represents the best-fit model from the limited set \nof Chandrasekhar-mass \\snia\\ \nwith $M_{\\rm Ni}$ = 0.6 \\msol, $s$ = 9.7, and $E_{\\rm ej}$ = 1.68 $\\times$ 10$^{51}$ ergs\nfor the case without any shell, \nwhile the blue, green, and red curves are for the cases of \nan added shell of 0.01, 0.02, and 0.03 \\msol, respectively.\nThe \\ni56\\ distribution within these shells is assumed to be a Gaussian\ncentered at $m$ = 1.35 \\msol\\ from the center of the ejecta with \nwidths of 0.18 \\msol\\ (solid curves) and 0.06 \\msol\\ (dotted curves).\nAs seen in the figure, the observed colors are less consistent \nwith the presence of a \\ni56\\ shell of \\simgt\\ 0.01 \\msol\\\nthan with the case of a centrally concentrated and monotonically \nstratified distribution of \\ni56\\ alone (Figure~\\ref{fig:mm_Ni56}),\nalthough we cannot rule out the possibility of a thinner \\ni56\\ shell of \\simlt\\ 0.01 \\msol\\ \nproducing colors more consistent with the observations.}\n\n\\moon{\nOverall, as shown above, the observed early color evolution of \\kspn\\ \nwithin the first 10 days post-explosion is consistent with\nwhat is expected from stratified, but still centrally concentrated, \\ni56\\ \nthat extends to the shallow layers of the ejecta near the surface. \nThe color evolution is, however, largely incompatible with the presence of\na thick (\\simgt\\ 0.01 \\msol), external shell component of \\ni56\\ in addition to the logistic distribution. 
\nThis indicates that if \\kspn\\ originated in a \\subch\\ explosion triggered by \na He-shell detonation process \\citep[e.g.,][]{kroet10},\na thinner shell would have been required, as recently shown in simulations\nof He-shell detonations with thin, enriched He shells \\citep[e.g.,][]{towet19}.}\n\n\\section{Constraint on the Progenitor of \\kspn}\\label{sec:pro}\n\nThe high-cadence, multi-color light curves of \\kspn\\ (Figure~\\ref{fig:lc}) provide\na rare opportunity to place thorough constraints on the progenitors of \nrapidly-declining \\snia\\ of transitional nature.\nFor a \\sni\\ from a single-degenerate progenitor system composed of\na white dwarf and \neither a main-sequence (MS) subgiant or a red giant companion,\n\\citet[][see also \\citealt{boehner17}]{kas10} calculated model\nluminosities from the shock interactions between the SN ejecta and the companion,\nwhich are mainly determined by the mass, kinetic energy and opacity of the ejecta\nas well as by the progenitor binary separation distance.\n\\moon{Such emission has been discussed as the source of early flashes within\nroughly 5 days post-explosion in \\snia\\ \\citep[e.g.,][]{miller18,miller20}.}\nThe observable luminosity from the interaction can be approximated by \n$L_{\\rm int}(t) f(\\theta)$, \nwhere $L_{\\rm int}(t)$ is the intrinsic luminosity from the interaction at time $t$\nand $f(\\theta)$ = 0.982 exp[--($\\theta$\/99.7)$^2$] + 0.018\nis the distribution of the observed luminosity as a function of the viewing angle $\\theta$ \\citep{ollet15}.\nThe maximum observable luminosity occurs when a SN is viewed \nalong the interaction axis from the side of the companion, or $\\theta$ = 0$^\\circ$. 
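The viewing-angle scaling above can be sketched in a few lines. This is a minimal illustration of the $f(\theta)$ factor of Olling et al. (2015) only; the intrinsic interaction luminosity from the Kasen (2010) model is treated as a free input here rather than computed:

```python
import math

def viewing_angle_factor(theta_deg):
    """Angular distribution of the observable ejecta-companion interaction
    luminosity, f(theta) = 0.982*exp[-(theta/99.7)^2] + 0.018.
    theta = 0 deg is a view along the interaction axis from the companion
    side, where the observable luminosity is maximal (f = 1)."""
    return 0.982 * math.exp(-(theta_deg / 99.7) ** 2) + 0.018

def observable_luminosity(L_int, theta_deg):
    """Observable interaction luminosity L_int(t) * f(theta); L_int is the
    intrinsic interaction luminosity (erg/s), taken here as a free input."""
    return L_int * viewing_angle_factor(theta_deg)

# f(0) = 0.982 + 0.018 = 1 by construction; the factor drops to ~0.06
# for a view from directly behind the companion (theta = 180 deg).
print(viewing_angle_factor(0.0), viewing_angle_factor(180.0))
```

The floor value of 0.018 means the interaction signature is never formally zero, which is why the constraints below are phrased as lower limits on acceptable viewing angles rather than outright exclusions.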
\n\nFigure~\\ref{fig:kas} compares the observed $BVI$ magnitudes \n(black crosses) as well as the limiting magnitudes \n(black inverted triangles) of the \\kspn\\ light curves \nfrom its early phase\nwith those expected from the ejecta-companion \ninteraction model \\citep{kas10} for three\nparticular companion cases of\n1RG (blue solid line), 6MS (orange) and 2MS (green) \nwhen viewed along the interaction axis from the companion side.\nThese three cases are for progenitor systems where \nthe Roche Lobe-filling companion is \na red giant of 1 \\msol\\ (1RG),\na MS subgiant of 6 \\msol\\ (6MS)\nand 2 \\msol\\ (2MS) located at \n2 $\\times$ 10$^{13}$ cm,\n2 $\\times$ 10$^{12}$ cm and\n5 $\\times$ 10$^{11}$ cm, respectively, from the white dwarf.\nWe use the estimated parameters of \\kspn\\ in Table~\\ref{tab:par}\nand ejecta opacity $\\kappa$ = 0.2 cm$^2$~g$^{-1}$ attributed to \nelectron scattering \\citep{kas10} to estimate luminosities \nfrom the interaction ($L_{\\rm int}$) between the ejecta and companion.\nAs seen in the figure, the observed $BVI$-band brightnesses \n(including the upper limits) of \\kspn\\ are lower than\nwhat is predicted by the interactions between \nthe companion and ejecta \nin the 1RG case (blue line) in most of the observed epochs, \nand this is also true for the case of 6MS (orange line) at the epochs \nearlier than day 3.\nThis comparison shows that both the 1RG and 6MS models are \nincompatible with the observations\nsince the observed fluxes need to be larger than \nthe model predictions to allow\nfor the presence of emission from the ejecta-companion interactions.\nThe 2MS case (green line) is different from the 1RG and 6MS\ncases because only the $B$-band brightness obtained around 0.6 day \nis lower than the model prediction while \nall the other observed brightnesses (including the upper limits)\nare higher.\nThe 2MS case, therefore, still appears to be incompatible\nwith the model prediction,\nbut with less confidence than the 1RG 
and 6MS cases. \nIn conclusion, if the interactions between the ejecta and \nthe binary companions are \nindeed viewed along the interaction axis from the companion side in \\kspn,\nour comparisons show that the companion of the source\nwas located in closer proximity to the white dwarf \nthan in the 1RG and 6MS cases,\nand likely closer than in the 2MS case as well,\nindicating that the size of the progenitor companion \nof \\kspn\\ is smaller than those of these three stars.\nWe now provide a much more thorough and general investigation\ninto the presence of potential emission originating from the\nejecta-companion interactions and \nconclude that it is highly likely that the companion of \\kspn\\\nwas a white dwarf, supporting the double-degenerate scenario.\n\n\nWe first expand our search for the signal from the potential \ninteractions between the ejecta and companion in \\kspn\\ \nby including an extensive set of companion types,\nfar beyond the 1RG, 2MS and 6MS cases, \nfor all possible oblique viewing angles,\ni.e., $\\theta$ $>$ 0\\degr, \nand also by fully accounting for the uncertainties in our photometric measurements.\nFor this, we choose a set of 60 distances \nin the range of (0.001--10) $\\times$ 10$^{13}$~cm \non a logarithmic scale as trial binary separation distances of its progenitor.\nThis range of the separation distances corresponds to that of the Roche-radius separations of stars \nspanning from the smallest red dwarfs to the largest supergiants. 
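The trial grid described above can be set up as follows; a brief sketch assuming NumPy, with the 60 logarithmically spaced separations over (0.001--10) $\times$ 10$^{13}$ cm and 100 equally sampled viewing angles over 0\degr--180\degr\ (the subsequent comparison against the Kasen (2010) model light curves is not shown):

```python
import numpy as np

# 60 trial binary separation distances, logarithmically spaced over
# (0.001--10) x 10^13 cm, spanning Roche-radius separations from the
# smallest red dwarfs to the largest supergiants (values in cm).
separations = np.logspace(np.log10(0.001e13), np.log10(10e13), 60)

# 100 viewing angles equally sampled over theta = 0--180 deg.
viewing_angles = np.linspace(0.0, 180.0, 100)

# Every (separation, angle) pair is one model configuration to be
# compared against the observed BVI light curves: 60 x 100 = 6000.
grid = [(a, th) for a in separations for th in viewing_angles]
print(len(grid))  # 6000
```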
\nWe then calculate predicted brightnesses from the \nejecta-companion interaction in $BVI$ with these \nseparation distances using the model of \\citet{kas10}\nfor a set of 100 viewing angles equally sampled \nin the range of $\\theta$ = 0\\degr--180\\degr.\nIn this process, \nwe compare the predicted brightnesses to the observed light curves \nat various confidence levels by adopting a Gaussian distribution \nof the observed $BVI$ magnitudes\n(including the upper limits, Figure~\\ref{fig:kas}) of the source\nwith their photometric uncertainties. \nFigure~\\ref{fig:kassep} shows our results,\nwhere the abscissa is the binary separation distance\nand the ordinate represents\nthe lower limit of acceptable viewing angles \nfor the interaction between the ejecta and its companion in \\kspn\\ based \non the model of \\citet{kas10}\nat the confidence level of 68~\\%\\ (solid curve) and 95~\\%\\ (dashed curve).\nAt the 95~\\%\\ confidence level (dashed curve), \nas in the figure,\nonly highly-oblique viewing angles $\\theta$ $\\simgt$ 130\\degr\\ are allowed\nfor the separation distances larger than 2 $\\times$ 10$^{13}$~cm,\nwhereas all of the small viewing angles are ruled out.\nFor small separation distances of $<$ 0.03 $\\times$ 10$^{13}$~cm,\non the other hand, we cannot rule out any viewing angles.\nIn the separation distance range of \n(0.03--2) $\\times$ 10$^{13}$~cm, \nthe lower limit of acceptable viewing angles increases with the\nseparation distance, ruling out more viewing angles for larger separation distances. 
\nAt the 68~\\% confidence level (solid curve), \nthe acceptable viewing angles are naturally more constrained\nthan in the 95~\\% case (dashed curve) --\nalmost all viewing angles are excluded for large separation distances,\nwhile we cannot rule out any angle only for the separation distances\nsmaller than 0.007 $\\times$ 10$^{13}$~cm.\n\nThe results shown in Figure~\\ref{fig:kassep} accommodate the \nphotometric uncertainties of the light curves of \\kspn, \nbut not those of the estimated values of the redshift, epoch of the first light, and ejecta mass and kinetic energy\nneeded to compute the model luminosities from the ejecta-companion interactions.\nIn order to reflect the uncertainties of those parameters \nin our analyses, we conduct 40000 \nMonte Carlo simulations of the light curves of the ejecta-companion \ninteractions predicted by \\citet{kas10}\nby randomly selecting the values of these parameters \nunder the assumption that they follow a Gaussian distribution \nwith the measured uncertainties.\nWe use the same set of separation distances \nand viewing angles that \nwe used above for Figure~\\ref{fig:kassep}\nand then compare the observed light curves \nof \\kspn\\ with the model light curves. \nFigure~\\ref{fig:kasepo} shows the results of our comparison,\nwherein all the viewing angles are allowed for the separation distances \nup to $\\simeq$ 0.045 $\\times$ 10$^{13}$ cm at the confidence level of 95~\\%\nwhen all the uncertainties are statistically accounted for. 
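The Monte Carlo treatment of the parameter uncertainties can be sketched as below. Central values follow those quoted in the text (the first-light epoch of about --17.2 days follows from the numbers in \S\ref{sec:cenni}, and its quoted uncertainty here is an illustrative placeholder); the comparison step against the Kasen (2010) light curves is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 40000  # number of Monte Carlo realizations

# Central values and 1-sigma uncertainties: redshift, rest-frame epoch of
# first light (days from peak; uncertainty is a placeholder), ejecta mass
# (Msun), and kinetic energy (10^51 erg).
params = {
    "z":    (0.072, 0.003),
    "t0":   (-17.2, 0.5),
    "M_ej": (0.84, 0.12),
    "E_ej": (0.61, 0.14),
}

# Draw each parameter from a Gaussian with its measured uncertainty; the
# i-th entry of each array supplies the inputs for one model realization.
draws = {k: rng.normal(mu, sig, n_sim) for k, (mu, sig) in params.items()}
print(draws["M_ej"].shape)  # (40000,)
```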
\nThe separation distance of 0.045 $\\times$ 10$^{13}$ cm\nis about 90~\\% of that of 2MS, after which the lower limit of acceptable viewing angles increases.\nAt the separation distances larger than that of 1RG, \nonly large viewing angles, i.e., $\\theta$ $>$ 125\\degr, are allowed.\nThe constraints on the viewing angles become more stringent at the 68~\\% confidence level,\nwith even small viewing angles being ruled out at very short separation distances.\n\nOur analyses above show that the companion of \\kspn\\ is \nunlikely to have been a red giant,\nconsidering that the lower limit of acceptable viewing angles\nrules out most of the viewing angles\nfor such a large star.\nNote that it is highly unlikely the companion was \nat a different location from the Roche-radius separation, where\nit can provide the stable accretion needed to trigger the explosion.\nA MS subgiant appears to be more probable \nthan the red giant in our analyses, \nespecially at highly-oblique viewing angles;\nhowever, sustaining substantial accretion from such a MS companion \nfor a \\sni\\ detonation is more challenging.\nThese results indicate that a double-degenerate system is much more \nlikely to have been the progenitor system of \\kspn,\nwhich agrees well with the results of the statistical \nsearches for ejecta-companion shock interactions\nin the surveys of SDSS, SNLS and TESS \\citep{hayden10,bianco11,fausnaugh19}\nand also with those of individual SN studies of \nSN2011fe \\citep{nuget11,liet11}, \nSN2012ht \\citep{yamanaka14}, \nSN2013gy \\citep{holmbo19}, \nASASSN-14lp \\citep{shapet16} \nand SN2012cg \\citep[][although see \\citealt{maret16}]{shapet18},\nstrengthening the case for small companions for the majority \nof \\snia\\ progenitors \\citep[e.g.,][]{ollet15}. 
\nWe, however, also note that some SNe, e.g.,\nSN2014J \\citep{goobar15}, \nSN2017cbv \\citep{hosseinzadeh17} and \nSN2018oh \\citep{shapet19,dimitriadis19}, \ndo show indications of early excess emission that could come \nfrom ejecta-companion shock interactions, \nalthough it is possible that some other processes---such as circumstellar interactions or shallow layers of \\ni56---are responsible for it.\n\n\n\n\\section{Missing Host Galaxy for \\kspn}\\label{sec:host}\n\nAs shown in Figure~\\ref{fig:det} (see also \\S~\\ref{sec:obs}),\nno host galaxy underlying \\kspn\\ \nis detected in our deep stack images reaching the sensitivity\nlimits of $\\simeq$ 27.8 ($B$), 28.5 ($V$) and 28.2 ($I$) \\mpas, while the source\n\\galgg, which is the only apparently extended source near \\kspn, \nis located $\\sim$27\\arcsec\\ away in the southwestern direction.\nIn order to understand the nature of \\galgg\\ and its potential \nconnection to \\kspn,\nwe fit its $V$-band surface brightness\nwith the S{\\'e}rsic profile\n$\\mu$ = $\\mu_0$ + 1.0857 $b_n(r\/r_{\\rm e})^{1\/n}$\n(where $\\mu_0$, $r$, $r_{\\rm e}$ and $n$ are the central \nsurface brightness, radius, effective radius and S\\'ersic \ncurvature index, respectively, and \n$b_n$ = 1.9992$n$--0.3271, see \\citealt{graham05})\nto obtain $\\mu_{0}$ = 19.04 $\\pm$ 0.14 \\mpas,\n$r_{\\rm e}$ = 3\\farcs63 $\\pm$ 0\\farcs03 and $n$ = 2.05 $\\pm$ 0.06.\nThe apparent $V$-band magnitude and the \\bv\\ color of \\galgg\\ \nare $V$ $\\simeq$ 17.38 mag and \\bv\\ $\\simeq$ 1.4 mag, respectively.\nThese fitted parameters and color of \\galgg\\ are similar to those \nfound in early-type galaxies \\citep{baset13, valet11}, \nwhich is consistent with its spectroscopic properties.\nFigure~\\ref{fig:gal} shows our Magellan spectrum of \\galgg\\ (see \\S~\\ref{sec:obs} \nfor the details of the observations)\nwith clear detections of Ca H+K, G and Na I D absorption lines, \ntypical of early-type galaxies. 
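The S\'ersic profile used in the fit above can be written as a short function; a sketch with the quoted best-fit $V$-band values plugged in (the evaluation radii are illustrative, and no fitting machinery is shown):

```python
def sersic_mu(r_arcsec, mu0, r_e, n):
    """Sersic surface-brightness profile in mag/arcsec^2:
    mu(r) = mu0 + 1.0857 * b_n * (r/r_e)^(1/n),
    with b_n ~ 1.9992*n - 0.3271 (Graham & Driver 2005); mu0 is the
    central surface brightness, r_e the effective radius, n the index."""
    b_n = 1.9992 * n - 0.3271
    return mu0 + 1.0857 * b_n * (r_arcsec / r_e) ** (1.0 / n)

# Best-fit V-band parameters quoted in the text for the nearby galaxy:
mu0, r_e, n = 19.04, 3.63, 2.05

# Surface brightness at the center and at one effective radius
# (at r = r_e the profile is fainter than mu0 by 1.0857 * b_n mag).
print(sersic_mu(0.0, mu0, r_e, n))   # 19.04 (central surface brightness)
print(sersic_mu(r_e, mu0, r_e, n))
```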
\nWe determine the redshift of \\galgg\\ to be \n$z$ $\\simeq$ 0.167 $\\pm$ 0.001\nusing the measured wavelengths of the absorption lines,\nwhere the uncertainty is due to the rms wavelength \nsolution error of 0.135 \\AA.\nThe measured redshift is much larger than $z$ = 0.072 $\\pm$ 0.003 inferred for \\kspn\\ from\nthe \\sni\\ template fitting (\\S\\ref{subsec:temp}).\nFigure~\\ref{fig:z0163fit} compares the best-fit \\snp\\ Branch Normal template (dotted curve) \nobtained at $z$ = 0.167 with those in Figure~\\ref{fig:LCtemplate}, \nshowing clearly that it gives significantly \nworse results than the fit obtained at $z$ = 0.072.\nIn addition, in order for \\kspn\\ to be a \\sni\\ at the high redshift of $z$ = 0.167,\nits observed $B$-band peak brightness requires the SN to either have a\npeculiar motion greater than 14000 \\kms\\ or a peak absolute magnitude\nmore luminous than --21.8 mag with \\dm15\\ $\\simeq$ 1.96 mag.\nBoth of these are unrealistic since the required peculiar motion is more than\n20 times greater than what can be expected for the source (see \\S\\ref{subsec:temp})\nand the required peak luminosity and \\dm15\\ would make the source an extremely luminous \n\\sni\\ with an exceptionally large decline rate.\nWe, therefore, conclude that \\galgg\\ is unrelated to \\kspn\\\nand is an early-type galaxy located at a much larger redshift.\nThis leaves \\kspn\\ still hostless. 
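The distance side of this argument can be checked to first order with a simple Hubble-law estimate; this is only an approximate sketch (linear Hubble law with an assumed H$_0$ = 70 \kms~Mpc$^{-1}$), not the cosmological calculation used in the paper:

```python
import math

C_KMS = 2.998e5   # speed of light, km/s
H0 = 70.0         # assumed Hubble constant, km/s/Mpc

def hubble_distance_mpc(z):
    """First-order Hubble-law distance D ~ c*z/H0, adequate at low z."""
    return C_KMS * z / H0

d_sn  = hubble_distance_mpc(0.072)  # ~310 Mpc, as quoted in the text
d_gal = hubble_distance_mpc(0.167)  # distance if the SN were at the galaxy redshift

# Moving the SN from z = 0.072 to z = 0.167 would make its peak absolute
# magnitude brighter by 5*log10(d_gal/d_sn), i.e. ~1.8 mag in this linear
# approximation (somewhat larger with full cosmological corrections).
delta_mag = 5.0 * math.log10(d_gal / d_sn)
print(round(d_sn), round(delta_mag, 2))
```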
\n\nIn order to investigate whether another nearby detected source\nmay be the host galaxy of \\kspn, we then \ncalculate probabilities that 22 sources identified within\nabout 35\\arcsec\\ from \\kspn\\ in our deep $BVI$ images\nwith S\/N $>$ 2\nare random galaxies coincidentally located in the field\nby adopting the methodology in \\citet{bkd02} using their \nmagnitudes and distances from \\kspn\\ \n\\citep[see also][and the details therein]{berger10}.\nIn Figure~\\ref{fig:maria}, which shows \nthe calculated probabilities of \nchance coincidence as a function of distance,\nall the sources have high ($>$ 0.4) chance coincidence probabilities,\nwith the majority near 1. \nThis tells us that all these sources near \\kspn\\\nare likely to be coincident by chance\nand are unlikely to be related to the SN.\n\n\nThe absence of any host galaxy candidate of \\kspn\\ \nin our deep stack images hints at the nature of its \nhost galaxy since,\nat the luminosity distance of $\\simeq$ 310 Mpc (or $z$ $\\simeq$ 0.072), \nregular galaxies should be easily identifiable as extended objects \\citep[see][for example]{aet14} in our images.\nConsidering that there is no such extended object \nother than \\galgg, which is at a much higher redshift, \nin the vicinity of \\kspn, \nit is highly likely that the host galaxy of \\kspn\\ is \na faint dwarf galaxy.\nThe limiting magnitudes of an unresolved source in our images \nare $B\\simeq 25.83$, $V\\simeq 25.30$, and $I\\simeq 24.74$ mag.\nThis corresponds to an absolute magnitude limit of $\\simeq$ --12 mag \nin the $V$ band at $z$ = 0.072, which is \ncompatible with the previously-known $V$-band absolute magnitude range\nof dwarf galaxies \\citep{tol09}.\nThe nearest unresolved source to \\kspn\\ in Figure~\\ref{fig:det} \nis about 5\\arcsec\\ away\nfrom \\kspn\\ with $V$- and $I$-band magnitudes of \n24.14 $\\pm$ 0.15 and 23.99 $\\pm$ 0.17 mag, respectively. 
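The chance-coincidence estimate follows \citet{bkd02}; below is a minimal sketch of that calculation using their $R$-band galaxy number-count parametrization (applying it directly to our $BVI$ magnitudes is an approximation):

```python
import math

def chance_probability(r_arcsec, mag):
    """Probability that an unrelated galaxy of magnitude <= mag lies by
    chance within r_arcsec of the SN (Bloom, Kulkarni & Djorgovski 2002):
    P = 1 - exp(-pi * r^2 * sigma(<=mag)), with the galaxy surface density
    sigma(<=mag) = 10^(0.33*(mag - 24) - 2.44) / (0.33*ln 10) arcsec^-2."""
    sigma = 10 ** (0.33 * (mag - 24.0) - 2.44) / (0.33 * math.log(10))
    return 1.0 - math.exp(-math.pi * r_arcsec ** 2 * sigma)

# Example: a mag ~24 source 5 arcsec away (similar to the nearest
# unresolved source to the SN) already has a substantial chance of being
# an unrelated field galaxy, and the probability approaches 1 for the
# fainter sources at tens of arcsec.
print(chance_probability(5.0, 24.0))
```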
\nIf this unresolved source is a dwarf galaxy hosting \\kspn, \nthe SN is located $\\sim$ 6.6 kpc away from the center of the host galaxy.\nIts $V$-band absolute magnitude is --13.23 mag at $z$ = 0.072, \nand a dwarf galaxy with such a magnitude \nis expected to have $\\simeq$ 25 \\mpas\\ effective surface brightness\nand $\\simeq$ 1 kpc effective radius \\citep{cb15,tol09}.\nThis requires that \\kspn\\ exploded at a location more than six times \nthe effective radius away from the center of the dwarf galaxy. \nAccording to \\citet{k13}, the stellar density in a dwarf galaxy \nat such a location drops by a factor of $\\gg$ 50 \nrelative to the central part.\nWe, therefore, conclude again that it is highly unlikely that \nany source identified in our deep stack images is \nthe host galaxy of \\kspn\\ and that the host galaxy of the SN\nis most likely a dwarf galaxy fainter than our detection limit.\nThe inferred insignificant host galaxy extinction of \\kspn\\ \n(\\S\\ref{sec:obs} and \\S\\ref{subsec:color}) points to \nthe outskirts of its potential host galaxy as the explosion location.\n\nRecently, a growing number of \\snia\\ have been detected \nin low-luminosity dwarf galaxies as well as those that remained hostless. \nThese include\n(1) SN1999aw, a luminous and slow-decaying SN from a \nvery faint galaxy with $M_V=-12.4$ mag \\citep{Strolger2002};\n(2) SN2007qc from an extremely faint host galaxy with $M_B \\sim -11$ mag \\citep{Quimby2012};\n(3) SN2007if, a luminous super-Chandrasekhar-mass \\sni\\\ndetected in a host galaxy with $M_g=-14.45$ mag \\citep{Childress2011};\nand (4) PTF10ops, a peculiar SN with subluminous spectral \nproperties but with a normal light-curve width \nwhile remaining hostless to the detection \nlimit of $M_r$ $\\gtrsim$ --12 mag \\citep{maguire11}. 
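The quoted projected offset can be reproduced from the angular scale at $z$ $\simeq$ 0.072; a rough sketch that approximates the angular-diameter distance from the quoted luminosity distance of $\simeq$ 310 Mpc:

```python
ARCSEC_PER_RAD = 206265.0

def projected_offset_kpc(theta_arcsec, d_lum_mpc, z):
    """Projected physical offset for an angular separation theta_arcsec,
    using the angular-diameter distance D_A = D_L / (1+z)^2."""
    d_ang_kpc = d_lum_mpc * 1.0e3 / (1.0 + z) ** 2
    return d_ang_kpc * theta_arcsec / ARCSEC_PER_RAD

# ~5 arcsec at z = 0.072 (D_L ~ 310 Mpc) corresponds to ~6.5 kpc,
# consistent with the ~6.6 kpc offset quoted for the nearest
# unresolved source.
print(round(projected_offset_kpc(5.0, 310.0, 0.072), 1))
```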
\nIt is also noteworthy that Ca-rich transients, \nwhose thermonuclear origin is still under debate, \nhave consistently been found in the outskirts of their host galaxies \nwhere there is no apparent stellar population,\nfavoring high-velocity progenitors over a low-metallicity environment \\citep{yuan13,foley15,lyman16,lunnan17}.\n\\cite{Graham2015} identified the host of a \\sni\\ with \n$M_V$ $\\simeq$ --8.4 mag, which may be either\na dwarf galaxy or a globular cluster from a nearby elliptical \ngalaxy, along with two other cases where no host galaxy \nis identified to the limit of $M_R$ $>$ --9.2 mag, \nsuggesting that their progenitors most likely belong to the intracluster \nstellar population.\nAlthough \\snia\\ from faint hosts,\nincluding those from intracluster environments and dwarf galaxies, \ncan potentially provide important insight into the connection between\ntheir progenitors and stellar populations, and can be used to trace missing dwarf galaxies in the \nlocal universe \\citep[e.g.,][]{Graham2015,cb15},\nour understanding of such \\snia\\ is still \nhighly incomplete, mainly due to the lack of a statistically meaningful sample size. 
\nThe identification of \\kspn\\ as a rapidly-declining hostless\n\\sni\\ of transitional nature indicates that this type of \n\\snia\\ can be produced\nin faint dwarf galaxies and that the coupling between\nspecific types of \\snia\\ and host galaxy environment may not be\nas strong as previously thought, considering that\ntransitional \\snia\\ have largely been detected\nin early-type galaxies \\citep[e.g.,][]{ashet16, mrbh07}.\n\n\\section{Summary and Conclusion}\\label{sec:sum}\n\nIn this paper, we report the discovery and \nidentification of \\kspn\\ \nas a rapidly-declining hostless \\sni\\ of transitional nature\nlikely originating from a sub-Ch explosion in a double-degenerate progenitor\nbased on high-cadence, multi-color observations made with the KMTNet.\nWe summarize our results and conclusions as follows.\n\n\\begin{itemize}\n\\item The observed light curves and colors of \\kspn\\ \nare compatible with a \\rapd\\ \\sni\\ at $z$ $\\simeq$ 0.072\nwhose properties are intermediate between Branch Normal \nand 91bg-like, but much closer to the former, \nwith clear signs of transitional nature.\nWhile the evolution of its early light curves is well fitted \nwith a power law representing a homologous expansion\npowered by \\ni56\\ radioactive decay, which is mediated by photon diffusion processes,\nthe overall \\bv\\ and \\vi\\ color evolution of \\kspn\\ is largely\nsynchronous with that of the $I$ band as found in other \\snia.\nWe identify the presence of an early redward evolution\nin the \\vi\\ color prior to --10 days from the peak,\nbefore the SN enters the previously-known phase of blueward evolution.\nThis early redward evolution in \\vi\\ \nhas not been much studied but may bear an important clue\nto understanding the physical conditions of SN explosions.\n\n\\item The \\pp\\ and \\csp\\ of \\kspn\\ are \n\\dm15\\ = 1.62 $\\pm$ 0.03 mag and \\sbv\\ = 0.54 $\\pm$ 0.05, respectively,\nwhich place the source in the gap between the two groups of 
\nBranch Normal and 91bg-like with the peak luminosity of\n(9.0 $\pm$ 0.3) $\times$ 10$^{42}$ \ergs. \nThe transitional nature of the source is also confirmed by \nthe relative strength ($\simeq$ 0.309) \nof the secondary $I$-band peak of the source \nand its \bv\ $\simeq$ 0.08 mag color at the peak epoch.\nWe obtain $M_{\rm Ni}$ = 0.32 $\pm$ 0.01 \msol, \n\mej\ = 0.84 $\pm$ 0.12 \msol\ and \n\eej\ = (0.61 $\pm$ 0.14) $\times$ 10$^{51}$ erg,\nwhich make \kspn\ a \sni\ explosion with\na sub-Ch mass.\n\n\item The bolometric light curve of \kspn\ shows the presence of\nan early excess emission relative to what is expected from a centrally-concentrated\n\ni56\ distribution. \n\moon{We find that a stratified \ni56\ distribution \nextended more shallowly to the surface of the progenitor provides \na good match to the observed bolometric light curve,\nwhile the presence of a thick, \simgt\ 0.01 \msol, external shell\nis largely incompatible with the observed early color evolution.}\nThorough comparisons between the observed light curves \nand those predicted from ejecta-companion interactions \nclearly prefer a small binary separation distance for the progenitor, \nfavoring the double-degenerate scenario for its origin.\n\n\item Even in our deep stack images reaching the sensitivity limit\n$\mu_{BVI}$ $\simeq$ 28 \mpas,\n\kspn\ remains hostless, suggesting that its host galaxy \nis a faint dwarf galaxy. 
This contradicts what has previously been\nthought about the types of host galaxies that produce transitional \snia.\nIt will be worthwhile to investigate the nature of the host galaxy \nof \kspn\ with deeper imaging observations than presented in this paper,\nwhich can shed new light on the relationship between \nhost galaxies and types of \snia.\n\n\end{itemize}\n\n\n\n\n\n\acknowledgments\nThis research has made use of the KMTNet facility operated by the Korea Astronomy and Space Science Institute, and the data were obtained at three host sites of CTIO\nin Chile, SAAO in South Africa, and SSO in Australia. \nWe acknowledge with thanks the variable star observations\nfrom the AAVSO International Database contributed by\nobservers worldwide and used in this research. \nDSM was supported in part by a Leading Edge Fund from the Canada Foundation \nfor Innovation (project No. 30951) and a Discovery\nGrant (RGPIN-2019-06524) from the Natural Sciences and Engineering Research Council (NSERC) of Canada. \nMRD acknowledges support from NSERC through grant RGPIN-2019-06186, \nthe Canada Research Chairs Program, \nthe Canadian Institute for Advanced Research, \nand the Dunlap Institute at the University of Toronto. \nHSP was supported in part by the National Research Foundation of Korea (NRF) grant\nfunded by the Korea government (MSIT, Ministry of Science and ICT; No. NRF-2019R1F1A1058228).\nJA is supported by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.I.) 
under the 2nd Call of \"Science and Society\" Action \"Always strive for excellence -- Theodoros Papazoglou\" (Project Number: 01431).\n\n\n\n\n\vspace{5mm}\n\facilities{KMTNet, Magellan, AAVSO}\n\software{\nAstropy \citep{astropy13,astropy18},\n\snp\ \citep{buret11},\nSCAMP \citep{bertin06},\nSWARP \citep{bet02},\nSiFTO \citep{conet08}\n}\n\n\n\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\section{Data Extraction}\nAt each gate setting we measure the qubit transition frequency over at least one period in offset charge $n_g$. From this we extract the qubit frequency $f_{0n}$ and charge dispersion $\delta f_{0n}$ by applying a peak-finding algorithm to the raw two-tone spectroscopy data. The algorithm first smooths the data along the frequency axis in order to combat noise, after which peaks are identified and fit with Lorentzian lineshapes in order to obtain their center frequencies. For the gate voltage ranges in which two peaks are identified we take $\delta f_{0n}$ to be the local maximum in peak separation. Conversely, $f_{0n}$ is obtained from the regions where only a single peak is identified. In the regions of parameter space where $\delta f_{0n}$ is smaller than the qubit linewidth $\gamma_{0n}$, such that only a single peak can be discerned for any $n_g$ (open markers in Fig. 
4 of the main text), we take the center frequency to be $f_{0n}$ and use the extracted linewidth of the Lorentzian lineshape as an upper bound for $\\delta f_{0n}$.\n\n\\section{Modelling of the qubit}\n\\label{sec:supmodel}\nIn order to model the measured data we study the Hamiltonian of a capacitively shunted junction given by \\begin{equation}\n\\hat{H} = 4 E_{\\mathrm{c}} \\left(\\hat{n} - n_{\\mathrm{g}}\\right)^2 + U(\\hat{\\phi})\n\\label{eq:suphamiltonian}\n\\end{equation}\nwhere $U(\\hat{\\phi})$ is the junction potential, $E_{\\mathrm{c}}$ is the charging energy, $\\hat{n}$ is the number of Cooper pairs that have traversed the junction, $n_{\\mathrm{g}}$ is a dimensionless offset charge and $\\hat{\\phi}$ is the phase difference between the superconductors on either side of the junction. We obtain the qubit energy levels $E_n(n_g)$ and the corresponding qubit transitions $f_{ij}(n_g) = E_j(n_g) - E_i(n_g)$ through numerical diagonalization of the Hamiltonian, from which $f_{0n}$ and $\\delta f_{0n}$ are calculated by evaluating the transitions at the appropriate offset charges.\n\nWe perform this procedure for three possible models for the junction potential: a sinusoidal potential as encountered in tunnel junctions, a potential considering only occupation of the $E_-$ ABS branch of the resonant level model, and a potential including ITLZ tunneling between the ABS of the resonant level model. In the case of the sinusoidal model we take $U(\\hat{\\phi}) = -E_{\\mathrm{J}} \\cos{\\hat{\\phi}}$, where we define an effective $E_{\\mathrm{J}} \\equiv \\widetilde{\\Delta} \\widetilde{D} \/4$ in order to compare the models on equal footing. 
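As a minimal illustration of the numerical diagonalization described above, the following sketch treats the sinusoidal case, where the Hamiltonian is tridiagonal in the truncated charge basis; it evaluates the 0--1 transition at the two extremal offset charges to obtain the charge dispersion. The parameter values used below are placeholders, not the fitted values of this work.

```python
import numpy as np

def transmon_levels(Ec, Ej, ng, ncut=30):
    # H = 4*Ec*(n - ng)^2 - (Ej/2)(|n><n+1| + h.c.) in the charge basis,
    # truncated to n = -ncut..ncut; returns eigenvalues in ascending order.
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * Ec * (n - ng) ** 2)
    H -= 0.5 * Ej * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return np.linalg.eigvalsh(H)

def f01_and_dispersion(Ec, Ej):
    # f_01 at ng = 0, and the charge dispersion |f_01(1/2) - f_01(0)|
    # obtained from the two extremal offset charges.
    e0 = transmon_levels(Ec, Ej, ng=0.0)
    e5 = transmon_levels(Ec, Ej, ng=0.5)
    f01 = e0[1] - e0[0]
    return f01, abs((e5[1] - e5[0]) - f01)
```

With the effective $E_J$ of the sinusoidal model, increasing $E_J/E_c$ suppresses the dispersion roughly exponentially, as expected for a tunnel-junction transmon.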
For the model including only the $E_{-}(\\hat{\\phi})$ ABS branch we use $U(\\hat{\\phi}) = -\\widetilde{\\Delta} \\sqrt{1-\\widetilde{D} \\sin^2 \\hat{\\phi}\/2}$, while in order to include ITLZ tunneling between the $E_{\\pm}(\\hat{\\phi})$ branches we follow the work of Ivanov and Feigel'man \\cite{Feigelman1999a} and approximate the many-body superconducting system by an effective two-level system. This results in a junction potential given by \n\\begin{equation}\nU(\\hat{\\phi}) =\n\\widetilde{\\Delta} \\begin{pmatrix}\n \\cos{\\frac{\\hat{\\phi}}{2}} & \\sqrt{1-\\widetilde{D}}\\sin{\\frac{\\hat{\\phi}}{2}} \\\\[4pt]\n \\sqrt{1-\\widetilde{D}}\\sin{\\frac{\\hat{\\phi}}{2}} & -\\cos{\\frac{\\hat{\\phi}}{2}} \n\\end{pmatrix}\n\\label{eq:feigelman}\n\\end{equation}\nIn the above $\\widetilde{\\Delta}$ and $\\widetilde{D}$ are effective parameters resulting from the underlying quantum dot parameters discussed in section \\ref{sec:QD}.\n\nThe results of this procedure are demonstrated in Fig. \\ref{fig:supaverin}, which shows how $f_{01}$ and $\\delta f_{01}$ depend on $\\widetilde{D}$ for the three models. The sinusoidal model reproduces the expected results of the conventional tunnel junction transmon, exhibiting exponential suppression of $\\delta f_{01}$ for large values of $\\widetilde{D}$ and thus $E_{\\mathrm{J}}\/E_{\\mathrm{c}}$. The $E_{-}(\\phi)$ model exhibits similar behaviour up to moderate values of $\\widetilde{D}$, after which an enhanced suppression of $\\delta f_{01}$ takes place due to the increased height of the potential compared to the sinusoidal model. Finally, the model including ITLZ tunneling shows comparable behaviour to $E_{-}(\\phi)$ up to large values of $\\widetilde{D}$, after which a much more rapid decrease in charge dispersion takes place. This reproduces the scaling law predicted for ballistic Josephson junctions \\cite{Averin1999}. 
We note that the cross-over value of $\widetilde{D}$ between the adiabatic regime well-described by only $E_-$ and the diabatic regime including ITLZ tunneling is approximately given by $\widetilde{D} = 1-E_c\/\widetilde{\Delta}$, where the rate of phase evolution becomes comparable to the energy gap between the ABS \cite{Feigelman1999}. \n\n\begin{figure}[t!]\n\includegraphics[width=12.9cm]{supp_averin_v1.pdf}\n\caption{\label{fig:supaverin} Qubit frequency (panel a) and charge dispersion (panel b) evaluated for three different models: a sinusoidal potential, the negative energy ABS branch of the resonant level model, and a potential considering ITLZ tunneling between both ABS branches. All models are evaluated for fixed parameter values $\widetilde{\Delta} = 14$~GHz and $E_c = 715$~MHz. The dashed line indicates the crossover value $1-E_c\/\widetilde{\Delta}$. Inset of panel b: Zoom-in of the behaviour near $\widetilde{D}=1$, where the ITLZ model exhibits rapid suppression in charge dispersion down to a small value set by tunneling through a potential barrier $\widetilde{\Delta} \cos \phi\/2$.}\n\end{figure}\n\n\section{Resonant Level Model}\n\label{sec:QD}\nIn order to develop a quantitative understanding of our device we study a simplified model of a quantum dot between two superconducting leads known as the resonant level model \cite{Beenakker2001}. Depicted in Fig. 4a of the main text, it considers the presence of a single spin-degenerate level in the junction with an energy $\epsilon_{\mathrm{0}}$ relative to the Fermi level, coupled to two identical superconductors with superconducting gap $\Delta$ via the spin-degenerate tunnel rates $\Gamma_{\mathrm{l}}$ and $\Gamma_{\mathrm{r}}$. 
Its discrete energy spectrum follows from the solutions $\epsilon \in (0,\Delta)$ of the equation\n\begin{equation}\n\left(\Delta^2-\epsilon^2\right)\left(\epsilon^2-\epsilon_0^2-\Gamma^2\right) + 4\Delta^2\Gamma_{\mathrm{l}}\Gamma_{\mathrm{r}} \sin{}^2(\phi\/2) + 2\Gamma \epsilon^2(\Delta^2-\epsilon^2)^{1\/2} = 0\n\label{eq:supbeenakker}\n\end{equation}\nwhere $\Gamma = \Gamma_{\mathrm{l}}+\Gamma_{\mathrm{r}}$. \nThis equation can be solved numerically for general parameter values, resulting in a single pair of spin-degenerate ABS $E_{\pm}(\phi)$. Furthermore, in certain limits its solution can be recovered analytically (given in Eq. 1 of the main text), which coincides with the eigenvalues of Eq. \ref{eq:feigelman}. However, we found these limits too constraining for the model to accurately describe our data. We therefore construct an approximate solution to Eq. \ref{eq:supbeenakker} based on the ABS energies $E_{\pm}(\phi)$ and transparency $\widetilde{D}$ given by Eq. 1 of the main text, whereas for $\widetilde{\Delta}$ we do not use the limiting values but instead solve Eq. \ref{eq:supbeenakker} for the bound state energy at $\phi = 0$. As shown in Fig. \ref{fig:QDerr}, we tested the validity of this approximation for a wide range of parameters by explicit comparison to the solutions of Eq. \ref{eq:supbeenakker}. The effective spectrum closely resembles the exact solutions over a wide range of parameters, with relative errors on the order of a few percent even in the regime $\Gamma \approx \Delta$. We therefore argue that the effective model of Eq. \ref{eq:feigelman} should accurately describe the phenomenology of the resonant level junction. 
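As a sketch of this numerical solution, the following bisection finds a bound-state energy in $(0,\Delta)$. It assumes a single sign change of the left-hand side inside the bracket, which holds for the illustrative weak-coupling parameters used below (units chosen so that $\Delta = 1$; all values are placeholders, not fitted parameters).

```python
import math

def abs_energy(phi, Delta, Gl, Gr, eps0):
    # Left-hand side of the spectrum equation, with G = Gl + Gr:
    # (D^2-e^2)(e^2-eps0^2-G^2) + 4 D^2 Gl Gr sin^2(phi/2)
    #   + 2 G e^2 sqrt(D^2-e^2) = 0
    G = Gl + Gr
    def f(e):
        return ((Delta**2 - e**2) * (e**2 - eps0**2 - G**2)
                + 4 * Delta**2 * Gl * Gr * math.sin(phi / 2) ** 2
                + 2 * G * e**2 * math.sqrt(Delta**2 - e**2))
    # Bisection on (0, Delta), excluding the continuum edge e = Delta;
    # assumes f changes sign exactly once inside the bracket.
    lo, hi = 1e-9 * Delta, (1 - 1e-9) * Delta
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a symmetric, on-resonance level at weak coupling, the bound-state energy decreases monotonically as the phase runs from 0 toward $\pi$, as expected for a near-unity-transparency channel.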
A more detailed description of the model and its derivation can be found in \cite{Kringhoj2019a}.\n\n\begin{figure}[t!]\n\includegraphics[width=12.9cm]{Supp_QD_v1.pdf}\n\caption{\label{fig:QDerr} Comparison between the exact and approximate solutions for the ABS of the resonant level model. Top half shows the explicit energy levels for both solutions while the bottom half shows the relative error of the approximate model. Panel a shows the comparison in the regime of weak coupling ($\Gamma \ll \Delta$) on resonance ($\epsilon_0 = 0$) with symmetric barriers ($\delta\Gamma = 0$). Panel b shows the regime of moderate coupling in which $\Gamma_{\mathrm{l}}=\Gamma_{\mathrm{r}} = \Delta$ and where the level is off-resonant ($\epsilon_0 \neq 0$). Panel c shows the regime of far off-resonant strong coupling with asymmetric tunneling rates, in which $\Gamma, \epsilon_0 \gg \Delta$ and $\delta\Gamma \neq 0$.}\n\end{figure}\n\nHaving constructed the effective parameters $\widetilde{D}$ and $\widetilde{\Delta}$, we now show how the ITLZ model introduced in section \ref{sec:supmodel} behaves as a function of the underlying quantum dot parameters. We do this for the parameter values that resulted in the best fits to the measured data, shown in Fig. 4 of the main text. In Fig. \ref{fig:fig4d} we study the effect of varying $\epsilon_0$ at $\Gamma_{\mathrm{l}} = \Gamma_{\mathrm{r}}$. This results in a weak dependence of $\widetilde{\Delta}$, which is minimal when $\epsilon_0 = 0$. This coincides with a maximum in $\widetilde{D}$, as given by Eq. 1 of the main text. As shown in panel b, this translates into a qubit frequency that is maximal at $\epsilon_0 = 0$, coinciding with a minimum in charge dispersion. Panels c and d in turn show the dependence on $\Gamma_{\mathrm{l}}$ at fixed $\Gamma_{\mathrm{r}}$ with $\epsilon_0 = 0$. 
We find that $\widetilde{\Delta}$ is a monotonically increasing function of $\Gamma_{\mathrm{l}}$, whereas $\widetilde{D}$ is maximal when the asymmetry $\delta\Gamma = \vert \Gamma_{\mathrm{l}}-\Gamma_{\mathrm{r}} \vert$ is minimized. Panel d shows that the relative rapidity at which $\widetilde{\Delta}$ and $\widetilde{D}$ change around $\delta\Gamma = 0$ can result in a situation where the maximal qubit frequency does not coincide with minimal charge dispersion. We believe this effect to be the origin of the non-monotonic dependence between the qubit frequency and the charge dispersion seen in Figs. 3 and 4d of the main text.\n\n\begin{figure}[t!]\n\includegraphics[width=12.9cm]{supp_4d_v1.pdf}\n\caption{\label{fig:fig4d} Effective junction and qubit parameters as a function of resonant level parameters. Panel a (b) shows how $\widetilde{D}$ and $\widetilde{\Delta}$ ($f_{01}$ and $\delta f_{01}$) behave as a function of $\epsilon_0$ for $\delta\Gamma = 0$, panel c (d) shows how these parameters behave as a function of $\Gamma_{\mathrm{l}}$ for $\epsilon_0 = 0$ and $\delta \Gamma \neq 0$.}\n\end{figure}\n\n\section{Fitting Routine}\nWe fit the measured relationships between $\{f_{0n}, \delta f_{0n}\}$ using the models developed in section \ref{sec:supmodel}. For the data measured as a function of $V_{\mathrm{j}}$, shown in Fig. 4c of the main text, we assume that only $\epsilon_{\mathrm{0}}$ is varied. The remaining parameters $\Delta$, $E_{\mathrm{c}}$, $\Gamma_{\mathrm{l}}$, and $\Gamma_{\mathrm{r}}$ are taken to be independent. Additionally, we fix $\Delta = 53 \ \mathrm{GHz}$ based on DC transport experiments on similar nanowires \cite{Deng2016} and we assume that $\Gamma_{\mathrm{l}} = \Gamma_{\mathrm{r}}$. 
We then apply a fitting routine in which for each set of parameter values a range of $\\{\\widetilde{\\Delta}, \\widetilde{{D}}\\}$ is generated from the effective resonant level ABS potential for a large range of $\\epsilon_{\\mathrm{0}}$. These effective parameters are then used in the three different junction potentials of section \\ref{sec:supmodel}, resulting in a set of calculated values for $\\{f_{01}, \\delta f_{01}\\}$ that can be compared to the measured values. The best fit to the data for each model is obtained through the standard method of least-squares.\n\nFor the extracted relationships between $\\{f_{0n}, \\delta f_{0n}\\}$ as a function of $V_{\\mathrm{g}}$, shown in Fig. 4d of the main text, this procedure is slightly modified. We now assume that only $\\Gamma_{\\mathrm{l}}$ is a function of $V_{\\mathrm{g}}$, with $\\Delta$, $E_{\\mathrm{c}}$, $\\epsilon_{\\mathrm{0}}$, and $\\Gamma_{\\mathrm{r}}$ taken to be independent. For simplicity we fix $\\epsilon_{\\mathrm{0}} = 0$. We note that, in order to obtain a good fit for the data versus $V_{\\mathrm{g}}$, $\\Delta$ needed to enter as a free parameter. In addition, a good fit could not be found simultaneously for all three measured transitions with a single set of parameters. Instead we only fit the $\\{f_{01}, \\delta f_{01}\\}$ data to the model, resulting in a qualitatively satisfying fit for the $01$ transition. However, the fit suggests a value of superconducting gap $\\Delta = 18.6$~GHz, much smaller than the value measured in DC experiments \\cite{Deng2016}. Moreover, the fit does not manage to capture the position of the higher order transitions. 
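In simplified form, the fitting loop just described can be sketched as below, with the sinusoidal potential standing in for the full effective potential and a swept $E_J$ standing in for the swept junction parameter; all grids and values are illustrative placeholders, not the actual fit.

```python
import numpy as np

def transmon_f01_df01(Ec, Ej, ncut=30):
    # Charge-basis diagonalization of H = 4*Ec*(n-ng)^2 - (Ej/2)(|n><n+1|+h.c.);
    # returns (f01 at ng = 0, charge dispersion of the 0-1 transition).
    n = np.arange(-ncut, ncut + 1)
    def levels(ng):
        H = np.diag(4.0 * Ec * (n - ng) ** 2)
        H -= 0.5 * Ej * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
        return np.linalg.eigvalsh(H)
    e0, e5 = levels(0.0), levels(0.5)
    return e0[1] - e0[0], abs((e5[1] - e5[0]) - (e0[1] - e0[0]))

def best_fit_Ec(f_meas, df_meas, Ec_grid, Ej_grid):
    # For each candidate Ec, sweep Ej to build a {f01, df01} curve, then
    # score the measured points by least squares against the nearest
    # curve point; return the Ec with the smallest total error.
    best = (np.inf, None)
    for Ec in Ec_grid:
        curve = np.array([transmon_f01_df01(Ec, Ej) for Ej in Ej_grid])
        sse = 0.0
        for f, df in zip(f_meas, df_meas):
            sse += ((curve[:, 0] - f) ** 2 + (curve[:, 1] - df) ** 2).min()
        if sse < best[0]:
            best = (sse, Ec)
    return best[1]
```

The real routine scans the full parameter set and generates $\{\widetilde{\Delta}, \widetilde{D}\}$ from the resonant-level potential rather than sweeping $E_J$ directly; the sketch only illustrates the curve-generation and least-squares steps.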
As discussed in the main text, we attribute this parameter discrepancy as well as the inability to fit all three transitions to possible modifications in the shape of the potential originating from the lack of electron-electron interactions in the model.\n\n\section{Estimating Transparencies}\nAn estimate for the transparencies $\widetilde{D}$ realized in the experiment can be obtained from the fits to the data. As illustrated in Fig. \ref{fig:fig4d}, each numerically calculated value of $\delta f_{0n}$ corresponds to a value of $\widetilde{D}$, and one can therefore infer the values of $\widetilde{D}$ by matching the measured values of $\delta f_{0n}$ to the numerical values. We emphasize that these values are model and parameter dependent, and are therefore only an estimate. As shown in Fig. \ref{fig:ts}a, we find that by varying $V_{\mathrm{j}}$ transparencies between 0.5 and 1 are attained, with the largest transparency based on a distinguishable charge dispersion (filled markers) being 0.998 and the largest value based on the qubit linewidth $\gamma_{01}$ (open markers) being 0.9996. Finally, in panel b we show the asymptotic probability $p$ of the ABS remaining in the ground state as calculated from the extracted $\widetilde{D}$ and $\widetilde{\Delta}$ \cite{Averin1999}. This illustrates that the suppression of charge dispersion coincides with the vanishing of $p$. Furthermore, it shows that sizeable ITLZ probabilities are obtained over a wide range of the measured values, robust to small changes in fit parameters. We do not repeat this procedure for the data obtained by varying $V_{\mathrm{g}}$, given the unsatisfactory fit to the data.\n\n\begin{figure}[t!]\n\includegraphics[width=12.9cm]{supp_fig_Ts.pdf}\n\caption{\label{fig:ts} (a) Extracted $\widetilde{D}$ for the values of $\delta f_{01}$ measured as a function of $V_{\mathrm{j}}$, with the inset showing the behaviour near $\widetilde{D}=1$. 
(b) Asymptotic probability for the ABS to remain in the $E_-$ branch as calculated from the extracted $\\widetilde{D}$ and $\\widetilde{\\Delta}$.}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nVideo Object Segmentation (VOS) is a fundamental task in computer vision with many potential applications, including augmented reality~\\cite{ngan2011video} and self-driving cars~\\cite{zhang2016instance}. In this paper, we focus on semi-supervised VOS, which targets on segmenting a particular object across the entire video sequence based on the object mask given at the first frame. \n\\zongxin{The development of semi-supervised VOS can benefit many related tasks, such as video instance segmentation~\\cite{vis,Feng_2019_ICCV} and interactive video object segmentation~\\cite{oh2019fast,miao2020memory,liangmemory}.}\n\nEarly VOS works~\\cite{osvos,onavos,premvos} rely on fine-tuning with the first frame in evaluation, which heavily slows down the inference speed. Recent works (\\emph{e.g.},~\\cite{osmn,feelvos,spacetime}) aim to avoid fine-tuning and achieve better run-time. \nIn these works, STMVOS~\\cite{spacetime} introduces memory networks to learn to read sequence information and outperforms all the fine-tuning based methods. However, STMVOS relies on simulating extensive frame sequences using large image datasets~\\cite{voc,coco,cheng2014global,shi2015hierarchical,semantic} for training. The simulated data significantly boosts the performance of STMVOS but makes the training procedure elaborate. Without simulated data, FEELVOS~\\cite{feelvos} adopts a semantic pixel-wise embedding together with a global (between the first and current frames) and a local (between the previous and current frames) matching mechanism to guide the prediction. 
The matching mechanism is simple and fast, but the performance is not comparable with STMVOS.\n\n\n\n\begin{wrapfigure}[17]{R}{0.55\textwidth}\n\centering \n\n\includegraphics[width=0.98\linewidth]{figs\/CI.pdf}\n\n\caption{CI means collaborative integration. There are two foreground sheep (pink and blue). In the top row, neglecting background matching leads to confusion in the sheep's prediction. In the bottom row, we relieve the confusion problem by introducing background matching (dotted arrow).}\label{fig:cl}\n\n\end{wrapfigure}\n\n\n\nEven though the efforts mentioned above have made significant progress, current state-of-the-art works pay little attention to the feature embedding of the background region in videos and only focus on exploring robust matching strategies for the foreground object(s). Intuitively, it is easy to extract the foreground region from a video when all the background is precisely removed. Moreover, modern video scenes commonly contain many similar objects, such as the cars in car racing, the people in a conference, and the animals on a farm. For these cases, neglecting to integrate foreground and background embeddings traps VOS in an unexpected background confusion problem. As shown in Fig.~\ref{fig:cl}, if we focus on only the foreground matching like FEELVOS, a similar object of the same kind (a sheep here) in the background can easily confuse the prediction of the foreground object. This observation motivates us to treat the background equally with the foreground, so that better feature embedding can be learned to relieve the background confusion and promote the accuracy of VOS.\n\n\n\nWe propose a novel framework for Collaborative video object segmentation by Foreground-Background Integration (CFBI) based on the above motivation. 
\nDifferent from the above methods, we extract embeddings and perform matching not only for the foreground target in the reference frame but also for the background region, to relieve the background confusion.\nBesides, our framework extracts two types of embedding (\emph{i.e.}, pixel-level and instance-level embedding) for each video frame to cover different scales of features. Like FEELVOS, we employ pixel-level embedding to match all the objects' details with the same global \& local mechanism. However, pixel-level matching alone is neither sufficient nor robust enough to match objects at larger scales, and may bring unexpected noises due to the pixel-wise diversity. Thus we introduce instance-level embedding to help the segmentation of large-scale objects by using attention mechanisms.\nMoreover, we propose a collaborative ensembler to aggregate the foreground \& background and pixel-level \& instance-level information and learn the collaborative relationship among them implicitly.\n\zongxin{For better convergence, we adopt a balanced random-crop scheme in training to avoid learned attributes being biased to the background attributes.}\nAll these proposed strategies can significantly improve the quality of the learned collaborative embeddings for conducting VOS while keeping the network simple yet effective.\n\n\n\n\nWe perform extensive experiments on DAVIS~\cite{davis2016,davis2017} and YouTube-VOS~\cite{youtubevos} to validate the effectiveness of the proposed CFBI approach. Without any bells and whistles (such as the use of simulated data, fine-tuning or post-processing), CFBI outperforms all other state-of-the-art methods on the validation splits of DAVIS 2016 (ours, $\mathcal{J}\&\mathcal{F}$ $\mathbf{89.4\%}$), DAVIS 2017 ($\mathbf{81.9\%}$) and YouTube-VOS ($\mathbf{81.4\%}$) while keeping a competitive single-object inference speed of about 5 FPS. 
By additionally applying multi-scale \& flip augmentation at the testing stage, the accuracy can be further boosted to $\mathbf{90.1\%}$, $\mathbf{83.3\%}$ and $\mathbf{82.7\%}$, respectively. \nWe hope our simple yet effective CFBI will serve as a solid baseline and help ease future research on VOS.\n\n\n\section{Related Work}\n\n\noindent\textbf{Semi-supervised Video Object Segmentation.}\nMany previous methods for semi-supervised VOS rely on fine-tuning at test time. Among them, OSVOS~\cite{osvos} and MoNet~\cite{xiao2018monet} fine-tune the network on the first-frame ground-truth at test time. OnAVOS~\cite{onavos} extends the first-frame fine-tuning by an online adaptation mechanism, \emph{i.e.}, online fine-tuning. MaskTrack~\cite{masktrack} uses optical flow to propagate the segmentation mask from one frame to the next. PReMVOS~\cite{premvos} combines four different neural networks (including an optical flow network~\cite{flownet}) using extensive fine-tuning and a merging algorithm. Despite achieving promising results, all these methods are seriously slowed down by fine-tuning during inference.\n\nSome other recent works (\emph{e.g.},~\cite{osmn,favos}) aim to avoid fine-tuning and achieve a better run-time. OSMN~\cite{osmn} employs two networks to extract the instance-level information and make segmentation predictions, respectively. PML~\cite{pml} learns a pixel-wise embedding with a nearest-neighbor classifier. Similar to PML, VideoMatch~\cite{videomatch} uses a soft matching layer that maps the pixels of the current frame to the first frame in a learned embedding space. Following PML and VideoMatch, FEELVOS~\cite{feelvos} extends the pixel-level matching mechanism by additionally matching between the current frame and the previous frame. Compared to the methods with fine-tuning, FEELVOS achieves a much higher speed, but there is still a gap in accuracy. 
Like FEELVOS, RGMP~\cite{rgmp} and STMVOS~\cite{spacetime} do not require any fine-tuning. STMVOS, which leverages a memory network to store and read the information from past frames, outperforms all the previous methods. However, STMVOS relies on an elaborate training procedure using extensive simulated data generated from multiple datasets. Moreover, the above methods do not focus on background matching.\n\nOur CFBI utilizes both the pixel-level and instance-level embeddings to guide prediction. Furthermore, we propose a collaborative integration method by additionally learning background embedding. \n\n\noindent\textbf{Attention Mechanisms.}\nRecent works introduce the attention mechanism into convolutional networks (\emph{e.g.}, ~\cite{attention_conv1,attention_conv2}). \nFollowing them, SE-Nets~\cite{senet} introduced a lightweight gating mechanism that focuses on enhancing the representational power of the convolutional network by modeling channel attention. Inspired by SE-Nets, CFBI uses an instance-level average pooling method to embed collaborative instance information from pixel-level embeddings. After that, we apply a channel-wise attention mechanism to help guide prediction. Compared to OSMN, which employs an additional convolutional network to extract instance-level embedding, our instance-level attention method is more efficient and lightweight.\n\n\begin{figure}[t!]\n \centering\n \includegraphics[width=0.9\linewidth]{figs\/overview.pdf}\n \caption{An \textbf{overview} of CFBI. F-G denotes Foreground-Background. We use \textcolor{red}{red} and \textcolor{blue}{blue} to indicate foreground and background, respectively. The deeper the red or blue color, the higher the confidence. Given the first frame ($t=1$), previous frame ($t=T-1$), and current frame ($t=T$), we first extract their pixel-wise embedding by using a backbone network. 
Second, we separate the first and previous frame embeddings into the foreground and background pixels based on their masks. After that, we use F-G pixel-level matching and instance-level attention to guide our collaborative ensembler network to generate a prediction.}\n \label{fig:overview}\n\n\end{figure}\n\n\n\section{Method}\label{sec:model}\n\n\noindent\textbf{Overview.} Learning foreground feature embedding has been well explored by previous practices (\emph{e.g.},~\cite{osmn,feelvos}). OSMN proposed to conduct an instance-level matching, but such a matching scheme fails to consider the feature diversity among the details of the target's appearance and results in coarse predictions. PML and FEELVOS alternatively adopt the pixel-level matching by matching each pixel of the target, which effectively takes the feature diversity into account and achieves promising performance. Nevertheless, performing pixel-level matching may bring unexpected noises when some pixels from the background have a similar appearance to those from the foreground (Fig.~\ref{fig:cl}).\n\nTo overcome the problems raised by the above methods and distinguish the foreground objects from the background, we present Collaborative video object segmentation by Foreground-Background Integration (CFBI), as shown in Figure~\ref{fig:overview}. We use red and blue to indicate foreground and background, respectively. First, beyond learning feature embedding from foreground pixels, our CFBI also considers embedding learning from background pixels for collaboration. Such a learning scheme will encourage the feature embedding from the target object and its corresponding background to be contrastive, promoting the segmentation results accordingly. Second, we further conduct the embedding matching at both the pixel level and the instance level with the collaboration of pixels from the foreground and background. 
For the pixel-level matching, \nwe improve the robustness of the local matching under various object moving rates. For the instance-level matching, we design an instance-level attention mechanism to augment the pixel-level matching efficiently. Moreover, to implicitly aggregate the learned foreground \\& background and pixel-level \\& instance-level information, we employ a collaborative ensembler to construct large receptive fields and make precise predictions.\n\n\n\n\\subsection{Collaborative Pixel-level Matching}\n\nFor the pixel-level matching, we adopt a global and local matching mechanism similar to FEELVOS for introducing the guided information from the first and previous frames, respectively. Unlike previous methods~\\cite{pml,feelvos}, we additionally incorporate background information and apply multiple windows in the local matching, which is shown in the middle of Fig.~\\ref{fig:overview}. \n\nFor incorporating background information, we firstly redesign the pixel distance of~\\cite{feelvos} to further distinguish the foreground and background.\nLet $B_t$ and $F_t$ denote the pixel sets of background and all the foreground objects of frame $t$, respectively. We define a new distance between pixel $p$ of the current frame $T$ and pixel $q$ of frame $t$ in terms of their corresponding embedding, $e_p$ and $e_q$, by\n\\begin{equation} \\label{equ:distance}\n D_t(p,q)=\n \\begin{cases}\n 1-\\frac{2}{1+exp(||e_p-e_q||^2+b_B)} & \\text{if } q \\in B_t\\\\\n 1-\\frac{2}{1+exp(||e_p-e_q||^2+b_F)} & \\text{if } q \\in F_t\n \\end{cases},\n\\end{equation}\nwhere $b_B$ and $b_F$ are trainable background bias and foreground bias. 
We introduce these two biases to enable our model to further learn the difference between foreground and background distances.\n\n\n\noindent\textbf{Foreground-Background Global Matching.} Let $\mathcal{P}_t$ denote the set of all pixels (with a stride of 4) at time $t$ and let $\mathcal{P}_{t,o}\subseteq \mathcal{P}_{t}$ denote the set of pixels at time $t$ which belong to the foreground object $o$. The global foreground matching between one pixel $p$ of the current frame $T$ and the pixels of the first reference frame (\emph{i.e.}, $t=1$) is,\n\begin{equation} \label{equ:global_f}\n    G_{T,o}(p)=\min_{q\in\mathcal{P}_{1,o}} D_1(p,q).\n\end{equation}\nSimilarly, let $\mathcal{\overline{P}}_{t,o} =\mathcal{P}_t \backslash \mathcal{P}_{t,o}$ denote the set of relative background pixels of object $o$ at time $t$, and the global background matching is,\n\begin{equation} \label{equ:global_b}\n    \overline{G}_{T,o}(p)=\min_{q\in\mathcal{\overline{P}}_{1,o}} D_{1}(p,q).\n\end{equation}\n\n\noindent\textbf{Foreground-Background Multi-Local Matching.}\n\n\setlength{\intextsep}{-10pt}\n\begin{wrapfigure}[20]{R}{0.37\textwidth}\n\center\n\n\subfloat[Slow moving rate]{\n\label{fig:slow}\n\includegraphics[width=0.9\linewidth]{figs\/slow.pdf}\n}\n\n\subfloat[Fast moving rate]{\n\label{fig:fast}\n\includegraphics[width=0.9\linewidth]{figs\/fast.pdf}\n}\n\n\caption{The moving rate of objects across two adjacent frames is largely variable for different sequences. Examples are from YouTube-VOS~\cite{youtubevos}.}\label{fig:offset}\n\end{wrapfigure}\n\n\n\noindent In FEELVOS, the local matching is limited to only one fixed extent of neighboring pixels, but the offset of objects across two adjacent frames in VOS is variable, as shown in Fig.~\ref{fig:offset}. 
Thus, we propose to apply the local matching mechanism on different scales and let the network learn how to select an appropriate local scale, which makes our framework more robust to various moving rates of objects. Notably, we use the intermediate results of the local matching with the largest window to compute the matching on the other windows; hence, the additional computational cost of our multi-local matching is negligible.\n\n\\setlength{\\intextsep}{0pt}\n\nFormally, let $K=\\{k_1,k_2,...,k_n\\}$ denote all the neighborhood sizes and $H(p,k)$ denote the neighborhood set of pixels that are at most $k$ pixels away from $p$ in both $x$ and $y$ directions. Our foreground multi-local matching between the current frame $T$ and its previous frame $T-1$ is\n\\begin{equation} \n ML_{T,o}(p,K)=\\{L_{T,o}(p,k_1),L_{T,o}(p,k_2),...,L_{T,o}(p,k_n)\\},\n\\end{equation}\nwhere\n\\begin{equation} \\label{equ:local_f}\n L_{T,o}(p,k)=\n \\begin{cases}\n \\min_{q\\in\\mathcal{P}^{p,k}_{T-1,o}} D_{T-1}(p,q) & \\text{if }\\mathcal{P}^{p,k}_{T-1,o}\\neq\\emptyset \\\\\n 1 & \\text{otherwise}\n \\end{cases}.\n\\end{equation}\nHere, $\\mathcal{P}^{p,k}_{T-1,o}:=\\mathcal{P}_{T-1,o}\\cap H(p,k)$ denotes the pixels in the local window (or neighborhood). 
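A minimal sketch of this foreground multi-local matching for a single query pixel (our own simplified implementation with our own names; the paper computes it densely for all pixels):

```python
import numpy as np

def multi_local_matching(dist_map, fg_mask, windows=(2, 4, 6, 8, 10, 12)):
    """Sketch of foreground multi-local matching L_{T,o}(p, k) for one pixel p.

    dist_map: (H, W) distances D_{T-1}(p, q) from a fixed pixel p of frame T
              to every pixel q of frame T-1.
    fg_mask:  (H, W) boolean mask of the foreground object o in frame T-1.
    Returns one value per window size k: the minimum distance inside the
    k-neighborhood of p, or 1 if that neighborhood holds no foreground.
    """
    H, W = dist_map.shape
    py, px = H // 2, W // 2          # assume p sits at the map center
    out = []
    for k in windows:
        ys, ye = max(py - k, 0), min(py + k + 1, H)
        xs, xe = max(px - k, 0), min(px + k + 1, W)
        d = dist_map[ys:ye, xs:xe]
        m = fg_mask[ys:ye, xs:xe]
        out.append(d[m].min() if m.any() else 1.0)
    return out
```

For clarity this loop crops each window independently; the paper instead derives the smaller windows from the intermediate results of the largest one, so the extra windows add almost no cost.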
And our background multi-local matching is\n\\begin{equation} \n \\overline{ML}_{T,o}(p,K)=\\{\\overline{L}_{T,o}(p,k_1),\\overline{L}_{T,o}(p,k_2),...,\\overline{L}_{T,o}(p,k_n)\\},\n\\end{equation}\nwhere\n\\begin{equation} \\label{equ:local_b}\n \\overline{L}_{T,o}(p,k)=\n \\begin{cases}\n \\min_{q\\in\\mathcal{\\overline{P}}_{T-1,o}^{p,k}} D_{T-1}(p,q) & \\text{if }\\mathcal{\\overline{P}}_{T-1,o}^{p,k}\\neq\\emptyset \\\\\n 1 & \\text{otherwise}\n \\end{cases}.\n\\end{equation}\nSimilarly, here $\\mathcal{\\overline{P}}^{p,k}_{T-1,o}:=\\mathcal{\\overline{P}}_{T-1,o}\\cap H(p,k)$.\n\n\nIn addition to the global and multi-local matching maps, we concatenate the pixel-level embedding feature and mask of the previous frame with the current frame feature. FEELVOS demonstrates the effectiveness of concatenating the previous mask. Following this, we empirically find that introducing the previous embedding can further improve the performance ($\\mathcal{J}$\\&$\\mathcal{F}$) by about $0.5\\%$.\n\nIn summary, the output of our collaborative pixel-level matching is a concatenation of (1) the pixel-level embedding of the current frame, (2) the pixel-level embedding and mask of the previous frame, (3) the multi-local matching map and (4) the global matching map, as shown in the bottom box of Fig.~\\ref{fig:overview}. \n\n\\setlength{\\intextsep}{-10pt}\n\\begin{wrapfigure}[22]{r}{0.3\\textwidth}\n\\center\n\n\\includegraphics[width=0.98\\linewidth]{figs\/ins_a.pdf}\n\n\\caption{The trainable part of the instance-level attention. $C_e$ denotes the channel dimension of pixel-wise embedding. $H$, $W$, $C$ denote the height, width, and channel dimension of CE features.}\n\\label{fig:instance}\n\n\\end{wrapfigure}\n\n\\subsection{Collaborative Instance-level Attention}\n\nAs shown on the right of Fig.~\\ref{fig:overview}, we further design a collaborative instance-level attention mechanism to guide the segmentation for large-scale objects. 
\n\nAfter getting the pixel-level embeddings of the first and previous frames, we separate them into foreground and background pixels (\\emph{i.e.}, $\\mathcal{P}_{1,o}$, $\\mathcal{\\overline{P}}_{1,o}$, $\\mathcal{P}_{T-1,o}$, and $\\mathcal{\\overline{P}}_{T-1,o}$) according to their masks. Then, we apply channel-wise average pooling on each group of pixels to generate a total of four instance-level embedding vectors and concatenate these vectors into one collaborative instance-level guidance vector. Thus, the guidance vector contains the information from both the first and previous frames, and from both the foreground and background regions.\n\n\\setlength{\\intextsep}{0pt}\n\n\n\nIn order to efficiently utilize the instance-level information, we employ an attention mechanism to adjust our Collaborative Ensembler (CE). We show a detailed illustration in Fig.~\\ref{fig:instance}. Inspired by SE-Nets~\\cite{senet}, we leverage a fully-connected (FC) layer (we found this setting to be better than the two FC layers adopted by SE-Net) and a non-linear activation function to construct a gate for the input of each Res-Block in the CE. The gate adjusts the scale of the input feature in a channel-wise manner.\n\n\n\n\n\nBy introducing collaborative instance-level attention, we can leverage a full scale of foreground-background information to guide the prediction further. The information with a large (instance-level) receptive field is useful to relieve local ambiguities~\\cite{torralba2003contextual}, which are inevitable with a small (pixel-wise) receptive field.\n\n\n\n\n\n\\subsection{Collaborative Ensembler (CE)}\n\nIn the lower right of Fig.~\\ref{fig:overview}, we design a collaborative ensembler to construct large receptive fields for aggregating pixel-level and instance-level information and for implicitly learning the collaborative relationship between foreground and background. 
\n\nInspired by ResNets~\\cite{resnet} and Deeplabs~\\cite{deeplab,deeplabv3p}, which both have shown significant representational power in image segmentation tasks, our CE uses a downsample-upsample structure, which contains three stages of Res-Blocks~\\cite{resnet} and an Atrous Spatial Pyramid Pooling (ASPP)~\\cite{deeplabv3p} module. The numbers of Res-Blocks in Stages 1, 2, and 3 are $2$, $3$, and $3$, respectively. Besides, we employ dilated convolutional layers to enlarge the receptive fields efficiently. The dilation rates of the $3\\times3$ convolutional layers of the Res-Blocks within one stage are $1$, $2$, and $4$ (or $1$ and $2$ for Stage 1). At the beginning of Stage 2 and Stage 3, the feature maps are downsampled by the first Res-Block with a stride of 2. After these three stages, we employ an ASPP and a Decoder~\\cite{deeplabv3p} module to further increase the receptive fields, upsample the features, and refine the prediction in collaboration with the low-level backbone features.\n\n\n\n\\section{Implementation Details}\n\\setlength{\\intextsep}{-10pt}\n\\begin{wrapfigure}[11]{r}{0.5\\textwidth}\n\\center\\vspace{-9mm}\n\n\\subfloat[Normal]{\n\\label{fig:normal_crop}\n\\includegraphics[width=0.426\\linewidth]{figs\/normal_crop.pdf}\n}\n\\subfloat[Balanced]{\n\\label{fig:balanced_crop}\n\\includegraphics[width=0.52\\linewidth]{figs\/balanced_crop.pdf}\n}\n\n\\caption{When using normal random-crop, some red windows contain few or no foreground pixels. To relieve this problem, we propose balanced random-crop.}\\label{fig:crop}\n\n\\end{wrapfigure}\n\nFor better convergence, we modify the random-crop augmentation and the training method used in previous methods~\\cite{spacetime,feelvos}.\n\n\\setlength{\\intextsep}{0pt}\n\n\\noindent\\textbf{Balanced Random-Crop.}\nAs shown in Fig.~\\ref{fig:crop}, there is an apparent imbalance between the foreground and the background pixel number on VOS datasets. 
Such an imbalance tends to bias the models toward background attributes. \n\nIn order to relieve this problem, we adopt a balanced random-crop scheme, which crops a sequence of frames (\\emph{i.e.}, the first frame, the previous frame, and the current frame) using the same cropping window and restricts the cropped region of the first frame to contain enough foreground information. The restriction method is simple yet effective: the balanced random-crop checks whether the randomly cropped frame contains enough pixels from foreground objects; if not, the cropping operation is repeated until an acceptable crop is obtained.\n\n\\iffalse\n\\begin{figure}[!t]\n \\center\n \\includegraphics[width=0.8\\linewidth]{figs\/sequence_training.pdf}\n \\caption{An illustration of the sequential training. In each step, the previous mask comes from the previous prediction (the green lines) except for the first step, whose previous mask comes from the ground-truth mask (the blue line).}\n \\label{fig:sequence_training}\n\\end{figure}\n\\fi\n\n\\noindent\\textbf{Sequential Training.} \\zongxin{In the training stage, FEELVOS predicts only one step in one iteration, and the guidance masks come from the ground-truth data. RGMP and STMVOS use previous guidance information (mask or feature memory) in training, which is more consistent with the inference stage and performs better. In the evaluation stage, the previous guidance masks are always generated by the network in the previous inference steps.}\n\n\\zongxin{Following RGMP, we train the network using a sequence of consecutive frames in each SGD iteration. \nIn each iteration, we randomly sample a batch of video sequences. For each video sequence, we randomly sample a frame as the reference frame and $N+1$ consecutive frames as the previous frame and the current sequence of $N$ frames. 
When predicting the first frame, we use the ground-truth of the previous frame as the previous mask. When predicting the following frames, we use the latest prediction as the previous mask. \n}\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figs\/comparison.pdf}\n\n \\caption{Qualitative comparison with STMVOS on DAVIS 2017. In the first video, STMVOS fails in tracking the gun after occlusion and blur. In the second video, STMVOS is more prone to partly confusing the bicycle and the person.}\n \\label{fig:comparison}\n\n\\end{figure}\n\n\\noindent\\textbf{Training Details.}\nFollowing FEELVOS, we use the DeepLabv3+~\\cite{deeplabv3p} architecture as the backbone for our network. However, our backbone is based on the dilated Resnet-101~\\cite{deeplabv3p} instead of Xception-65~\\cite{xception} for saving computational resources. We apply batch normalization (BN)~\\cite{bn} in our backbone and pre-train it on ImageNet~\\cite{deng2009imagenet} and COCO~\\cite{coco}. The backbone is followed by one depth-wise separable convolution for extracting pixel-wise embedding with a stride of 4.\n\nWe initialize $b_B$ and $b_F$ to $0$. For the multi-local matching, we further downsample the embedding feature to half size using bi-linear interpolation for saving GPU memory. Besides, the window sizes in our setting are $K=\\{2, 4, 6, 8, 10, 12\\}$. For the collaborative ensembler, we apply group normalization (GN)~\\cite{gn} and gated channel transformation~\\cite{gct} to improve training stability and performance when using a small batch size. For sequential training, the current sequence's length is $N=3$, which strikes a good balance between computational cost and network performance.\n\n\n\nWe use the DAVIS 2017~\\cite{davis2017} training set (60 videos) and the YouTube-VOS~\\cite{youtubevos} training set (3471 videos) as the training data. 
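The sequential training described above can be sketched as follows (a hedged sketch; `model` and `loss_fn` are placeholders of our own for the actual network and the bootstrapped cross-entropy loss):

```python
def sequential_train_step(model, loss_fn, frames, masks, ref_idx, start, N=3):
    """Sketch of one sequential-training iteration (our naming).

    A reference frame and N+1 consecutive frames are sampled per video;
    the first prediction is guided by the ground-truth previous mask,
    later predictions by the network's own latest prediction.
    """
    losses = []
    prev_mask = masks[start]  # ground truth guides only the first step
    for t in range(start + 1, start + 1 + N):
        pred = model(frames[ref_idx], masks[ref_idx],
                     frames[t - 1], prev_mask, frames[t])
        losses.append(loss_fn(pred, masks[t]))
        prev_mask = pred      # latest prediction guides the next step
    return sum(losses) / N
```

Feeding the network its own predictions during training keeps the training distribution consistent with inference, where ground-truth previous masks are unavailable.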
\\zongxin{We downsample all the videos to 480P resolution, which is the same as the default setting in DAVIS.} We adopt SGD with a momentum of $0.9$ and apply a bootstrapped cross-entropy loss, which only considers the $15\\%$ hardest pixels. During the training stage, we freeze the parameters of BN in the backbone. For the experiments on YouTube-VOS, we use a learning rate of $0.01$ for $100,000$ steps with a batch size of 4 videos (\\emph{i.e.}, 20 frames in total) per GPU using $2$ Tesla V100 GPUs. The training time on YouTube-VOS is about 5 days. For DAVIS, we use a learning rate of $0.006$ for $50,000$ steps with a batch size of 3 videos (\\emph{i.e.}, 15 frames in total) per GPU using $2$ GPUs.\nWe apply flipping, scaling, and balanced random-crop as data augmentations. The cropped window size is $465\\times 465$. For the multi-scale testing, we apply the scales of $\\{1.0, 1.15, 1.3, 1.5\\}$ and $\\{2.0, 2.15, 2.3\\}$ for YouTube-VOS and DAVIS, respectively. CFBI achieves similar results in PyTorch~\\cite{pytorch} and PaddlePaddle~\\cite{paddlepaddle}.\n\n\n\n\n\n\n\\setlength{\\intextsep}{-3pt}\n\\begin{wraptable}[28]{r}{0.55\\textwidth}\n\t\\centering\\vspace{-9mm}\n\t\\caption{The quantitative evaluation on YouTube-VOS~\\cite{youtubevos}. F, S, and $^*$ denote fine-tuning at test time, using simulated data in the training process, and performing model ensemble in evaluation, respectively. 
CFBI$^{MS}$ denotes using a multi-scale and flip strategy in evaluation.}\\label{tab:youtubevos}\n\t\\begin{tabular}{lccccccc}\n\\toprule[1.5pt]\n & & & & \\multicolumn{2}{c}{Seen} & \\multicolumn{2}{c}{Unseen} \\\\\n\\midrule[1pt]\n Methods & F & S & Avg & $\\mathcal{J}$ & $\\mathcal{F}$ & $\\mathcal{J}$ & $\\mathcal{F}$ \\\\\n\\midrule[1pt]\n\\multicolumn{8}{c}{\\textit{Validation 2018 Split}} \\\\\n\\midrule[1pt]\nAG~\\cite{agame} & & & 66.1 & 67.8 & - & 60.8 & - \\\\\nPReM~\\cite{premvos} & \\checkmark & & 66.9 & 71.4 & 75.9 & 56.5 & 63.7 \\\\\nBoLT~\\cite{boltvos} & \\checkmark & & 71.1 & 71.6 & - & 64.3 & - \\\\\nSTM$^-$~\\cite{spacetime} & & & 68.2 & - & - & - & - \\\\\nSTM~\\cite{spacetime} & & \\checkmark & 79.4 & 79.7 & 84.2 & 72.8 & 80.9 \\\\\n\\hline\nCFBI & & & \\textbf{81.4} & \\textbf{81.1} & \\textbf{85.8} & \\textbf{75.3} & \\textbf{83.4} \\\\\nCFBI$^{MS}$ & & & \\textbf{82.7} & \\textbf{82.2} & \\textbf{86.8} & \\textbf{76.9} & \\textbf{85.0} \\\\\n\\midrule[1pt]\n\\multicolumn{8}{c}{\\textit{Validation 2019 Split}} \\\\\n\\midrule[1pt]\nCFBI & & & \\textbf{81.0} & \\textbf{80.6} & \\textbf{85.1} & \\textbf{75.2} & \\textbf{83.0} \\\\\nCFBI$^{MS}$ & & & \\textbf{82.4} & \\textbf{81.8} & \\textbf{86.1} & \\textbf{76.9} & \\textbf{84.8} \\\\\n\\bottomrule[1.5pt]\n\\multicolumn{8}{c}{\\textit{Testing 2019 Split}} \\\\\n\\midrule[1pt]\nMST$^*$~\\cite{mst} & & \\checkmark & 81.7 & 80.0 & 83.3 & \\textbf{77.9} & 85.5 \\\\\nEMN$^*$~\\cite{emn} & & \\checkmark & 81.8 & \\textbf{80.7} & \\textbf{84.7} & 77.3 & 84.7 \\\\\n\\hline\nCFBI & & & 81.5 & 79.6 & 84.0 & 77.3 & 85.3 \\\\\nCFBI$^{MS}$ & & & \\textbf{82.2} & 80.4 & \\textbf{84.7} & \\textbf{77.9} & \\textbf{85.7} \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{wraptable}\n\n\n\\section{Experiments}\n\nFollowing the previous state-of-the-art method~\\cite{spacetime},\nwe evaluate our method on YouTube-VOS~\\cite{youtubevos}, DAVIS 2016~\\cite{davis2016} and DAVIS 2017~\\cite{davis2017}. 
For the evaluation on YouTube-VOS, we train our model on the YouTube-VOS training set~\\cite{youtubevos} (3471 videos). For DAVIS, we train our model on the DAVIS-2017 training set~\\cite{davis2017} (60 videos). Both DAVIS 2016 and 2017 are evaluated using an identical model trained on DAVIS 2017 for a fair comparison with the previous works~\\cite{feelvos,spacetime}. Furthermore, we provide DAVIS results using both DAVIS 2017 and YouTube-VOS for training, following recent works~\\cite{feelvos,spacetime}.\n\n\\setlength{\\intextsep}{0pt}\n\nThe evaluation metrics are the $\\mathcal{J}$ score, calculated as the average IoU between the prediction and the ground-truth mask, the $\\mathcal{F}$ score, calculated as an average similarity measure between the boundaries of the prediction and the ground truth, and their average ($\\mathcal{J}$\\&$\\mathcal{F}$). We evaluate our results on the official evaluation server or use the official tools.\n\n\n\n\n\n\n\\subsection{Comparison with State-of-the-art Methods}\n\n\n\n\\noindent \\textbf{YouTube-VOS}~\\cite{youtubevos} is the latest large-scale dataset for multi-object video segmentation. Compared to the popular DAVIS benchmark that consists of $120$ videos, YouTube-VOS is about 37 times larger. In detail, the dataset contains 3471 videos in the training set (65 categories), 507 videos in the validation set (additional 26 unseen categories), and 541 videos in the test set (additional 29 unseen categories). Due to the existence of unseen object categories, the YouTube-VOS validation set is well suited to measuring the generalization ability of different methods. \n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figs\/quality.pdf}\n \\caption{Qualitative results on DAVIS 2017 and YouTube-VOS. In the first video, we succeed in tracking many similar-looking sheep. In the second video, our CFBI successfully tracks the person and the dog with the red mask after occlusion. 
In the last video, CFBI fails to segment one hand of the right person (the white box). A possible reason is that the two persons are too similar and close.}\n \\label{fig:quality}\n\\end{figure}\n\n\n\\begin{wraptable}[21]{r}{0.58\\textwidth}\n\\centering\n\\caption{The quantitative evaluation on DAVIS 2016~\\cite{davis2016} validation set. (\\textbf{Y}) denotes using YouTube-VOS for training.}\\label{tab:davis2016}\n\\begin{tabular}{l c c c c c c}\n\\toprule[1.5pt]\nMethods & F & S & Avg & $\\mathcal{J}$ & $\\mathcal{F}$ & t\/s \\\\\n\\midrule[1pt]\nOSMN~\\cite{osmn} & & & - & 74.0 & & 0.14 \\\\\nPML~\\cite{pml} & & & 77.4 & 75.5 & 79.3 & 0.28 \\\\\nVideoMatch~\\cite{videomatch} & & & 80.9 & 81.0 & 80.8 & 0.32 \\\\\nRGMP$^-$~\\cite{rgmp} & & & 68.8 & 68.6 & 68.9 & 0.14 \\\\\nRGMP~\\cite{rgmp} & & \\checkmark & 81.8 & 81.5 & 82.0 & 0.14 \\\\\nA-GAME~\\cite{agame} (\\textbf{Y}) & & & 82.1 & 82.2 & 82.0 & \\textbf{0.07} \\\\\nFEELVOS~\\cite{feelvos} (\\textbf{Y}) & & & 81.7 & 81.1 & 82.2 & 0.45 \\\\\nOnAVOS~\\cite{onavos}{} & \\checkmark & & 85.0 & 85.7 & 84.2 & 13 \\\\\nPReMVOS~\\cite{premvos} & \\checkmark & & 86.8 & 84.9 & 88.6 & 32.8 \\\\\nSTMVOS~\\cite{spacetime} & & \\checkmark & 86.5 & 84.8 & 88.1 & 0.16 \\\\\nSTMVOS~\\cite{spacetime} (\\textbf{Y}) & & \\checkmark & \\textbf{89.3} & \\textbf{88.7} & 89.9 & 0.16 \\\\\n\\hline\nCFBI & & & 86.1 & 85.3 & 86.9 & 0.18 \\\\\nCFBI (\\textbf{Y}) & & & \\textbf{89.4} & 88.3 & \\textbf{90.5} & 0.18 \\\\\nCFBI$^{MS}$ (\\textbf{Y}) & & & \\textbf{90.7} & \\textbf{89.6} & \\textbf{91.7} & 9 \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{wraptable}\n\n\nAs shown in Table~\\ref{tab:youtubevos}, we compare our method to existing methods on both Validation 2018 and Testing 2019 splits. 
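For reference, the $\mathcal{J}$ (region IoU) and $\mathcal{J}$\&$\mathcal{F}$ scores reported in these tables can be sketched as follows (a minimal sketch; the official benchmark tools compute $\mathcal{F}$ from boundary matching, which is simply taken as a given number here):

```python
import numpy as np

def j_score(pred_mask, gt_mask):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: treat as a perfect match
        return 1.0
    return np.logical_and(pred, gt).sum() / union

def j_and_f(j, f):
    """The reported J&F score is the mean of the J and F scores."""
    return (j + f) / 2.0
```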
Without using any bells and whistles, like fine-tuning at test time~\\cite{osvos,onavos} or pre-training on larger augmented simulated data~\\cite{rgmp,spacetime}, our method achieves an average score of $\\mathbf{81.4\\%}$, which significantly outperforms all other methods in every evaluation metric. Particularly, the $81.4\\%$ result is $2.0\\%$ higher than the previous state-of-the-art method, STMVOS, which uses extensive simulated data from~\\cite{coco,voc,cheng2014global,semantic,shi2015hierarchical} for training. Without simulated data, the performance of STMVOS will drop from $79.4\\%$ to $68.2\\%$. Moreover, we further boost our performance to $\\mathbf{82.7\\%}$ by applying a multi-scale and flip strategy during the evaluation. \n\n\nWe also compare our method with two of the best results on the Testing 2019 split, \\emph{i.e.}, \\textit{Rank 1} (EMN~\\cite{emn}) and \\textit{Rank 2} (MST~\\cite{mst}) results in the 2nd Large-scale Video Object Segmentation Challenge. Without applying model ensemble, our single-model result ($\\mathbf{82.2\\%}$) outperforms the \\textit{Rank 1} result ($81.8\\%$) in the unseen and average metrics, which further demonstrates our generalization ability and effectiveness.\n\n\n\n\n\n\n\\noindent \\textbf{DAVIS 2016}~\\cite{davis2016} contains 20 videos annotated with high-quality masks each for a single target object. We compare our CFBI method with state-of-the-art methods in Table~\\ref{tab:davis2016}. On the DAVIS-2016 validation set, our method trained with an additional YouTube-VOS training set achieves an average score of $\\mathbf{89.4\\%}$, which is slightly better than STMVOS ($89.3\\%$), a method using simulated data as mentioned before. The accuracy gap between CFBI and STMVOS on DAVIS is smaller than the gap on YouTube-VOS. 
A possible reason is that DAVIS is too small and thus easy to over-fit.\nCompared to a fairer baseline (\\emph{i.e.}, FEELVOS), whose setting is the same as ours, the proposed CFBI not only achieves much better accuracy ($\\mathbf{89.4\\%}$ \\emph{vs.}\\hspace{-0.8mm} $81.7\\%$) but also maintains a comparably fast inference speed ($0.18s$ \\emph{vs.}\\hspace{-0.8mm} $0.45s$). After applying multi-scale and flip evaluation, we can improve the performance from $\\mathbf{89.4\\%}$ to $\\mathbf{90.1\\%}$. However, this strategy costs much more inference time ($9s$).\n\n\n\n\\setlength{\\intextsep}{-3pt}\n\\begin{wraptable}[29]{r}{0.52\\textwidth}\n\n\\caption{The quantitative evaluation on DAVIS-2017~\\cite{davis2017}.}\\label{tab:davis2017}\n\\begin{center}\n\\begin{tabular}{l c c c c c}\n\\toprule[1.5pt]\n Methods & F & S & Avg & $\\mathcal{J}$ & $\\mathcal{F}$ \\\\\n\\midrule[1pt]\n\\multicolumn{6}{c}{\\textit{Validation Split}} \\\\\n\\midrule[1pt]\nOSMN~\\cite{osmn} & & & 54.8 & 52.5 & 57.1 \\\\\nVideoMatch~\\cite{videomatch} & & & 62.4 & 56.5 & 68.2 \\\\\nOnAVOS~\\cite{onavos} & \\checkmark & & 63.6 & 61.0 & 66.1 \\\\\nRGMP~\\cite{rgmp} & & \\checkmark & 66.7 & 64.8 & 68.6 \\\\\nA-GAME~\\cite{agame} (\\textbf{Y}) & & & 70.0 & 67.2 & 72.7 \\\\\nFEELVOS~\\cite{feelvos} (\\textbf{Y}) & & & 71.5 & 69.1 & 74.0 \\\\\nPReMVOS~\\cite{premvos} & \\checkmark & & 77.8 & 73.9 & 81.7 \\\\\nSTMVOS~\\cite{spacetime} & & \\checkmark & 71.6 & 69.2 & 74.0 \\\\\nSTMVOS~\\cite{spacetime} (\\textbf{Y}) & & \\checkmark & \\textbf{81.8} & \\textbf{79.2} & 84.3 \\\\\n\\hline\nCFBI & & & 74.9 & 72.1 & 77.7 \\\\\nCFBI (\\textbf{Y}) & & & \\textbf{81.9} & \\textbf{79.1} & \\textbf{84.6} \\\\\nCFBI$^{MS}$ (\\textbf{Y}) & & & \\textbf{83.3} & \\textbf{80.5} & \\textbf{86.0} \\\\\n\\bottomrule[1.5pt]\n\\multicolumn{6}{c}{\\textit{Testing Split}} \\\\\n\\midrule[1pt]\nOSMN~\\cite{osmn} & & & 41.3 & 37.7 & 44.9 \\\\\nOnAVOS~\\cite{onavos} & \\checkmark & & 56.5 & 53.4 & 59.6
 \\\\\nRGMP~\\cite{rgmp} & & \\checkmark & 52.9 & 51.3 & 54.4 \\\\\nFEELVOS~\\cite{feelvos} (\\textbf{Y}) & & & 57.8 & 55.2 & 60.5 \\\\\nPReMVOS~\\cite{premvos} & \\checkmark & & 71.6 & 67.5 & 75.7 \\\\\nSTMVOS~\\cite{spacetime} (\\textbf{Y}) & & \\checkmark & 72.2 & 69.3 & 75.2 \\\\\n\\hline\nCFBI (\\textbf{Y})& & & \\textbf{74.8} & \\textbf{71.1} & \\textbf{78.5} \\\\\nCFBI$^{MS}$ (\\textbf{Y}) & & & \\textbf{77.5} & \\textbf{73.8} & \\textbf{81.1} \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{center}\n\n\\end{wraptable}\n\n\n\n\\noindent \\textbf{DAVIS 2017}~\\cite{davis2017} is a multi-object extension of DAVIS 2016. The validation set of DAVIS 2017 consists of 59 objects in 30 videos.\nNext, we evaluate the generalization ability of our model on the popular DAVIS-2017 benchmark. \n\n\nAs shown in Table~\\ref{tab:davis2017}, our CFBI achieves a significant improvement over FEELVOS ($\\mathbf{81.9\\%}$ \\emph{vs.}\\hspace{-0.8mm} $71.5\\%$). Besides, our CFBI without using simulated data is slightly better than the previous state-of-the-art method, STMVOS ($\\mathbf{81.9\\%}$ \\emph{vs.}\\hspace{-0.8mm} $81.8\\%$). We show some examples compared with STMVOS in Fig.~\\ref{fig:comparison}. As in previous experiments, the augmentation in evaluation can further boost the results to a higher score of $\\mathbf{83.3\\%}$. We also evaluate our method on the testing split of DAVIS 2017, which is much more challenging than the validation split. As shown in Table~\\ref{tab:davis2017}, we significantly outperform STMVOS ($72.2\\%$) by $\\textbf{2.6\\%}$. By applying augmentation, we can further boost the result to $\\textbf{77.5\\%}$. The strong results prove that our method has the best generalization ability among the latest methods.\n\n\n\n\n\n\\noindent \\textbf{Qualitative Results.} We show more results of CFBI on the validation set of DAVIS 2017 ($\\mathbf{81.9\\%}$) and YouTube-VOS ($\\mathbf{81.4\\%}$) in Fig.~\\ref{fig:quality}. 
It can be seen that CFBI is capable of producing accurate segmentation under challenging situations, such as large motion, occlusion, blur, and similar objects. In the \\emph{sheep} video, CFBI succeeds in tracking five selected sheep inside a crowded flock. In the \\emph{judo} video, CFBI fails to segment one hand of the right person. A possible reason is that the two persons are too similar in appearance and too close in position. Besides, their hands appear blurred due to the fast motion.\n\n\n\n\n\\subsection{Ablation Study}\n\n\\setlength{\\intextsep}{-2pt}\n\\begin{wraptable}[15]{r}{0.4\\textwidth}\n\n\\centering\n\\caption{Ablation of background embedding. P and I denote the pixel-level matching and the instance-level attention, respectively. $^*$ denotes removing the foreground and background bias.}\\label{tab:ablation_a}\n\\setlength{\\tabcolsep}{6.5pt}\n\\begin{tabular}{l c c c c}\n \\toprule[1.5pt]\n P & I & Avg & $\\mathcal{J}$ & $\\mathcal{F}$ \\\\\n \\midrule[1pt]\n \\checkmark & \\checkmark & 74.9 & 72.1 & 77.7 \\\\\n \\hline\n \\checkmark$^*$ & \\checkmark & 72.8 & 69.5 & 76.1 \\\\\n \\checkmark & & 73.0 & 69.9 & 76.0 \\\\\n & \\checkmark & 72.3 & 69.1 & 75.4 \\\\\n & & 70.9 & 68.2 & 73.6 \\\\\n \\bottomrule[1.5pt]\n\\end{tabular}\n\\end{wraptable}\n\nWe analyze the ablation effect of each component proposed in CFBI on the DAVIS-2017 validation set. Following FEELVOS, we only use the DAVIS-2017 training set as training data for these experiments. \n\n\n\n\\noindent \\textbf{Background Embedding.} As shown in Table~\\ref{tab:ablation_a}, we first analyze the influence of removing the background embedding while keeping only the foreground, as in~\\cite{feelvos,osmn}. Without any background mechanisms, the result of our method drops heavily from $74.9\\%$ to $70.9\\%$.\nThis result shows that it is essential to embed both foreground and background features collaboratively. 
Besides, removing background information from the pixel-level matching or the instance-level attention decreases the result to $73.0\\%$ or $72.3\\%$, respectively. \nThus, compared to instance-level attention, the pixel-level matching performance is more sensitive to the effect of background embedding. A possible reason for this phenomenon is that background pixels are more likely to resemble the foreground than whole background instances are. Finally, we remove the foreground and background bias, $b_F$ and $b_B$, from the distance metric and the result drops to $72.8\\%$, which further shows that the distance between foreground pixels and the distance between background pixels should be considered separately.\n\n\\setlength{\\intextsep}{-3pt}\n\\begin{wraptable}[11]{r}{0.55\\textwidth}\n\n\\centering\n\\caption{Ablation of other components.}\\label{tab:ablation_b}\n\\begin{tabular}{l c c c c}\n \\toprule[1.5pt]\n & Ablation & Avg & $\\mathcal{J}$ & $\\mathcal{F}$ \\\\\n \\midrule[1pt]\n 0 & Ours (CFBI) & 74.9 & 72.1 & 77.7 \\\\\n \\hline\n 1 & w\/o multi-local windows & 73.8 & 70.8 & 76.8 \\\\\n 2 & w\/o sequential training & 73.3 & 70.8 & 75.7 \\\\\n 3 & w\/o collaborative ensembler & 73.3 & 70.5 & 76.1 \\\\\n \n 4 & w\/o balanced random-crop & 72.8 & 69.8 & 75.8 \\\\\n 5 & w\/o instance-level attention & 72.7 & 69.8 & 75.5 \\\\\n \n \\hline\n 6 & baseline (FEELVOS) & 68.3 & 65.6 & 70.9 \\\\\n \\bottomrule[1.5pt]\n\\end{tabular}\n\\end{wraptable}\n\n\\noindent \\textbf{Other Components.} The ablation study of other proposed components is shown in Table~\\ref{tab:ablation_b}. Line 0 ($74.9\\%$) is the result of the proposed CFBI, and Line 6 ($68.3\\%$) is the baseline method as reproduced by us. Under the same setting, our CFBI significantly outperforms the baseline.\n\n\n\nIn line 1, we use only one local neighborhood window to conduct the local matching, following the setting of FEELVOS, which degrades the result from $74.9\\%$ to $73.8\\%$. 
It demonstrates that our multi-local matching module is more robust and effective than the single-local matching module of FEELVOS. Notably, the computational complexity of multi-local matching is dominated by the largest local window size because we reuse the intermediate results of the local matching with the largest window for the smaller windows.\n\n\n\nIn line 2, we replace our sequential training by using ground-truth masks instead of network predictions as the previous mask. By doing this, the performance of CFBI drops from $74.9\\%$ to $73.3\\%$, which shows the effectiveness of our sequential training under the same setting.\n\n\n\n\n\nIn line 3, we replace our collaborative ensembler with 4 depth-wise separable convolutional layers. This architecture is the same as the dynamic segmentation head of~\\cite{feelvos}. Compared to our collaborative ensembler, the dynamic segmentation head has much smaller receptive fields and performs $1.6\\%$ worse.\n\n\nIn line 4, we use normal random-crop instead of our balanced random-crop during the training process. In this situation, the performance drops by $2.1\\%$ to $72.8\\%$ as well. As expected, our balanced random-crop is successful in relieving the model from being biased toward background attributes.\n\nIn line 5, we disable the use of instance-level attention as guidance information to the collaborative ensembler, which means we only use pixel-level information to guide the prediction. In this case, the result deteriorates even further to $72.7\\%$, which proves that instance-level information can further assist pixel-level information in segmentation.\n\n\nIn summary, we have demonstrated the effectiveness of each proposed component of CFBI. For VOS, it is necessary to embed both foreground and background features. Besides, the model becomes more robust by combining pixel-level information and instance-level information, and by using more local windows in the matching between two consecutive frames. 
Apart from this, the proposed balanced random-crop and sequential training are straightforward yet useful in improving training performance. \n\n\n\n\\section{Conclusion}\n\nThis paper proposes a novel framework for video object segmentation by introducing collaborative foreground-background integration and achieves new state-of-the-art results on three popular benchmarks. Specifically, we encourage the feature embeddings of the foreground target and its corresponding background to be contrastive. Moreover, we integrate both pixel-level and instance-level embeddings to make our framework robust to various object scales while keeping the network simple and fast. We hope CFBI will serve as a solid baseline and help ease future research on VOS and related areas, such as video object tracking and interactive video editing.\n\n\\noindent \\textbf{Acknowledgements.} This work is partly supported by ARC DP200100938 and ARC DECRA DE190101315.\n\n\n\n\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nFeSi is a non-magnetic \\cite{wyi63}, narrow-gap semiconductor\n\\cite{jwwwa67,ktidv68} at low temperatures. Its magnetic\nsusceptibility $\\chi(T)$ increases with temperature and passes through\na maximum at $T \\sim 500$ K \\cite{jwwwa67}.\nFeSi becomes metallic above 300 K\\cite{ktidv68,wwh65}. The\nsubstitution of Co for Fe (about 10\\% Co) yields a magnet with a\nhelical spin order \\cite{bvr83}. Local density\nfunctional\\cite{hohe64} band structure calculations\n\\cite{mh93,fkd94,gpb94} give the correct value of the semiconducting\ngap (about 0.1 eV), but cannot explain the large magnitude of\n$\\chi(T)$. According to infrared and optical measurements\n\\cite{sfzmt93}, the gap of 50 meV is gradually filled with increasing\ntemperature, with new spectral weight which cannot be explained\nwithin the conventional band structure picture. 
In connection with a\ntemperature-induced local moment, a model based on unified\nspin-fluctuation theory was proposed in Ref.~\\onlinecite{tm79} which explains\n$\\chi(T)$ using a model density of states (DOS).\n\nIn spite of the large number of publications devoted to measurements\nof X-ray photoelectron spectra (XPS) and ultraviolet photoelectron\nspectra (UPS) of FeSi\\cite{slfsk89,kmmos92,cymty94,oal87,ksnik82,ssmfi94},\nmost measurements were performed using polycrystalline samples which do not\nallow precise measurements on clean surfaces free of contamination.\nIn this paper, we present a full set of precise spectral measurements,\nincluding XPS of the valence band and X-ray emission valence spectra\nfor both components of FeSi, all obtained on the same single\ncrystal, providing experimental information about the distribution of\ntotal and partial DOS in the valence band.\n\nThe already published information on the calculated\nelectronic structure of FeSi presented in Ref.\\cite{mh93,fkd94,gpb94}\nreveals only total DOS and Fe $3d$, Si $3s$ partial DOS distributions\nin Ref. \\onlinecite{fkd94}, and Fe $3d$, Si $3p$\nDOS for the CsCl-type structure (which is rather\ndifferent from that known for the bulk FeSi\\cite{wyi63}) in\nRef. \\onlinecite{gpb94}. Because of this, we performed a new set of\nband structure calculations of FeSi by two independent methods --\nlinearized muffin-tin orbitals (LMTO) and linearized augmented plane\nwave (LAPW) -- which give more detailed information about the total\nand the Fe $3d$, Fe $4p$, Si $3s$, Si $3d$ and Si $3p$ partial\nDOS distributions.\n\n\\section{Experimental}\n\\label{sec:exp}\n\nThe Fe $L_{\\alpha}$ ($2p$--$3d4s$ transition) X-ray emission spectrum\nwas measured on the RSM-500 type X-ray vacuum spectrometer with a\ndiffraction grating ($N=600$~lines\/mm and $R=6$~m) and electron\nexcitation. 
The spectra were recorded in the second order of\nreflection by a secondary electron multiplier with a CsI\nphotocathode. The energy resolution was about 0.35--0.40 eV. The\nX-ray tube was operated at $V=4.6$~keV, $I=0.4$~mA.\n\nThe Si $K_{\\beta_{1,3}}$ ($1s$--$3p$ transition) X-ray emission\nspectrum was measured using a fluorescent Johann-type vacuum\nspectrometer with a position-sensitive detector \\cite{dckgo84}. The Pd\n$L$-X-ray radiation from a special sealed X-ray tube was used for the\nexcitation of the fluorescent Si $K_{\\beta_{1,3}}$ spectra. A quartz\n$(10\\bar{1}0)$ single crystal curved to $R=1400$~mm served as a\ncrystal-analyzer. The spectra were measured with an energy resolution\nof approximately 0.2--0.3~eV. The X-ray tube was operated at $V=25$~keV,\n$I=50$~mA.\n\nThe Si $L_{2,3}$ ($2p$--$3s3d$ transition) X-ray emission spectra of\nFeSi were taken from Ref. \\onlinecite{kvswk92}, and the Fe\n$K_{\\beta_5}$ ($1s$--$4p$ transition) X-ray emission spectrum was\nreproduced from Ref.~\\onlinecite{kolo70}.\n\nThe XPS valence band spectra of FeSi were measured using a\nPerkin-Elmer ESCA spectrometer (PHI 5600 ci, monochromatized Al\n$K_{\\alpha}$ radiation). The FeSi single crystal was cleaved in high\nvacuum prior to the XPS measurements. The XPS spectra were calibrated\nbased on the Au $4f$-spectra of Au metal ($E_b$=84.0~eV).\n\nX-ray emission spectra have been brought to the scale of binding\nenergies with respect to the Fermi level using the binding energies of\nthe relevant initial (core level) states of the X-ray transitions as\nmeasured by the XPS technique. Corresponding binding energies are\n$E_b({\\rm Fe~}2p)=706.7$~eV, $E_b({\\rm Si~}2p)=99.3$~ eV. 
The values\nof $E({\\rm Fe~}K_{\\alpha_1})=6403.86$~ eV and $E({\\rm\nSi~}K_{\\alpha_1})=1740.1$~eV were taken for comparison of Fe\n$L_{\\alpha}$ and Fe $K_{\\beta_5}$, Si $L_{2,3}$ and Si\n$K_{\\beta_{1,3}}$ X-ray emission spectra of FeSi.\n\nThe measured XPS and X-ray emission spectra are shown in\nFig.~\\ref{spec}.\n\n\\section{Details of calculation}\n\\label{sec:calc}\n\nElectronic structure calculations have been performed for the cubic FeSi\nstructure (8 atoms\/cell, space group $P2_{1}3$) as determined in\nRef.~\\onlinecite{wyi63} and discussed in detail in\nRef.~\\onlinecite{mh93}. We have used the cubic lattice constant of\n$a=4.493$ \\AA, with Fe atoms occupying the $(0.1358,0.1358,0.1358)a$ and\nequivalent positions of the $B20$ structure, while Si atoms occupy the\n$(0.844,0.844,0.844)a$ and equivalent positions.\n\nIn the calculations using the tight-binding LMTO method \\cite{tblmto},\nwe used space-filling atomic spheres of equal size on Fe and Si sites\n($R=1.394$ \\AA) and no empty spheres were introduced. The\nexchange-correlation potential as proposed by von Barth and\nHedin \\cite{vbh} was used. DOS calculated by the tetrahedron\nmethod over 470 {\\bf k}-points in the irreducible part of the\nBrillouin zone are shown in Fig.~\\ref{spec}, and compared with the\nexperimental spectra. Our calculated electron bands are very close to\nthose obtained by Fu {\\it et al.}\\cite{fkd94} using the augmented\nspherical wave method, and our calculated DOS agree\nwell with those of Ref.~\\onlinecite{fkd94,cfu}. 
We found a direct gap of\n0.14~eV at the $X$ point of the Brillouin zone and an indirect gap of\n0.05--0.08~eV, in agreement with the\nresistivity measurement data reported in Ref.~\\onlinecite{fkd94}.\n\nSome small deviations from the bands calculated in\nRef.~\\onlinecite{mh93} using the LAPW method without any shape\napproximation imposed on the potential seem to have quite negligible\neffect on the densities of states, in the present context of making a\ncomparison with X-ray spectra using the LMTO method.\nWe also carried out an independent LAPW\\cite{ande75,sing94}\ncalculation, in which we used the local density approximation\n\\cite{hohe64} form for exchange and correlation given by Ceperley and\nAlder \\cite{cepe80}, as parameterized by Perdew and\nZunger\\cite{perd81}. In the expansion of the charge density and\npotential inside the muffin-tin spheres ($R_{\\rm Fe}=R_{\\rm Si}=1.111$\n\\AA), lattice harmonics up to angular momentum of $l=8$ have been\nused, while the interstitial region was described by a plane wave\nexpansion. Core orbitals were treated self-consistently, retaining,\nhowever, only the spherically-symmetric part of their density. Scalar\nrelativistic effects (neglecting spin-orbit coupling) were included\nfor the valence states, and full relativistic effects were included\nfor the core states. A large basis set consisting of both real-space\norbitals (inside the muffin-tin regions) and plane waves was used. The\ntotal number of basis functions used was $\\sim$ 570 per unit cell of 8\natoms. The Brillouin zone summations were performed using 24 {\\bf\nk}-vectors in the irreducible section of the Brillouin zone in the\nself-consistency loop. 
The densities of states calculated using\nthe LAPW method, with the tetrahedron integration\nover 176 {\\bf k}-points in the irreducible\npart of the Brillouin zone, are shown in\nFigs.~\\ref{dospart} and \\ref{dostot}.\nNote that the total DOS calculated in both LMTO\nand LAPW methods per unit cell (i.e. 4 formula units) are in good\nabsolute agreement, whereas the partial DOS are somewhat smaller in the\nLAPW calculation (Fig.~\\ref{dospart}) because they are attributed\nto non-overlapping muffin-tin spheres rather than space-filling\natomic spheres as in LMTO.\nThe expanded part of the total\ndensities of states plot in the vicinity of the gap is shown in the\ninset of Fig.~\\ref{dostot}. Two sharp peaks formed mostly by Fe $d$\nstates are separated by the indirect gap of about 0.065 eV.\n\nThere seems to be generally good agreement between the several LDA-based\nband structure calculation schemes in reproducing the bands, total\ndensities of states, and bandwidths in FeSi. Below, we concentrate on\ndiscussing particular features of partial densities of states of the\nconstituents as revealed in the X-ray emission spectra and the present\ncalculations.\n\n\\section{Experimental results and discussion}\n\\label{sec:disc}\n\nThe X-ray photoelectron valence band spectrum\nand X-ray emission valence band spectra\nof Fe (Fe $L_{\\alpha}$, Fe $K_{\\beta_5}$) and Si (Si $L_{2,3}$, Si\n$K_{\\beta_{1,3}}$) are presented in Fig.~\\ref{spec}. 
As is known,\nthe XPS valence band spectrum gives information about the total\nDOS distribution (accurate up to the weight function depending\non the atomic photoemission cross-sections, see Ref.~\\onlinecite{yl85}),\nwhereas the Fe $L_{\\alpha}$, Fe $K_{\\beta_5}$, Si $L_{2,3}$,\nSi $K_{\\beta_{1,3}}$ X-ray emission spectra (in accordance with dipole\nselection rules) give information about the partial Fe $3d4s$, Fe $4p$,\nSi $3s3d$ and Si $3p$ densities of states, respectively.\nThese spectra are compared in Fig.~\\ref{spec} with the results\nof the LMTO band structure calculations by aligning the calculated\nand experimental positions of the Fermi level. We note that\nthe experimental XPS valence band spectrum reproduces the total DOS\ndistribution of FeSi quite well. In particular, it is important\nthat our experiments resolve the splitting of the main peak\nof the XPS spectrum in the range 0--2 eV, which corresponds to the\nFe $3d$ band splitting. This distinct splitting was not found before\nin XPS\\cite{slfsk89} and UPS\\cite{kmmos92,cymty94,ksnik82,ssmfi94}\nmeasurements on FeSi, and its absence was attributed in\nRef.~\\onlinecite{ssmfi94} to a hole lifetime broadening which increases\nwith binding energy. 
A similar splitting of the XPS valence\nband spectrum of FeSi was found in Ref.~\\onlinecite{oal87} only\nin the measurements done at low temperatures ($T$=120 K).\nIn our opinion, this discrepancy between fine structure in the present\nand previous XPS (UPS) valence band spectra is due to the high quality\nof the FeSi single crystal which was used for the present measurements.\nThe low-energy Si $3p$ and Si $3s$ subbands are reflected\nin the XPS valence band spectrum as low-intensity\n(due to low values of the respective photoionization\ncross-sections \\cite{yl85}) features located at binding energies of\napproximately 4.5 and 9 eV, respectively.\n\nThe experimental Fe $L_{\\alpha}$ X-ray emission spectrum reproduces\nfairly well the position of the center of gravity of the calculated Fe\n$3d$ DOS distribution in the valence band of FeSi, but\nnot the splitting of the Fe $3d$ band. This is due to the large total\ndistortion of the Fe $L_{\\alpha}$ X-ray emission spectrum, which\nincludes the instrumental distortion (about 0.4 eV) and the width of\nthe inner (core) Fe $2p$ level (about 0.8--1.0 eV), which is determined\nby the lifetime of the core-level vacancy under an X-ray transition.\n\nThe experimental Fe $K_{\\beta_5}$ spectrum shows two maxima located at\nbinding energies of approximately 3.5 and 9.0 eV, in accordance\nwith the theoretical distribution of the Fe $4p$ partial DOS.\nIt is also seen from both the theoretical and experimental\nspectra that Fe $4p$ states are hybridized mostly with Si $3p$ and Si\n$3d$ states.\n\nThe experimental Si $L_{2,3}$ X-ray emission spectrum corresponds to\nthe $2p$--$3s3d$ transition. It is usually assumed that the Si $3d$\nstates do not take part in the chemical bonding in $3d$ transition\nmetal silicides. 
As was emphasized in Ref.~\\onlinecite{kvswk92}, the\nhigh-energy subband of the Si $L_{2,3}$ X-ray emission spectra of\ntransition metal silicides FeSi, MnSi and NiSi, as well as that of\ndisilicides FeSi$_2$, MnSi$_2$ and NiSi$_2$, cannot be explained\nwithout an assumption that Si $3d$ states contribute to the chemical\nbonding. Subsequently, the same conclusion was drawn for Pt silicides\n(Pt$_2$Si, PtSi) in Ref.~\\onlinecite{yhkin94} based on an analysis of\nthe Si $L_{2,3}$ X-ray emission spectra of these compounds. This\nconclusion is again confirmed now for FeSi by two independent sets of\nband structure calculations (see Figs.~\\ref{spec} and \\ref{dospart}).\nOne can see from Fig.~\\ref{spec} that the Si $3s$ DOS is\nconcentrated in the range of binding energies of about 6--13 eV, where the\nmost intensive low-energy subband of the Si $L_{2,3}$ X-ray emission\nspectrum is located. The center of gravity of the Si $3d$ states\ndistribution corresponds to a binding energy of about 2 eV, where the\nhigh-energy maximum of the Si $L_{2,3}$ spectrum is situated. The Si\n$3d$ states are strongly hybridized with Fe $3d$ states, and the same\nsplitting of the Si $3d$ band (about 2 eV) as that in the Fe $3d$\nsubband is observed in the band structure calculations.\n\nThe energy position and fine structure of the Si $K_{\\beta_{1,3}}$\nX-ray emission spectrum are in good accord with the Si $3p$ partial\nDOS distribution (see Fig.~\\ref{spec}).\n\n\\section{Conclusion}\n\\label{sec:conclu}\n\nThe results of high-energy spectroscopy measurements of FeSi single\ncrystals, including the XPS valence band spectrum and the X-ray emission\nvalence band spectra of both constituents, are presented. 
They are\ncompared with two independent {\\it ab initio} band structure\ncalculations of FeSi, performed using the LMTO and LAPW methods, and\ngood agreement between the experimental and theoretical spectra is\nfound.\n\n\\acknowledgements\n\nThe authors are grateful to Castor Fu for sending his data on the\ncalculated density of states of FeSi. AVP, StU and MN appreciate the\nfinancial support of the Deutsche Forschungsgemeinschaft (SFB~225,\nGraduate College). This work was supported by Russian Foundation for\nFundamental Research (grant No.94-03-08040) and NATO International\nScientific Exchange Program (Project HTECH LG 940861). ZWL, BMK, and\nZPS thank the support by the University Research Funds of the\nUniversity of California at Davis.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\nAt the level of the thermodynamical description of out of equilibrium\ntransformations, the irreversibility of macroscopic processes is\nencoded in the second law expressed through strict inequalities\nsatisfied by various thermodynamic observables, like entropy and free\nenergies. At the more refined level of the statistical mechanical\ndescription of the same processes, the irreversibility ought to\nmanifest itself as an asymmetry in the probability distributions of the\nfluctuations of these same quantities. The central and yet unsolved\nproblem is to find general methods, comparable to those available in\nequilibrium, to characterize these probability distributions. A major\nadvance in this direction has been made in the last two decades with\nthe development of Fluctuation Theorems (FT), which make it possible to\nconstrain the form of the distributions with a certain degree of\nuniversality and in conditions arbitrarily far from\nequilibrium~\\cite{Seifert}. 
This theoretical approach turned out to be\nvery useful, both in experimental and numerical studies, in the\ncharacterization of several nonequilibrium systems, such as colloidal\nparticles in harmonic traps~\\cite{Wang,Zon,imparato07,kim}, vibrated\ngranular media~\\cite{Sarracino,Naert,Gnoli}, models of coupled\nLangevin equations~\\cite{Farago,Joubaud,Crisanti}, driven stochastic\nLorentz gases~\\cite{Gradenigo,Gradenigo2}, and active\nmatter~\\cite{Kumar}, just to name a few examples.\n\nAn important development in the field has been achieved with the understanding\nof the impact of symmetry and symmetry breaking on the FT of currents\nin stationary diffusive systems~\\cite{Hurtado}.\nMore recently, it has been shown that relations in the form of FT are of\ngeneral occurrence also in the probability distributions of order\nparameters in equilibrium statistical mechanics, in connection with\nsymmetry breaking~\\cite{Gaspard}. Here we show that these equilibrium\nresults do feedback in the understanding of the nonequilibrium FT,\nshedding new light on the conditions under which the FT in the\nstandard form (namely, with a linear asymmetry function) is expected to\nhold~\\cite{Puglisi,Baiesi,Harris,Park}. In particular, we study the\nfluctuations of the heat exchanged with the environment by one or more\nBrownian oscillators in an interval of time $[t_w,t]$, during the\nrelaxation following the instantaneous quench from high to low\ntemperature. This protocol induces a \\emph{nonstationary dynamics}\nthat is noninvariant under time translation and therefore the heat\nexchanged along a given trajectory explicitly depends on the two times\n$(t_w,t)$. The study of the FT in similar processes has been addressed\nin~\\cite{Ritort,Picco,Zamponi}. Here, we derive exact expressions for\nthe heat probability density functions, for one, two and a large\nnumber of independent oscillators. 
Our analysis shows how, in the case\nof more than one oscillator, the heat probability obeys an FT which\ndeviates from the standard form and whose physical meaning can be\nrationalized by resorting to the analogy with the equilibrium problem\nin the presence of a nonuniform external field.\n\nIn the case of a large number of degrees of freedom, arising, for\ninstance, in the normal mode decomposition of an extended system, we\npresent a computation based on the steepest descent method, which\nallows us to obtain an accurate description of the large deviation\nfunction of the exchanged heat. Here, we will use\n large deviation theory with a large number of degrees of freedom\n (and not for long time intervals) \\cite{hugo,ld}. This kind of\nproblem was considered previously in Refs.~\\cite{Gonnella,Jo,Pechino},\nshowing that fluctuations undergo a condensation transition. Briefly,\nby this is meant that fluctuations in a multi-component system do\ncondense if there exists a critical threshold above which the\nfluctuation is feeded by {\\it just one} of the components (or degrees\nof freedom). Here, we analyse in detail the crossover region between\nthe normal and the condensed phase, providing an explicit expression\nfor the large deviation function. We also reconstruct numerically the\nphase diagram at finite temperature, in the parameter space\n$(t_w,\\tau=t-t_w)$, which shows a nontrivial reentrant behavior.\n\nThe paper is organized as follows. In Section~\\ref{zero} we present\nthe general framework within which various FT forms are derived and we\nargue on physical grounds for the particular form we adopt. We also\nrecall in some detail how an FT arises in statics as a consequence of\nsymmetry breaking, highlighting its relevance for the dynamical\nproblem. Section~\\ref{One} is devoted to the simplest case of a\nsingle oscillator. 
We compute exactly the probability distribution of\nheat fluctuations, introduce the basic concept of a time-dependent\neffective temperature, and derive the FT in the ``Gallavotti-Cohen''\nform~\\cite{Gallavotti}. In Section~\\ref{twobrownian} we consider the\ncase of two oscillators, and we analyse the modifications arising in\nthe FT form due to the presence of more than one degree of freedom.\nIn Section~\\ref{extended} we consider the case of a large number of\noscillators. We discuss the conditions leading to condensation and we\nmap out numerically the phase diagram in the parameter space. The\nconsequences for the FT are analysed. Finally, conclusions are drawn\nin Section~\\ref{conclusions}.\n\n\n\n\n\n\\section{General setup}\n\\label{zero}\n\nThere exist many variants of FT whose derivation~\\cite{Maes,Seifert,Gawedzki}\ncan be unified into a single master theorem. Let us first outline the general setting\nwhich applies to all fluctuation problems, in or out of equilibrium. \nConsider a sample space $\\Omega$ with elements $\\sigma$.\nLet $\\mu(\\sigma)$ and $\\mu^{\\prime}(\\sigma)$ be two arbitrary probability measures\nover $\\Omega$ and let ${\\cal W}(\\sigma)$ be defined by\n\\be\n\\frac{\\mu^{\\prime}(\\sigma^*)}{\\mu(\\sigma)} = e^{-{\\cal W}(\\sigma)},\n\\label{I.1}\n\\ee\nwhere the $*$ operation denotes an involutory transformation of $\\Omega$ onto itself, \ni.e. 
$(\\sigma^*)^*=\\sigma$.\nIn the following this will be taken as the representation of the inversion element of\nthe $\\mathbb{Z}_2 =(\\mathbb{I},*)$ group, where $\\mathbb{I}$ is the identity operator.\nIntroducing an arbitrary function ${\\cal F}(\\sigma)$, from the above relation there follows\nthe identity\n\\be\n\\langle {\\cal F}(\\sigma)e^{-{\\cal W}(\\sigma)} \\rangle = \\langle {\\cal F}(\\sigma^*) \\rangle^{\\prime},\n\\label{I.2}\n\\ee\nwhere $\\langle \\cdot \\rangle$ and $\\langle \\cdot \\rangle^{\\prime}$ denote expectations with respect\nto $\\mu(\\sigma)$ and $\\mu^{\\prime}(\\sigma)$, respectively. Taking for ${\\cal F}(\\sigma)$ the characteristic \nfunction $\\theta_{{\\cal M}}(\\sigma|M)$ of a certain observable ${\\cal M}(\\sigma)$, that is\n\\be\n\\theta_{{\\cal M}}(\\sigma|M) = \\left \\{ \\begin{array}{ll}\n 1,\\;\\; $if$ \\;\\; {\\cal M}(\\sigma) = M,\\\\\n 0,\\;\\; $if$ \\;\\; {\\cal M}(\\sigma) \\neq M,\n \\end{array}\n \\right .\n \\label{I.2bis}\n \\ee\nfrom Eq.~(\\ref{I.2}) follows\n\\be\n\\frac{P^{\\prime}({\\cal M}(\\sigma^*) = M)}{P({\\cal M}(\\sigma) = M)} = \n\\langle e^{-{\\cal W}(\\sigma)} | {\\cal M}(\\sigma) = M \\rangle,\n\\label{I.3}\n\\ee\nwhere $P^{\\prime}$ and $P$ are the probabilities of the events in the arguments induced\nby $\\mu^{\\prime}(\\sigma)$ and $\\mu(\\sigma)$, respectively, and\nin the right hand side there appears the expectation with respect to $\\mu(\\sigma)$,\nconditioned to ${\\cal M}(\\sigma) = M$. 
Defining the $*$ transformation on the\nset of random variables by ${\\cal M}(\\sigma) \\mapsto {\\cal M}^*(\\sigma) = {\\cal M}(\\sigma^*)$,\nwhich is also an involution, the above relation can be\nrecast in the form\n\\be\n\\frac{P^{\\prime}({\\cal M}^* = M)}{P({\\cal M} = M)} = \ne^{-{\\cal K}(M)},\n\\label{I.03}\n\\ee\nwith\n\\be\n{\\cal K}(M) = -\\ln \\langle e^{-{\\cal W}(\\sigma)} | {\\cal M}(\\sigma) = M \\rangle.\n\\label{I.04}\n\\ee\nIt appears, then, that Eq.~(\\ref{I.03}) is the transposition to the level of the observable\n${\\cal M}$ of the underlying basic relation~(\\ref{I.1}). In particular, regarding ${\\cal W}$\nas the bias which is necessary to apply on $\\mu$ in order to construct $\\mu^{\\prime}$ and ${\\cal K}$ as \nthe analogous quantity relating $P$ and $P^{\\prime}$, Eq.~(\\ref{I.04}) tells how these biases\nare related one to the other. \n\nAs long as $\\mu(\\sigma)$ and $\\mu^{\\prime}(\\sigma)$ are arbitrary, the above result does not\nhave predictive power. \nIt becomes the master FT when $\\mu^{\\prime}(\\sigma)$ is taken with a definite relation to $\\mu(\\sigma)$,\nwhich constrains the form\nof $P(M)$ if the right hand side of Eq.~(\\ref{I.03}) is accessible without having\nto actually compute the expectation, possibly via symmetry arguments.\nIn the simplest case $\\mu^{\\prime}(\\sigma)$ is taken to be the same as $\\mu(\\sigma)$. \nThen, from Eq.~(\\ref{I.1}) one has\n\\be\n\\frac{\\mu(\\sigma^*)}{\\mu(\\sigma)} = e^{-{\\cal W}(\\sigma)},\n\\label{I.1bis}\n\\ee\nshowing that ${\\cal W}$ is odd ${\\cal W}(\\sigma^*) = -{\\cal W}(\\sigma)$ and\ncharacterizes the asymmetry of the probability measure under inversion. \nNow, the question is to what extent this\nasymmetry is preserved or distorted as the description is moved up\nfrom the microscopic level to the higher one of random variables. 
The answer from\nEq.~(\\ref{I.03}) is given by\n\\be\n\\frac{P({\\cal M}^* = M)}{P({\\cal M} = M)} = e^{-{\\cal K}(M)}.\n\\label{I.3bis}\n\\ee\nTaking ${\\cal M}$ to have definite parity, if it is even no information about the symmetry of\n$P$ is obtained and Eq.~(\\ref{I.3bis}) yields the conditional integral FT\n\\be\n\\langle e^{-{\\cal W}(\\sigma)} | {\\cal M}(\\sigma) = M \\rangle = 1.\n\\label{I.3tris}\n\\ee\nConversely, if ${\\cal M}$ is odd, one has\n\\be\n\\frac{P({\\cal M} = -M)}{P({\\cal M} = M)} = e^{-{\\cal K}(M)},\n\\label{I.3quater}\n\\ee\nwhose usefulness depends on the possibility of assessing the form of ${\\cal K}(M)$,\nwhich is called the asymmetry function (AF).\nIn particular, if ${\\cal M} \\propto {\\cal W}$, there follows\nan FT of the Gallavotti-Cohen~\\cite{Gallavotti} form with a linear AF\n\\be\n{\\cal K}(M) \\propto M.\n\\label{I.3pente}\n\\ee\nIf, instead, ${\\cal W}$ and ${\\cal M}$ are not simply related,\nthe meaning of ${\\cal K}(M)$ in general is not immediately transparent.\nFor an overview of the variety of the different FT forms arising in the general case see \nRef.~\\cite{Seifert}.\n\n\\subsection{FT and symmetry breaking in equilibrium}\n\nThe latter remarks are well clarified in the equilibrium context, used\nin Ref.~\\cite{Gaspard} to investigate the relation between FT and\nsymmetry breaking. Let $\\Omega$ and $\\sigma$ be the system's phase\nspace and configurations, respectively. 
For definiteness, let $\\sigma\n= (s_1,...,s_N)$, with $s_i=\\pm 1$, be a spin configuration of a\nmagnetic system on the lattice in the presence of an external\nsite-dependent field $\\mathbb{B}=\\{B_i\\}$, whose equilibrium state is described\nby the probability measure \\be \\mu(\\sigma) = \\mu_0(\\sigma)e^{\\beta\n \\sum_i B_i s_i},\n\\label{equi.1}\n\\ee\nwhere $\\mu_0(\\sigma)$ is symmetric under spin inversion $\\sigma^* = (-s_1,...,-s_N)$,\nwhile the exponential term breaks explicitly the $\\mathbb{Z}_2$ symmetry.\nWe are interested in the fluctuations of the global magnetization ${\\cal M}(\\sigma) = \\sum_i s_i$.\nFrom Eq.~(\\ref{equi.1}) follows\n\\be\n{\\cal W}(\\sigma) = 2 \\beta \\sum_i B_i s_i,\n\\label{equi.01}\n\\ee\nand\n\\be\n{\\cal K}(M) = -\\ln \\langle e^{-2\\beta \\sum_i B_i s_i}|{\\cal M} = M \\rangle,\n\\label{equi.02}\n\\ee\nwhich takes a simple form only if the external field\nis uniform $B_i=B, \\forall i$, yielding an FT with the linear AF~\\cite{Gaspard}\n\\be\n{\\cal K}(M) = 2\\beta B M.\n\\label{equi.3}\n\\ee\nInstead, if $\\mathbb{B}$ is not uniform, {\\it deviations} from the FT arise.\nIn order to take a closer look, let the number of spins\nbecome large and consider the case of the ideal paramagnet, in which\n$\\mu_0(\\sigma)$ in Eq.~(\\ref{equi.1}) is the uniform measure $Z^{-1} = [\\prod_i 2\\cosh (\\beta B_i)]^{-1}$.\nThen, by a straightforward saddle point\ncomputation one obtains the large deviation principle\n\\be\nP(M) \\sim e^{-NI(m)},\n\\label{G.1}\n\\ee\nwhere $m=M\/N$ is the magnetization per spin and the large deviation function is given by\n\\be\nI(m) = x^*(m) m + \\beta \\bigl[ f\\bigl(\\mathbb{B}^*(m)\\bigr) - f(\\mathbb{B})\\bigr].\n\\label{G.2}\n\\ee\nHere,\n\\be\nf(\\mathbb{B}) = \\lim_{N \\to \\infty} - \\frac{1}{N\\beta} \\sum_i \\ln [2\\cosh (\\beta B_i)],\n\\label{G.3}\n\\ee\nis the Helmholtz free energy density, which depends on the field configuration $\\mathbb{B}$,\nand we have defined 
$\\mathbb{B}^*(m)=\\{B^*_i(m)\\}$ with \n\\be\nB^*_i(m) = B_i + \\beta^{-1} x^*(m),\n\\label{G.3bis}\n\\ee\nand $x^*(m)$ is obtained by solving with respect to $x$ the\nequation of state\n\\bea\nm & = & - \\beta \\frac{\\partial}{\\partial x} f\\bigl(\\{B_i + \\beta^{-1} x\\}\\bigr) \\nonumber \\\\\n& = & \\frac{1}{N} \\sum_i \\tanh \\bigl(\\beta B_i + x\\bigr).\n\\label{G.4}\n\\eea\nConsequently, $\\beta^{-1} x^*(m)$ is the shift to be applied to the external field on each site in order\nto produce $m$ as the {\\it average} magnetization per spin.\nFrom the definition~(\\ref{I.3quater}) follows \n\\begin{eqnarray}\n& & \\frac{1}{N}{\\cal K}(M) = I(-m) - I(m) \\nonumber \\\\\n& = & - \\bigl[x^*(-m) + x^*(m)\\bigr]m + \\beta \\bigl[ f\\bigl(\\mathbb{B}^*(-m)\\bigr) - \nf\\bigl(\\mathbb{B}^*(m)\\bigr)\\bigr]. \\nonumber \\\\\n\\label{G.5}\n\\end{eqnarray} \nAveraging Eq.~(\\ref{G.3bis}) over $i$, we can write\n\\be\n\\beta^{-1} x^*(m) = \\overline{\\mathbb{B}^*(m)} - \\overline{\\mathbb{B}},\n\\label{G.01}\n\\ee\nwith\n\\be\n\\overline{\\mathbb{B}^*(m)} = \\frac{1}{N} \\sum_i B^*_i(m), \\,\\,\\, \\overline{\\mathbb{B}} = \\frac{1}{N} \\sum_i B_i,\n\\label{G.02}\n\\ee\nand Eq.~(\\ref{G.5}) can be put in the form\n\\be\n\\frac{1}{N}{\\cal K}(M) = 2 \\beta \\overline{\\mathbb{B}} m + \\beta \\bigl[g(-m) - g(m)\\bigr],\n\\label{G.03}\n\\ee\nwhere we have defined\n\\be\ng(m) = f\\bigl(\\mathbb{B}^*(m)\\bigr) + \\overline{\\mathbb{B}^*(m)} m.\n\\label{G.04}\n\\ee\nIn the particular case of the uniform external field \n\\begin{eqnarray}\n& & \\overline{\\mathbb{B}} = B, \\nonumber \\\\\n& & B^*_i(m) = B^*(m) = B + \\beta^{-1} x^*(m), \\,\\,\\,\\forall i, \\nonumber \\\\\n& & \\overline{\\mathbb{B}^*(m)} = B^*(m), \n\\label{G.05}\n\\end{eqnarray}\nand\n\\be\ng(m) = f(B^*) + B^*m\n\\label{G.06}\n\\ee\nis the Legendre transform of $f(B)$, which is even under $m$ reversal,\nsince $B^*(m)$ is odd. 
Hence, in this case Eq.~(\\ref{G.03}) reproduces the result~(\\ref{equi.3}).\nIn the nonuniform case $g(m)$ is not the Legendre transform of $f(\\mathbb{B})$ and\nin general does not have a definite parity, leading to a nonlinear AF.\nWe will come back to this point in Section~\\ref{twobrownian}.\nWhat the above exercise shows is that the FT, in the sense of a linear AF, holds as\nlong as the macrovariable ${\\cal M}$, whose fluctuations are considered, is conjugate\nto the symmetry breaking field. Instead, if ${\\cal M}$ is not a conjugate variable, as is the case\nwith a site-dependent $\\mathbb{B}$, the FT in the form~(\\ref{equi.3}) does not hold.\n\n\\subsection{FT out of equilibrium}\n\nLet us, next, consider the nonequilibrium context. Assuming stochastic\nevolution, take for $\\Omega$ the space of stochastic trajectories\nand for $\\sigma$ an individual\ntrajectory. Then, $\\sigma^*$ stands for the time reversed trajectory, while\n$\\mu(\\sigma)$ and $\\mu^{\\prime}(\\sigma)$ are the probability measures associated with\ntwo different evolutions, whose relation is specified from case to case.\nHere, as in the equilibrium problem, we shall be concerned with $\\mu^{\\prime}(\\sigma) = \\mu(\\sigma)$,\nwhich in the dynamical context arises when there are \nno time dependent external parameters and the system evolves\nin contact with a single thermal reservoir at the final temperature $T$.\nThen, only heat\nis exchanged with the environment and one has~\\cite{Seifert}\n\\be\n{\\cal W}(\\sigma) = \\ln \\frac{P_0(x_0)}{P_0(x_t)} - \\beta {\\cal Q}(\\sigma),\n\\label{I.001}\n\\ee\nwhere $P_0$ is the initial probability distribution, $x_0$ and $x_t$ are the\ninitial and final entries in the trajectory $\\sigma$, $\\beta = 1\/T$ and ${\\cal Q}(\\sigma)$\nis the heat exchanged along the trajectory, which we take as negative if released\nto the environment. 
Using the above form of ${\\cal W}$ and the definition~(\\ref{I.04}),\nthe heat AF is given by\n\\be\n{\\cal K}(Q) = -\\beta Q -\\ln \\langle e^{\\ln P_0(x_t) - \\ln P_0(x_0)}|{\\cal Q} = Q \\rangle,\n\\label{I.002}\n\\ee \nwhose understanding requires some clue on the role of the boundary\nterms. This can be gained from the work of Puglisi {\\it et al.}~\\cite{Puglisi}. \n\nIn this paper, as anticipated\nin the Introduction, we shall be interested in the heat exchanged by a system \nof Brownian oscillators with the environment in an interval of time $[t_w,t]$. Eventually,\nthis will lead to recast the above equation in the form\n\\be\n{\\cal K}(Q) = - \\ln \\bigl\\langle e^{-\\sum_{\\mathbf{k}} \\Delta \\beta_{\\mathbf{k}} Q_{\\mathbf{k}}} |{\\cal Q} \n= Q \\bigr\\rangle,\n\\label{I.003}\n\\ee\nwhere $\\mathbf{k}$ are single oscillators labels and $Q_{\\mathbf{k}}$ the heat exchanged by each one of \nthem.\nThe above expression is clearly analogous to Eq.~(\\ref{equi.02}), where the role of\nthe nonuniform external field $\\mathbb{B}$ is played by the set of \naffinities $\\{\\Delta \\beta_{\\mathbf{k}} \\}$,\nwhich are $\\mathbf{k}$-dependent differences of inverse temperatures. The correspondence \nbetween the two problems helps to understand the deviations or modifications of the FT in terms\nof a collection of nonuniform degrees of freedom.\n\nIt should be emphasized that the choice of taking $\\mu^{\\prime} =\n\\mu$, and therefore $P^{\\prime} = P$, is dictated by the particular\nphysical setting of interest, since we consider the relaxation\nfollowing a temperature quench and we want to compare the probability\nof exchanging the heat $Q$ with that of exchanging $-Q$, in the {\\it\n same quench process}, that is {\\it without time reversal}. 
We focus\non the asymmetry of the heat distribution in the given process, as\nwas done, for instance, in the experimental work of Gomez-Solano et\nal.~\\cite{gomez11}.\n\n\n\n\n\n\n\\section{Brownian oscillator}\n\\label{One}\n\n\\noindent \nThe equation of motion for the single overdamped Brownian oscillator is of the\nLangevin type \n\\begin{equation}\n\\dot{x}=-\\omega x + \\eta,\n\\label{process}\n\\end{equation}\nwhere $\\omega$ is the frequency and $\\eta$ is the white noise, modeling the\ninteraction with the thermal bath at the temperature $T$, with expectations\n\\begin{eqnarray}\n\\langle \\eta(t)\\rangle &=&0 \\\\\n\\langle \\eta(t)\\eta(t')\\rangle&=&2T\\delta(t-t').\n\\end{eqnarray}\nThe Boltzmann constant will be taken $k_B=1$ throughout.\nInitially the system is in equilibrium at the temperature $T_0$, with the\nposition probability distribution\n\\be\nP_0(x) = \\sqrt{\\frac{\\beta_0 \\omega}{2\\pi}} e^{-\\beta_0 {\\cal H}(x)}, \n\\label{HO.1}\n\\ee\nwhere ${\\cal H}(x) = \\frac{1}{2}\\omega x^2$ is the energy of the oscillator.\nInstantaneous cooling (quenches) or heating processes are realized by putting, at the time $t=0$, the system\nin contact with the thermal bath at the temperature $T < T_0$ or $T > T_0$, respectively.\nIn the following, we shall be mainly interested in the case of\nthe temperature quench.\n\n\n\n\\subsection{Fluctuations of exchanged heat}\n\nLet us focus on the fluctuations of\nthe heat exchanged by the oscillator with the thermal bath in the\ntime interval $(t_w \\geq 0,t>t_w)$ after the temperature step. \nSince no work can be carried out on or by the system,\nbecause $\\omega$ is constant, the heat exchanged in a single realization of the dynamical evolution\ncoincides with the energy difference\n\\be\n\\mathcal{Q}(t,t_w)={\\cal H}\\bigl(x(t)\\bigr)- {\\cal H}\\bigl(x(t_w)\\bigr),\n\\label{Phex.1}\n\\ee\nwhich is positive if heat is absorbed from the bath and negative if\nit is released to the bath. 
Then,\nthe probability of exchanging the amount $Q$ of heat is given by\n\\be\nP(Q) = \\int_{-\\infty}^{\\infty} dx dx_w \\, P(x, t;x_w, t_w) \\delta (\\mathcal{Q} - Q),\n\\label{Phex.2}\n\\ee\nwhere $P(x, t;x_w, t_w)$ is the joint probability of the two events $(x,t)$ and $(x_w,t_w)$,\ngiven by\n\\begin{eqnarray}\\label{HO.9}\nP(x, t;x_w, t_w) \n&=& \\frac{1}{2 \\pi \\sqrt{\\Delta(\\tau) \\nu(t_w)}} \\\\ \\nonumber\n&\\times & e^{-\\left \\{ \\frac{1}{2\\Delta(\\tau)}x^2 - \\frac{G(\\tau)}{\\Delta(\\tau)} xx_w\n+\\frac{1}{2}\\left [ \\frac{G^2(\\tau)}{\\Delta(\\tau)} + \\frac{1}{\\nu(t_w)} \\right ] x_w^2 \\right \\} },\n\\end{eqnarray}\nwhere $G(\\tau) = e^{-\\omega \\tau}$\nis the response function dependent on the time difference $\\tau=t - t_w$,\n$\\Delta(\\tau) = \\frac{T}{\\omega}[1-G^2(\\tau)]$\nand $\\nu(t_w) = G^2(t_w) \\nu_0 + {\\Delta(t_w)}$ is the position variance\nat the time $t_w$, whose initial value is given by $\\nu_0=T_0/\\omega$.\n\nThe position probability distribution at the generic time $t$, obtained by integrating the\nabove quantity over $x_w$, maintains the equilibrium form~(\\ref{HO.1})\n\\be\nP(x,t) = \\sqrt{\\frac{\\beta_{\\rm eff}(t) \\omega}{2\\pi}} e^{-\\beta_{\\rm eff}(t){\\cal H}(x)},\n\\label{HO.6}\n\\ee\nwhere $\\beta_{\\rm eff}(t)$ is the time-dependent inverse effective temperature\ndefined by the equipartition-like statement~\\cite{gomez11} \n\\be\n\\langle {\\cal H} \\rangle_t = \\frac{1}{2}T_{\\rm eff}(t),\n\\label{HO.6bis}\n\\ee\nwhich yields\n\\be\nT_{\\rm eff}(t) = \\omega \\nu(t) = (T_0-T)G^2(t) + T.\n\\label{HO.10}\n\\ee\nFor this quantity, which will play an important role in the following,\nwe shall use the shorthand notation $T_w = T_{\\rm eff}(t_w)$ or $T_t = T_{\\rm eff}(t)$\nand similarly for the inverse temperature $\\beta_w = T_w^{-1}$ or $\\beta_t = T_t^{-1}$.\n\nReturning to Eq.~(\\ref{Phex.2}) and\nintroducing the integral representation of the $\\delta$ function\n\\be\n\\delta (\\mathcal{Q} - Q) = 
\\int_{-i \\infty}^{i \\infty} \\frac{d \\lambda}{2\\pi i} \\,\ne^{-\\lambda (Q - \\mathcal{Q})},\n\\label{Phex.3}\n\\ee\nwe obtain\n\\be\nP(Q) = \\int_{-i \\infty}^{i \\infty} \\frac{d \\lambda}{2\\pi i} \\, e^{-\\lambda Q}\n\\int_{-\\infty}^{\\infty} dx dx_w \\, \\frac{e^{-f(\\lambda,x,x_w)}}{2 \\pi \\sqrt{\\Delta(\\tau)\\nu(t_w)}},\n\\label{Phex.4}\n\\ee\nwhere\n\\be\nf(\\lambda,x,x_w) = \\frac{1}{2 \\Delta(\\tau)} [x-G(\\tau) x_w]^2 + \\frac{x_w^2}{2\\nu(t_w)}\n-\\frac{\\lambda \\omega}{2}(x^2 - x_w^2).\n\\label{Phex.5}\n\\ee\nCarrying out the $x$ and $x_w$ integrations and rotating the $\\lambda$ integration from the\nimaginary to the real axis, this can be rewritten as\n\\be\nP(Q) = \\int_{-\\infty}^{\\infty} \\frac{d \\lambda}{2\\pi \\sqrt{a}} \\,\n\\frac{e^{-i \\lambda Q}}{\\sqrt{(\\lambda -i\\lambda_-)(\\lambda + i\\lambda_+)}},\n\\label{Phex.16bis}\n\\ee\nwhere\n\\be\n\\lambda_\\pm = \\frac{1}{2 a} \\left [\\sqrt{b^2 + 4 a} \\pm b \\right ],\\,\\,\\,\\,\\lambda_+\\lambda_- = \n\\frac{1}{a},\n\\label{Phex.14}\n\\ee\nwith\n\\be\na = TT_w(1-G^2), \\,\\,\\,\\, b = \\Delta T G^2_w(1-G^2),\n\\label{Phex.11}\n\\ee\nafter using the notation\n\\be\n\\Delta T = T_0 - T,\\,\\,\\,\\, G_w=G(t_w),\\,\\,\\,\\,G = G(\\tau),\n\\label{Phex.10bis}\n\\ee which will be adopted from now on. Let us briefly comment on the\nparameters $a$ and $b$ introduced above. The asymmetry\nbetween $\\lambda_+$ and $\\lambda_-$ is controlled by $b$, while $a$ controls the symmetric\npart. The sign of $b$ depends on that of $\\Delta T$, implying\n$\\lambda_+ > \\lambda_-$ in the quench process and $\\lambda_+ < \\lambda_-$ in the heating process. 
\nThe most asymmetrical situation is\nobtained for $a=0$, which is realized either by setting $T=0$, or $T_0=0$\nand $t_w=0$, yielding \n\\be \\lambda_+ = \\left \\{ \\begin{array}{ll}\n \\infty,\\;\\; \\mbox{for} \\;\\; b > 0,\\\\ 1/|b| ,\\;\\; \\mbox{for} \\;\\; b < 0,\n \\end{array}\n \\right .\n \\label{eig.2}\n \\ee\n\\be\n\\lambda_- = \\left \\{ \\begin{array}{ll}\n 1/b,\\;\\; \\mbox{for} \\;\\; b > 0,\\\\\n \\infty,\\;\\; \\mbox{for} \\;\\; b < 0.\n \\end{array}\n \\right .\n \\label{eig.3}\n\\ee \nInstead, the symmetrical situation with\n\\be\n\\lambda_+ = \\lambda_-= \\frac{1}{\\sqrt{a}},\n\\label{eig.1}\n\\ee\nis obtained when the system is equilibrated ($b=0$), that is for $\\Delta T=0$ or\nfor $t_w \\rightarrow \\infty$.\n\nThe integral in Eq.~(\\ref{Phex.16bis}) is evaluated by closing the contour either on the upper \nhalf plane or on the lower half plane around the branch cuts\nrunning along the imaginary axis from $\\pm i \\lambda_{\\mp}$ up to $ \\pm i \\infty$\n(see Fig.~\\ref{fig_int}), depending on whether $Q < 0$ or $Q > 0$, obtaining\n\\be\nP(Q) = \\frac{\\sqrt{\\lambda_+\\lambda_-}}{\\pi} e^{\\frac{1}{2}\\Delta \\beta Q} \nK_0\\left (\\frac{\\lambda_+ +\\lambda_-}{2}|Q| \\right ),\n\\label{Phex.16}\n\\ee\nwhere \n\\be\n\\Delta \\beta = \\lambda_- -\\lambda_+ = \\beta_w - \\beta,\n\\label{aff.1}\n\\ee\nwhile $K_0$ is the modified Bessel function of the second kind. \n\n\\begin{figure}[!tb]\n\\includegraphics[width=0.6\\columnwidth,clip=true]{drawing.eps}\n\\caption{Contours of integration, for $Q<0$ and $Q>0$, in the case of a single oscillator.}\n\\label{fig_int}\n\\end{figure}\n\nIn the limit $t_w \\to \\infty$, followed by $\\tau \\to \\infty$ so that $G \\to 0$, one has $\\lambda_+ = \\lambda_- = \\beta$, recovering\nthe equilibrium result\n\\be\nP(Q) = \\frac{\\beta}{\\pi}K_0 (\\beta|Q|),\n\\label{Phex.16ter}\n\\ee\nwhich was derived in Refs.~\\cite{imparato07,chatterjee10} for a Brownian particle\noptically trapped in a stationary harmonic potential. 
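Eq.~(\\ref{Phex.16}) is easy to check numerically. The sketch below (illustrative parameter values; scipy's modified Bessel function and adaptive quadrature) verifies the normalization of $P(Q)$ and computes its first moment, which equals $-b/2$, as found from the decomposition in the next subsection.

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

omega, T0, T, t_w, t = 0.5, 10.0, 1.0, 2.0, 4.0     # illustrative quench parameters
G, G_w = np.exp(-omega * (t - t_w)), np.exp(-omega * t_w)
T_w = (T0 - T) * G_w**2 + T                          # effective temperature, Eq. (HO.10)

a = T * T_w * (1.0 - G**2)                           # Eq. (Phex.11)
b = (T0 - T) * G_w**2 * (1.0 - G**2)
lam_p = (np.sqrt(b**2 + 4.0 * a) + b) / (2.0 * a)    # lambda_+, Eq. (Phex.14)
lam_m = (np.sqrt(b**2 + 4.0 * a) - b) / (2.0 * a)    # lambda_-

def P(Q):
    """Heat probability distribution, Eq. (Phex.16)."""
    return (np.sqrt(lam_p * lam_m) / np.pi
            * np.exp(0.5 * (lam_m - lam_p) * Q)
            * k0(0.5 * (lam_p + lam_m) * abs(Q)))

# The log-singularity of K_0 at Q = 0 is integrable; split the integral there
norm = quad(P, -60.0, 0.0)[0] + quad(P, 0.0, 60.0)[0]
mean = quad(lambda Q: Q * P(Q), -60.0, 0.0)[0] + quad(lambda Q: Q * P(Q), 0.0, 60.0)[0]
print(norm, mean, -b / 2.0)
```

The product $\\lambda_+\\lambda_- = 1/a$ also provides a quick internal consistency check on the computed roots.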
\nThe complementary case of a Brownian oscillator in the strongly underdamped limit \nhas been studied in the recent work of Salazar and Lira~\\cite{salazar16}, where a similar\nresult for the heat distribution is derived. The dependence on $t_w$ of the asymmetry\nof the distribution, for a quench to a small but finite temperature $T=0.1$, is illustrated in\nFig.~\\ref{fig1}, where the analytical expression~(\\ref{Phex.16}) is compared \nwith the numerical simulation of the process~(\\ref{process}). \n\n\\begin{figure}[!tb]\n\\includegraphics[width=0.9\\columnwidth,clip=true]{pq.eps}\n\\caption{Heat probability distribution for the single oscillator, with $\\omega=10^{-1}$, $T_0=100$ and $T=10^{-1}$, for $t=200$\nand different values of $t_w$.\nAnalytical form from Eq.~(\\ref{Phex.16}) (continuous lines) and\nnumerical simulations (symbols).}\n\\label{fig1}\n\\end{figure}\n\n\n\\subsection{Decomposition into cooling and heating contributions}\n\\label{realization}\n\n\nThe two quantities $\\lambda_{\\pm}$ turn out to be the basic building blocks in all\nthat follows.\nTheir physical meaning can be readily understood\nby rewriting $P(Q)$ as the convolution of the two distributions arising\nin the purely cooling and in the purely heating process. 
Defining\n\\be\nP_\\pm(Q) = \\sqrt{\\pm i\\lambda_\\pm} \\int_{-\\infty}^{\\infty} \\frac{d \\lambda}{2\\pi}\n\\frac{e^{-i \\lambda Q}}{\\sqrt{\\lambda \\pm i\\lambda_\\pm}}\n\\label{Decomp.1}\n\\ee\nand carrying out the integral, which involves only one of the branch points, one finds\n\\be\nP_-(Q) = \\left \\{ \\begin{array}{ll}\n \\sqrt{\\frac{\\lambda_-}{\\pi |Q|}} e^{\\lambda_-Q},\\;\\; \\mbox{for} \\;\\; Q < 0,\\\\\n 0 ,\\;\\; \\mbox{for} \\;\\; Q > 0,\n \\end{array}\n \\right .\n \\label{Decomp.3}\n \\ee\nand\n\\be\nP_+(Q) = \\left \\{ \\begin{array}{ll}\n \\sqrt{\\frac{\\lambda_+}{\\pi Q}} e^{-\\lambda_+Q},\\;\\; \\mbox{for} \\;\\; Q > 0,\\\\\n 0 ,\\;\\; \\mbox{for} \\;\\; Q < 0,\n \\end{array}\n \\right .\n \\label{Decomp.4}\n \\ee\nfrom which it follows that\n\\be\n\\lambda_\\pm = \\pm \\frac{1}{2 \\langle {\\cal Q} \\rangle_\\pm},\n\\label{Decomp.5}\n\\ee\nwhere $\\langle {\\cal Q} \\rangle_\\pm$ are the average heats \nexchanged in the processes in which heat can only be absorbed or released.\nThen, the probability~(\\ref{Phex.16bis}) can be rewritten as \n\\be\nP(Q) = \\int_{-\\infty}^{\\min(0,Q)} dQ^{\\prime}P_+(Q-Q^{\\prime})P_-(Q^{\\prime}),\n\\label{1ho}\n\\ee\nwhich reproduces the result of Eq.~(\\ref{Phex.16}). The total average heat exchanged is then given by\n\\bea\n\\langle {\\cal Q} \\rangle & = & \\langle {\\cal Q} \\rangle_+ + \\langle {\\cal Q} \\rangle_- \\nonumber \\\\\n& = &\n\\frac{1}{2} \\left ( \\frac{1}{\\lambda_+} - \\frac{1}{\\lambda_-} \\right ) \n= - \\frac{b}{2}.\n\\label{Decomp.6}\n\\eea\n\nThe physical conditions for $P(Q) = P_-(Q)$ or $P(Q) = P_+(Q)$ are obtained \nby putting the oscillator in contact with the thermal reservoir at $T=0$,\nor by arranging a purely heating process with $T_0=0, T > 0$ and $t_w=0$.\nIn the first case, one has\n$T_w=T_0 G^2_w, a=0, b=(T_w-T_t)$\nfrom which follows, according to Eqs.~(\\ref{eig.2}) and~(\\ref{eig.3}),\n$\\lambda_+=\\infty, \\lambda_-=(T_w-T_t)^{-1}$, implying\n$P_+(Q)= \\delta(Q)$ and, therefore, according 
to Eq.~(\\ref{Decomp.3})\n\\be\nP(Q) = P_-(Q) = \\left \\{ \\begin{array}{ll}\n \\sqrt{\\frac{1}{\\pi |Q|(T_w-T_t)}} e^{Q/(T_w-T_t)},\\;\\; \\mbox{for} \\;\\; Q < 0,\\\\\n 0 ,\\;\\; \\mbox{for} \\;\\; Q > 0.\n \\end{array}\n \\right .\n \\label{Asy.4}\n \\ee\nSimilarly, in the second case, with $T_0=0$ and $t_w=0$, one has \n$T_w=0, a=0, b=-T_t$, which yields $\\lambda_+=1/T_t, \\lambda_-=\\infty$, \nimplying $P_-(Q)= \\delta(Q)$ and\n\\be\nP(Q) = P_+(Q) = \\left \\{ \\begin{array}{ll}\n \\sqrt{\\frac{\\beta_t}{\\pi Q}} e^{-\\beta_tQ},\\;\\; \\mbox{for} \\;\\; Q > 0,\\\\\n 0 ,\\;\\; \\mbox{for} \\;\\; Q < 0.\n \\end{array}\n \\right .\n \\label{Asy.8}\n \\ee\n\n\n\n\n\n\\subsection{FT and time reversal symmetry breaking}\n\nIn this subsection we discuss the symmetry properties of the heat distribution.\nFrom Eq.~(\\ref{Phex.16}) the FT follows immediately in the standard form \n\\be\n\\frac{P(-Q)}{P(Q)} = e^{-\\Delta \\beta Q}.\n\\label{Phex.21}\n\\ee\nA similar result was derived in Ref.~\\cite{salazar16}\nwith $t_w=0$ and $\\Delta \\beta = \\beta_0 - \\beta$, while in Ref.~\\cite{gomez11}\nthere appears $\\Delta \\beta = \\beta_w - \\beta_t$, which holds true only in the limit of large\n$\\tau$, when the system is time decorrelated and $\\beta_t \\sim \\beta$.\nHence, as anticipated in section~\\ref{zero}, the AF is linear,\n\\be\n{\\cal K}(Q) = \\Delta \\beta Q,\n\\label{aff.2}\n\\ee\nsince the role of the initial probability $P_0$\nis now played by $P(x,t_w)$, which after Eqs.~(\\ref{Phex.1}) and~(\\ref{HO.6})\nyields\n\\be\n\\ln \\frac{P(x_w,t_w)}{P(x,t_w)} = \\beta_w {\\cal Q}.\n\\label{aff.3}\n\\ee\n\n\nHere, we want to highlight the connection between the above result and the\nbreaking of the time reversal symmetry, by adapting to the present context\nthe approach of Gaspard~\\cite{Gaspard} sketched in Section~\\ref{zero}.\nTaking for $\\Omega$ the set of ordered pairs $\\sigma = (x_w,x)$,\nthe time reversal symmetry operation is represented by the involution\n$\\sigma^* = (x,x_w)$ 
which exchanges the order of $x_w$ and $x$. \nThe task is to rewrite the joint probability Eq.~(\\ref{HO.9}) in the form of Eq.~(\\ref{equi.1}), \nthat is, as the product of a $\\mathbb{Z}_2$-symmetric\nfactor $\\mu_0(\\sigma)$ times an exponential factor in which the\nheat ${\\cal Q}$ appears as the observable explicitly\nbreaking the $\\mathbb{Z}_2$ symmetry. In order to do this, let us first\ncast the joint probability Eq.~(\\ref{HO.9}) in the form\n\\be\n\\mu(\\sigma) = \\frac{1}{Z}e^{-{\\cal A}(\\sigma)},\n\\label{TRSB.1}\n\\ee\nwhere the ``action'' ${\\cal A}(\\sigma)$ can be read off from Eq.~(\\ref{HO.9}) as\n\\be\n{\\cal A}(\\sigma) = \\frac{\\beta \\omega}{2(1-G^2)} \\left [x^2 -2Gxx_w + G^2x_w^2 \\right ] \n+ \\beta_w \\frac{1}{2} \\omega x_w^2,\n\\label{TRSB.2}\n\\ee\nand the normalization factor is given by $Z=2\\pi\\sqrt{\\Delta(\\tau)\\nu(t_w)} = \\frac{2\\pi}{\\omega}\\sqrt{\\frac{1-G^2}{\\beta \\beta_w}}$. \nDecomposing ${\\cal A}(\\sigma)$ into the sum of an even and an odd part\n\\be\n{\\cal A}(\\sigma) = {\\cal E}(\\sigma) + {\\cal O}(\\sigma),\n\\label{TRSB.4}\n\\ee\nwith\n\\bea\n{\\cal E}(\\sigma) & = & \\frac{1}{2} \\left [ {\\cal A}(\\sigma) + {\\cal A}(\\sigma^*) \\right ] \\nonumber \\\\\n& = & \\frac{\\beta \\omega}{4(1-G^2)} \\left [(1+G^2)x^2 -4Gxx_w \\right . \\nonumber \\\\\n&+&\\left . (1+G^2)x_w^2 \\right ]\n+ \\beta_w \\frac{1}{4} \\omega (x^2 + x_w^2),\n\\label{TRSB.5}\n\\eea\nand\n\\be\n{\\cal O}(\\sigma) = \\frac{1}{2} \\left [ {\\cal A}(\\sigma) - {\\cal A}(\\sigma^*) \\right ] \n= -\\frac{1}{2} \\Delta \\beta {\\cal Q}(\\sigma),\n\\label{TRSB.6}\n\\ee\nwhere ${\\cal Q}(\\sigma)$ is defined by Eq.~(\\ref{Phex.1}), \nwe obtain the form~(\\ref{equi.1}) of $\\mu(\\sigma)$,\n\\be\n\\mu(\\sigma) = \\mu_0(\\sigma) e^{\\frac{1}{2} \\Delta\\beta {\\cal Q}(\\sigma)},\n\\label{TRSB.7}\n\\ee\nwhere\n\\be\n\\mu_0(\\sigma) = \\frac{1}{Z}e^{-{\\cal E}(\\sigma)},\n\\label{TRSB.8}\n\\ee\nis $\\mathbb{Z}_2$ invariant. 
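The even/odd splitting above is elementary but sign-prone; it can be confirmed symbolically (a sketch using sympy, with $\\beta$, $\\beta_w$, $\\omega$ and $G$ treated as free symbols):

```python
import sympy as sp

x, xw, beta, beta_w, omega, G = sp.symbols('x x_w beta beta_w omega G', positive=True)

def A(u, v):
    """Action of Eq. (TRSB.2), with u the later and v the earlier position."""
    return (beta * omega / (2 * (1 - G**2)) * (u**2 - 2*G*u*v + G**2 * v**2)
            + beta_w * omega * v**2 / 2)

odd = (A(x, xw) - A(xw, x)) / 2                 # O(sigma) = [A(sigma) - A(sigma*)]/2
heat = omega * x**2 / 2 - omega * xw**2 / 2     # Q(sigma), Eq. (Phex.1)
delta_beta = beta_w - beta                      # affinity, Eq. (aff.1)

# Eq. (TRSB.6): the odd part equals -(1/2) Delta beta Q(sigma)
assert sp.simplify(odd + delta_beta * heat / 2) == 0
print("decomposition verified")
```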
Therefore, the invariance under time reversal in\n$\\mu(\\sigma)$ is explicitly broken by the exponential\nfactor involving ${\\cal Q}(\\sigma)$ in Eq.~(\\ref{TRSB.7}).\nOnce this is established, the FT in the form of Eq.~(\\ref{Phex.21}) \nis recovered straightforwardly. The following\nremarks are in order:\n\n\\begin{enumerate}\n\n\n\\item The affinity $\\Delta \\beta$\ndriving the heat flow plays the role of the external field\ncausing the explicit breaking of the $\\mathbb{Z}_2$ invariance.\n\n\\item The $\\mathbb{Z}_2$\ninvariant measure $\\mu_0(\\sigma)$ of Eq.~(\\ref{TRSB.8}) is not time translation\ninvariant, as might have been naively expected, since the action \n${\\cal E}(\\sigma)$ in Eq.~(\\ref{TRSB.5}) \ndepends both on $\\tau$, through $G$, and on $t_w$, through $\\beta_w$.\nComputing $P(Q)$ from Eq.~(\\ref{TRSB.7}) one has\n\\be\nP(Q) = \\sum_{\\sigma} \\mu(\\sigma) \\delta({\\cal Q} - Q) = \ne^{\\frac{1}{2} \\Delta\\beta Q} \\sum_{\\sigma} \\mu_0(\\sigma) \\delta({\\cal Q} - Q),\n\\label{TRSB.9}\n\\ee\nand, comparing with Eq.~(\\ref{Phex.16}), one obtains\n\\be\n\\sum_{\\sigma} \\mu_0(\\sigma) \\delta({\\cal Q} - Q) =\n\\frac{\\sqrt{\\lambda_+\\lambda_-}}{\\pi} \nK_0\\left (\\frac{\\lambda_+ +\\lambda_-}{2}|Q| \\right ).\n\\label{TRSB.10}\n\\ee\n\n\\item From Eq.~(\\ref{Phex.16bis}) it is straightforward to derive the\nmoment generating function\n\\be\n\\langle e^{\\vartheta {\\cal Q}} \\rangle =\n\\sqrt {\\frac{\\lambda_+\\lambda_-}{(\\vartheta + \\lambda_-)(\\lambda_+ - \\vartheta)}},\n\\label{CGF.2}\n\\ee\nimplying that the cumulant generating function \n$\\psi(\\vartheta) = \\ln \\langle e^{\\vartheta {\\cal Q}} \\rangle$\nsatisfies the symmetry\n\\be\n\\psi(\\vartheta) = \\psi(-\\vartheta - \\Delta \\beta),\n\\label{CGF.2bis}\n\\ee\nanalogous to the one found for a particle coupled to two thermostats at different temperatures~\\cite{Farago}.\n\n\n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\\section{Two Brownian 
oscillators}\n\\label{twobrownian}\n\n\nLet us, next, see how the overall heat fluctuations are modified when\nthere are internal degrees of freedom. We consider first the case of\ntwo uncoupled oscillators with frequencies $\\omega_1$ and $\\omega_2$\nand, in the next section, we consider the limit of a large number of\noscillators. \n\nAs before, initially the system is in equilibrium at the temperature $T_0$ and at the time $t=0$\nis put in contact with the reservoir at the temperature $T$. Denoting by\n$Q_1$ and $Q_2$ the amounts of heat exchanged by each oscillator,\nthe probability that the system as a whole exchanges\nthe heat $Q$ is given by \n\\be\nP(Q) = \\int_{-\\infty}^{\\infty} dQ_1\\, P_1(Q_1)P_2(Q -Q_1),\n\\label{2osc.1}\n\\ee\nwhere $P_i(Q_i)$, with $i=1,2$, are the probabilities \npertaining to each component. \nInserting the expression~(\\ref{Phex.16}) and changing the integration variable from\n$Q_1$ to $y = Q_1 - Q/2$, this can be put in a form analogous to Eq.~(\\ref{Phex.16}),\n\\be\nP(Q) = e^{\\frac{1}{2}\\overline{\\Delta \\beta}Q} R(Q),\n\\label{moz.1}\n\\ee\nwhere\n\\be\n\\overline{\\Delta \\beta} = \\frac{1}{2}(\\Delta \\beta_1 + \\Delta \\beta_2),\n\\label{moz.01}\n\\ee\nand \n\\be\nR(Q) = \\int_{-\\infty}^{\\infty} dy \\, W_1(y + Q/2) W_2(y - Q/2) \ne^{\\frac{1}{2}(\\Delta \\beta_1 - \\Delta \\beta_2)y},\n\\label{moz.2}\n\\ee\nwith\n\\be\nW_i(x) = \\frac{\\sqrt{\\lambda_{+,i}\\lambda_{-,i}}}{\\pi} \nK_0\\left (\\frac{\\lambda_{+,i} +\\lambda_{-,i}}{2}|x| \\right ).\n\\label{moz.3}\n\\ee\nHere, $\\lambda_{\\pm,i}$ are defined as in Eq.~(\\ref{Phex.14}), the index $i=1,2$\ndenoting the oscillator frequencies, $\\omega_1$ and $\\omega_2$, respectively, and $\\Delta \\beta_i$ is defined by Eqs.~(\\ref{HO.6bis}) and~(\\ref{aff.1})\nwith frequency $\\omega_i$.\nThe resulting AF is given by\n\\be\n{\\cal K}(Q) = \\overline{\\Delta \\beta}Q\n+ \\ln \\left (\\frac{R(Q)}{R(-Q)} \\right ).\n\\label{moz.4}\n\\ee \nHence, if the two oscillators are 
equal ($\\omega_1 = \\omega_2$),\nthe second term disappears, since $R(Q)$ becomes an even function,\nand the linear result~(\\ref{aff.2}) is recovered. If they are\nunequal, $R(Q)$ is not even under the change of sign of $Q$ and the second term in \nthe above equation gives a nonlinear contribution.\nThe modifications introduced by this second contribution are\nillustrated by Fig.~\\ref{fig_AF}, obtained with parameters\n$\\omega_1=0.5$ and $\\omega_2=0.2$, $T_0=10$, $T=1$ at times\n$t_w=10$ and $t=12$. Around $Q=0$ one observes a linear behavior with slope\n$\\overline{\\Delta \\beta}$ (see dashed line in the\ninset), while for large values of $Q$ the AF shows a \nnonmonotonic behavior.\n\n\\begin{figure}[!tb]\n\\includegraphics[width=0.9\\columnwidth,clip=true]{AF_2oscil.eps}\n\\caption{Asymmetry function for two Brownian oscillators with\n frequencies $\\omega_1=0.5$ and $\\omega_2=0.2$, for $T_0=10$, $T=1$,\n $t_w=10$ and $t=12$. The line represents the analytical expression\n Eq.~(\\ref{moz.4}), while the points are numerical simulations. 
In\n the inset a zoom of the region at small $Q$ is shown.}\n\\label{fig_AF}\n\\end{figure}\n\n\n\nIn order to expose the connection between the presence of internal structure\nand the equilibrium problem mentioned in the Introduction, let us go back\nto the microscopic measure.\nWith two oscillators the phase space enlarges to the set of pairs $\\Omega = \\{ \\sigma_1,\\sigma_2 \\}$\nand the corresponding action, due to the absence of coupling, is additive\n\\bea\n{\\cal A}(\\sigma_1,\\sigma_2) = {\\cal A}_1(\\sigma_1) + {\\cal A}_2(\\sigma_2).\n\\label{FT2.1}\n\\eea\nThe two terms in the right hand side are given by Eq.~(\\ref{TRSB.2}), taking into\naccount that the frequencies $\\omega_1$ and $\\omega_2$ are different, that is\n\\bea\n{\\cal A}_i(\\sigma_i) &=& \\frac{\\beta \\omega_i}{2(1-G_i^2)} \n\\left [x_i^2 -2G_ix_ix_{w,i} + G_i^2x_{w,i}^2 \\right ] \n\\nonumber \\\\\n&+& \\beta_{w,i} \\frac{1}{2} \\omega_i x_{w,i}^2.\n\\label{FT2.2}\n\\eea\nCarrying out the decomposition into even and odd parts, as in Eq.~(\\ref{TRSB.4}),\n\\be\n{\\cal A}(\\sigma_1,\\sigma_2) = {\\cal E}(\\sigma_1,\\sigma_2) + {\\cal O}(\\sigma_1,\\sigma_2),\n\\label{FT2.3}\n\\ee\nthe measure can be written as \n\\be\n\\mu(\\sigma_1,\\sigma_2) = \\mu_0(\\sigma_1,\\sigma_2) e^{-{\\cal O}(\\sigma_1,\\sigma_2)},\n\\label{FT2.4}\n\\ee\nwhere \n\\be\n\\mu_0(\\sigma_1,\\sigma_2) = \\frac{1}{Z} e^{-{\\cal E}(\\sigma_1,\\sigma_2)},\n\\label{FT2.5}\n\\ee\nis even under $* : (\\sigma_1,\\sigma_2) \\mapsto (\\sigma^*_1,\\sigma^*_2)$, while\n\\be\n{\\cal O}(\\sigma_1,\\sigma_2) = -\\frac{1}{2} [ \\Delta \\beta_1 {\\cal Q}_1(\\sigma_1) +\n\\Delta \\beta_2 {\\cal Q}_2(\\sigma_2) ]\n\\label{FT2.6}\n\\ee\nis odd. 
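Because the two oscillators are uncoupled, the convolution~(\\ref{2osc.1}) can also be probed by direct sampling, adding independently generated single-oscillator heats. A sketch with the parameters of Fig.~\\ref{fig_AF} (the exact OU propagator is used for the sampling; the analytic benchmark is the additivity of the single-oscillator mean heats, $-(b_1+b_2)/2$):

```python
import numpy as np

rng = np.random.default_rng(1)
T0, T, t_w, t = 10.0, 1.0, 10.0, 12.0       # parameters of Fig. AF_2oscil
n = 400_000

def sample_heat(omega):
    """Heat exchanged in [t_w, t] by one oscillator (exact OU sampling)."""
    def evolve(x, s):
        G = np.exp(-omega * s)
        return G * x + np.sqrt((T / omega) * (1 - G**2)) * rng.standard_normal(n)
    x0 = np.sqrt(T0 / omega) * rng.standard_normal(n)   # equilibrium at T0
    x_w = evolve(x0, t_w)
    x_t = evolve(x_w, t - t_w)
    return 0.5 * omega * (x_t**2 - x_w**2)

def b_i(omega):
    """Coefficient b of Eq. (Phex.11) for this oscillator."""
    G2, Gw2 = np.exp(-2 * omega * (t - t_w)), np.exp(-2 * omega * t_w)
    return (T0 - T) * Gw2 * (1 - G2)

Q = sample_heat(0.5) + sample_heat(0.2)     # total heat, Eq. (2osc.1)
print(Q.mean(), -(b_i(0.5) + b_i(0.2)) / 2)
```

A histogram ratio of the sampled $Q$ and $-Q$ reproduces the nonlinear AF of Fig.~\\ref{fig_AF}; here only the mean is asserted, since it is statistically robust.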
Hence, \n${\\cal W}(\\sigma_1,\\sigma_2) = - 2{\\cal O}(\\sigma_1,\\sigma_2)$ and from Eq.~(\\ref{I.3quater})\nit follows that\n\\be\n\\frac{P(-Q)}{P(Q)} = e^{-{\\cal K}(Q)},\n\\label{flu.1}\n\\ee\nwith\n\\be\n{\\cal K}(Q) = -\\ln \\langle e^{-\\sum_i \\Delta \\beta_i {\\cal Q}_i} | {\\cal Q} = Q \\rangle.\n\\label{flu.2}\n\\ee \nThe formal\nsimilarity with Eq.~(\\ref{equi.02}) is evident, with the affinities $\\Delta \\beta_i$ playing the role\nof the site dependent external field $\\{B_i\\}$ and the partial heats ${\\cal Q}_i$ that of the local\norder parameter $s_i$. Accordingly, the above expression simplifies in the particular case \nakin to that of the uniform external field, that is for $\\omega_1=\\omega_2$ or\nfor $t_w=0$, yielding\n$\\Delta \\beta_i = \\beta_0 - \\beta$ for all $i$.\n\n\n\n\n\n\\section{Extended system}\n\\label{extended}\n\n\n\nIn this section we consider the case of a large number of oscillators.\nThe aim is to analyse the qualitatively new features arising in\nthe behavior of fluctuations when the size of the system becomes\nlarge. As is usually the case for large systems, fluctuations of\nextensive quantities, such as heat, obey a large deviation\nprinciple~\\cite{hugo,ld}. The nontrivial feature, unexpected for a\nlinear system, is that the large deviation function exhibits a\nsingularity corresponding to a condensation\ntransition~\\cite{Gonnella,Jo,Pechino}. This means, as briefly\nanticipated in the Introduction, that there exists a critical\nthreshold $Q_c$ of the exchanged heat, such that fluctuations above\nthreshold are due to a {\\it single} oscillator. We give a careful\ntreatment of the saddle point computation involved in obtaining\nthe large deviation function and we study the phase diagram of the\ntransition. Mathematical details are collected in\nAppendix~\\ref{app:crit}.\n\n\nWe consider an extended system of linear size $L$ and volume $V=L^d$, where $d$ is\nthe space dimensionality. 
We suppose that this system is linear and that the normal\nmodes decomposition with periodic boundary conditions gives rise to a set of harmonic \ncomponents for the low lying modes \nwith frequencies obeying the dispersion relation~\\cite{note} \n\\be\n\\omega(\\mathbf{k}) = k^{\\alpha} + \\omega_0,\n\\label{dis.1}\n\\ee\nwhere $\\omega_0 > 0$, $\\mathbf{k} = \\frac{2\\pi}{L} \\mathbf{n}, \\,\\,n_i=0,\\pm 1,\\pm 2,\\ldots$ and $\\alpha >0$.\nJust to fix ideas, a dispersion relation of this type arises in the Gaussian model\nor Van Hove theory of critical phenomena~\\cite{Ma}.\nThe $\\mathbf{k}$-space volume per mode is $(2 \\pi)^d/V$. Assuming that there exists \nan underlying lattice structure with lattice spacing $a_0$, the first Brillouin zone is bounded by\n$\\Lambda = 2\\pi/a_0$ and the total number of modes is given by $N=V/a_0^d$.\nSince the modes are independent, the\nprobability of a generic microscopic state at the time $t_w$ is given by the product\nmeasure\n\\be\nP(\\{x_{\\mathbf{k}}\\},t_w) = \\prod_{\\mathbf{k}} \\sqrt{\\frac{\\beta_{\\mathbf{k},w} \\omega(\\mathbf{k})}{2 \\pi}}\ne^{-\\beta_{\\mathbf{k},w}{\\cal H}_{\\mathbf{k}}(x_{\\mathbf{k}})},\n\\label{dis.1bis}\n\\ee\nwhere, according to Eq.~(\\ref{HO.10}),\n\\be\n\\beta_{\\mathbf{k},w} = [(T_0-T)e^{-2\\omega(\\mathbf{k})t_w} + T ]^{-1}\n\\label{HO.10bis}\n\\ee\nis the inverse effective temperature of the mode $\\mathbf{k}$\nand ${\\cal H}_{\\mathbf{k}}(x_{\\mathbf{k}}) = (1/2)\\omega(\\mathbf{k})x^2_{\\mathbf{k}}$ is the single mode\nenergy. Hence, Eq.~(\\ref{aff.3}) generalizes to\n\\be\n\\frac{P(\\{x_{\\mathbf{k}}\\},t_w)}{P(\\{x_{\\mathbf{k},w}\\},t_w)} = e^{-\\sum_{\\mathbf{k}} \\beta_{\\mathbf{k},w} {\\cal Q}_{\\mathbf{k}}},\n\\label{dis.1tris}\n\\ee\nwhere ${\\cal Q}_{\\mathbf{k}} = {\\cal H}_{\\mathbf{k}}(x_{\\mathbf{k}}) - {\\cal H}_{\\mathbf{k}}(x_{\\mathbf{k},w})$ is the heat\nexchanged by the mode $\\mathbf{k}$, and the sum is restricted to the first Brillouin zone. 
\nInserting into Eq.~(\\ref{I.002}) we recover Eq.~(\\ref{I.003}),\n\\be\n{\\cal K}(Q) = - \\ln \\bigl\\langle e^{-\\sum_{\\mathbf{k}} \\Delta \\beta_{\\mathbf{k}} {\\cal Q}_{\\mathbf{k}}} |{\\cal Q} = Q \\bigr\\rangle.\n\\label{dis.001}\n\\ee\nComparing with Eq.~(\\ref{equi.02}), \nwe see that the differences $\\Delta \\beta_{\\mathbf{k}}=\\beta_{\\mathbf{k},w}-\\beta$\nplay the role of the site dependent external field in the equilibrium problem,\nas remarked in section~\\ref{twobrownian}, and that the linear form of the AF\nis recovered when these differences become independent\nof $\\mathbf{k}$, as for instance for $t_w=0$, or when the oscillators are\nidentical, i.e. $\\alpha = 0$ (see Appendix~\\ref{app:alpha0}).\n\nLet us next see what form the AF takes when $N$ becomes large.\nGeneralizing Eq.~(\\ref{2osc.1}) and keeping $V$ finite,\nthe probability that the system exchanges the amount of heat $Q$\nis given by \n\\bea\n& & P(Q,V) = \\int_{-\\infty}^{\\infty} \\prod_{\\mathbf{k}} dQ_{\\mathbf{k}}\\, P_{\\mathbf{k}}(Q_{\\mathbf{k}})\\,\n\\delta(Q - \\sum_{\\mathbf{k}}Q_{\\mathbf{k}}) \\nonumber \\\\\n& = & \\int_{-i\\infty}^{i\\infty}\\frac{d\\lambda}{2\\pi i} \\, e^{-\\lambda Q} \n\\prod_{\\mathbf{k}} \\sqrt{\\frac{\\lambda_{+}(k)\\lambda_{-}(k)}{[\\lambda_{+}(k) -\\lambda][\\lambda_{-}(k) +\\lambda]}}, \\nonumber \\\\\n\\label{dis.3}\n\\eea\nwhere $\\lambda_{\\pm}(k)$ are defined in Eq.~(\\ref{Phex.14}),\nevaluated for $\\omega(\\mathbf{k})$ as in Eq.~(\\ref{dis.1}). \nDenoting by $\\lambda^+_0$ and $\\lambda^-_0$ the edges of the two branches\nof the spectrum closest to the origin, that is \n\\be\n\\lambda^+_0 = \\min_{0\\leq k\\leq \\Lambda} \\{ \\lambda_{+}(k) \\}, \\,\\,\\, \n\\lambda^-_0 = \\min_{0\\leq k\\leq \\Lambda} \\{ \\lambda_{-}(k) \\},\n\\label{dis.5}\n\\ee\nthe contour of integration can be deformed to an arbitrary line $\\Gamma$ on the complex plane, provided\nit crosses the real axis within the gap $[-\\lambda^-_0,\\lambda^+_0]$. 
\nThis allows us to rewrite the above integral as\n\\be\nP(Q,V) = \\int_{\\Gamma}\\frac{d z}{2\\pi i} \\, e^{-V[ zq + L(z;V)]},\n\\label{dis.6}\n\\ee\nwhere $q=Q/V$ is the density of the exchanged heat and\n\\be\nL(z;V) = \\frac{1}{2V}\\sum_{0\\leq\\mathbf{k}\\leq\\Lambda}\n\\ln \\left[ \\frac{[\\lambda_{+}(k) - z][z + \\lambda_{-}(k)]}{\\lambda_{+}(k) \\lambda_{-}(k)} \\right].\n\\label{dis.4bis}\n\\ee\nThe large $V$ behavior of $P(Q,V)$ can be obtained by \nusing the steepest descent method to \nestimate the integral in Eq. \\eqref{dis.6} \\cite{nota}. \nWhen $V\\gg1$ \nthe wave vector $\\mathbf{k}$ can be assumed continuous and \nthe sum in Eq. \\eqref{dis.4bis} approximated by an \nintegral with an error of $O(1/V)$. \nThe neglected $O(1/V)$ terms in the exponent, however, give an\n$O(1)$ contribution to $P(Q,V)$,\nresulting in an undetermined normalization of $P(Q,V)$,\na memory of the discrete nature of $\\mathbf{k}$.\n\nThe function $L(z;V)$ contains the dangerous boundary terms \n$(1/V)\\ln[\\lambda_0^{+} - z]$ and\n$(1/V)\\ln[\\lambda_0^{-}+ z]$.\nIf the steepest descent path $\\Gamma$ does not come too close to either edge \n$z = \\pm \\lambda_0^{\\pm}$ of the gap,\nthese terms just give an $O(1/V)$ correction which can be included into the normalization factor.\nConsequently, in this case the function\n\\be\n\\label{dis.8}\n L(z) = \\frac{1}{2} \\int_0^{\\Lambda} d\\mu(k) \\,\n\\ln \\left[ \\frac{[\\lambda_{+}(k) - z][z + \\lambda_{-}(k)]}{\\lambda_{+}(k) \\lambda_{-}(k)} \\right],\n\\ee\nwith \n$d\\mu(k) = \\Upsilon_d k^{d-1}dk$ and\n$\\Upsilon_d = [2^{d-1}\\pi^{d/2} \\Gamma(d/2)]^{-1}$ coming from the\nangular integration,\nis a uniform approximation to the function $L(z;V)$ for $V\\gg 1$.\nHere, $\\Gamma(x)$ denotes the Euler gamma function.\n\n\nDenoting by $\\phi(z;q) = zq + L(z)$ and by primes derivatives with respect to $z$, \nit is straightforward to verify\nthat $\\mathrm{Im}\\, \\phi^{\\prime}(z;q) \\sim \\mathrm{Im}\\, z$, \nimplying that 
the stationary point $z^*$ of $\\phi(z;q)$ is on the real\naxis. Thus, setting $z = x + iy$, the coordinate $x^*$ of the saddle point is \nobtained by solving the equation $\\phi^{\\prime}(x^*;q) = 0$,\nwhich explicitly reads\n\\begin{eqnarray}\nq & = & - L'(x^*) \\nonumber \\\\\n& = & \\frac{1}{2} \\int_0^{\\Lambda} d\\mu(k) \\, \\left [ \\frac{1}{\\lambda_{+}(k) - x^*}\n- \\frac{1}{\\lambda_{-}(k) + x^*} \\right ].\n\\label{Sdlpt.7bis}\n\\end{eqnarray}\nThe function $-L'(x)$ is monotonically increasing in the interval\n$x\\in [-\\lambda_0^-,\\lambda_0^+]$, so Eq. \\eqref{Sdlpt.7bis} admits a solution \nprovided that $q$ lies between the limiting values\n\\be\nq_c^{\\pm} = -L'(\\pm \\lambda_0^{\\pm}),\n\\label{Sdlpt.01bis}\n\\ee\nwith $q_c^{-} < 0 < q_c^{+}$. \nIn this case the steepest descent path in the neighborhood of the saddle point is parallel to the \nimaginary axis,\nand a straightforward calculation leads to the asymptotic result for $V\\gg 1$\n\\be\n\\label{eq:PQ_norm}\n P(Q,V) = \\frac{1}{\\sqrt{-2\\pi V L''(x^*)}}\\, e^{-V [qx^* + L(x^*)]}.\n\\ee\nIf both $q_c^{\\pm}$ diverge this result is valid\nfor arbitrary finite values of $q$, because $x^* $ is always inside the gap $[-\\lambda^-_0,\\lambda^+_0]$,\nand far from the edges.\nIf, instead, one or both $q_c^{\\pm}$ are finite, the saddle point $x^*$ may reach \nthe boundary of the gap $[-\\lambda^-_0,\\lambda^+_0]$,\nand leave it for $q < q_c^{-}$ or $q > q_c^{+}$.\nWhen this occurs the asymptotic approximation in Eq. 
\\eqref{eq:PQ_norm} is no longer valid.\n\n\nThe boundary values $q_c^{\\pm}$ are finite or infinite depending on whether the singularities\nof the integrand function in $L'(x)$ are integrable or not.\nMore specifically, let us denote by $k^{\\pm}_0$ the wave vectors at which \n$\\lambda_{\\pm}(k)$ attain their minimum value,\nthat is\n$\\lambda_{\\pm}(k^{\\pm}_0) = \\lambda^{\\pm}_0$.\nThen, in the neighborhoods of $k^{\\pm}_0$, \n\\be\n\\lambda_{{\\pm}}(k) - \\lambda^{\\pm}_0 \\sim \\begin{cases}\n C\\, (k - k^{\\pm}_0)^{2n}, & \\text{for}\\ k^{\\pm}_0 > 0; \\\\\n C\\, k^{\\alpha}, & \\text{for}\\ k^{\\pm}_0 = 0,\n \\end{cases}\n \\label{Sdlpt.10}\n\\ee\nwhere $C$ is a positive constant, $n=1,2,\\dotsc$ and $\\alpha > 0$.\nTherefore, $q_c^{\\pm}$ diverge if $k^{\\pm}_0 > 0$, or if $k^{\\pm}_0=0$ and $d \\leq \\alpha$, while\nthey are finite if $k^{\\pm}_0=0$ and $d > \\alpha$. \nWhich case occurs depends on the parameters of the quench, $T_0,T,\\tau,t_w$; in general,\nthis manifold of parameters is partitioned into phases\ndistinguished by $q_c^{\\pm}$ being finite or infinite. \n\nFor what follows, and before discussing how the steepest descent calculation must be modified, it \nis useful to get some insight into the meaning of the two different cases.\nRecalling that, apart from a factor $2$ (cf. Eq.~(\\ref{Decomp.5})), $\\lambda_+(k)$ and\n$-\\lambda_-(k)$ are the inverse average heats absorbed or released, it is evident\nfrom Eq.~(\\ref{Sdlpt.7bis}) that\nif $q$ coincides with the average heat density $\\langle q \\rangle$, then\n$x^*(\\langle q \\rangle) = 0$, implying $-L'(0) = \\langle q \\rangle$. 
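The saddle point equation~(\\ref{Sdlpt.7bis}) is easily explored numerically. A minimal sketch (illustrative parameters $\\alpha=2$, $d=3$, $\\omega_0=0.1$, assumed here for concreteness; scipy quadrature and root bracketing) computes the gap edges~(\\ref{dis.5}), cross-checks the identity $-L'(0) = \\langle q \\rangle$ against the per-mode average heats $-b(k)/2$, and solves for $x^*(q)$ at a heat density below the mean:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gamma

# Illustrative parameters: omega(k) = k^alpha + omega_0 in d dimensions
alpha, d, omega0, Lam = 2.0, 3.0, 0.1, np.pi
T0, T, t_w, tau = 10.0, 1.0, 1.0, 1.0
Ups = 1.0 / (2**(d - 1) * np.pi**(d / 2) * gamma(d / 2))   # Upsilon_d of Eq. (dis.8)

def mode(k):
    """lambda_+(k), lambda_-(k) and b(k) for the dispersion relation (dis.1)."""
    w = k**alpha + omega0
    G2, Gw2 = np.exp(-2 * w * tau), np.exp(-2 * w * t_w)
    Tw = (T0 - T) * Gw2 + T
    a = T * Tw * (1 - G2)
    b = (T0 - T) * Gw2 * (1 - G2)
    r = np.sqrt(b * b + 4 * a)
    return (r + b) / (2 * a), (r - b) / (2 * a), b

def minus_Lprime(x):
    """-L'(x), the right-hand side of Eq. (Sdlpt.7bis)."""
    f = lambda k: 0.5 * Ups * k**(d - 1) * (
        1 / (mode(k)[0] - x) - 1 / (mode(k)[1] + x))
    return quad(f, 0, Lam, limit=200)[0]

# Gap edges of Eq. (dis.5), located on a fine k-grid
ks = np.linspace(0.0, Lam, 4001)
lp, lm, _ = mode(ks)
lam0_p, lam0_m = lp.min(), lm.min()

# <q> = -L'(0), cross-checked against the average of the per-mode heats -b(k)/2
q_mean = minus_Lprime(0.0)
q_direct = -0.5 * quad(lambda k: Ups * k**(d - 1) * mode(k)[2], 0, Lam)[0]

# Solve -L'(x*) = q for a fluctuation below the mean; -L' is monotonically increasing
q = q_mean - 0.2
x_star = brentq(lambda x: minus_Lprime(x) - q, -lam0_m + 1e-4, lam0_p - 1e-4)
print(q_mean, q_direct, x_star)
```

The agreement of `q_mean` and `q_direct` reflects the exact identity $1/\\lambda_+(k) - 1/\\lambda_-(k) = -b(k)$.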
\nConsequently, if $q$ differs\nfrom $\\langle q \\rangle$ then $x^*(q) \\neq 0$ and \n\\be\n\\lambda^*_{\\pm}(k,q) = \\lambda_{\\pm}(k) \\mp x^*(q)\n\\label{0.02}\n\\ee\nacquire the meaning of the inverse average heats absorbed or released in new conditions, such that\n$q$ would be the new average total heat density, exactly as the $\\{B_i^*\\}$ of section~\\ref{zero}\nhave been recognized to be the shifted external fields necessary to render the fluctuation\n$m$ equal to the average magnetization per spin.\nIn other words, a deviation of $q$ \nfrom the average $\\langle q \\rangle$\nbrings in a shift by $x^*(q)$ of the inverse heats exchanged by the single modes. \nAccordingly, when $x^*(q)$ approaches the edges of the gap the correspondingly shifted edge values $\\lambda_0^{\\pm *}$ vanish,\nwhich means that the corresponding edge modes give an {\\it infinite} contribution to the exchanged heat.\nThis may happen either because the fluctuation $q$ itself is infinite, or because the contributions\nof all the modes, other than the edge ones, can sum up at most to the finite amounts $q_c^{\\pm}$. In the\nlatter case, if $|q|$ goes above the thresholds $|q_c^{\\pm}|$, in order to make up\nfor the finite difference $|q| - |q_c^{\\pm}|$ a spike contribution must come from the edge modes\nat $k_0^{\\pm}$. This phenomenon is the condensation of fluctuations\nin the edge mode since, as anticipated in the Introduction and at the beginning of\nthis section, the entire amount of\nthe fluctuation above the critical threshold comes from a single degree of freedom. 
\nThe mechanism of the transition can be recognized to be the same as that of\nBose-Einstein condensation~\\cite{Huang}.\nMore generally, condensation of fluctuations\nhas been found in widely different contexts such as information theory~\\cite{Merhav},\nfinance~\\cite{Filiasi} and statistical mechanics, encompassing\nboth equilibrium and out-of-equilibrium situations~\\cite{JoEPL}.\n\nWhen one or both $q_c^{\\pm}$ are finite, the steepest descent calculation must be\nmodified.\nThis case will be dealt with in the next subsection. Returning to the case in which\nEq.~(\\ref{eq:PQ_norm}) holds, neglecting subdominant terms and taking the ratio $P(-Q)\/P(Q)$,\none obtains the following explicit form of the AF\n\\be\n\\frac{1}{V}{\\cal K}(Q) = -[x^*(-q) + x^*(q)]q + [L(x^*(-q)) - L(x^*(q))],\n\\label{condns.7}\n\\ee\nwhich is clearly reminiscent of Eq.~(\\ref{G.5}) in sect.~\\ref{zero}.\nIn order to expose the analogy, let us\nsubtract equations~(\\ref{0.02}) one from the other and take the average over $k$,\nobtaining\n\\be\nx^*(q) = \\frac{1}{2} \\left (\\overline{\\lambda_+} - \\overline{\\lambda_-} \\right )\n+ \\frac{1}{2} \\left [\\overline{\\lambda_-^*(q)} - \\overline{\\lambda_+^*(q)} \\right ],\n\\label{0.03}\n\\ee\nwhere\n\\be\n\\overline{\\lambda_{\\pm}} = \\frac{1}{{\\cal N}}\\int_0^{\\Lambda} d\\mu(k) \\, \\lambda_{\\pm}(k), \\,\\,\\,\n{\\cal N} = \\int_0^{\\Lambda} d\\mu(k) \\, 1,\n\\label{0.04}\n\\ee\nand similar expressions hold for $\\overline{\\lambda_{\\pm}^*(q)}$.
Inserting into Eq.~(\\ref{condns.7}),\nwe obtain\n\\be\n\\frac{1}{V}{\\cal K}(Q) = \\left (\\overline{\\lambda_-} - \\overline{\\lambda_+} \\right )q\n+ \\left [\\Psi(-q) - \\Psi(q) \\right ],\n\\label{0.05}\n\\ee\nwhere\n\\be\n\\Psi(q) = L(x^*(q)) +\n\\frac{1}{2} \\left [ \\overline{\\lambda_-^*(q)} - \\overline{\\lambda_+^*(q)} \\right ]q.\n\\label{0.06}\n\\ee\nHence, comparing with Eqs.~(\\ref{G.03}) and~(\\ref{moz.4}), we can recognize the same structure:\nthe prefactor $\\left (\\overline{\\lambda_-} - \\overline{\\lambda_+} \\right )$\nof the linear contribution (the same appearing in the single-oscillator case)\nplays the role of the external average field $\\mathbb{B}$, while the nonlinear\nterm $\\left [\\Psi(-q) - \\Psi(q) \\right ]$ arises from ``free energy'' contributions.\nAs remarked above, the presence of this latter term is due to the $k$-dependence of $\\Delta \\beta_k$,\nwhich plays the same role as the $i$-dependence of $B_i$.\n\n\n\\subsection{Condensation transition}\n\n\nOn physical grounds one can argue that condensation at finite\n$q_c^{-}$ can occur only in cooling experiments, where the final\ntemperature is lower than the initial one, while condensation at\nfinite $q_c^{+}$ can occur only in heating experiments. We do not have\na proof of this, but numerical analysis of the conditions\nfor condensation confirms this conjecture.\n\nLet us thus concentrate on the case with $q_c^{+} = \\infty$ and\n$q_c^{-}$ finite, which occurs for $k_0^- = 0$ and $d > \\alpha$. The\nanalysis of the opposite case, in which $q_c^{+}$ is finite and\n$q_c^{-} = -\\infty$ (or both are finite, if such a case exists), is\nstraightforward.\n\\begin{figure}[!tb]\n\\includegraphics[width=0.9\\columnwidth,clip=true]{path_cusp2.eps}\n\\caption{Schematic representation of the steepest descent path of\n integration.
If $x^*=x_1^*\\ge -\\lambda_0^-$, the integration contour\n goes through $x_1^*$, where $\\phi'=0$ ($\\Gamma_1$ curve).\n If $x^*=x_2^*< -\\lambda_0^-$,\n the integration contour develops a cusp and sticks\n to $-\\lambda_0^-$ ($\\Gamma_2$ curve).}\n\\label{cusp}\n\\end{figure}\nWhen $q_c^-$ is finite, Eq.~(\\ref{Sdlpt.7bis})\ndoes not admit a solution with $x^*\\in[-\\lambda_0^{-}, \\lambda_0^{+}]$ whenever $q < q_c^-$.\nThe problem is well known from the theory of Bose-Einstein\ncondensation of an ideal gas of bosons~\\cite{Huang} or from\nthe ``sticking'' of the saddle point to a singularity\nin the solution of the spherical model of ferromagnetism~\\cite{Berlin}.\nIt arises because\nwhen $q < q_c^{-} = -L'(-\\lambda_0^{-})$ the steepest descent path cannot pass through the\nstationary point $x^*< -\\lambda_0^{-}$ of $\\phi(z;q) = zq + L(z)$\nand must traverse the real axis at the gap edge $z=-\\lambda_0^{-}$.\nSince $\\phi'(-\\lambda_0^-;q) = q - q_c^{-}$\nis negative for $q < q_c^{-}$, the steepest descent path at $z=-\\lambda_0^{-}$ bends\ntoward the negative real axis, forming\nthe cusp characteristic of saddle-point sticking at the gap edge (see Fig.~\\ref{cusp}).\nThe large-$V$ behavior of $P(Q,V)$ is dominated by the\nneighborhood of the gap edge because $\\phi(-\\lambda_0^{-};q) < \\phi(x^*;q)$,\nleading to the asymptotic behavior for $V\\gg 1$:\n\\be\n\\label{eq:PQ_cond1}\n P(Q,V)\n = \\frac{1}{\\sqrt{-\\pi V \\bigl(q - q_c^{-}\\bigr)}}\\,\n e^{-V[ -\\lambda_0^{-} q + L(-\\lambda_0^{-})]},\n\\ee\nvalid for $q < q_c^{-}$.\n\nIn fact, this expression, and Eq.~\\eqref{eq:PQ_norm} valid\nfor $q > q_c^{-}$, hold only for $x^*$ not too close to the gap edge\n$-\\lambda_0^{-}$, that is, in the condensed and\nnormal phase, respectively, for $q$ not too close to $q_c^{-}$. The analysis of the\nasymptotic behavior of $P(Q,V)$ as $V\\gg 1$ for all values of $q$,\nincluding close to $q_c^{-}$, requires some care.
Details can be\nfound in Appendix \\ref{app:crit}.\n\n\\begin{figure}[!tb]\n\\includegraphics[width=0.9\\columnwidth,clip=true]{pq_new.eps}\n\\caption{Comparison of the saddle point computation of $P(q)$ with the\n numerical computation of the same quantity for $\\omega_0=0.1$, $T_0=100$, $T=0.1$,\n $t_w=3$, $\\tau=7$, for a system with $L=41$ in $d=2$ ($N=1681$\n oscillators), and with $\\alpha=1$. (The quantity $q_\\delta^-$ is defined in Eq.~(\\ref{eq:q_d})).}\n\\label{fig20}\n\\end{figure}\n\nAdding and subtracting $\\lambda_0^{-}q_c^-$ in the exponent of Eq. \\eqref{eq:PQ_cond1}, \n$P(Q;V)$ in the condensed phase far from the critical point can be written as\n\\be\n P(Q,V) = \\frac{e^{\\lambda_0^- (Q-Q_c^-)}}{\\sqrt{\\pi \\bigl|Q - Q_c^{-}\\bigr|}} \\,\ne^{V[\\lambda_0^- q_c^- - L(-\\lambda_0^-)]},\n\\label{Img.21}\n\\ee \nwhere $Q_c^- = Vq_c^-$.\nComparing the first term with the expression~(\\ref{Decomp.3})\nfor the distribution in the purely cooling process we have\n\\be\n P(Q,V) = P_{k=0,-}(Q-Q^-_c)\\, P(Q^-_c,V),\n\\label{Img.22}\n\\ee\nwhere, up to normalization factors, \n\\be\n P(Q_c^-,V) \\sim e^{V [\\lambda_0^{-} q_c^{-} - L(-\\lambda_0^{-})]}.\n\\ee\nThis means that negative fluctuations below the critical lower threshold $Q^-_c$\ncondense into the cooling contribution of the $\\mathbf{k} =0$ mode for the exceeding\npart $(Q - Q^-_c)$, while the contribution of all the other modes is locked onto\n$Q^-_c$.\nThe comparison of the saddle point estimates of $P(Q,V)$, in the normal and in the condensed\nphase, with the ``exact'' numerical computation is illustrated in Fig.~\\ref{fig20}.\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\\subsection{Phase diagram}\n\nIn order to establish the ``phase'' structure, it is necessary to analyse\nthe behavior of $k^{\\pm}_0 > 0$. Let us first recall the results \nof Ref.~\\cite{Pechino}, where the problem was analysed for the quench\nto $T=0$. 
This is the simplest case because,\nas specified in section~\\ref{realization}, heat can only be released\nand we have to deal only with the $\\lambda_-(k)$ branch of the spectrum.\nDifferentiating with respect to $k$ the expression for $\\lambda_-(k)$, given by the\nfirst line of Eq.~(\\ref{eig.3}), we get\n\\be\n\\frac{\\partial \\lambda_-}{\\partial k} = C(k,\\tau,t_w)A(k,\\tau,t_w),\n\\label{Part.01}\n\\ee\nwhere $C(k,\\tau,t_w) = b^{-2} \\Delta T e^{-2\\omega(\\vec k)(t_w + \\tau)}\\alpha k^{\\alpha -1}$\nis a positive quantity, while\n\\be\nA(k,\\tau,t_w) = t_w e^{2\\omega(\\vec k)\\tau}[1-e^{2(E-k^{\\alpha})\\tau}],\n\\label{Part.02}\n\\ee\nhas the sign of $(k^{\\alpha}-E)$, with\n\\be\nE(\\tau,t_w) = \\frac{1}{2\\tau} \\ln \\left (1 + \\frac{\\tau}{t_w} \\right ) - \\omega_0,\n\\label{Part.03}\n\\ee\nand vanishes at $k^{\\alpha} = E$.\nTherefore, imposing the condition $E(\\tau,t_w) = 0$ defines a critical line in the $(\\tau,t_w)$ plane,\ngiven by\n\\begin{equation}\nt_w = \\frac{\\tau}{e^{2\\omega_0 \\tau} - 1},\n\\label{Part.6}\n\\end{equation}\nsuch that above it $k_0^- =0$, while below it $k_0^- > 0$. Thus, $k_0^-$ or, equivalently,\n$q_c^-$ act as order parameters: below the critical line the system is in the\nnormal phase, corresponding to $q_c^-=-\\infty$, while above it the system is in the\ncondensed phase, corresponding to $q_c^-$ finite,\nas illustrated in Fig.~\\ref{Phasediagr} (black line).\n\nIf $T > 0$, we must take into account both branches $\\lambda_{\\pm}(k)$\nand keep track of the two order parameters $q_c^{\\pm}$. The\nanalytical determination of the critical lines turns out to be quite complicated,\nso we have located the absolute minima of $\\lambda_{\\pm}(k)$ numerically.\nThe resulting phase diagram for $q_c^-$ is\ndepicted in Fig.~\\ref{Phasediagr} for a few values of $T$, ranging from very low to\nalmost equal to $T_0$. When the system is equilibrated ($T=T_0$), there is no\ncondensed phase.
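The critical line of Eq.~(\ref{Part.6}) can be checked numerically against Eq.~(\ref{Part.03}): on the line $E(\tau,t_w)$ vanishes, and its sign decides whether $k_0^->0$. A minimal sketch (the value $\omega_0 = 0.1$ is only illustrative):

```python
import math

def E(tau, tw, omega0):
    # E(tau, t_w) of Eq. (Part.03)
    return math.log(1.0 + tau / tw) / (2.0 * tau) - omega0

def tw_critical(tau, omega0):
    # critical line of Eq. (Part.6): t_w = tau / (exp(2 omega_0 tau) - 1)
    return tau / math.expm1(2.0 * omega0 * tau)

omega0 = 0.1                          # illustrative value
tau = 5.0
twc = tw_critical(tau, omega0)
on_line = E(tau, twc, omega0)         # ~ 0: minimum of lambda_- reaches k = 0
above   = E(tau, 2.0 * twc, omega0)   # < 0: k_0^- = 0 (condensation possible)
below   = E(tau, 0.5 * twc, omega0)   # > 0: k_0^- > 0 (normal phase)
```

Since $E(\tau,t_w)$ decreases monotonically in $t_w$, crossing the line from below indeed switches the sign of $E$ and hence the location $k_0^-$ of the minimum.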
Fig.~\\ref{Phasediagr} shows that upon lowering $T$ there appears a condensed\nphase which, starting from the far right, extends toward the left, eventually filling the entire\nregion above the $T=0$ critical line. The prominent qualitative difference between the $T=0$\nand the $T>0$ phase diagrams for $q_c^-$ is that in the latter case the transition driven by an increasing\n$t_w$, for fixed $\\tau$, exhibits reentrant behavior. The same computation for\n$q_c^+$ does not show the existence of a condensed phase, namely $q_c^+ = \\infty$ throughout\nthe explored $(\\tau,t_w)$ plane.\n\nThe reentrant behavior of the phase diagram can be interpreted as follows. Let us consider a quench at finite\ntemperature, and let us fix $\\tau$. Then,\nfor small enough $t_w$, all oscillators are out of equilibrium and can exchange an arbitrary amount\nof heat with the bath. Therefore no condensation can take place in this region. Upon increasing $t_w$,\nall oscillators equilibrate except the slowest one, corresponding to the mode $k=0$,\nwhich can account for a large exchange of heat (above the threshold), leading to the condensation of fluctuations. Eventually,\nfor large $t_w$, all oscillators are equilibrated and again the normal phase is recovered.\n\n\n\\begin{figure}[!tb]\n\\includegraphics[width=0.9\\columnwidth,clip=true]{phase3.eps}\n\\caption{$q_c^-$ phase diagram for various final temperatures $T$ and\nfor $T_0=1$. The normal phase corresponds to $q_c^- = -\\infty$, while\nthe condensed phase corresponds to $q_c^-$ finite.}\n\\label{Phasediagr}\n\\end{figure}\n\n\n\\section{Conclusions}\n\\label{conclusions}\n\n\nIn this paper we have studied the fluctuations of the heat exchanged with the environment\nby a system of oscillators relaxing after a temperature quench.
Focusing on the\nFT, we have investigated the relation between the deviations from linearity of the AF\nand the internal structure of the system, building on the analogy with the behavior\nof fluctuations in equilibrium when a symmetry is broken by nonuniform external\nperturbations.\n\nWe have first analysed in great detail the case of a single\noscillator, reducing it to the convolution of the two elementary\nprocesses in which heat can be only released or only absorbed. The\ninverse average heats released $(\\lambda_-)$ and absorbed\n$(\\lambda_+)$ in these processes turn out to be the basic objects\nunderlying the behavior of the quantities of interest. In the\none-oscillator case the AF is linear with slope given by the\ndifference $\\Delta \\beta = \\lambda_- - \\lambda_+$ and, in the\nframework of the above mentioned analogy with the equilibrium problem,\nthis quantity acts like an external field explicitly breaking the time\nreversal symmetry. Adding a second oscillator brings to the surface\nthe role of the inhomogeneity of the external perturbation in\ndetermining deviations from linearity in the AF. It should be pointed\nout that in the dynamical problem the lack of homogeneity is due to\nthe existence of different relaxation rates, related to the presence\nof degrees of freedom evolving on different time scales. This\nproduces a temporal inhomogeneity, which induces a differentiation of\nthe affinities $\\Delta \\beta_1$ and $\\Delta \\beta_2$, analogous to the\nspatial heterogeneity generated by the external field $\\{B_i\\}$.\n\nThis picture emerges most clearly in the case of a large number of\noscillators. In this case, the system, although diagonal, presents\nnontrivial features, the most notable of which is the possible\ncondensation of fluctuations~\\cite{Gonnella,Jo,Pechino}.
Here, we have\npresented a detailed analytical study of the large deviation function\nof the heat distribution, with an accurate treatment of the crossover\nfrom the normal to the condensed phase, pointing out some interesting\ntechnical points involved in the application of the steepest descent\nmethod in the presence of a transition. In addition, we have mapped\nout the phase diagram of the condensation transition, discovering an\nunexpected reentrant behavior in the $(t_w,\\tau)$ plane.\n\nOur analysis allowed us to establish a close correspondence with the\nstructure of the AF for the paramagnet in equilibrium under the action\nof a nonuniform external field. In the equilibrium case it seems\nreasonable to state that the linearity of the AF depends\non whether the observable of interest ${\\cal M}(\\sigma)$ is conjugate\nto the external perturbation, the implication being that\ndeviations from the FT are due to the lack of conjugation. Clearly,\nthe interesting and challenging issue is to understand whether this\nway of looking at the AF and its deviations from linearity can be extended to the\nnonequilibrium case. The study we have presented in this paper is a\nfirst step in this direction, paving the way to future investigations\nwithin the context of more complex interacting systems showing slow\nrelaxation and aging phenomena.\n\n\\section{Introduction}\nFirst notions of \\textit{artificial neural networks (ANNs)} -- in the context of supervised learning -- date back to the early 1940s~\\cite{McCulloch1943}.
Although considerable research has been conducted in the succeeding decades, it took the emergence of powerful computers, along with the accompanying significant drop in the costs of storing and processing data, for ANNs to turn into highly competitive machine learning algorithms~\\cite{GoodfellowBengio2016,hastie01statisticallearning}. Nowadays, ANNs represent the state of the art in a variety of supervised learning tasks, such as image, sound or object recognition \\cite{HeZRS16,SimonyanZ14,AbdelHamidMJDPY14}.\n\n\nHowever, while the strength of ANNs undoubtedly results from their huge number of parameters, their enormous complexity also makes them rather incomprehensible.\nAlthough numerous works have introduced methods for interpreting the decisions made by ANNs \\cite{nguyen2016synthesizing,erhan2009visualizing,zhou2016learning}, these methods lack explanations for malicious and intentional behaviour against ANNs.\n\nOne possibility for fooling ANNs is the addition of carefully crafted noise, so-called \\textit{adversarial perturbations} (cf. Figure~\\ref{fig:adv_example}), to the model's input \\cite{SzegedyZSBEGF13,goodfellow2014,GuR14,TabacofV15,FawziMF16,CarliniW16,Carlini017,KurakinGB17a}. It should be noted that these perturbations do not need to be crafted individually per network; instead, ANNs that are trained for a similar task can often be fooled by the same adversaries \\cite{SzegedyZSBEGF13,goodfellow2014,PapernotMGJCS16}.\n\n\\newpage\nWhile a plethora of works have shown the advantages of (deep learning) neural networks, their black-box characteristic makes them very vulnerable. Hence, in order to enable a more frequent and, even more importantly, more secure integration of neural networks in our daily lives, we need to ensure their robustness against adversarial attacks.\n\nThe issue of a network's vulnerability and how to defend it against adversarial attacks will be investigated in this work.
As such, our contributions can be summarized as follows:\n\\begin{compactenum}\n \\item We present a \\textbf{novel method to craft transferable adversaries}, which in turn helps to improve the understanding of a network's vulnerability against adversarial attacks.\n \\item We introduce a novel approach to regularize decision boundaries and, thus, \\textbf{enhance the resilience of neural networks against adversarial attacks}. Compared to previous attempts, our approach does not require additional backward-passes, which can decrease training speed significantly (cf. Table \\ref{tab:training_time}).\n\\end{compactenum}\n\\begin{figure}[t!]\n\\centering\n \\begin{overpic}[width=0.95\\linewidth, trim = 2mm 0mm 0mm 0mm, clip]{figures\/example_adver.png}\n \\put (31.5,11) {\\large{$+~\\varepsilon~\\cdot$}}\n \\put (70,11) {\\large{$=$}}\n \\end{overpic}\n \n \\caption{Example for an adversary (from left to right): the original image (classified as \\textit{great panda} with a confidence of 99.99\\%), adversarial noise with $\\varepsilon=3$ and the resulting adversarial image (classified as \\textit{saxophone} with a confidence of 83.8\\%).}\n \\label{fig:adv_example}\n\\end{figure}\n\nThe remainder of this manuscript is structured as follows. In Section~\\ref{sec:related} we set our work into context. We then introduce a method for crafting transferable adversaries in Section~\\ref{sec:transfer}, and list possible defense strategies against adversarial attacks in Section~\\ref{sec:counteractions}. Thereafter, we provide our experimental setup in Section~\\ref{sec:experiments} and discuss our findings in Section~\\ref{sec:results}. At last, Section~\\ref{sec:conclusion} concludes our work.\n\n\\section{Related Work}\n\\label{sec:related}\n\n\nSeveral methods to craft adversarial examples have been published in recent years \\cite{goodfellow2014,MoosaviDezfooli15,Carlini017,madry2018towards}. 
In principle, one can categorize these methods into \\textit{single-} and \\textit{multi-step} attacks: the former only require a single gradient while the latter calculate several gradients. Kurakin et al.~\\cite{KurakinGB16a} found in their work that multi-step adversaries are less transferable than single-step ones and they concluded that models are indirectly robust against multi-step adversaries. Yet, in line with the findings of \\cite{dong2018boosting}, we observed that the transfer of adversaries between models is rather easy when using a so-called \\emph{ensemble attack}, i.e., a multi-step attack, which is executed on an ensemble of models simultaneously \\cite{dong2018boosting}.\n\nAs generating adversarial perturbations is rather simple, we will at first address the characteristics of these perturbations along with their accompanying risks of black-box attacks (see Section~\\ref{sec:charact}). Note that in contrast to white-box methods, black-box approaches have no access to the model's gradient. Thereafter, in Section~\\ref{sec:resil}, we briefly outline selected strategies to enhance resilience.\n\n\\subsection{Characteristics of Adversaries}\n\\label{sec:charact}\nSzegedy et al.~\\cite{SzegedyZSBEGF13} crafted artificial noise, which causes ANNs to misclassify images whose original versions they classified correctly. As a result, it could be demonstrated that already slight changes, which are hardly recognizable to the human eye, are sufficient to fool the network. As further demonstrated by \\cite{SzegedyZSBEGF13,goodfellow2014}, and also in this work, adversarial perturbations are not model-specific, but instead generalize well over networks with different hyper-parameters. In addition, even networks trained on disjoint data sets, yet fulfilling a similar classification task, are likely vulnerable to the same adversarial image \\cite{goodfellow2014,PapernotMGJCS16}. 
Thus, two models with different architectures and hyper-parameters, trained on different data sets, often misclassify the same images. This property can be used to execute black-box attacks, as demonstrated by \\cite{PapernotMGJCS16}.\n\nSeveral works have been conducted to examine the effects of adversaries. The neighborhoods of adversarial examples and their original counterparts were investigated by \\cite{TabacofV15}. The authors found that, while already small random noise applied to adversaries often caused them to be shifted back to their original class, the classification of the original images did not change (even when rather large random noise was applied). Therefore, the authors concluded that adversarial examples likely inhabit small sub-spaces within the decision space.\n\nNext, \\cite{tramer2017space} investigated these subspaces by measuring the number of orthogonal attack directions and found that the number of these directions is rather large.\nThey concluded that the more orthogonal attack directions exist at a given point, the larger these adversarial subspaces are and the more likely adversaries transfer between models (since larger subspaces are more likely to intersect between different models).\nThis may be the explanation of \\textit{why} adversarial perturbations are often universal across different model designs and across datasets.\n\n\\subsection{Enhancing Resilience through Adversarial Training}\n\\label{sec:counteractions}\n\\begin{figure}[t!]\n\\flushright\n \\begin{overpic}[width=0.92\\columnwidth]{figures\/example_alpha.png}\n \\put (-8,11) {{$(1-\\alpha)~\\cdot$}}\n \\put (29.5,11) {{$+~\\alpha~\\cdot$}}\n \\put (70,11) {{$=$}}\n \\end{overpic}\n \n\t\\caption{Idea of our proposed method: linear combination of the original image (\\textit{great panda}) with a randomly selected image (\\textit{saxophone}) using weight $\\alpha \\in (0, 0.5)$ (here: $\\alpha = 0.3$).
The resulting training image (right) inherits the class of the original image. This allows us to generate, on the one hand, more training examples and, on the other hand, training examples which lie in close proximity to the decision boundaries.}\n \\label{fig:alpha_example}\n\\end{figure}\n\\label{sec:resil}\n\nDefense strategies against adversarial attacks can mainly be divided into two categories: approaches that change either (i) a model's topology, or (ii) the way a model is trained. Gu and Rigazio~\\cite{GuR14} stated that a model's robustness \\enquote{is more related to intrinsic deficiencies in the training procedure and objective function than to [its] topology}.\n\n\n\nUnder the concept of \\textit{Adversarial Training}, several works have introduced regularizers which decrease a model's vulnerability to adversarial perturbations. The first one was introduced by Goodfellow et al.~\\cite{goodfellow2014}. Here, the objective function of the \\textit{Fast Gradient Sign Method} (FGSM) \\cite{goodfellow2014} is added to the classification loss.\nFor every training image, a single-step adversary is calculated and the network's sensitivity to this adversary is minimized. Thus, \\textit{Adversarial Training} acts as a regularizer to improve resilience \\cite{goodfellow2014,KurakinGB16a}.\n\n\nThe training method of \\cite{goodfellow2014} only uses a single back-propagation step to calculate adversarial perturbations. In contrast to the aforementioned approach, other works introduced a min-max optimization method \\cite{Huang2015LearningWA,shaham2015}. It prepares the network for the worst-case scenario under a limited perturbation and minimizes its sensitivity to it. The algorithm works in two steps. First, it maximizes the loss of the input image by adding a small adversarial perturbation.
However, the network trained with a min-max regularizer performed worse on the classification tasks \\cite{Huang2015LearningWA} -- even when trained with rather small perturbations. Due to the additional maximization problem, the algorithm is even more complex to execute, which further increases training time.\n\nSimilar approaches were introduced by \\cite{Miyato2015DistributionalSW,madry2018towards}. The approach of \\cite{Miyato2015DistributionalSW}, denoted \\textit{Virtual Adversarial Training} (VAT), works in a similar way as the min-max optimization, yet it maximizes the Kullback-Leibler divergence \\cite{Kullback51klDivergence} between the probability vectors of the training image and an adversary. The KL-divergence between the original and an adversary can be interpreted as the local smoothness of the decision space. Thus, the network not only learns to classify the training images correctly, but also smoothens the decision boundaries around each of the training points. Miyato et al.~\\cite{Miyato2015DistributionalSW} used \\textit{Projected Gradient Descent} (PGD) to solve the inner maximization problem. The authors could demonstrate a guaranteed robustness even against multi-step adversaries under small $l_\\infty$-norm -- with $l_\\infty = max\\{|x~-~x^{adv}|\\}$ -- where $x$ is the unchanged input image while $x^{adv}$ is the image with additional adversarial perturbation. Multi-step black- and white-box attacks have been conducted under a $l_\\infty\\leq8$-constraint. Even under multi-step white-box attacks, the accuracy of their model is 45\\%, whereas the accuracy of a non-robustified, and thus defenseless, model would be close to zero. 
However, as additional forward-backward passes are required, training accordingly takes longer.\n\n\\section{Crafting Strong and Transferable Adversaries}\n\\label{sec:transfer}\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=\\columnwidth, trim = 0mm 8mm 0mm 0mm, clip]{figures\/example_pic.png}\n\t\\caption[Random Images from the Dataset]{Exemplary images from the modified \\textit{Large Scale Visual Recognition Challenge 2017} dataset: Seal, Bear, Dragonfly and Rabbit (left to right).}\n\t\\label{fig:example_pic}\n\\end{figure}\n\n\nGoodfellow et al.~\\cite{goodfellow2014} introduced the \\textit{Fast Gradient Sign Method} (FGSM):\n\\begin{align}\n x^{adv} &= x + \\varepsilon \\cdot \\text{sign} \\left(\\nabla_{x}~L(\\Theta, x, y)\\right) \\label{eq:fgsm}\n\\end{align}\nThe loss function\\footnote{In this work, we refer to the cross-entropy as loss function. Yet, any other function can most likely be used, too.} $L$ of the model $\\Theta$ is differentiated with respect to the input image $x$ and its true class $y$. The sign of the resulting gradient $\\nabla_{x}~L$ is used to calculate adversarial noise which, when applied to an image, increases the model's loss and, thereby, could shift its classification decision. The parameter $\\varepsilon$ controls the amount of perturbation. Note that a larger $\\varepsilon$ does not necessarily lead to a higher chance of fooling a model, as demonstrated in Figure \\ref{fig:loss}. Instead, tuning $\\varepsilon$ is highly important. The resulting adversary is a single-step one.\n\nOne can execute the FGSM multiple times with a small $\\varepsilon$-value to converge towards the adversarial sub-space iteratively. The resulting adversaries are then referred to as multi-step ones.
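As an illustration of Eq.~(\ref{eq:fgsm}), the following sketch applies the single ascent step to a toy logistic model whose input gradient can be written in closed form; this is a stand-in for the back-propagated gradient of a trained network, not the networks considered in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)              # toy "trained model": logistic weights

def loss(x, y):                      # cross-entropy for a label y in {-1, +1}
    return float(np.log1p(np.exp(-y * (w @ x))))

def grad_x(x, y):                    # analytic gradient of the loss w.r.t. x
    return -y * w / (1.0 + np.exp(y * (w @ x)))

def fgsm(x, y, eps):                 # single-step attack, Eq. (eq:fgsm)
    return x + eps * np.sign(grad_x(x, y))

x = rng.normal(size=16)
x_adv = fgsm(x, y=1.0, eps=0.1)      # the step increases the loss,
                                     # while staying in an l_inf-ball of radius eps
```

A multi-step variant simply repeats the call with a small $\varepsilon$ and clips the iterate back into the allowed $l_\infty$-ball around $x$.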
In order to find multi-step adversaries under a limited amount of perturbation and close to the original image, several methods such as \\cite{MoosaviDezfooli15,Carlini017,madry2018towards,KurakinGB16a} have been introduced. When the perturbation exceeds a certain limit -- usually the $l_\\infty$- or $l_2$-norm of the adversarial perturbation is used -- the perturbation is projected back into the search space, e.g., by using pixel-wise clipping.\n\nAs pointed out by \\cite{TabacofV15,tramer2017space}, adversaries lie in sub-spaces which are in close proximity to true training examples and which often intersect between models. One can argue that when crafting adversaries with multi-step methods, these adversaries often do not lie in intersecting sub-spaces because the resulting adversary may be overfitted to the particular model as its loss is maximized. To overcome this issue, \\cite{dong2018boosting} introduced (i) the \\textit{Momentum Iterative Gradient Based Method} (MI-FGSM) and (ii) the \\textit{ensemble in logits}. The former method uses a momentum to overcome local maxima, and the latter combines the logits of an ensemble of models to find a common and intrinsic direction. Attacking an ensemble of models increases the likelihood of transferable adversaries, as these adversaries are not optimized for a single model but for several models simultaneously. Hence, the adversaries are more likely to lie in intersecting adversarial sub-spaces.\n\nIn contrast to \\cite{dong2018boosting}, we used the combined gradient of an ensemble of models to craft adversaries which lie in intersecting sub-spaces. We call our method \\textit{gradient ensemble attack}. We found that these adversaries are likely to transfer between models of the same architecture -- and frequently transfer to other architectures as well (see Section~\\ref{sec:results}).
To ensure that every model's gradient has the same impact on the resulting adversarial perturbation, we normalize each gradient to length one:\n\\begin{align}\n \\hat{\\nabla}_{x}~L(\\Theta, x, y) &= \\frac{\\nabla_{x}~L(\\Theta, x, y)}{||\\nabla_{x}~L(\\Theta, x, y)||_2}. \\label{eq:grad_norm}\n\\end{align}\nAfterwards, the different gradients are summed up to find a common direction of intersecting adversarial sub-spaces. Similar to \\cite{KurakinGB16a}, we approximated the true direction by iteratively applying the sign of the summed, normalized gradients to the image:\n\\begin{eqnarray}\n x^{adv}_{i} &=& x_{i-1} + \\lambda \\cdot \\text{sign} \\left(\\sum_{n=0}^N~\\hat{\\nabla}_{x}~L(\\Theta_n, x_{i-1}, y) \\right) \\label{eq:adv_methode}\\\\\n \\text{s.t.} && ||x^{adv}_{i} - x||_{\\infty}~\\leq~\\varepsilon \\quad \\text{and} \\quad x_0 = x.\\nonumber\n\\end{eqnarray}\nTo ensure that the magnitude of the perturbations stays within a given limit, we used pixel-wise clipping. Further, we used a learning rate of $\\lambda=1$ (to slowly converge towards the adversarial spot) and set the number of steps to $I=\\min(\\varepsilon + 4, 1.25\\cdot\\varepsilon)$ as proposed by \\cite{KurakinGB16a} to craft adversarial images. In addition, we chose $\\varepsilon\\in\\{4,8,16\\}$ to limit the amount of perturbation in order to test and compare different magnitudes of adversarial perturbations.\n\n\\section{Proposed Defense Strategies}\nAs demonstrated by \\cite{TramerKPGBM18} and \\cite{athalye18a}, adversarial training may lead to \\emph{gradient masking}. This term describes a phenomenon in which the decision boundaries are roughly the same, yet the corresponding gradient is damaged or obfuscated in a way that white-box attacks fail.
However, the model does not become more resilient against adversarial examples in black-box attack scenarios or against transferable adversaries, as these attacks are based on surrogate models.\n\nIn order to avoid the risk of gradient masking, we propose a method that does not require gradient calculations and still flattens the decision space. In addition, we avoid the expensive optimization of an inner maximization problem as done in VAT or PGD. We found that while random noise is mostly ignored by the model, superimposing two pictures does distract a model. Therefore, as illustrated in Figure~\\ref{fig:alpha_example}, we designed training images $\\tilde{x}_i$ by placing a randomly selected image $x_r$ on top of the original training image $x_i$:\n\\begin{align}\n \\tilde{x}_i = (1-\\alpha) \\cdot x_i + \\alpha \\cdot x_r. \\label{eq:alpha_method}\n\\end{align}\nThe parameter $\\alpha \\in (0, 0.5)$ controls the impact of image $x_r$ on the generated image $\\tilde{x}_i$. As the majority of the image originates from the original image $x_i$, the generated image $\\tilde{x}_i$ will inherit the class label $y_i$ of the original image. Our proposed approach thus allows us to (i) generate \\emph{more training examples}, and (ii) create images that are closer to the decision boundaries of at least two classes -- and thus \\emph{harder to distinguish} -- as $\\tilde{x}_i$ contains properties of two image classes. Thereby, the space between the different classes is flattened and the boundaries are regularized.
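The mixing step of Eq.~(\ref{eq:alpha_method}) is a plain convex combination and can be sketched in a few lines; the batch shape, pixel range, and the fixed value $\alpha = 0.3$ are merely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mix(x_batch, alpha, rng):
    """Eq. (eq:alpha_method): blend each image with a randomly drawn
    partner image from the same batch; the label of the dominant
    image x_i is kept."""
    assert 0.0 < alpha < 0.5           # x_i must remain the majority part
    partners = x_batch[rng.permutation(len(x_batch))]
    return (1.0 - alpha) * x_batch + alpha * partners

batch = rng.uniform(0.0, 1.0, size=(8, 128, 128, 3))  # toy image batch
mixed = mix(batch, alpha=0.3, rng=rng)
```

For the beta-distributed variant one would draw `alpha = rng.beta(2, q)` per batch instead of using a fixed value.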
The networks are then trained by minimizing the loss $L(\\Theta,\\tilde{x}_i, y_i)$.\nAs the results depend on the choice of $\\alpha$, we considered three different approaches:\n\\begin{compactenum}\n\\item Using a fixed $\\alpha$-value.\n\\item Following the idea of \\cite{Miyato2015DistributionalSW}, i.e., first predicting the classes $\\hat{y}_i$ and $\\tilde{y}_i$ based on $x_i$ and $\\tilde{x}_i$ (and using a fixed $\\alpha$ as in 1.), and then minimizing the loss of the unmodified training examples \\emph{and} the KL-Divergence between the predicted classes of the unmodified and modified training examples:\n\\[\\underset{\\Theta}{\\text{minimize}} \\quad L(\\Theta, x_i, y_i) + \\lambda \\cdot KL(\\hat{y}_i~||~\\tilde{y}_i).\\]\n\\item Similarly to 2., but this time drawing $\\alpha$ from a beta-distribution instead of using a fixed $\\alpha$-value; $\\alpha \\sim B(p,q)$ with $p=2$ and $q \\in \\{4,6,10\\}$.\n\\end{compactenum}\nNote that while the first method does not require any additional passes, the latter two methods require a second forward-pass to calculate the KL-Divergence -- but no additional backward-pass. As stated by \\cite{Miyato2015DistributionalSW}, the KL-divergence can be interpreted as a measure of the local smoothness of the decision space. If the divergence is high, the predictions for both input images are dissimilar, implying that the decision space between the two activations is not flattened. By minimizing the divergence, the models learn to find similar activations for both images and, thereby, flatten the decision space between the two activations.\n\nAs a side effect, our method allows us to generate a multitude of training images (using Equation~(\\ref{eq:alpha_method})), which in turn allows the training to cover a larger area of the input space.\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.95\\columnwidth]{figures\/acc_over_time.png}\n\t\\caption[Validation Accuracy during training]{Visualization of the validation accuracy during training.
The \\textit{Projected Gradient Descent} \\cite{madry2018towards} substantially reduces the overall accuracy compared to the \\textit{Base Model}, our proposed method, or the \\textit{Virtual Adversarial Training} \\cite{Miyato2015DistributionalSW}. Furthermore, the model $\\alpha \\sim B(2, 4)$, $KL_{\\lambda=10}$ achieves an accuracy of more than 70\\% in fewer epochs than the \\textit{Base Model}.}\n\t\\label{fig:acc_druing_training}\n\\end{figure}\n\n\\section{Methodology and Experiments}\n\\label{sec:experiments}\nIn the following, we briefly outline the models' topologies and the training data used.\n\n\\subsection{Dataset}\nIn our experiments, we used the data from the second challenge of the \\textit{Large Scale Visual Recognition Challenge 2017}\\footnote{\\url{http:\/\/image-net.org\/challenges\/LSVRC\/2017\/}} \\cite{ILSVRC15} for training and validating our models. \nYet, as down-scaling the whole images to the required size was not reasonable, we first used bounding boxes to cut out the objects within each of the images. Then, as the bounding boxes were of different sizes, the cropped images were down-scaled to the desired resolution of 128 $\\times$ 128 pixels (cf. Figure \\ref{fig:example_pic}).\n\nAs a result, our dataset contains roughly 400,000 images. Unfortunately, the images are not uniformly distributed across all 200 classes. As the mismatch between the classes is too large to use oversampling, we selected 100 classes with an identical number of images per class. For training and validation purposes, we performed holdout with an 80-20 split. Thereby, we created a dataset which is larger and has a higher resolution than CIFAR10\/100~\\cite{krizhevsky2009learning} while (at the same time) being significantly smaller and less complex than ImageNet~\\cite{ILSVRC15}.
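The crop-then-downscale step described above can be sketched as follows (a minimal NumPy stand-in; the bounding-box format and the nearest-neighbour interpolation are assumptions, as the paper does not specify them):

```python
import numpy as np

def crop_and_downscale(image, box, size=128):
    """Crop a bounding box (top, left, height, width) from a 2D image
    and down-scale the crop to size x size pixels.

    Nearest-neighbour sampling is used here purely for illustration;
    the actual preprocessing may use a different interpolation method.
    """
    top, left, h, w = box
    crop = image[top:top + h, left:left + w]
    rows = np.arange(size) * h // size   # nearest-neighbour source rows
    cols = np.arange(size) * w // size   # nearest-neighbour source columns
    return crop[rows][:, cols]

# A hypothetical 300x400 grayscale image with a 200x250 bounding box.
img = np.random.rand(300, 400)
patch = crop_and_downscale(img, box=(10, 20, 200, 250), size=128)
# patch now has the desired 128 x 128 resolution
```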
In addition, by using balanced classes, we eliminate any side-effects which may occur from unbalanced datasets and may influence our results.\n\n\\subsection{Models}\n\\begin{table}[!t]\n \\centering\n \\caption{Measured training time for different models (all models were trained on a single Nvidia Quadro RTX 6000). Training a single epoch with our proposed method takes only slightly longer than the \\textit{Base Model} without any defense. In addition, $\\alpha \\sim B(2, 4)$, $KL_{\\lambda=10}$ reaches 60\\% and 70\\% Validation Accuracy about 30 minutes earlier than the \\textit{Base Model} as it converges faster (cf. Figure \\ref{fig:acc_druing_training}). Training models using the \\textit{Projected Gradient Descent} \\cite{madry2018towards} or \\textit{Virtual Adversarial Training} \\cite{Miyato2015DistributionalSW} takes noticeably longer per epoch, and they take even longer to converge.}\n \\label{tab:training_time}\n \n \n \\begin{tabular}{l|cccc}\n & Median Time & Time until & Time until & Time until \\\\\n Model & p. Epoch & 50\\% Acc. & 60\\% Acc. & 70\\% Acc. \\\\\n \\midrule\n \\textit{Base} & \\cellcolor{gray!25}\\textbf{6:43m} & \\cellcolor{gray!25}\\textbf{00:59h} & 02:15h & 03:17h \\\\\n $\\alpha=0.4$ & 6:55m & 02:18h & 03:14h & -\/- \\\\\n $\\alpha \\sim B(2, 4)$ & 6:51m & 01:01h & \\cellcolor{gray!25}\\textbf{01:42h} & \\cellcolor{gray!25}\\textbf{02:44h} \\\\\n VAT ($\\varepsilon=15$) & 14:45m & 02:12h & 03:55h & 08:05h \\\\\n PGD ($I=7$) & 22:40m & 11:46h & 15:56h & -\/- \\\\\n PGD ($I=3$) & 11:37m & 03:53h & 07:45h & -\/- \n \\end{tabular}\n \n \n\\end{table}\nTo investigate the effects of adversarial perturbations, we trained multiple ANNs. First, an ``unprotected'' model -- denoted \\textit{base model} in the following -- is trained. For this purpose, we took a VGGNet~\\cite{SimonyanZ14}, as it provides a straightforward and uncluttered design of convolutions, pooling and dense layers.
In addition, we modified the design by using a \\textit{Global-Average-Pooling} layer \\cite{LinCY13} and extended each CNN layer with a \\textit{Batch Normalization} layer \\cite{Ioffe15}. Afterwards, we compared our proposed methods to different ResNets \\cite{HeZRS16} to verify our findings.\n\nIn order to compare different methods to defend against adversarial attacks, we trained several models with and without different defense methods (see Section~\\ref{sec:results} for more details). All models were trained on a single Nvidia RTX 6000 or a single Nvidia V100 GPU. We found that using VAT \\cite{Miyato2015DistributionalSW} or PGD \\cite{madry2018towards} significantly reduces the training speed. \nEach training epoch not only takes longer, but the models also converge more slowly towards the optimum. \nIn contrast, our proposed method is more time efficient as it only requires an additional forward pass and it even converges faster (cf. Figure \\ref{fig:acc_druing_training} and Table \\ref{tab:training_time}).\n\n\\subsection{Assessing a Network's Resilience against Adversarial Perturbations}\n\\label{sec:assessment}\nTo assess the robustness of a network against adversaries, we trained six models per considered network architecture. We then used an ensemble of one, three or five of the six ANNs\\footnote{Note that our method applied to an ensemble of one model is identical to the i-FGSM.} (see Section~\\ref{sec:results} for details) to extract a set of adversarial images using Equation~(\\ref{eq:adv_methode}). More precisely, an image is considered adversarial if it is misclassified by \\emph{all} of the ensemble's networks -- note that for simplicity the networks do not have to agree on the same wrong class.
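A self-contained sketch of the gradient ensemble attack (Equations~(\ref{eq:grad_norm}) and~(\ref{eq:adv_methode})) together with the above adversarial criterion is given below. For illustration we use toy linear softmax models with analytic input gradients; the model class, dimensions and seed are assumptions made only to keep the example runnable:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_gradient(W, x, y):
    """Cross-entropy loss gradient w.r.t. the input x for a toy linear
    softmax model with weight matrix W (classes x features)."""
    p = softmax(W @ x)
    p[y] -= 1.0            # dL/dz for softmax cross-entropy
    return W.T @ p         # chain rule back to the input pixels

def ensemble_attack(models, x, y, eps=16, lam=1.0):
    """i-FGSM on the summed, length-normalized ensemble gradients;
    pixel-wise clipping keeps the infinity-norm perturbation within eps."""
    steps = int(min(eps + 4, 1.25 * eps))
    x_adv = x.copy()
    for _ in range(steps):
        grads = [input_gradient(W, x_adv, y) for W in models]
        g = sum(gr / (np.linalg.norm(gr) + 1e-12) for gr in grads)
        x_adv = np.clip(x_adv + lam * np.sign(g), x - eps, x + eps)
    return x_adv

def is_adversarial(models, x_adv, y):
    """True only if *all* ensemble members misclassify x_adv; the members
    need not agree on the same wrong class."""
    return all(int(np.argmax(W @ x_adv)) != y for W in models)

# Toy ensemble of three random linear models on a 10-pixel "image".
rng = np.random.default_rng(0)
models = [rng.standard_normal((3, 10)) for _ in range(3)]
x = rng.standard_normal(10)
y = int(np.argmax(models[0] @ x))   # use model 0's prediction as the label
x_adv = ensemble_attack(models, x, y, eps=8)
```

The clipping step guarantees $||x^{adv} - x||_\infty \leq \varepsilon$ regardless of the number of iterations.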
The set of the extracted adversarial images is then classified by the sixth model and its accuracy is taken as a quality indicator of the respective network architecture.\n\n\\section{Results and Discussion}\n\\label{sec:results}\n\\begin{table*}[!t]\n \\centering\n \\caption{Accuracy of different networks on the sets of original and adversarial images. All adversarial datasets are crafted using our proposed \\textit{gradient ensemble attack} approach with one, three or five (vulnerable) VGGNet13 models. Our method is equivalent to \\cite{KurakinGB16a,madry2018towards} if only one model is used. The results shown are for $\\varepsilon=16$. When training a model with PGD, the performance on the original images is significantly lower in comparison to the \\textit{Base Model}. In contrast, VAT and our proposed method slightly increase the validation accuracy. Across all adversarial perturbations, VAT models achieved the highest accuracy. Noteworthy values are highlighted.\n }\n \\label{tab:results}\n \n \n \\begin{tabular}{r|cc|cc|cc|cc}\n &\\multicolumn{2}{c|}{\\bf Accuracy on} & \\multicolumn{6}{c}{\\bf Accuracy on Adversarial Dataset based on} \\\\\n &\\multicolumn{2}{c|}{\\bf Original Data} & \\multicolumn{2}{c|}{\\bf One Model} & \\multicolumn{2}{c|}{\\bf Three Models} & \\multicolumn{2}{c}{\\bf Five Models} \\\\\n Parametrization \\, & \\, Train & Val. \\, & \\, True & Adv. \\, & \\, True \\, & \\, Adv.
\\, & \\, True \\, & \\, Adv.\\\\\n \\midrule\n \\multicolumn{9}{l}{\\textit{Base Model}}\\\\\n \\, & \\, 91.2\\% \\, & \\, \\cellcolor{gray!25}\\textbf{73.4}\\% \\, & \\, 89.9\\% \\, & \\, \\cellcolor{gray!25}\\textbf{23.8}\\% \\, & \\, 95.3\\% \\, & \\, \\cellcolor{gray!25}\\textbf{4.1}\\% \\, & \\, 96.7\\% \\, & \\, \\cellcolor{gray!25}\\textbf{0.9}\\% \\\\\n \\midrule\n \\multicolumn{9}{l}{\\textit{Projected Gradient Descent (PGD) with} $\\varepsilon = 8$ \\textit{and} $\\lambda = 2$}\\\\\n $I = 3$ \\, & \\, 86.7\\% \\, & \\, \\cellcolor{gray!25}\\textbf{66.2}\\% \\, & \\, 79.6\\% \\, & \\, 79.0\\% \\, & \\, 84.6\\% \\, & \\, 83.6\\% \\, & \\, 86.6\\% \\, & \\, 86.5\\% \\\\\n $I = 7$ \\, & \\, 77.8\\% \\, & \\, \\cellcolor{gray!25}\\textbf{62.4}\\% \\, & \\, 75.6\\% \\, & \\, 75.1\\% \\, & \\, 80.7\\% \\, & \\, 80.0\\% \\, & \\, 82.8\\% \\, & \\, 81.8\\% \\\\\n \\midrule\n \\multicolumn{9}{l}{\\textit{Virtual Adversarial Training (VAT) with} $I = 3$ \\textit{and} $\\lambda = 1$}\\\\\n $\\varepsilon = \\phantom{1}5$ \\, & \\, 97.7\\% \\, & \\, 76.2\\% \\, & \\, 90.6\\% \\, & \\, 85.1\\% \\, & \\, 95.2\\% \\, & \\, \\cellcolor{gray!25}\\textbf{87.2}\\% \\, & \\, 96.6\\% \\, & \\, 86.9\\% \\\\\n $\\varepsilon = 15$ \\, & \\, 99.0\\% \\, & \\, 74.6\\% \\, & \\, 88.5\\% \\, & \\, \\cellcolor{gray!25}\\textbf{86.0}\\% \\, & \\, 93.0\\% \\, & \\, 86.0\\% \\, & \\, 94.5\\% \\, & \\, \\cellcolor{gray!25}\\textbf{90.3}\\% \\\\\n $\\varepsilon = 25$ \\, & \\, 96.9\\% \\, & \\, 76.5\\% \\, & \\, 91.1\\% \\, & \\, 83.1\\% \\, & \\, 95.5\\% \\, & \\, 82.8\\% \\, & \\, 96.8\\% \\, & \\, 80.2\\% \\\\\n \\midrule\n \\multicolumn{9}{l}{\\textit{Our Proposed Method} with $KL_{\\lambda=10}$}\\\\\n $\\alpha \\sim B(2, \\phantom{1}4)$ \\, & \\, 90.6\\% \\, & \\, 74.3\\% \\, & \\, 88.8\\% \\, & \\, 71.9\\% \\, & \\, 93.5\\% \\, & \\, 62.1\\% \\, & \\, 95.1\\% \\, & \\, 53.7\\% \\\\\n $\\alpha \\sim B(2, \\phantom{1}6)$ \\, & \\, 84.7\\% \\, & \\, 72.4\\% \\, & \\, 87.1\\% \\, & \\, 
74.2\\% \\, & \\, 91.8\\% \\, & \\, 70.0\\% \\, & \\, 93.7\\% \\, & \\, 66.0\\% \\\\\n $\\alpha \\sim B(2, 10)$ \\, & \\, 86.9\\% \\, & \\, 71.4\\% \\, & \\, 85.8\\% \\, & \\, 76.9\\% \\, & \\, 90.4\\% \\, & \\, 75.0\\% \\, & \\, 92.4\\% \\, & \\, 72.4\\% \\\\\n \\end{tabular}\n \n \n\\end{table*}\n\nFirst, adversaries were generated as described in Section~\\ref{sec:assessment}. For the gradient alignment, we considered an ensemble of one, three and five models, respectively. In a first analysis, we investigated the impact of the perturbation parameter for the values $\\varepsilon \\in\\{4,8,16\\}$. Interestingly, we observed that the success rate for crafting adversaries is not sensitive to the tested values for $\\varepsilon$. Therefore, we only report results for the adversaries with $\\varepsilon=16$, as they are the most powerful ones. If a single unprotected model -- which is fully identical to the \\textit{Base Model} in terms of topology and training execution -- is used to calculate multi-step adversaries, the \\textit{Base Model's} classification accuracy is still $23.8\\%$ as shown in Table~\\ref{tab:results}. However, when the gradients of an ensemble of three or five models are aligned, the \\textit{Base Model's} accuracy on these adversaries decreases to $4\\%$ and $0.9\\%$, respectively.\n\nNext, we trained multiple models with \\textit{Virtual Adversarial Training (VAT)}, \\textit{Projected Gradient Descent Training (PGD)} and the three different defense methods proposed in this work (see Section~\\ref{sec:counteractions}). For VAT we used $I=3$, $\\lambda=1$ and $\\varepsilon\\in\\{5,10,15,20,25\\}$. Miyato et al. \\cite{Miyato2015DistributionalSW} recommended\nusing $I=1$ and $\\lambda=1$ as they found it to be sufficient. We increased the \\textit{power of iterations} to $I=3$ to ensure better convergence (cf. Miyato et al. \\cite{Miyato2015DistributionalSW} for more details).
As tuning $\\varepsilon$ is most important, we tried several different values and compared them to each other. Next, we adopted the default parameters for PGD as proposed by \\cite{madry2018towards}: $\\varepsilon=8$, $\\lambda=2$ and $I\\in\\{3, 7\\}$ as the number of iterations. $\\varepsilon=8$, $\\lambda=2$ and $I=7$ are the default settings used for CIFAR10 by \\cite{madry2018towards}. In addition, we used $I=3$ to speed up training.\n\nTable \\ref{tab:results} shows the results of the different methods based on their accuracy on our crafted adversarial images. As indicated by the base model's accuracy values on the adversarial data, the more models are used for our proposed \\textit{gradient ensemble attack}, the higher the success rate of transferring the adversaries to other models. This demonstrates that our adversaries, crafted from an ensemble of models, are likely transferable to other networks. Moreover, the VAT models seem to perform best on adversarial images.\n\nTo test the generalizability of our approach, we additionally assessed our adversaries on ResNet32 and Res\\-Net50~\\cite{HeZRS16}. As shown in Table~\\ref{tab:transfer}, when applying a \\textit{gradient ensemble attack} on VGGNet13 and the ResNet models together, the resulting adversaries likely transfer between both topologies. The accuracy of unprotected (base) models on our combined adversarial dataset is 0.01\\% for the VGGNet13 network and about 26 to 27\\% for both ResNet models -- although all three models have an accuracy of over 90\\% on the original images. Even adversaries that were originally crafted for a different topology reduce a model's accuracy noticeably. \n \nWe further tested our method on different ResNets.
As shown in the bottom half of Table~\\ref{tab:transfer}, we found that not only the originally considered VGGNet13 models, but also ResNet32 and ResNet50 became more resilient against the transferable adversaries.\n\nHowever, comparing the performance of adversarial defense methods merely based on the model's accuracy or on the success rate for crafting adversaries is problematic. Adversarial sub-spaces may occur slightly offset from the original ones, or gradient masking could prevent gradient-based attack methods from being successful. Therefore, we not only consider a model's accuracy on strong and transferable adversaries, but also investigate the surrounding decision space, as well as its gradient.\n\n\\begin{figure*}[p]\n \\centering\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/gradient_base.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (a) \\textit{Base Model}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/gradient_vat.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (b) VAT ($\\varepsilon=15$) \\cite{Miyato2015DistributionalSW}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/gradient_pgd_07.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (c) PGD ($I=7$) \\cite{madry2018towards}\n \\end{minipage}\\\\\n \\medskip\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/gradient_vgg_alpha_04.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (d) $\\alpha=0.4$\n \\end{minipage}\n \\hfill\n
\\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/vgg_vialpha_4_10.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (e) $\\alpha=0.4$, $KL_{\\lambda=10}$\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/gradients\/gradient_vibeta_204_10.png}\n \\put (40,20) {\\tiny{$X$}}\n \\put (30,2) {$\\varepsilon_1$}\n \\put (80,5) {$\\varepsilon_2$}\n \\end{overpic}\\\\\n (f) $\\alpha \\sim B(2, 4)$, $KL_{\\lambda=10}$\n \\end{minipage}\n \n \\caption{Visualization of the decision space of six different VGGNet13 models in two adversarial directions of the same input image $X$. The loss is plotted using $x^* = x + \\varepsilon_1 \\cdot \\text{sign} \\left(\\nabla_{x}~L_1\\right) + \\varepsilon_2 \\cdot \\text{sign} \\left(\\nabla_{x}~L_2\\right)$ (cf. \\cite{TramerKPGBM18}). The red peak is an adversarial spot, which corresponds to a very high loss. The magnitude of the related model's gradients $\\text{sign}\\left(\\nabla_{x}~L_2\\right)$ is controlled by $\\varepsilon_2\\in[-8,32]$, whereas $\\varepsilon_1\\in[-16,16]$ controls the magnitude of an unprotected model's gradient $\\text{sign}\\left(\\nabla_{x}~L_1\\right)$. The six images visualize how the loss values change when the input $x$ is moved in one of these two directions. 
The images in (d) to (f) correspond to our proposed models.}\n \\label{fig:gradient}\n \n \\bigskip\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_vgg_base.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (a) \\textit{Base Model}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_vat_15.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (b) VAT ($\\varepsilon=15$) \\cite{Miyato2015DistributionalSW}\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_pgd_07.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (c) PGD ($I=7$) \\cite{madry2018towards}\n \\end{minipage}\\\\\n \\medskip\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_vibeta_204_10.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (d) VGGNet13 (Ours)\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_resnet32.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (e) ResNet32 (Ours)\n \\end{minipage}\n \\hfill\n \\begin{minipage}[b]{0.325\\textwidth}\n \\begin{overpic}[width=\\textwidth]{figures\/losses\/loss_resnet50.png}\n \\put (30,1) {$X$}\n \\put (80,5) {$\\varepsilon$}\n \\end{overpic}\\\\\n (f) ResNet50 (Ours)\n \\end{minipage}\n \n \\caption{Impact of different images $X$ with additional perturbation on the loss value. The magnitude of the loss is controlled by varying $\\varepsilon$. In comparison to the models from the literature (top row), our approach flattens the decision space significantly as demonstrated by the low loss values -- even for very large values of $\\varepsilon$. 
Note that our proposed models (bottom row) were trained using $\\alpha \\sim B(2, 4)$ and $KL_{\\lambda=10}$.}\n \\label{fig:loss}\n\\end{figure*}\n\n\\begin{table*}[!t]\n \\caption{Accuracy of different networks on the sets of original and adversarial images. All adversarial datasets are crafted using our \\textit{gradient ensemble attack} approach with five unprotected VGGNet13, ResNet32 and ResNet50 models. For the last dataset (last column), we combined three VGG13, ResNet32 and ResNet50 models. The table shows the accuracy of three unprotected models on different adversarial datasets. \n As one can see, multi-step adversaries do transfer between topologies. Further, our proposed method provides resilience against these transferable adversaries.}\n \\label{tab:transfer}\n \n \\centering\n \\begin{small}\n \n \\begin{tabular}{r|cc|cc|cc|cc|cc}\n &\\multicolumn{2}{c|}{\\bf Accuracy on} & \\multicolumn{8}{c}{\\bf Accuracy on Adversarial Dataset}\\\\\n &\\multicolumn{2}{c|}{\\bf Original Data} & \\multicolumn{2}{c|}{\\bf VGGNet13} & \\multicolumn{2}{c|}{\\bf ResNet32} & \\multicolumn{2}{c|}{\\bf ResNet50} & \\multicolumn{2}{c}{\\bf Combined} \\\\\n &Train &Val. & True& Adv. & True & Adv. & True & Adv. 
& True & Adv.\\\\\n \\midrule \\multicolumn{11}{l}{\\textit{Base Models}}\\\\\n VGG13 \\, & \\, 91.2\\% \\, & \\, \\cellcolor{gray!25}\\textbf{73.4}\\% \\, & \\, 96.7\\% \\, & \\, \\cellcolor{gray!25}\\textbf{0.9}\\% \\, & \\, 96.2\\% \\, & \\, 28.3\\% \\, & \\, 96.9\\% \\, & \\, 50.2\\% \\, & \\, 98.4\\% \\, & \\, \\cellcolor{gray!25}\\textbf{0.01}\\% \\\\\n ResNet32 \\, & \\, 89.4\\% \\, & \\, 66.7\\% \\, & \\, 87.9\\% \\, & \\, 67.6\\% \\, & \\, 95.5\\% \\, & \\, \\cellcolor{gray!25}\\textbf{16.0}\\% \\, & \\, 95.3\\% \\, & \\, 44.0\\% \\, & \\, 96.1\\% \\, & \\, \\cellcolor{gray!25}\\textbf{26.1}\\% \\\\\n ResNet50 \\, & \\, 87.5\\% \\, & \\, 62.8\\% \\, & \\, 84.1\\% \\, & \\, 64.9\\% \\, & \\, 92.2\\% \\, & \\, 24.6\\% \\, & \\, 94.7\\% \\, & \\, \\cellcolor{gray!25}\\textbf{30.9}\\% \\, & \\, 94.5\\% \\, & \\, \\cellcolor{gray!25}\\textbf{26.7}\\% \\\\\n \\midrule \\multicolumn{11}{l}{Our Proposed Method with $\\alpha \\sim B(2, 4)$ and $KL_{\\lambda=10}$}\\\\\n VGG13 \\, & \\, 90.6\\% \\, & \\, \\cellcolor{gray!25}\\textbf{74.3}\\% \\, & \\, 95.1\\% \\, & \\, \\cellcolor{gray!25}\\textbf{53.7}\\% \\, & \\, 96.6\\% \\, & \\, 69.6\\% \\, & \\, 97.1\\% \\, & \\, 81.5\\% \\, & \\, 98.1\\% \\, & \\, \\cellcolor{gray!25}\\textbf{32.4}\\% \\\\\n ResNet32 \\, & \\, 83.1\\% \\, & \\, 59.8\\% \\, & \\, 80.3\\% \\, & \\, 67.8\\% \\, & \\, 88.5\\% \\, & \\, \\cellcolor{gray!25}\\textbf{38.5}\\% \\, & \\, 89.5\\% \\, & \\, 56.9\\% \\, & \\, 90.2\\% \\, & \\, \\cellcolor{gray!25}\\textbf{45.1}\\% \\\\\n ResNet50 \\, & \\, 80.4\\% \\, & \\, 62.4\\% \\, & \\, 83.8\\% \\, & \\, 70.3\\% \\, & \\, 91.6\\% \\, & \\, 44.1\\% \\, & \\, 92.9\\% \\, & \\, \\cellcolor{gray!25}\\textbf{56.4}\\% \\, & \\, 93.6\\% \\, & \\, \\cellcolor{gray!25}\\textbf{46.9}\\% \\\\\n \\end{tabular}\n \n \n \\end{small}\n\\end{table*}\n\nFigure \\ref{fig:gradient} illustrates the loss of six different models based on the decision space of a single input image in two adversarial directions -- one taken 
from an unprotected model and the other one in the direction of the related model. As depicted by Figure~\\ref{fig:gradient}~(a), i.e., the image of the \\textit{Base Model}, one can see an adversarial sub-space in close proximity to the input image, as indicated by the high loss value (shown in red). Thus, it only takes a small change in $\\varepsilon$ ($\\varepsilon=6$ is optimal in this particular case) to push the input image $X$ deep into this adversarial sub-space. A similar adversarial sub-space occurs in case of the VAT model as shown in Figure~\\ref{fig:gradient}~(b). Here, the distance is greater, but the sub-space still exists. Hence, it takes a larger $\\varepsilon$ to fool the VAT model.\n\nIn case of our proposed methods, using a fixed $\\alpha$-value is not sufficient either as the blind-spot still exists, as illustrated in Figures~\\ref{fig:gradient}~(d) and (e). Although our proposed methods with a fixed $\\alpha$-value weaken the adversarial spot, they do not eliminate it. The $\\alpha$-value is probably chosen too high in this case, as there is another adversarial spot behind the first one. We found that sampling $\\alpha \\sim B(p,q)$ is essential to flatten the decision space. As demonstrated by Figure~\\ref{fig:gradient}~(f), the decision space of our method with varying $\\alpha$-values is significantly flatter than all the others. In fact, it is even flatter than that of the PGD model -- especially for large $\\varepsilon$-values as illustrated in Figures~\\ref{fig:gradient}~(c) and (f), respectively. \n\nTo verify these findings, we generated 128 adversarial images $X$ (using the FGSM) for each defense method and compared the loss values based on varying $\\varepsilon\\in[0,128]$ (cf. Figure \\ref{fig:loss}). As for the VAT model, adversarial attacks can be highly successful.
In fact, $\\varepsilon=26$ provides the greatest success rate for single-step adversaries, while for the \\textit{Base Model} $\\varepsilon=8$ works best (see Figures \\ref{fig:loss}~(a) and (b)). This shows once again that the adversarial spots are just a little further away compared to the \\textit{Base Model}. This may explain why the VAT model performs rather well on our generated adversaries. However, the VAT model is not substantially more robust against adversarial attacks than the unprotected base model -- it only requires a larger $\\varepsilon$ to fool it.\n\nIn contrast, the PGD model and our proposed method clearly flatten the decision space and thereby strongly reduce the risk of adversarial spots (cf. Figure \\ref{fig:loss}~(c) and (d)). As one can see, adversarial examples occur at a greater distance compared to the \\textit{Base Model} and occur significantly less frequently, as the lower loss values demonstrate. However, there are adversarial sub-spaces which cause a high distraction of the PGD models. In contrast to the PGD model, our model (Virtual Alpha with $p=2$, $q=4$ and $\\lambda=10$) provides a significantly more flattened space, i.e., high loss values rarely occur at all. \n\nNoticeably, for $\\varepsilon<16$ the loss values of our model are significantly higher than the ones of the PGD model. In addition, the loss values grow rapidly for small $\\varepsilon$-values. The high average loss values for small $\\varepsilon$-values may explain \\textit{why} our model has a lower accuracy on our generated adversarial datasets than the PGD model, as the loss values do not need to be maximal in order to indicate misclassification.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis work investigates the effects of adversarial attacks on deep learning networks. It analyzes different strategies for increasing a model's resilience and, thus, countervailing malicious attacks.
The performance of the different defense strategies is compared across large sets of transferable, carefully generated adversaries. Three new approaches to improve resilience against such perturbations are introduced and compared against the state-of-the-art techniques \\textit{Virtual Adversarial Training} and \\textit{Projected Gradient Descent}. In addition, a novel adversarial attack method called \\textit{gradient ensemble attack} is introduced. Further, this work demonstrates the transferability of adversaries that have been crafted using our proposed method.\n\nWithin our investigations we have observed that VAT does not provide substantial resilience against adversarial perturbations, as the adversarial sub-spaces are just pushed a little further away. Moreover, the incidence of these spaces is similar to that of an unprotected model. Further, PGD-trained models reduce the frequency of adversarial sub-spaces and strongly increase the distance to them. Yet, these sub-spaces still occur. Our proposed method, superimposing two images\nand minimizing the KL-Divergence between the two activations, reduces the risk of adversarial sub-spaces with high loss. In fact, our results demonstrate that these spaces rarely occur. However, the average loss value is significantly higher, which explains why our models performed worse on our adversarial test sets. Nevertheless, our proposed method is very promising as it (i) is easy to apply (it only requires an additional forward pass), and (ii) still provides a noticeably regularized decision space.
\n\n\n\nOur ideas for future work are two-fold: (i) we will compare additional methods to further decrease the overall loss of our proposed method and thereby improve its performance on adversaries; (ii) we will investigate the effects of our \\textit{gradient ensemble attack} for crafting strong and transferable adversaries in a wider context -- especially applying it to different white- and black-box attack scenarios.\n\n\\section*{Acknowledgment}\n\nAll three authors acknowledge support by the \\href{https:\/\/www.ercis.org}{\\emph{European Research Center for Information Systems (ERCIS)}}.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction: Prediction of Hemispheric Asymmetry}\n\nIt is well known, starting already with the researches of Rudolf Wolf in the\nnineteenth century, that the sunspot cycle is not strictly periodic. Significant\ncycle-to-cycle variability is observed in both the amplitude and duration\nof the cycle. The hemispheric asymmetry in sunspot coverage\nwas already noticed by \\cite{Waldmeier1955}, who also pointed out that such\nasymmetry could be sustained for several years.\n\\cite{Babcock1959} reported that this asymmetry appears in the case of the polar field reversal as well.\nBased on observational data for cycles 12-23, \\cite{Norton2010} analyzed the asymmetry and proposed 20\\% as upper limit of sunspot area asymmetry. Regarding the phase lag asymmetry for the same period of time, \\cite{McIntosh2013} found a roughly four cycles long periodicity. \\cite{Zolotova2010} investigated the phase difference of sunspot cycles in the hemispheres for a longer duration, back to the Maunder Minimum and found secular variation. 
On the other hand, this periodicity appears only in the sign of the phase lag but not in its magnitude \\citep{Norton2014}.\n\\cite{Hathaway2016} predicted the hemispheric asymmetry of cycle 25 by extrapolating the polar fields of cycle 24 using their Advective Flux Transport model.\nAccording to their results, the Southern hemisphere should dominate the North.\nHowever, from a purely statistical point of view, the available polar field data (e.g., \\citealt{Munoz2012}) is insufficient to infer a significant\ncorrelation from past cycles.\n\n\\cite{Belucz2013} investigated the hemispheric asymmetry generated\nin a 2D flux transport dynamo model by inserting a second meridional circulation cell in the southern hemisphere with different amplitudes, latitudes and depths. They found significant hemispheric asymmetry depending on the properties of the second cell. Using the same model, \\cite{Shetye2015} focused on the effects of the meridional inflow towards the activity belts. According to their results, an intense inflow in one hemisphere leads to stronger toroidal fields, and this asymmetry is sustained for more than one cycle.\n\n\\cite{Karak2017}, using their 3D surface flux transport and Babcock-Leighton solar dynamo model (\\citealt{Miesch2014}; \\citealt{Miesch2016}),\ninvestigated the influence of the tilt angle distribution by adding random scatter to Joy's law; a tilt-angle saturation was also added to the model (on this point\nsee also \\citealt{Lemerle2017}).\nThe SpotMaker algorithm they use places new BMRs in each hemisphere, with a set time\ndelay in order to avoid artificially imposing hemispheric symmetry. One of their results is that the hemispheric asymmetry appearing in the polar fluxes\nis only weakly correlated with the toroidal flux of the subsequent cycle.
Due to the\nstrong diffusive coupling between the hemispheres in the model, the asymmetry reduces quickly.\n\n\nIn the present paper \fix{we extend the work of \citet{Nagy2017} on\nthe hemispheric asymmetry triggered by rogue BMRs. We present a detailed analysis of how rogue BMRs affect the hemispheric asymmetry of the polar cap flux, including prediction of the asymmetry level of the subsequent cycle based on the polar field asymmetry. We also investigate whether the asymmetry in the model shows periodicity or temporal persistence in its characterizing parameters.\nFollowing \citet{Nagy2017}, our analysis is based on simulations carried out\nusing the recent $2\times2$D dynamo model of \citet{Lemerle2015, Lemerle2017}.}\n\n\n\section{Rogue BMRs in the $2\times2$D Dynamo Model}\n\nThe \cite{Lemerle2017} solar cycle model invokes differential rotation shear\nand the regeneration of the solar dipole via surface decay of active regions\n(the so-called Babcock-Leighton mechanism) as its primary inductive mechanisms.\nThis mean-field-like kinematic model\ncouples a 2D surface flux transport module\n(SFT) with a\n2D axisymmetric flux transport dynamo (FTD). The SFT component provides the azimuthally averaged radial field\nserving as the upper boundary condition for the FTD\nsimulation, while the FTD module drives the SFT through the emergence\nof new bipolar magnetic regions (BMRs). This step is based\non a semi-empirical emergence function that sets the\nprobability of a BMR emerging (radially)\nas a function of the toroidal magnetic field $B_t$ at\nthe bottom of the convective zone in the FTD module.\n\fix{Motivated by the modelling of the destabilization and buoyant rise of thin\nmagnetic flux ropes initially located immediately beneath the base of the\nsolar convection zone \citep[see, e.g.,][and references therein]{Fan2009},\na lower threshold\non $B_t$ is introduced, below which the emergence probability\nvanishes. 
The presence of a threshold implies that the dynamo is not self-excited,\nas the internal magnetic field must remain above threshold for regeneration of\nthe surface dipole to take place. Above this threshold, the proportionality constant\n$K$ between emergence probability and\n$B_t^{1.5}$ acts as the dynamo number for the model, since it effectively\nsets the mean surface dipole growth rate for a given internal toroidal\nfield strength.}\nProperties of the\nnew BMR --- emergence latitude, longitude, flux, angular separation, tilt --- are randomly drawn\nfrom distribution functions for these quantities built from observed\nstatistics of solar active regions during solar cycle 21, as described in Appendix A of\n\\citealt{Lemerle2015}.\n\nBecause the model is kinematic and includes a steady quadrupole-like meridional\nflow in the FTD module, the only physical mechanism available to couple\nthe two hemispheres is diffusive transport, operating in both the SFT and FTD modules.\n\nThe only amplitude-limiting nonlinearity included in the model\nis a\nreduction of the average BMR tilt angle $\\alpha$ as a function of the deep-seated\ntoroidal field strength $B_t$, parametrized\naccording to the following ad hoc algebraic formula:\n\\begin{equation}\\label{eq:tiltquenching}\n \\alpha_q = \\frac{\\alpha(\\theta)}{1 + (B_t\/B_q)^2},\n\\end{equation}\nwhere $B_q$ is the quenching field amplitude, and $\\alpha$ is the reference\ntilt variation with latitude, i.e., Joy's law \\citep{McClintock2013}.\n\\fix{Lacking any reliable information on the manner in which the emerging flux ropes\nproducing BMRs disconnect from the underlying toroidal magnetic flux system,\nwe do not implement any flux reduction in the FTD module when emergences\nare introduced in the SFT module.}\n\nThe main advantages of the $2\\times2$D model are its high numerical efficiency\nand the fact that it is calibrated to follow accurately the\nstatistical properties of the real Sun. 
The complete\nlatitude--longitude representation of the simulated solar surface in\nthe SFT component further makes it possible to achieve high spatial\nresolution and account for the effect of individual active region\nemergences.\n\nThe reference solar cycle solution presented in \\citet{Lemerle2017} is\ndefined by 11 adjustable parameters, for which optimal values were obtained\nvia a formal optimization by a\ngenetic algorithm. The algorithm was designed to minimize the differences between the\nspatiotemporal distribution of emergences produced by the model, and\nthe observed sunspot butterfly diagram during cycle 21.\n\nFigure \\ref{fig:refsolution} shows a portion of the reference dynamo solutions\nused for the analyses presented in what follows. The solid lines on the\ntop and middle panels\nshow time series of hemispheric pseudo-sunspot number and polar cap flux, \\fix{which is calculated here as the surface integral of the radial magnetic field over a latitudinal extent of $20^{\\circ}$ from the poles.}\nThe bottom panel shows the corresponding time-latitude ``butterfly''\ndiagram for the spatial density of emerging BMRs, encoded as a grayscale.\nThe red dot indicates the time and latitude at which a large ``rogue''\nBMR emerged in this simulation,\nits properties being listed in the first row of Table \\ref{tab:BMRs}.\nArtificially removing this single BMR from the simulation leads to\na markedly different subsequent evolution of the dynamo solution,\nas shown by the dashed time series on panels (a) and (b)\nof Figure \\ref{fig:refsolution}.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{Refcase_SSN.pdf}\\\\\n \\includegraphics[width=\\textwidth]{Refcase_PC.pdf}\\\\\n \\includegraphics[width=\\textwidth]{Refcase_Bfly.pdf}\\\\\n \\caption{A short segment of the reference dynamo solution used for the analyses\ndiscussed in the present paper.\nPanel (a) shows the pseudo sunspot number time series separately for the hemispheres.\nThe 
solid lines show the reference simulation run.\nOn panel (b) hemispheric time series of the polar cap flux are plotted, in\nred and blue for the Northern and Southern hemispheres, respectively.\nOn both of these panels dashed lines\npertain to an experiment\nin which a single large ``rogue'' BMR was removed from the simulation at the time indicated\nby the vertical black dashed line.\nPanel (c) shows the pseudo-sunspot butterfly diagram of the reference simulation plotted\nas solid lines in panels (a) and (b).\nThe gray scale encodes the density of emerging BMRs, and\nthe red dot indicates the position of the rogue BMR removed from the simulation\nto yield the dashed time series in (a) and (b).\n}\label{fig:refsolution}\n\end{figure}\n\nThe reference solution plotted on Figure \ref{fig:refsolution}\nis the same as adopted in the numerical experiments of\n\citet{Nagy2017}, where the impact of individual ``rogue'' BMRs was analyzed. These peculiar active regions were identified based on their contribution to the global dipole moment, defined as follows:\n\begin{equation}\n \delta D_{\mathrm{BMR}} \approx F \,d \, \sin\alpha \, \sin\theta,\n \label{eq:thenumber}\n\end{equation}\nwhere $F$ is the magnetic flux, $d$ is the\nangular separation of the two polarities, $\alpha$ is the tilt angle, and $\theta$\nis the colatitude. 
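As a quick numerical check of this expression, the tabulated contribution of the test-BMR in the second row of Table 1 can be reproduced. This is only a sketch, not model code, and it makes two reading assumptions: the angular separation $d$ is converted to radians, and $\theta$ is taken as the mean colatitude of the two polarities.

```python
import math

def delta_d_bmr(flux, sep_deg, tilt_deg, colat_lead_deg, colat_trail_deg):
    """Contribution of a BMR to the global dipole moment, Eq. (2):
    deltaD ~ F * d * sin(alpha) * sin(theta).
    Assumption: theta is the mean colatitude of the two polarities."""
    theta = math.radians(0.5 * (colat_lead_deg + colat_trail_deg))
    d = math.radians(sep_deg)
    return flux * d * math.sin(math.radians(tilt_deg)) * math.sin(theta)

# Second row of Table 1: F = -1.39 (in units of 1e23 Mx, trailing polarity),
# d = 30.97 deg, alpha = 13.98 deg, colatitudes 89.5 and 82.1 deg.
dD = delta_d_bmr(-1.39, 30.97, 13.98, 89.5, 82.1)
print(f"deltaD_BMR = {dD:.4f} x 10^23 Mx")  # -0.1810, as in Table 1
```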
According to this expression, BMRs with high flux content, large tilt angle and wide angular separation, emerging close to the equator, have the strongest influence on the build-up of the dipole moment, and therefore on the strength of the subsequent cycle, \fix{as suggested by \citet{Jiang2015} as an explanation for the low amplitude of Cycle 24}.\n\n\begin{figure}[t!]\n \centering\n \begin{minipage}{0.55\textwidth}\n \includegraphics[width=0.95\linewidth]{DeltaTheta_andothers.pdf}\n \end{minipage}\n \begin{minipage}{0.44\textwidth}\n \caption{Average effect of varying the properties of a BMR (2nd data\nrow of Table \ref{tab:BMRs}), inserted in the simulations at cycle\nmaximum, on the amplitude of the subsequent cycle. Variations in BMR\nflux (green), tilt angle (blue) and polarity separation (orange) are\nconverted to the contribution to the dipole moment according to\nEquation (\ref{eq:thenumber}), while the varying colatitudes (red) are\nshown on the top axis (adapted from Figure 11 in \citealt{Nagy2017}).}\label{fig:summarize}\n \end{minipage}\n\end{figure}\n\n\citet{Nagy2017} carried out\nseveral numerical experiments aiming to study how the parameters of BMRs affect the next, or even the ongoing, cycle. A selected ``test'' BMR\nwith specified properties ($F$, $d$, etc.)\nwas inserted manually into simulations while the parameters of the active region were varied separately during each experimental series. The test-BMR emerged spontaneously during the reference simulation with the parameters listed in the second row of Table \ref{tab:BMRs}. The experiments were performed for three cycles with average, below-average and above-average amplitudes, respectively.\nIn each case two series of experiments were carried out with a Hale (anti-Hale) test-BMR in order to increase (decrease) the dipole moment of the examined cycle. 
The characteristics of the\ntest-BMR -- emergence time and latitude, flux, tilt angle and angular separation -- were changed one by one in order to map the impact of each property on the subsequent simulated cycle.\nThe results of these experiments are summarized in Figure \ref{fig:summarize}.\nBy jointly varying the flux, tilt angle or separation of the test-BMR, similar results were obtained for the amplitude of the upcoming cycle. This is because\nthese quantities have a combined effect in the form of Equation (\ref{eq:thenumber}). The impact of the emergence latitude is also shown in Figure \ref{fig:summarize} (red curve, top axis).\nThe effect of the BMR decreases as a function of the emergence latitude, but is still significant $20^{\circ}$ away from the Equator. The emergence epoch is also an important factor; the strongest impact on the subsequent cycle is expected when the test-BMR appears at cycle maximum,\nand it diminishes gradually as the rogue BMR is forced to emerge later and later in the descending\nphase of the cycle.\nWhen the emergence occurs during the rising phase of the perturbed cycle, it modifies the ongoing cycle as well.\n\n\begin{table}[t!]\n \center\n \caption{Parameters of active regions discussed in the\npaper. Colatitudes $\theta_{\mathrm{lead}}$ and\n$\theta_{\mathrm{trail}}$ are the\n latitudinal positions of leading and trailing polarities; $F$ is\nthe flux of the trailing polarity ($F_{\mathrm{trail}} =\n-F_{\mathrm{lead}}$); $\alpha$ is the tilt angle and $d$ is the\nangular separation of leading and trailing polarities. $\delta\nD_{\mathrm{BMR}}$, the contribution of the BMR to the global dipole\nmoment, is defined according to Equation (\ref{eq:thenumber}). J\/H\nindicates whether the active region is (anti-)Joy\/(anti-)Hale. In the\ncase of the second row a J\/H (J\/a-H) test-BMR increases (decreases)\nthe dipole moment during the experiments detailed in Section 5 of\n\citet{Nagy2017}. 
}\\label{tab:BMRs}\n \\begin{tabular}{ c c c c c c c }\n \\hline\n $\\theta_{\\mathrm{lead}}$ & $\\theta_{\\mathrm{trail}}$ & $F $ [$10^{23}$ Mx] & $\\alpha$ & $d$ & $\\delta D_{\\mathrm{BMR}}$[$10^{23}$ Mx] & J\/H \\\\\n \\hline\n 112$^{\\circ}$ & 118.6$^{\\circ}$ & 4.39 & $-11.08^{\\circ}$ & 34.08$^{\\circ}$ & $-0.5124$ & J\/H\\\\\n 89.5$^{\\circ}$ & 82.1$^{\\circ}$ & --1.39 & 13.98$^{\\circ}$ & 30.97$^{\\circ}$ & --0.1810 & J\/H \\\\\n & & & & & & J\/a-H \\\\\n \\hline\n \\end{tabular}\n\n\\end{table}\n\n\n\\section{Hemispheric Asymmetry due to Rogue BMRs}\n\nAs proposed by \\cite{Hathaway2016} in the context of variation in the surface\nmeridional flow, the hemispheric asymmetry of a solar cycle originates from the polar cap flux asymmetry during the preceding cycle. \\citet{Nagy2017} analyzed whether the polar cap flux asymmetry is a good indicator of the upcoming simulated cycle's asymmetry in the $2\\times2$D model.\n\nComparison of the solid and dashed lines on Figure \\ref{fig:refsolution}(a) and (b) indicates that a single, large BMR with unusual characteristics\ncan have a large and persistent impact on\nhemispheric asymmetry in the resulting dynamo solution.\n\\fixx{The top panel on Figure \\ref{fig:synopticRogue} shows a synoptic magnetogram extracted from a simulation in which the rogue BMR listed in the first row of Table \\ref{tab:BMRs} was inserted at simulation time $t = 1992.72$, one cycle before the strongly asymmetric cycle that we will discuss in the next section.\nThis snapshot is extracted six months after emergence of the rogue BMR.\nThis\nis an anti-Hale BMR, with polarity ordering opposite to that which should\ncharacterize its hemisphere, so that the emergence impedes the build up\nof the Southern polar fields. 
The poleward surge of positive polarity (red)\ncan be seen quite clearly.\nFor comparison, the bottom\npanel shows a second synoptic magnetogram, extracted at the same time in\na parent simulation without artificial insertion of the rogue BMR.\nComparing the two snapshots reveals how\nthe positive trailing flux of the decaying rogue has strongly\naltered\nthe pattern of magnetic flux transport to the Southern polar regions.\nThis pattern is qualitatively similar to that highlighted by\n\\citet{Upton2018}, with the large\nactive region AR12192\nemerging in October 2014 and producing a poleward stream of positive\nmagnetic polarity which weakened the buildup of the southern polar cap\nnegative magnetic field.\n}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{FrameT-400rogue5952-6950.pdf}\n \\includegraphics[width=\\textwidth]{FrameTnorogue5952-6950.pdf}\n \\caption{\\fixx{On the top we plot the synoptic magnetogram of the rogue BMR listed in the first row of Table \\ref{tab:BMRs} at central longitude $\\phi\\sim360^{\\circ}$ and simulation time $t = 1992.72$, six months after the time of emergence (see the corresponding time series in Figure \\ref{fig:refsolution} and \\ref{fig:magnetogr}).\nThe bottom panel shows the magnetic field distribution without the emergence of this active region at the same epoch. Here (and on Fig.~\\ref{fig:magnetogr} following), the color scale is strongly saturated to make the weaker magnetic fields visible.\n}}\\label{fig:synopticRogue}\n\\end{figure}\n\nFigure \\ref{fig:magnetogr}\noffers another example of this effect, for a different rogue event\nand now in the form of time-latitude maps\nof the zonally-averaged\nsurface radial magnetic field component. 
The top panel corresponds to the\nreference solution, while the bottom panel shows the magnetogram resulting from the\nartificial removal of a single large BMR at the time indicated by the vertical dashed line.\nNote in particular how the reversal of the polar field occurs almost\nsimultaneously in both hemispheres when the rogue BMR is removed, while in the\noriginal reference solution the southern polar cap reverses polarity some two years\nprior to the northern hemisphere.\n\n\begin{figure}\n \centering\n \includegraphics[width=\textwidth]{Refcase_magnetogr.pdf}\\\n \includegraphics[width=\textwidth]{Refcase_magnetogr_norogureBMR.pdf}\\\n \caption{The top panel shows a synoptic magnetogram of the surface radial magnetic\nfield component in the reference solution of Figure \ref{fig:refsolution}. The bottom panel\nshows the synoptic magnetogram resulting from removal of the rogue BMR at the\ntime indicated by the vertical dashed line, corresponding to the dashed time\nseries on panels (a) and (b) of Figure \ref{fig:refsolution}.}\label{fig:magnetogr}\n\end{figure}\n\fix{Note that here the polar cap flux peaks 2--3 years prior to SSN minimum,\nwhile in\nthe case of the sun this peak usually occurs somewhat closer to cycle minimum.\nHowever, in our model as in the sun,\nthe peak polar cap flux does turn out\nto be a good predictor of the SSN amplitude of the subsequent cycle, so we retain\nit as a measure of cycle dipole strength in all analyses that follow.}\n\nTo quantify the level of asymmetry, the normalized asymmetry of the peak polar cap flux ($\Delta_{\Phi}$) produced during the cycles is\ncompared to two asymmetry measures characterizing the subsequent cycles. 
These measures are the asymmetry of the total number of emergences in each hemisphere ($\\Delta_{\\mathrm{SSN}}$), and the time delay between the epochs when the new cycle BMRs first\nstart to emerge in the North and the South ($\\Delta_{T}$).\n\nThe asymmetry of the polar cap flux at a given cycle is defined as follows:\n \\begin{equation}\n \\Delta_{\\Phi} = \\frac{|\\Phi_{\\mathrm{N},\\mathrm{max}}| - |\\Phi_{\\mathrm{S},\\mathrm{max}}|} {( |\\Phi_{\\mathrm{N},\\mathrm{max}}| + |\\Phi_{\\mathrm{S},\\mathrm{max}}| )\/2},\n \\end{equation}\n \\noindent where $\\Phi_{\\mathrm{N},\\mathrm{max}}$ ($\\Phi_{\\mathrm{S},\\mathrm{max}}$) is the northern (southern) polar cap flux maximum.\nThe asymmetry of the activity level is defined similarly, but in\nterms of the pseudo-sunspot number constructed from the model output:\n \\begin{equation}\n \\Delta_{\\mathrm{SSN}} = \\frac{\\Sigma \\mathrm{SSN}_{\\mathrm{N}} - \\Sigma \\mathrm{SSN}_{\\mathrm{S}} } {( \\Sigma \\mathrm{SSN}_{\\mathrm{N}} + \\Sigma \\mathrm{SSN}_{\\mathrm{S}} )\/2},\n \\end{equation}\n \\noindent where $\\Sigma \\mathrm{SSN}_{\\mathrm{N}}$ ($\\Sigma \\mathrm{SSN}_{\\mathrm{S}}$) is the total number of emergences in the northern (southern) hemisphere.\nFinally, the time lag between the hemispheres is defined as:\n \\begin{equation}\n \\Delta_{T} = \\frac{t_{\\mathrm{N}} - t_{\\mathrm{S}}} {( T_{\\mathrm{N}} + T_{\\mathrm{S}} )\/2},\n \\end{equation}\n \\noindent where $t_{\\mathrm{N}}$ ($t_{\\mathrm{S}}$) is the beginning epoch of the cycle, while $T_{\\mathrm{N}}$ ($T_{\\mathrm{S}}$) is the duration of the cycle on the North (South).\n\nUpon calculating these asymmetry measures for 540 simulated cycles, \\citet{Nagy2017} found strong anticorrelations between the polar cap flux asymmetry of cycle $i$ and time delay, $\\Delta_{\\mathrm{T}}$ ($r = -0.7174$)\nduring cycle $i+1$.\nIn the case of asymmetry in number of emergences of cycle $i+1$, $\\Delta_{\\mathrm{SSN}}$ the correlation coefficient is $r = 
0.7430$, as shown in Figure \ref{fig:asymmetry500cycles}. This result shows that in the model the asymmetry of a cycle can be predicted via the asymmetry of the polar cap flux built up during the previous cycle.\n\n\n \begin{figure}[t!]\n \centering\n \begin{minipage}[b]{1\textwidth}\n \includegraphics[width=0.5\textwidth]{HemisphericAsymm_crossSSN.pdf}\n \includegraphics[width=0.5\textwidth]{HemisphericAsymm_crossT.pdf}\n \end{minipage}\n \begin{minipage}[b]{1\textwidth}\n \caption{\small{ Two-dimensional histograms of the asymmetry of the hemispheric total\n pseudo-SSN (\textbf{left}) and the time lag between North and South (\textbf{right})\n in pseudo-solar cycle $i$ against the polar cap flux asymmetry during the\n previous cycle for 540 simulated cycles. Some outlier data have been removed.\n The number of cases (cycles) in each bin is\n indicated by the colour codes. The correlation coefficients are\n $r_{\Delta SSN} = 0.7430$ and $r_{\Delta T} = -0.7174$, respectively (adapted from Figure 6 in \citealt{Nagy2017}).\n }}\label{fig:asymmetry500cycles}\n \end{minipage}\n \end{figure}\n\nIn order to assess the persistence of hemispheric asymmetry, we separate the\nsimulated cycles into two groups according to hemispheric dominance, as measured\nby the quantity $\Delta_{\rm SSN}$ introduced above. For each group, we then\nconstruct the histograms of $\Delta_{\rm SSN}$ values characterising the\ncycle following each member of the group. The resulting histograms are plotted\nin Figure\n\ref{fig:PeriodHist}. Both are very well fit by Gaussians centered on\n$\Delta_{\rm SSN}=0$, with deviations from zero at the $10^{-2}$ level\nand standard deviations $\sim 0.4$. 
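The grouping-and-histogram exercise just described is easy to mimic with synthetic draws, to illustrate what the no-persistence outcome looks like. This is a sketch only: the Gaussian width of 0.4 and the sample size of 540 are taken from the text, but the draws are random numbers, not simulation output.

```python
import random
import statistics

# Stand-in for 540 simulated cycles: memoryless hemispheric dominance,
# Delta_SSN drawn i.i.d. from a Gaussian of width 0.4.
random.seed(1)
delta_ssn = [random.gauss(0.0, 0.4) for _ in range(540)]

# Group each cycle's successor by the sign of the current cycle's dominance.
after_north = [delta_ssn[i + 1] for i in range(len(delta_ssn) - 1) if delta_ssn[i] > 0]
after_south = [delta_ssn[i + 1] for i in range(len(delta_ssn) - 1) if delta_ssn[i] < 0]

# With no memory, both conditional means stay near zero (at the ~1e-2 level).
print(round(statistics.mean(after_north), 3), round(statistics.mean(after_south), 3))
```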
This indicates that the probability of\nfinding a given hemispheric dominance in cycle $n+1$ is independent of\nhemispheric dominance in cycle $n$, and thus that hemispheric dominance is\ndetermined by processes operating within individual cycles.\n\begin{figure}[t!]\n \centering\n \begin{minipage}{\textwidth}\n \centering\n \includegraphics[width=0.4\linewidth]{hist_SSN_afterN.pdf}\n \includegraphics[width=0.4\linewidth]{hist_SSN_afterS.pdf}\n \end{minipage}\n \begin{minipage}{\textwidth}\n \caption{ Strength asymmetry histograms of cycles following a North (South) dominated cycle on the left (right) panel. The histograms show the distribution of asymmetry\nparameters for all cycles following a Northern-dominated cycle (left) or\nSouthern-dominated cycle (right). The distributions are approximately Gaussian,\nwith means and standard deviations of, respectively,\n$0.012$ and $0.36$ in the left panel, and $-0.008$ and $0.41$ in the right panel.\nThis indicates that hemispheric dominance shows no significant\npersistence from one cycle to the next, at least in the (optimal) parameter regime\nof this simulation run.\n}\label{fig:PeriodHist}\n \end{minipage}\n\end{figure}\nWe repeated this exercise, this time constructing the distribution of asymmetry\nparameters two cycles in the future instead of one, in order to possibly detect\npersistence of hemispheric asymmetry associated with one magnetic polarity\ndominating over the other for a few subsequent cycles, a feature sometimes\nobserved in the \cite{Lemerle2017} dynamo solutions. Once again the means\nof the distributions are very close to zero, with standard deviations $\sim 0.4$,\nindicating that hemispheric asymmetry does not persist beyond\none cycle in this model.\n\nBased on their numerical experiments, \citet{Nagy2017} identified another interesting effect triggered by the emergence of a rogue BMR. After such an emergence in one cycle, the next cycle tends to be strongly asymmetric. 
This phenomenon was analyzed using a new test-BMR, described in the first row of Table \ref{tab:BMRs}, which emerged on the southern hemisphere at cycle maximum, as indicated by the vertical dashed line in Figure \ref{fig:casestudy}. According to the previous results, the AR's impact on the upcoming cycle is the strongest at this epoch. On the other hand, the original position of the BMR is rather far from the Equator.\nFor this reason, during the experimental runs the test region was relocated to emerge closer, about $15^{\circ}$ from the equator, within the region where a significant effect was observed during the first experimental series.\nAt this position the active region's flux was decreased from about $4\cdot10^{23}$ Mx down to $2.19\cdot10^{23}$ Mx.\nThe black solid line in the top panel of Figure \ref{fig:casestudy} shows the reference case when the BMR emerged at the original position, while the black dashed line shows the case when the BMR was removed from the simulation. The coloured dashed lines show how the asymmetry changed for various values of the test-BMR's flux, as color-coded.\nOne can see that the asymmetry changes with the flux of the test region. There are slight changes in the amplitude of the northern hemisphere as well, due to the\ndiffusive hemispheric coupling in the model.\nThe bottom panel of Figure \ref{fig:casestudy} shows that the hemispheric asymmetry already appears in the form of polar cap flux asymmetry during the cycle within which\nthe test-BMR emerges.\n\n\begin{figure}[t!]\n \centering\n \includegraphics[width=\textwidth]{cycle_03_hemSSN_BMRonS.pdf}\\\n \includegraphics[width=\textwidth]{cycle_03_PC_BMRonS.pdf}\\\n \caption{The \textbf{top} panel shows how a test-BMR can modify the amplitude of the subsequent cycle, separately in each hemisphere. 
The properties of this region are listed in the first row of Table \ref{tab:BMRs}.\n Black solid curves indicate the reference solution, with a ``rogue'' BMR emerging\n$25^{\circ}$ away from the Equator, at the time indicated by the vertical dashed line.\nThe black dashed curves refer to a\nmodified simulation in which this ``rogue'' BMR is removed. Colored curves indicate the results with test-BMRs of different flux, $15^{\circ}$ from the Equator, as labeled.\nOn the \textbf{bottom} panel the polar cap flux is shown. Solid and dot-dashed lines indicate the southern polar cap flux, while dashed lines correspond to the northern polar cap flux.\n}\label{fig:casestudy}\n\end{figure}\n\nThe correlation between the polar cap flux asymmetry triggered by the test-BMR and the asymmetry parameters of the subsequent cycle is plotted in Figure \ref{fig:asymmetry_seed512}. Besides the results for both hemispheres of the reference cycle, we plot results for five more cycles that were studied using the same test-BMR emerging $15^{\circ}$ from the equator at cycle maximum.\nWhen the BMR is inserted in the northern hemisphere, its tilt and polarity are set to obey Joy's and Hale's laws. Considering all six experimental runs, the correlation coefficient between the polar cap flux asymmetry during the perturbed cycle ($\Delta_{\Phi,i-1}$) and the asymmetry of the number of emerged BMRs during the next cycle ($\Delta_{\mathrm{SSN},i}$) is 0.8431. 
In the case of the time lag between the hemispheres ($\Delta_{T,i}$) this correlation is $-0.8029$.\n\n \begin{figure}[t!]\n \centering\n \begin{minipage}[b]{1\textwidth}\n \includegraphics[width=0.51\textwidth]{crossplot_AsymmCorr_SSN.pdf}\n \includegraphics[width=0.51\textwidth]{crossplot_AsymmCorr_TL.pdf}\n \end{minipage}\n \begin{minipage}[b]{1\textwidth}\n \caption{\small{ Correlation plots between the asymmetry of the hemispheric total\n pseudo-SSN (\textbf{left}) and the time lag between North and South (\textbf{right}) in\n pseudo-solar cycle $i$ against the polar cap flux asymmetry during the\n previous cycle. Colored markers correspond to the example shown in Figure \ref{fig:casestudy} and to the corresponding experimental series for the northern hemisphere (red markers). Gray markers show the same series for five more cycles in the same simulation. The correlation coefficients are\n $0.9463$ and $-0.9158$, respectively, and $0.8431$ and $-0.8029$ when using\ndata from the combined six experiments. }}\label{fig:asymmetry_seed512}\n \end{minipage}\n \end{figure}\n\n\nDiffusive transport is the only cross-equatorial coupling mechanism operating\nin the \cite{Lemerle2017} dynamo model used to carry out the various experiments\ndescribed above. The leading member of a BMR emerging $15^\circ$ from the equator\nwill diffuse to the equator on a timescale $\tau=(\pi R\/12)^2\/\eta_R\simeq 2\,$yr\nfor the surface diffusivity value $\eta_R=6\cdot 10^{12}\,$cm$^2\,$s$^{-1}$\nused in the SFT module. On the other hand, the internal toroidal field\nat $15^\circ$ will diffuse to the equator on a timescale controlled\nby the internal magnetic diffusivity $\eta_t=10^{12}\,$cm$^2\,$s$^{-1}$ of the\nFTD module,\nleading to a timescale $\tau\simeq 12\,$yr. 
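These two timescale estimates can be checked with a quick back-of-envelope computation. This is a sketch only: the adopted solar radius and the length of a year are assumptions of the snippet, so the results match the quoted $\simeq 2$ yr and $\simeq 12$ yr only to within rounding.

```python
import math

# Diffusive coupling timescale tau = (pi*R/12)^2 / eta for an emergence
# 15 degrees (= pi/12 rad of arc) from the equator.
R_SUN = 6.96e10      # cm (assumed solar radius)
ETA_SURF = 6.0e12    # cm^2/s, SFT surface diffusivity (from the text)
ETA_INT = 1.0e12     # cm^2/s, FTD internal diffusivity (from the text)
YEAR = 3.156e7       # s

arc = math.pi * R_SUN / 12.0            # 15 degrees of arc, in cm
tau_surf = arc**2 / ETA_SURF / YEAR     # ~2 yr
tau_int = arc**2 / ETA_INT / YEAR       # ~11 yr, i.e. of order a full cycle

print(f"surface: {tau_surf:.1f} yr, internal: {tau_int:.1f} yr")
```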
The first timescale\nindicates that a rogue\nBMR emerging close to the equator can induce a polar cap asymmetry in the ongoing\ncycle, in agreement with the experimental results displayed\non Fig.~\ref{fig:asymmetry_seed512};\nwhile the second timescale reveals that this asymmetry,\nonce it has built up in the internal toroidal field, can persist over\na full cycle before being diffusively balanced. Periodicity in hemispheric\nasymmetry cannot be driven or sustained by such diffusive coupling alone, and\nindeed is not observed in our dynamo simulations. Dynamical backreaction on\nlarge-scale flows, namely meridional circulation and differential rotation,\nwould be the most likely candidate mechanism that could lead to such behavior.\n\n\section{Conclusion}\n\nThe hemispheric asymmetry triggered by rogue active regions was studied using simulated data of the $2\times2$D solar dynamo model. The flux of a selected test-BMR was changed while its position was fixed at $15^{\circ}$ from the Equator on either\nthe northern or the southern hemisphere. The emergence epoch was the cycle maximum, while the polarity was set so as to increase the building dipole moment. Experimental series were carried out for six simulated cycles of varying amplitudes.\n\nIn contrast to the results of \cite{Karak2017}, we found a strong correlation between the hemispheric asymmetry of the polar cap flux of cycle $i$ and the asymmetry of hemispheric activity levels\nduring the subsequent cycle $i+1$. The time lag between the hemispheres\nin the onset of cycle $i+1$ is also strongly correlated to the asymmetry of the\npolar cap flux of the preceding cycle. 
These results can be understood in terms\nof diffusive coupling of the magnetic field across the equatorial plane.\n\nOur results thus demonstrate that\nthe polar cap flux asymmetry at the end of a cycle can be determined\nby a single peculiar active region, emerging relatively close to the equator.\nThis offers an alternate scenario to that suggested by \cite{Hathaway2016},\nbased on hemispheric variations in the surface meridional flow, which,\nin our kinematic dynamo model, remains strictly constant.\nIn view of the relatively strong\nhemispheric asymmetry observed in cycle 24, the unfolding of cycle 25 may\nallow us to discriminate between these two scenarios.\n\n\section*{Acknowledgements}\nM.N.'s research is currently supported by the \'UNKP-18-3 New National Excellence Program of the Ministry of Human Capacities.\nP.C. and A.L. are supported through the Discovery Grant Program of the\nNatural Sciences and Engineering Research Council of Canada.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction\label{&Intro}}\n\nThe symmetries of the Euler-Lagrange equations of motion\nwere recently used to study the constrained dynamics of singular\nLagrangians \cite{ADS2020}. The focus was on almost regular\nLagrangians \cite{Got1978, Got1979, Got1980, Car1990a}, and it was\nfound that for these Lagrangians the Euler-Lagrange equations of motion \nadmit a generalized Lie symmetry (also known as a local gauge\nsymmetry). The generators $\mathcal{S}\hbox{ym}$ of this symmetry group\n$\hbox{Gr}_{\mathcal{S}\hbox{ym}}$ were determined in the Lagrangian\nphase space approach to Lagrangian mechanics, and were found to lie\nin the kernel of the Lagrangian two-form $\mathbf{\Omega}_L$. 
While it\nis well known that the solution $\mathbf{X}_E$ of the energy equation, \n\begin{equation}\n 0=\mathbf{d}E-i_{\mathbf{X}_E}\mathbf{\Omega}_L,\n \label{EnergyE}\n\end{equation}\nis not unique for almost regular Lagrangians, it was shown in \cite{ADS2020} that\nthe action of $\mathcal{S}\hbox{ym}$ on a general solution \nto this equation\textemdash and in particular, on the \textbf{second-order,\n Lagrangian vector field} (SOLVF)\textemdash will result in a vector\nfield that is no longer a solution of Eq.~$(\ref{EnergyE})$. Thus, not all\nsolutions of the energy equation have\n$\hbox{Gr}_{\mathcal{S}\hbox{ym}}$ as a symmetry group. It is,\nhowever, possible to construct \nsolutions to Eq.~$(\ref{EnergyE})$ for which $\mathcal{S}\hbox{ym}$\ndoes generate a group of symmetry transformations\n\cite{ADS2020}. These vector fields are called \textbf{second-order,\n Euler-Lagrange vector fields} (SOELVFs). As the \nevolution of the dynamical system for singular Lagrangians must lie on\nLagrangian constraint surfaces \cite{Car1990a}, a Lagrangian constraint\nalgorithm for SOELVFs was also introduced in \cite{ADS2020} to\nconstruct such solutions to the energy equation. It was\nthen shown that these SOELVFs, along with the dynamical structures\nin the Lagrangian phase space needed \nto describe and determine the motion of the dynamical system, are projectable to the\nHamiltonian phase space. In particular, the primary\nHamiltonian constraints can be constructed from vectors that lie in the kernel of\n$\mathbf{\Omega}_L$, and the Lagrangian constraint\nalgorithm for the SOELVF is equivalent to the stability analysis of \nthe total Hamiltonian (we follow the terminology found in\n\cite{Hen1992}; see also \cite{Dir1950, Mun1989, Lus2018}) obtained\nusing constrained Hamiltonian mechanics. 
Importantly, the end result\nof this stability analysis gives a Hamiltonian vector field that is the \nprojection of the SOELVF obtained from the Lagrangian constraint\nalgorithm. The Lagrangian and Hamiltonian formulations of mechanics\nfor almost regular Lagrangians were thereby shown to be equivalent. \n\nWhile \\cite{ADS2020} focused on the generalized Lie symmetries of the\nEuler-Lagrange equations of motion and whether the dynamical\nstructures constructed in the Lagrangian phase space are projectable to\nthe Hamiltonian phase space, in this paper the focus is on the symmetries\nof the action itself and the impact these symmetries have on the\nevolution of dynamical systems. This impact is found to be quite broad,\nsurprisingly restrictive, and unexpectedly subtle. Indeed, even the\nseemingly reasonable expectation that any generalized Lie symmetry of\nthe Euler-Lagrange equations of motion should be a reflection of the\nsymmetries of the action itself is not borne out.\n\nWe find that if the action has a generalized Lie\nsymmetry, then its Lagrangian is necessarily singular; the converse\nneed not be true, as we show through a specific example. We also find\nthat the generators of the generalized Lie symmetry of the action form\na Lie sub-algebra of the generators of the\ngeneralized Lie symmetry of the Euler-Lagrange equation of motion;\nonce again, the converse is not true. We give an example of a dynamical\nsystem for which the Euler-Lagrange equations of motion has a\ngeneralized Lie symmetry, while its action does not. Most importantly,\nfor systems where the Lagrangian is almost regular and for \nwhich the two-form $\\mathbf{\\Omega}_L$ has constant rank, we show that\neach generalized Lie symmetry of the action contributes one\narbitrary constant to the SOELVF. 
The dimensionality of the space of\nsolutions to the energy equation that have\n$\hbox{Gr}_{\mathcal{S}\hbox{ym}}$ as a symmetry group is thus at\nleast as large as the number of generalized Lie symmetries of the\naction. Moreover, if second- or higher-order Lagrangian\nconstraints are introduced during the application of the Lagrangian\nconstraint algorithm, these additional constraints cannot be due to the\ngeneralized Lie symmetry of the action. \n \nSymmetries of Lagrangian systems have been studied before. However,\nsuch analyses have focused on time-dependent Lagrangians \cite{Pri1983, Pri1985, \nCra1983, Car1991, Car1988b, Car1992, Car1993, Car2003}; on systems of\nfirst-order evolution equations \cite{Car1990b, Mar1992, Gra2002, Gra2005,\n Pop2017}; or on general solutions of Eq.~$(\ref{EnergyE})$ \n\cite{deL1995} (see also \cite{Dim2016}). Importantly, the great majority of these\nstudies have been done using first-order prolongations on\nfirst-order jet bundles with a focus on the Lie symmetries of first-order\nevolution equations. Our interest is in the symmetries of the action, which\nnaturally leads us to consider generalized Lie symmetries and\nsecond-order prolongations. To our knowledge, such symmetry analysis\nof the action has not been done before. (The framework for\n$k^{\hbox{\textit{th}}}$-order prolongations on\n$k^{\hbox{\textit{th}}}$-order jet bundles has been introduced before\n\cite{Car1993, deL1995, Car2003, Pop2009, Pop2011}, but it was not applied\nto the action or to the Euler-Lagrange equations of motion.)
In\n\textbf{Section \ref{&review}} properties of the Lagrangian phase\nspace are reviewed, and the notation used here is established. The\ngenerators of the generalized Lie symmetry group for the\nEuler-Lagrange equations of motion were determined in \cite{ADS2020},\nand a summary of the results found therein that are needed here is\ngiven. In \textbf{Section \ref{&A-S}} the generators of the\ngeneralized Lie symmetry group for the action are found within the\nLagrangian phase space approach, and their relation to the\ngenerators for the symmetry group of the Euler-Lagrange equations of\nmotion is determined. The impact of the symmetries of the action on\nthe SOELVF is then analyzed by applying the \nLagrangian constraint algorithm introduced in \cite{ADS2020} to these\nSOELVFs. The results obtained in this paper are then applied to three different\ndynamical systems in \textbf{Section \ref{&Exam}}. In particular, an\nexample of a dynamical system that has no generalized Lie symmetries\nand yet is still singular, and another example where the action has no\nsymmetries and yet the Euler-Lagrange equations of motion do, are\ngiven. Concluding remarks can be found in \textbf{Section \ref{&Conc}}. \n\n\section{Generalized Lie symmetries and Lagrangian Mechanics\label{&Sym}}\n\nIn this section we determine the conditions under which the action of\na dynamical system, and the conditions under which the Euler-Lagrange\nequations of motion for this system, have a generalized Lie\nsymmetry. While the determination for both is done within Lagrangian\nmechanics, the analysis for the action is completed separately from that of\nthe equations of motion\textemdash with each self-contained\textemdash so \nthat the two conditions can be compared. We will later show that every\ngenerator of the generalized Lie symmetry of the action is a\ngenerator of a generalized Lie symmetry of the Euler-Lagrange\nequations of motion. Interestingly, the converse is not true. 
\n\n\\subsection{Symmetries of the Action \\label{&A-Sym}}\n\nWe begin with Lagrangian mechanics, and an analysis of the\ngeneralized Lie symmetry \\cite{Olv1993} of the action \n\\begin{equation}\n S := \\int_{t_1}^{t_2}L\\left(q(t),\\dot{q}(t)\\right)dt,\n \\nonumber\n\\end{equation}\nfor a dynamical\nsystem on a $D$-dimensional configuration space $\\mathbb{Q}$. Here,\n$L\\left(q(t), \\dot{q}(t)\\right)$ is the Lagrangian along a path $q(t)\n= \\left(q^1(t), \\dots, q^D(t)\\right)$ on $\\mathbb{Q}$ with end\npoints given by $Q_1:=q(t_1), Q_2:= q(t_2)$. These points are \nchosen at the same time the choice of $S$ is made, and are fixed.\n\nAs $L\\left(q(t),\\dot{q}(t)\\right)$ depends on both the position $q(t)$\nand the velocity $\\dot{q}(t)$ of the path, we consider a generalized\nLie symmetry that is generated by \n\\begin{equation}\n \\mathbf{g}_L := \\rho_L(q, \\dot{q})\\cdot \\frac{\\mathbf{\\partial}\n \\>\\>\\,}{\\mathbf{\\partial} q}, \n \\nonumber\n\\end{equation}\nwhere $\\rho_L(q,\\dot{q})$ does not depend explicitly on\ntime. 
Evolution along the path gives the total time derivative \n\\begin{equation}\n \\frac{\\mathbf{d}\\>\\>\\>}{\\mathbf{d}t} := \\dot{q}\\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q} +\\ddot{q}\\cdot\n \\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial} \\dot{q}}.\n \\label{Dt}\n\\end{equation}\nThis in turn gives $\\dot{\\rho}_L:=\\mathbf{d}\\rho_L\/\\mathbf{d}t$, and the second-order\nprolongation vector \\cite{Olv1993}, \n\\begin{equation}\n \\hbox{\\textbf{pr }}\\mathbf{g}_L := \\rho_L \\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q} \n + \\dot{\\rho}_L \\cdot \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} \\dot{q}} +\n \\ddot{\\rho}_L\\cdot \\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial}\n \\ddot{q}},\n \\label{prog}\n\\end{equation}\non the second-order jet space $\\mathbb{M}^{(2)}=\\{(t, q,\n\\dot{q},\\ddot{q})\\}$ where this $\\hbox{\\textbf{pr\n}}\\mathbf{g}_L\\in\\mathbf{T}\\mathbb{M}^{(2)}$.\n\nUnder this generalized Lie symmetry, the action varies by\n\\begin{equation}\n \\delta S=\\int_{t_1}^{t_2}\\hbox{\\textbf{pr }}\\mathbf{g}_L \\Big[L(q(t),\n \\dot{q}(t))\\Big]dt,\n \\nonumber\n\\end{equation}\nwith the requirement that \n$\\rho_L(q(t_1), \\dot{q}(t_1))=0=\\rho_L(q(t_2),\n\\dot{q}(t_2))$. Then after an integration by parts,\n\\begin{equation}\n \\delta S=\\int_{t_1}^{t_2} \\rho_L\\cdot\\left[\\frac{\\partial L}{\\partial\n q} - \\frac{d\\>\\>\\>}{dt}\\left(\\frac{\\partial L}{\\partial\n \\dot{q}}\\right)\\right]dt.\n \\label{Ae1}\n\\end{equation}\nIt is important to realize that the action may be evaluated along any path on\n$\\mathbb{Q}$. 
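The integration by parts leading to Eq.~$(\ref{Ae1})$ can be checked symbolically. The sketch below (assuming SymPy is available; the explicit Lagrangian and the generator coefficient $\rho_L$ are arbitrary illustrative choices, not taken from the text) verifies that $\hbox{\textbf{pr }}\mathbf{g}_L[L]$ equals the Euler-Lagrange integrand of Eq.~$(\ref{Ae1})$ plus a total time derivative, which is the boundary term killed by $\rho_L(q(t_1),\dot{q}(t_1))=0=\rho_L(q(t_2),\dot{q}(t_2))$:

```python
import sympy as sp

t, qs, vs = sp.symbols('t q v')
# Arbitrary illustrative choices; any smooth L(q, v) and rho(q, v) work the same way:
L = vs**2 / 2 - sp.sin(qs)
rho = qs * vs

qf = sp.Function('q')(t)
on_path = {qs: qf, vs: qf.diff(t)}   # evaluate a (q, v)-expression along a path q(t)

dLdq = L.diff(qs).subs(on_path)
dLdv = L.diff(vs).subs(on_path)
rho_t = rho.subs(on_path)

# pr g_L[L] = rho dL/dq + (d rho/dt) dL/dv, since L has no qddot dependence
lhs = rho_t * dLdq + rho_t.diff(t) * dLdv
# Euler-Lagrange integrand of Eq. (Ae1) plus the total-derivative boundary term
rhs = rho_t * (dLdq - dLdv.diff(t)) + (rho_t * dLdv).diff(t)

print(sp.simplify(lhs - rhs))  # -> 0
```

The cancellation is a pure product-rule identity, which is why it holds for every path $q(t)$ and not just for extremals.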
As such, if $\mathbf{g}_L$ generates a symmetry of the action, then\nthe variation $\delta S$ in Eq.~$(\ref{Ae1})$ must vanish for \textit{all} paths $q(t)$ on\n$\mathbb{Q}$, and not just for those that minimize the action.\n\nTo make connection with the Lagrangian phase space approach used in\nthe rest of the paper, we make use of\n\begin{equation}\nE\left( q,\dot{q}\right) :=\dot{q}^{a}\frac{\partial L\left( q,\dot\n {q}\right) }{\partial\dot{q}^{a}}-L\left( q,\dot{q}\right) ,\n\nonumber\n\end{equation}\nalong with\n\begin{equation}\nM_{ab}\left( q,\dot{q}\right) :=\frac{\partial^{2}L\left( q,\dot{q}\right)\n}{\partial\dot{q}^{a}\partial\dot{q}^{b}}, \quad \hbox{and }\quad \nF_{ab}\left( q,\dot{q}\right) := \frac{\partial^{2}L\left(\n q,\dot{q}\right) }{\partial\dot{q}^{a}\partial q^{b}} -\n\frac{\partial^{2}L\left( q,\dot{q}\right)}{\partial\dot{q}^{b}\partial \n q^{a}},\n\nonumber\n\end{equation}\nto express Eq.~$(\ref{Ae1})$ as\n\begin{equation}\n \delta S=-\int_{t_1}^{t_2} \rho^a_L\left(\frac{\partial E}{\partial\n q^a} +F_{ab}(q,\n \dot{q})\dot{q}^b+M_{ab}(q,\dot{q})\ddot{q}^b\right)dt.\n \label{e2}\n\end{equation}\nHere, Latin indices run from $1$ to $D$, and Einstein's summation\nconvention is used. We then arrive at our first result.\n\n\begin{lemma} \label{Action-Sym} An action $S$ of a dynamical\n system has a generalized Lie symmetry generated by $\mathbf{g}_L$ if\n and only if there exists a $\rho_L\in \hbox{ker }M_{ab}$ such that \n \begin{equation}\n 0=\rho_L^a(q,\dot{q})\left(\frac{\partial E}{\partial\n q^a}+F_{ab}(q,\dot{q})\dot{q}^b\right),\n \label{e3}\n \end{equation}\n on $\mathbf{T}\mathbb{Q}$.\n \n \begin{proof}\n If $\mathbf{g}_L$ generates a generalized Lie symmetry of $S$, then\n Eq. $(\ref{e2})$ must vanish for all paths on $\mathbb{Q}$. 
For an\n arbitrary path on $\mathbb{Q}$, however, the\n curvature $\ddot{q}$ of the path will not depend on\n either $q(t)$ or $\dot{q}(t)$. As such, \n for $\delta S=0$, it must be that $\rho^a_LM_{ab}\ddot{q}^b=0$ for any choice of\n $\ddot{q}$, and thus $\rho_L^a\in\hbox{ker }M_{ab}$. The remaining\n terms in Eq.~$(\ref{e2})$ give the condition\n Eq.~$(\ref{e3})$. \n \end{proof}\n\end{lemma}\n\nThe set of all vector fields $\mathbf{g}_L$ that satisfy \textbf{Lemma\n $\mathbf{\ref{Action-Sym}}$} is denoted by $\mathfrak{g}_L$, while \n$\hbox{\textbf{pr }}\mathfrak{g}_L := \{\hbox{\textbf{pr\n}}\mathbf{g}_L\ \vert \ \ \mathbf{g}_L\in \mathfrak{g}_L\}$ is the set of\ntheir prolongations. This $\hbox{\textbf{pr }}\mathfrak{g}_L$ is involutive\n\cite{Olv1993}, and the conditions under which $\hbox{\textbf{pr }}\mathfrak{g}_L$\ngenerates a generalized Lie symmetry group are given in\n\cite{Olv1993}.\n\nWe see from \textbf{Lemma \ref{Action-Sym}} that if the\naction has a generalized Lie symmetry, then the Lagrangian is\nnecessarily singular, and as such the Lagrangian two-form\n$\mathbf{\Omega}_L$ will not have maximum rank. It is also important\nto note that while equations of the form\nEq.~$(\ref{e3})$ often appear in the Lagrangian phase space description of \nmechanics \cite{ADS2020}, they appear as Lagrangian\nconstraints, conditions that must be imposed for evolution under the Euler-Lagrange\nequations to be well defined. Here, Eq.~$(\ref{e3})$ is not a \nconstraint. Rather, because the action must have this symmetry for\n\textit{all} possible paths on $\mathbb{Q}$, and since the set of all possible \npaths covers $\mathbb{Q}$, Eq.~$(\ref{e3})$ is a condition on $\rho_L$\nthat must be satisfied identically on \textit{all} of\n$\mathbf{T}\mathbb{Q}$\textemdash and \nthus, on the Lagrangian phase space\textemdash for $\mathbf{g}_L$ to\nbe a generator of the symmetry group. 
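The two conditions of \textbf{Lemma \ref{Action-Sym}} can be checked concretely. The sketch below (assuming SymPy is available) uses a toy singular Lagrangian $L=\tfrac{1}{2}(\dot{q}^1-\dot{q}^2)^2$, chosen purely for illustration and not drawn from the paper's examples: $M_{ab}$ is degenerate, its kernel is spanned by $(1,1)$, and condition Eq.~$(\ref{e3})$ holds identically on $\mathbf{T}\mathbb{Q}$, so the action does possess a generalized Lie symmetry:

```python
import sympy as sp

q1, q2, v1, v2 = sp.symbols('q1 q2 v1 v2')
q, v = [q1, q2], [v1, v2]

# Illustrative singular Lagrangian (an assumption of this sketch):
# only the relative velocity appears, so the velocity Hessian is degenerate.
L = sp.Rational(1, 2) * (v1 - v2)**2

M = sp.Matrix(2, 2, lambda a, b: sp.diff(L, v[a], v[b]))     # M_ab
F = sp.Matrix(2, 2, lambda a, b: sp.diff(L, v[a], q[b]) - sp.diff(L, v[b], q[a]))
E = sum(v[a] * sp.diff(L, v[a]) for a in range(2)) - L       # energy E(q, v)

rho = M.nullspace()[0]           # rho_L spans ker M_ab; here (1, 1)^T
# Lemma 1 condition, Eq. (e3): rho^a (dE/dq^a + F_ab v^b) must vanish identically
cond = sp.simplify(sum(rho[a] * (sp.diff(E, q[a]) + sum(F[a, b] * v[b] for b in range(2)))
                       for a in range(2)))
print(M.rank(), list(rho), cond)   # rank 1, kernel (1, 1), condition 0
```

For this toy model the symmetry is the familiar one: shifting both coordinates by a common function leaves the relative velocity, and hence the action, unchanged.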
We will see that not all the vectors in\n$\hbox{ker } M_{ab}$ satisfy the identity Eq.~$(\ref{e3})$,\nhowever, and thus not all of these vectors will generate a generalized Lie\nsymmetry of the action. \n \n\subsection{Symmetries of the Euler-Lagrange\n Equations of Motion\label{&EL-Sym}}\n\nWhile in \textbf{Section \ref{&A-Sym}} the focus was on arbitrary paths on the\nconfiguration space $\mathbb{Q}$ and the symmetries of the action,\nin this section the focus is on the trajectories that minimize the\naction and on their generalized Lie symmetries. These trajectories\nare solutions of the Euler-Lagrange equations of motion, and for almost\nregular Lagrangians such solutions form a family of curves. It is, in fact, the\npresence of this family of curves that gives rise to the generalized\nLie symmetry. The treatment here follows closely that given in \cite{ADS2020}. \n\nFor almost regular Lagrangians the solutions of the Euler-Lagrange\nequations of motion \n\begin{equation}\nM_{ab}(q, \dot{q})\ddot{q}^{b}=-\frac{\partial E}{\partial\n q^{a}}-F_{ab}(q, \dot{q})\dot{q}^{b}, \n\label{2ndEL1}\n\end{equation}\nare not unique. While for these Lagrangians the rank of\n$M_{ab}\left(q, \dot{q} \right) =D-N_{0}$\textemdash with $N_0=\hbox{dim\n}\left(\hbox{ker }M_{ab}(q, \dot{q})\right)$\textemdash is constant, this rank\nis not maximal, and thus Eq.~$(\ref{2ndEL1})$ does not have a unique\nsolution for $\ddot{q}$. Instead, for a chosen set of initial data\n$\left(q_{0}=q(t_0),\dot{q}_{0}=\dot{q}(t_0)\right)$, the solution to\nEq.~$(\ref{2ndEL1})$ results in a family of solutions that evolve from\nthis $(q_0, \dot{q}_0)$. 
As with the paths in \\textbf{Section \\ref{&A-Sym}}, these\nsolutions are related to one another through a generalized Lie \nsymmetry \\cite{Olv1993}.\n\nFollowing \\cite{Olv1993}, the collection of functions\n\\begin{equation}\n \\Delta_a(q,\\dot{q}, \\ddot{q}) := \\frac{\\partial E(q, \\dot{q})}{\\partial q^a} +\n F_{ab}(q, \\dot{q})\\dot{q}^b + M_{ab}(q, \\dot{q}) \\ddot{q}^b, \n\\label{delta}\n\\end{equation}\ndefines a set of surfaces $\\Delta_a(q,\\dot{q}, \\ddot{q})=0$ on\n$\\mathbb{M}^{(2)}$, while the family of solutions to Eq.~$(\\ref{2ndEL1})$ \n\\begin{equation}\n\\mathcal{O}\\left(q_0, \\dot{q}_0\\right) :=\\big\\{q\\left( t\\right)\n\\ \\vert \\ \\\n\\Delta_a(q,\\dot{q}, \\ddot{q}) =0 \\hbox{ with }\nq\\left( t_{0}\\right)\n=q_{0},\\ \\dot{q}\\left( t_{0}\\right) =\\dot{q}_{0}\\big\\} ,\n\\nonumber\n\\end{equation}\nthat evolve from the same initial data $(q_0, \\dot{q}_0)$ gives the collection\nof trajectories that lie on these surfaces. Indeed, for any two such \nsolutions $q^a(t)$ and $Q^a(t)$ there exists a \n$\\mathfrak{z}(q, \\dot{q})\\in\\hbox{ker } \nM_{ab}(q, \\dot{q})$ such that $\\ddot{Q}^a-\\ddot{q}^a =\n\\mathfrak{z}^a$. Importantly, because $\\mathfrak{z}^a$ depends on both $q$\nand $\\dot{q}$, the symmetry group that maps one member of\n$\\mathcal{O}$ to another must be a generalized Lie symmetry. 
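Such a family of solutions can be exhibited explicitly. Continuing with the illustrative toy Lagrangian $L=\tfrac{1}{2}(\dot{q}^1-\dot{q}^2)^2$ (an assumption of this sketch, for which $\partial E/\partial q^a=0$ and $F_{ab}=0$), the SymPy check below shows that an entire one-function family of paths sharing the same initial data satisfies $\Delta_a=0$, with any two members differing by an element of $\hbox{ker }M_{ab}$:

```python
import sympy as sp

t, q10, q20, v10, v20 = sp.symbols('t q10 q20 v10 v20')
s = sp.Function('s')          # arbitrary "gauge" profile, with s(0) = s'(0) = 0 understood

# For the toy Lagrangian L = (1/2)(v1 - v2)^2, E_q = 0 and F = 0, so
# Delta_a of Eq. (delta) reduces to M_ab qddot^b with constant M:
M = sp.Matrix([[1, -1], [-1, 1]])

# A family of candidate solutions all sharing the initial data (q_0, qdot_0):
q1 = q10 + v10 * t + s(t)
q2 = q20 + v20 * t + s(t)
qdd = sp.Matrix([q1.diff(t, 2), q2.diff(t, 2)])

Delta = sp.simplify(M * qdd)
print(list(Delta))   # -> [0, 0]: every choice of s(t) yields a solution
```

Here $\ddot{Q}^a-\ddot{q}^a$ for two members is $(\ddot{s}_1-\ddot{s}_2)(1,1)\in\hbox{ker }M_{ab}$, matching the statement in the text.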
We\ntherefore take the generator of this symmetry group to be \n\begin{equation}\n \mathbf{g} := \rho(q, \dot{q})\cdot \frac{\mathbf{\partial}\n \>\>\>}{\mathbf{\partial} q},\n \nonumber\n\end{equation}\nwith the corresponding second-order prolongation vector\nfor $\mathbf{g}$ being,\n\begin{equation}\n \hbox{\textbf{pr }}\mathbf{g} := \rho \cdot\n \frac{\mathbf{\partial}\>\>\>}{\mathbf{\partial} q} \n + \dot{\rho} \cdot \frac{\mathbf{\partial}\>\>\>}{\mathbf{\partial} \dot{q}} +\n \ddot{\rho}\cdot \frac{\mathbf{\partial} \>\>\>}{\mathbf{\partial} \ddot{q}},\n \nonumber\n\end{equation}\nwith this $\hbox{\textbf{pr }}\mathbf{g}\in\mathbf{T}\mathbb{M}^{(2)}$. As with\nthe above, the total time \nderivative is given by Eq.~$(\ref{Dt})$, but unlike the analysis in\n\textbf{Section \ref{&A-Sym}}, the evolution of the path\textemdash and indeed\nof all the trajectories in $\mathcal{O}(q_0,\dot{q}_0)$\textemdash is here\ngiven by the Euler-Lagrange equations of motion.\n\nThe action of this\nprolongation on $\Delta_a$ on the $\Delta_a=0$ surface gives, \n\begin{equation}\n \hbox{\textbf{pr }}\mathbf{g}\left[\Delta_a(q,\n \dot{q},\ddot{q})\right] = -\frac{\partial \n \ddot{q}^b}{\partial q^a}M_{bc}(q, \dot{q})\rho^c +\n \frac{d\>\>\>}{dt}\left[F_{ab}(q, \dot{q})\rho^b + M_{ab}(q,\n \dot{q})\dot{\rho}^b\right]. \n \nonumber \n\end{equation}\nSince $N_0>0$, $\ddot{q}$ is not unique on this surface, and yet \n$\mathbf{g}$ must generate the same symmetry group for all the\ntrajectories in $\mathcal{O}(q_0, \dot{q}_0)$. Necessarily, \n$\rho(q, \dot{q})\in\hbox{ker }M_{ab}(q, \dot{q})$. It then follows\nthat $\hbox{\textbf{pr g}}[\Delta_a(q,\dot{q}, \ddot{q})] =0$\nif and only if (iff) there are constants $b_a$ such that $b_a = \nF_{ab}\rho^b + M_{ab}\dot{\rho}^b$. 
The solutions in $\mathcal{O}(q_0,\n\dot{q}_0)$ all have the same initial data, however, and thus\nnecessarily $\rho(q_0, \dot{q}_0)=0=\dot{\rho}(q_0,\dot{q}_0)$. We\nconclude that $b_a=0$. The following result, first proved in\n\cite{ADS2020}, then follows. \n\n\begin{lemma} \label{GS} If $\mathbf{g}$ is\n a generalized infinitesimal symmetry of $\Delta_a$, then\n $\rho^a(q, \dot{q})\in\hbox{ker } M_{ab}(q, \dot{q})$, and\n $\dot{\rho}^a(q, \dot{q})$ is a solution of\n \begin{equation}\n 0=F_{ab}(q, \dot{q})\rho^b(q, \dot{q}) +\n M_{ab}(q, \dot{q})\dot{\rho}^b(q, \dot{q}).\n \label{sol}\n \end{equation}\n\end{lemma}\n\nAs before, we denote the set of all vector fields $\mathbf{g}$ that satisfy \textbf{Lemma\n $\mathbf{\ref{GS}}$} by $\mathfrak{g}$, while \n$\hbox{\textbf{pr }}\mathfrak{g} := \{\hbox{\textbf{pr\n}}\mathbf{g}\ \vert \ \ \mathbf{g}\in \mathfrak{g}\}$ is the set of\ntheir prolongations. Once again $\hbox{\textbf{pr }}\mathfrak{g}$ is\ninvolutive, and the conditions under which $\hbox{\textbf{pr\n}}\mathfrak{g}$ \ngenerates a generalized Lie symmetry group are given in\n\cite{Olv1993}. Note, however, that while $\rho=0$ and\n$\dot{\rho}=\mathfrak{z}$ for any $\mathfrak{z}\in\hbox{ker\n}M_{ab}(q, \dot{q})$ is a solution of \n Eq.~$(\ref{sol})$, we require that $\dot{\rho} =\n \mathbf{d}\rho\/\mathbf{d}t$; these solutions cannot be generators of the\n generalized Lie symmetry. Next, if $\dot{\rho}$ is a solution of\n Eq.~$(\ref{sol})$, then \n$\dot{\rho}+\mathfrak{z}$ is a solution of Eq.~$(\ref{sol})$ as\n well, and thus these solutions are not unique. This, along with the\n previous observation, leads us to generators that are constructed from\n equivalence classes of prolongations. 
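The necessary condition Eq.~$(\ref{sol})$ can genuinely fail even for a singular Lagrangian. The SymPy sketch below uses the toy model $L=\tfrac{1}{2}(\dot{q}^1-q^2)^2$ (an illustrative assumption of this sketch, in the spirit of, but not necessarily identical to, the example treated later in the paper): $\hbox{ker }M_{ab}$ is nontrivial, yet Eq.~$(\ref{sol})$ forces any would-be generator to vanish, so the Euler-Lagrange equations have no generalized Lie symmetry here:

```python
import sympy as sp

q1, q2, v1, v2, alpha, alphadot = sp.symbols('q1 q2 v1 v2 alpha alphadot')
q, v = [q1, q2], [v1, v2]

# Illustrative singular Lagrangian (assumed toy model): the Hessian M_ab is
# degenerate, but Eq. (sol) will still force the generator to vanish.
L = sp.Rational(1, 2) * (v1 - q2)**2

M = sp.Matrix(2, 2, lambda a, b: sp.diff(L, v[a], v[b]))
F = sp.Matrix(2, 2, lambda a, b: sp.diff(L, v[a], q[b]) - sp.diff(L, v[b], q[a]))

# ker M_ab is spanned by (0, 1), so a candidate generator has rho = alpha (0, 1);
# rho^1 = 0 identically, so its total time derivative rhodot^1 = 0 as well.
rho = sp.Matrix([0, alpha])
rhodot = sp.Matrix([0, alphadot])

sol = (F * rho + M * rhodot).applyfunc(sp.simplify)   # left side of Eq. (sol)
print(list(sol))   # first component is -alpha, so Eq. (sol) forces alpha = 0
```

This illustrates the asymmetry stressed in the Introduction: a singular Lagrangian need not admit any generalized Lie symmetry.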
Finally, Eq.~$(\\ref{delta})$\n gives for any $\\mathfrak{z} \\in\\hbox{ker }M_{ab}(q, \\dot{q})$,\n\\begin{equation}\n 0=\\mathfrak{z}^a\\left(\\frac{\\partial E}{\\partial\n q^a}+F_{ab}(q,\\dot{q})\\dot{q}^b\\right),\n \\label{e}\n\\end{equation}\non the solution surface $\\Delta_a(q,\\dot{q},\\ddot{q})=0$. If\nEq.~(\\ref{e}) does not hold identically, it must be\nimposed, leading to Lagrangian constraints\n\\cite{Car1990a}. More importantly, because each\n$q(t)\\in\\mathcal{O}(\\mathfrak{u}_0)$ \nmust lie on the Lagrangian constraint submanifold, any symmetry transformation of\n$q(t)$ generated by $\\mathbf{pr }\\>\\> \\mathbf{g}$ must give a path\n$Q(t)$ that also lies on the constraint submanifold.\n\nNot all vectors in $\\mathbf{pr}\\> \\mathfrak{g}$ will be generators of\nthe generalized Lie symmetry group for\n$\\mathcal{O}(\\mathfrak{u}_0)$. Determining which of these vectors are,\nand the relationship between the generators of symmetries of\nthe Euler-Lagrange equations of motion and those of \nthe action, is best done within the Lagrangian phase space\nframework. 
To accomplish this, we will need the following generalization of\n\\textbf{Lemma \\ref{GS}}.\n\nConsider the vector\n\\begin{equation}\n \\mathbf{k}:=c\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q}\n +\\dot{c}\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}\n \\dot{q}},\n \\nonumber\n\\end{equation}\nwith a $c\\in\\hbox{ker }M_{ab}(q, \\dot{q})$ along with the quantity\n\\begin{equation}\n l_a:=F_{ab}c^b(q, \\dot{q})+M_{ab}\\dot{c}^b(q,\\dot{q}).\n \\nonumber\n\\end{equation}\nAfter an integration by parts,\n\\begin{eqnarray}\n l_a&=& c^b(q,\n \\dot{q})\\left\\{F_{ab}(q, \\dot{q})-\\frac{\\mathbf{d}\\>\\>\\,}{\\mathbf{d}t}\n \\frac{\\mathbf{\\partial}^2L}{\\mathbf{\\partial}\n \\dot{q}^a\\mathbf{\\partial} \\dot{q}^b}\\right\\},\n \\nonumber\n \\\\\n &=& c^b(q,\n \\dot{q})\\left\\{F_{ab}(q, \\dot{q})-\\left[\\frac{\\mathbf{d}\\>\\>\\,}{\\mathbf{d}t}, \\frac{\\partial\n \\>\\>\\,}{\\mathbf{\\partial} \\dot{q}^a}\\right]\\frac{\\mathbf{\\partial} L}{\\mathbf{\\partial}\n \\dot{q}^b}- \\frac{\\mathbf{\\partial}\\>\\>\\,}{\\mathbf{\\partial}\n \\dot{q}^a} \\left(\\frac{\\mathbf{d}\\>\\>\\,}{\\mathbf{d}t}\n \\frac{\\mathbf{\\partial} L}{\\mathbf{\\partial}\n \\dot{q}^b}\\right)\\right\\}. 
\n \nonumber\n\end{eqnarray}\nUsing Eq.~$(\ref{Dt})$ we have\n\begin{equation}\n \left[\frac{\mathbf{d}\>\>\,}{\mathbf{d}t}, \frac{\mathbf{\partial}\n \>\>\,}{\mathbf{\partial} \dot{q}^a}\right]\frac{\partial L}{\partial \dot{q}^b} =\n -\frac{\mathbf{\partial}^2L}{\mathbf{\partial} q^a\mathbf{\partial}\n \dot{q}^b}-\n \frac{\mathbf{\partial}\n \ddot{q}^c}{\mathbf{\partial}\n \dot{q}^a}\frac{\mathbf{\partial}^2L}{\mathbf{\partial}\n \dot{q}^c\mathbf{\partial} \dot{q}^b}.\n \nonumber\n\end{equation}\nAs $q(t)$ is a solution of the Euler-Lagrange equations of\nmotion, we find that\n\begin{equation}\nl_a= c^b(q, \dot{q})\left\{F_{ab}(q, \dot{q})\n+\frac{\mathbf{\partial}^2L}{\mathbf{\partial} q^a\mathbf{\partial}\n \dot{q}^b}-\frac{\mathbf{\partial}^2L}{\mathbf{\partial}\n \dot{q}^a\mathbf{\partial}q^{b}} +\n \frac{\mathbf{\partial}\n \ddot{q}^c}{\mathbf{\partial}\n \dot{q}^a}\frac{\mathbf{\partial}^2L}{\mathbf{\partial}\n \dot{q}^c\mathbf{\partial} \dot{q}^b}\n \right\}.\n \nonumber\n\end{equation}\nThis last expression vanishes after the definition of $F_{ab}(q,\n\dot{q})$ is used along with the requirement that $c\in\hbox{ker\n}M_{ab}(q,\dot{q})$. We then have the following result. \n\n\begin{lemma} \label{AllS} For any vector \n\begin{equation}\n \mathbf{k}=c\cdot\frac{{\partial}\>\>\>}{{\partial} q}\n +\dot{c}\cdot\frac{{\partial}\>\>\>}{{\partial}\n \dot{q}},\n \nonumber\n\end{equation}\nsuch that $c\in\hbox{ker }M_{ab}$, \n\begin{equation}\n 0=F_{ab}c^b(q, \dot{q})+M_{ab}\dot{c}^b(q,\dot{q}).\n \nonumber\n\end{equation}\n\end{lemma}\n\n\section{Generators of the Generalized Lie Symmetry for the\n Euler-Lagrange Equations of Motion\label{&review}}\n\nThe generators of the generalized Lie symmetry for both the\nEuler-Lagrange equations of motion and the action are best found using\nthe Lagrangian phase space approach to mechanics. 
This phase space\nand its concomitant mathematical structure provide the tools needed to\ndetermine both the generators of the symmetry and the solutions to the\nenergy equation on which they act. For the Euler-Lagrange equations of \nmotion this determination was done in \\cite{ADS2020}. In this\nsection we will review the Lagrangian phase space approach, establish\nthe notation used in this paper, and summarize the results obtained in\n\\cite{ADS2020} that are needed here. (We will also take the\nopportunity to correct typographical errors made in \\cite{ADS2020}.)\nProofs of the majority of the assertions listed in this section will\nnot be given; the reader is instead referred to \\cite{ADS2020} where the\nproofs and the context of their development can be found.\n\n\\subsection{The Lagrangian Phase space\\label{&phase}}\n\nFor a configuration space $\\mathbb{Q}$ the \\textbf{Lagrangian phase\n space} $\\mathbb{P}_L$ is the tangent space\n$\\mathbb{P}_L=\\mathbf{T}\\mathbb{Q}$, with the coordinates on\n$\\mathbb{P}_L$ denoted as $\\mathfrak{u}=(q^1, \\dots, q^D, v^1, \\dots \nv^D)$. Integral flows on $\\mathbb{P}_L$,\n$t\\in[t_0,\\infty)\\to\\mathfrak{u}(t)\\in\\mathbb{P}_L$ \n\\cite{Abr1978}, for a set of initial data $\\mathfrak{u}_0=(q_0,\nv_0)$ are given as solutions to \n\\begin{equation}\n\\frac{d\\mathfrak{u}}{dt}:=\\mathbf{X} (\\mathfrak{u}),\n\\nonumber\n\\end{equation}\nwhere $\\mathbf{X}$ is a smooth vector field in \n$\\mathbf{T}\\mathbb{P}_L=\\mathbf{T}(\\mathbf{T}\\mathbb{Q})$. The two\ntangent spaces $\\mathbf{T}\\mathbb{Q}$ and $\\mathbf{T}\\mathbb{P}_L$\nhave the bundle projections: $\\tau_{\\mathbb{Q}}:\\mathbf{T}\\mathbb{Q}\\to \n\\mathbb{Q}$ and \n$\\tau_{\\mathbf{T}\\mathbb{Q}}:\\mathbf{T}(\\mathbf{T}\\mathbb{Q})\n\\to\\mathbf{T}\\mathbb{Q}$. 
They can be used to\nconstruct two other projection maps: $\\tau_{\\mathbb{Q}}\\circ \n\\tau_{\\mathbf{T}\\mathbb{Q}}:\\mathbf{T}(\\mathbf{T}\\mathbb{Q}) \n\\to\\mathbb{Q}$ and the prolongation of $\\tau_{\\mathbf{T}\\mathbb{Q}}$\nto $\\mathbf{T}(\\mathbf{T}\\mathbb{Q})$ (see \\cite{Got1979} and\n\\cite{Abr1978}). This prolongation is the map \n$\\mathbf{T}\\tau_{\\mathbb{Q}}:\\mathbf{T}(\\mathbf{T}\\mathbb{Q})\\to\\mathbf{T}\\mathbb{Q}$,\nand is defined by requiring that the two maps\n$\\tau_{\\mathbb{Q}}\\circ \\tau_{\\mathbf{T}\\mathbb{Q}}$ and \n$\\tau_{\\mathbb{Q}}\\circ \\mathbf{T}\\tau_{\\mathbb{Q}}$ map any point in\n$\\mathbf{T}(\\mathbf{T}\\mathbb{Q})$ to the same point in\n $\\mathbb{Q}$. The \\textbf{vertical subbundle}\n$[\\mathbf{T}\\mathbb{P}_L]^v$ of $\\mathbf{T}(\\mathbf{T}\\mathbb{Q})$\nis $[\\mathbf{T}\\mathbb{P}_L]^v = \\hbox{ker\n}\\mathbf{T}\\tau_{\\mathbb{Q}}$ \\cite{Got1979}; a $\\mathbf{X}^v\\in\n[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L]^v$ above a point\n$\\mathfrak{u}\\in\\mathbb{P}_L$ is called a \\textbf{vertical vector\n field}. The \\textbf{horizontal subbundle} \n$[\\mathbf{T}\\mathbb{P}_L]^q$ of\n$\\mathbf{T}(\\mathbf{T}\\mathbb{Q})$ is \n$[\\mathbf{T}\\mathbb{P}_L]^q = \\hbox{Image\n}\\mathbf{T}\\tau_{\\mathbb{Q}}$; a\n$\\mathbf{X}^q\\in[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L]^q$ is called a \n\\textbf{horizontal vector field}. Consequently, each \n$\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L$ consists of a\n$\\mathbf{X}^q \\in\n\\left[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^q$ and a\n$\\mathbf{X}^v \\in\n\\left[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^v$ with $\\mathbf{X}\n= \\mathbf{X}^q + \\mathbf{X}^v$. 
In terms of local coordinates, \n\\begin{equation}\n\\mathbf{X}^{q}\n:=X^{qa}\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}q^{a}} ,\\quad\n\\hbox{and}\\quad\\mathbf{X}^{v} :=X^{va}\n\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v^{a}}.\n\\nonumber\n\\end{equation}\nOf special interest is the second order Lagrangian vector field\n$\\mathbf{X}_L$. This vector field is the particular solution of\nEq.~$(\\ref{EnergyE})$ for which \n$\\mathbf{T}\\tau_{\\mathbb{Q}}\\circ\\mathbf{X}_L$ is the identity on\n$\\mathbf{T}\\mathbb{Q}$ (see \\cite{Abr1978}). In terms of local\ncoordinates\n\\begin{equation}\n \\mathbf{X}_L = v^a \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q^a} +\n X^{va} \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v^a}.\n \\nonumber\n\\end{equation}\n\nThe space of one-forms on\n$\\mathbb{P}_L$ is the cotangent space\n$\\mathbf{T}^{*}\\mathbb{P}_L$. For a one-form $\\mathbf{\\alpha}\\in \n\\mathbf{T}^{*}_{\\mathfrak{u}}\\mathbb{P}_L$, and a vector\nfield $\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L$, the\ndual prolongation map $\\mathbf{T}^{*}\\tau_{\\mathbb{Q}}$ is defined as\n\\begin{equation}\n \\langle \\mathbf{\\alpha}\\vert\n \\mathbf{T}\\tau_{\\mathbb{Q}}\\mathbf{X}\\rangle = \\langle\n \\mathbf{T}^{*}\\tau_{\\mathbb{Q}} \\mathbf{\\alpha}\\vert\n \\mathbf{X}\\rangle,\n \\nonumber\n\\end{equation}\nafter a useful adaptation of Dirac's bra and ket notation. In\naddition, for a general $k$-form \\textbf{$\\mathbf{\\omega}$} in the\n$k$-form bundle $\\mathbf{\\Lambda}^{k}\\left(\\mathbb{P}_L\\right)$, \n\\begin{equation}\n\\mathbf{\\omega}\\left( x\\right) :\\mathbf{Y}_{1}\\otimes\\cdots\\otimes\n\\mathbf{Y}_{k}\\rightarrow\\left\\langle \\left. \\mathbf{\\omega}\\left( x\\right)\n\\right\\vert \\mathbf{Y}_{1}\\otimes\\cdots\\otimes\\mathbf{Y}_{k}\\right\\rangle\n\\in\\mathbb{R},\n\\nonumber\n\\end{equation}\nwith\n$\\mathbf{Y}_{j}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L$ for $j = 1,\n\\dots, k$. 
The \textbf{vertical one-form subbundle}\n$[\mathbf{T}^{*}\mathbb{P}_L]^v$ of $\mathbf{T}^{*}\mathbb{P}_L$ is \n$[\mathbf{T}^{*}\mathbb{P}_L]^v :=\hbox{ker }\n\mathbf{T}^{*}\tau_{\mathbb{Q}}$; a\n$\mathbf{\alpha}_v\in [\mathbf{T}^{*}_{\mathfrak{u}}\mathbb{P}_L]^v$ is\ncalled a \textbf{vertical one-form}. The \textbf{horizontal\none-form subbundle} $[\mathbf{T}^{*}\mathbb{P}_L]^q$ of\n$\mathbf{T}^{*}\mathbb{P}_L$ is \n$[\mathbf{T}^{*}\mathbb{P}_L]^q=\hbox{Image }\n\mathbf{T}^{*}\tau_{\mathbb{Q}}$; a\n$\mathbf{\alpha}_q\in[\mathbf{T}^{*}_{\mathfrak{u}}\mathbb{P}_L]^q$ is\ncalled a \textbf{horizontal one-form}. Each one-form $\mathbf{\varphi}\in\n\mathbf{T}^{*}_{\mathfrak{u}}\mathbb{P}_L$ consists of a\n$\mathbf{\varphi}_{q} \in \left[\mathbf{T}_{\mathfrak{u}}^{*}\mathbb{P}_L\right]^q$ and\na $\mathbf{\varphi}_{v} \in \n\left[\mathbf{T}_{\mathfrak{u}}^{*}\mathbb{P}_L\right]^v$\nsuch that $\mathbf{\varphi}=\mathbf{\varphi}_{q}\n+\mathbf{\varphi}_{v}$. In terms\nof local coordinates \n$\mathbf{\varphi}_{q} :=\varphi_{qa} \ \mathbf{d}q^{a}$ and \n$\mathbf{\varphi}_{v} :=\varphi_{va} \mathbf{d}v^{a}$. \n\nFollowing \cite{Got1979, Got1980}, the Lagrangian two-form is defined\nas $\mathbf{\Omega}_L := -\mathbf{d}\mathbf{d}_JL$, \nwhere $\mathbf{d}_J$ is the vertical derivative (see\n\cite{Got1979}). This two-form can be expressed as\n$\mathbf{\Omega}_{L}:=\mathbf{\Omega}_{F}+\mathbf{\Omega}_{M}$ such\nthat, for any $\mathbf{X},\n\mathbf{Y}\in\mathbf{T}_{\mathfrak{u}}\mathbb{P}_L$,\n\begin{equation}\n\mathbf{\Omega}_F(\mathbf{X},\mathbf{Y}) :=\n\mathbf{\Omega}_L(\mathbf{T}\tau_{\mathbb{Q}}\mathbf{X},\n\mathbf{T}\tau_{\mathbb{Q}}\mathbf{Y}),\n\nonumber\n\end{equation}\nand is thus the \textbf{horizontal two-form} of $\mathbf{\Omega}_L$. 
As\n$\mathbf{\Omega}_M(\mathbf{X},\mathbf{Y})=\mathbf{\Omega}_L(\mathbf{X},\mathbf{Y})- \n\mathbf{\Omega}_F(\mathbf{X},\mathbf{Y})$, $\mathbf{\Omega}_M$ is then\na \textbf{mixed two-form} of \n$\mathbf{\Omega}_L$. In terms of local coordinates, \n\begin{equation}\n\mathbf{\Omega}_{L}=-\mathbf{d}\mathbf{\theta}_{L},\quad \hbox{where} \quad\n\mathbf{\theta}_{L}:=\frac{\partial L}{\partial v^{a}}\mathbf{d}q^{a},\n\nonumber\n\end{equation}\nwhile\n\begin{equation}\n\mathbf{\Omega}_{F}:=\frac{1}{2}F_{ab}\mathbf{d}q^{a}\wedge\mathbf{d}q^{b},\ \hbox{and} \ \mathbf{\Omega}_{M}:=M_{ab}\mathbf{d}q^{a}\wedge\mathbf{d}v^{b}.\n\nonumber\n\end{equation}\n\nFor regular Lagrangians $\mathbf{X}_L$ is the unique solution of\nEq.~$(\ref{EnergyE})$. For almost regular Lagrangians, on the other\nhand, this solution is not unique, but instead depends on \n\begin{equation}\n\ker\> \mathbf{\Omega}_{L}\left( \mathfrak{u}\right) \n:=\left\{ \mathbf{K}\in\mathbf{T}_{\mathfrak{u}}\mathbb{P}_{L}\n\ \vert\ \ i_{\mathbf{K}}\mathbf{\Omega}_{L}=0\right\}.\n\nonumber\n\end{equation}\nFrom \textbf{Section \ref{&Sym}} we expect this kernel to\nplay a role in determining the generators of the generalized Lie \nsymmetry of both the Euler-Lagrange equations of motion and the\naction. Indeed, consider the natural isomorphism $iso: (t, q, \dot{q}, \n\ddot{q}) \in \mathbb{M}^{(2)} \to (t, q, v, X^{va}_{L})$ defined in\n\cite{ADS2020}, and the prolongation $\hbox{\textbf{pr} }\mathbf{g}$ \nof a generator $\mathbf{g}\in\mathbf{\mathfrak{g}}$ of a \ngeneralized Lie symmetry of the Euler-Lagrange equations of\nmotion. 
This $\\hbox{\\textbf{pr }}\\mathbf{g}$ contains the vector \n\\begin{equation}\n \\mathbf{k}=\\rho\\cdot\\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial} q} +\n \\dot{\\rho}\\cdot\\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial}\\dot{q}}.\n \\nonumber\n\\end{equation}\nThe collection of all such vectors has been shown to be\ninvolutive (see \\cite{ADS2020}). The isomorphism maps $iso:\n\\mathbf{k}\\to\\mathbf{k}'$ where \n\\begin{equation}\n \\mathbf{k}'=\\rho\\cdot\\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial} q} +\n \\dot{\\rho}\\cdot\\frac{\\mathbf{\\partial} \\>\\>\\>}{\\mathbf{\\partial} v}.\n \\nonumber\n\\end{equation}\nThen $\\mathbf{k}'\\in\\mathbf{T}\\mathbb{P}_L$, and from\n\\textbf{Lemma \\ref{GS}}, $\\mathbf{k}'\\in\\hbox{ker\n}\\mathbf{\\Omega}_L(\\mathfrak{u})$ as well. A similar result holds for\nthe generators in $\\mathbf{\\mathfrak{g}}_L$ after \n\\textbf{Lemma \\ref{Action-Sym}} and \\textbf{Lemma \\ref{AllS}} are used.\n\nThe two-form $\\mathbf{\\Omega}_L$ gives the lowering map\n$\\mathbf{\\Omega}_L^{\\flat}:\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_{L}\\rightarrow \n\\mathbf{T}_{\\mathfrak{u}}^{\\ast}\\mathbb{P}_{L}$, with\n$\\Omega_L^{\\flat}\\mathbf{X}:=i_{\\mathbf{X}}\\mathbf{\\Omega}_L$.\nThis map consists of\n$\\Omega_{L}^{\\flat}=\\Omega_{F}^{\\flat}+\\Omega_{M}^{v\\flat}+\\Omega_{M}^{q\\flat}$, \nwith $\\Omega_{F}^{\\flat}:\n\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L \\to\n\\left[\\mathbf{T}^{*}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^q$; \n$\\Omega_{M}^{q\\flat}:\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L \n\\to\\left[\\mathbf{T}^{*}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^q$; and\n$\\Omega_{M}^{v\\flat}:\n\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L \\to\n\\left[\\mathbf{T}^{*}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^v$. 
In terms of local\ncoordinates, \n$\\Omega_{F}^{\\flat}\\mathbf{X} = F_{ab}X^{qa}\\mathbf{d}q^b$, \n$\\Omega_{M}^{q\\flat}\\mathbf{X}= -M_{ab}X^{va}\\mathbf{d}q^{b}$, and\n$\\Omega_{M}^{v\\flat}\\mathbf{X}= M_{ab}X^{qa}\\mathbf{d}v^{b}$.\n\nFor almost regular Lagrangians $\\ker\\>\n \\Omega_{M}^{v\\flat} = \\mathcal{C}\\oplus\n \\left[ \\mathbf{T}_{\\mathfrak{u}} \\mathbb{P}_{L}\\right] ^{v}$ \n while $\\ker\\> \\Omega_{M}^{q\\flat} =\\left[\n \\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_{L}\\right] ^{q}\\oplus \n \\mathcal{G}$. Here\n \\begin{equation}\n \\mathcal{C}:=\\left\\{\\mathbf{C}\\in[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L]^q\n \\ \\vert \\ i_{\\mathbf{C}}\\mathbf{\\Omega}_M =0\\right\\},\n \\nonumber\n \\end{equation}\n and\n \\begin{equation}\n \\mathcal{G}:=\\left\\{\\mathbf{G}\\in\n [\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L]^v \\ \\vert\n \\ i_{\\mathbf{G}}\\mathbf{\\Omega}_M =0\\right\\}.\n \\nonumber\n \\end{equation}\nAs $M_{ab}(\\mathfrak{u})$ has constant rank on\n$\\mathbb{P}_L$, there exists a basis, \n\\begin{equation}\n\\Big\\{\n\\mathbf{\\mathfrak{z}}_{\\left( n\\right) }\\left( \\mathfrak{u}\\right) =\\left( \\mathfrak{z}_{\\left( n\\right) }^{1}\\left( \\mathfrak{u}\\right) ,\\ldots,\\mathfrak{z}_{\\left( n\\right) }^{D}\\left( \\mathfrak{u}\\right) \\right) \\ \\vert\n\\ M_{ab}\\left( \\mathfrak{u}\\right) \\mathfrak{z}_{\\left( n\\right) }^{b}\\left( \\mathfrak{u}\\right) =0,\\ \nn=1,\\ldots,N_{0} \\Big\\} ,\n\\nonumber\n\\end{equation}\nfor $\\ker M_{ab}\\left( \\mathfrak{u}\\right) $ at each\n$\\mathfrak{u}\\in\\mathbb{P}_{L}$. 
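For concreteness, the kernel construction can be traced through a minimal example of our own (an assumption for illustration, not taken from the text): the almost regular Lagrangian $L=\frac{1}{2}(v^{1})^{2}+q^{1}v^{2}$ on $\mathbb{Q}=\mathbb{R}^{2}$, so that $D=2$.

```latex
% Illustrative example (our assumption): L = (1/2)(v^1)^2 + q^1 v^2 on Q = R^2.
M_{ab}=\frac{\partial^{2}L}{\partial v^{a}\partial v^{b}}
      =\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix},
\qquad
\ker M_{ab}=\hbox{span}\left\{\mathfrak{z}_{(1)}=(0,1)\right\},
\qquad
N_{0}=1.
```

Because $M_{ab}$ here has constant rank one everywhere, the single kernel vector $\mathfrak{z}_{(1)}$ can be chosen globally on $\mathbb{P}_{L}$.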
Spans of both\n\\begin{eqnarray}\n \\mathcal{C}&=& \\hbox{span }\n \\left\\{\\mathbf{U}^q_{(n)}=\\mathbf{\\mathfrak{z}}_{(n)}\\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q}, n=1,\n \\dots, N_0\\right\\}, \\hbox{and}\\>\n \\nonumber\n \\\\\n \\mathcal{G}&=& \\hbox{span }\n \\left\\{\\mathbf{U}^v_{(n)}=\\mathbf{\\mathfrak{z}}_{(n)}\\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v}, n=1,\n \\dots, N_0\\right\\},\n \\nonumber\n\\end{eqnarray}\ncan then be constructed. Importantly, $\\mathcal{G}$ is involutive\n\\cite{Car1990a}, and when the rank of\n$\\mathbf{\\Omega}_L(\\mathfrak{u})$ is constant on $\\mathbb{P}_L$,\n$\\hbox{ker } \\mathbf{\\Omega}_L(\\mathfrak{u})$ is involutive as well.\n\nCorresponding to $\\mathbf{U}^q_{(n)}$ and $\\mathbf{U}^v_{(n)}$ we have\nthe one-forms \n$\\mathbf{\\Theta}^{(m)}_q$ and $\\mathbf{\\Theta}^{(m)}_v$,\nwhere $\\langle\\mathbf{\\Theta}^{(m)}_q\\vert\n\\mathbf{U}^q_{(n)} \\rangle= \\delta_{(n)}^{(m)}$ and\n$\\langle\\mathbf{\\Theta}^{(m)}_v\\vert \\mathbf{U}^v_{(n)} \\rangle=\n\\delta_{(n)}^{(m)}$. Then $\\left[ \\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_{L}\\right] ^{q}=\\mathcal{C}\\oplus\\mathcal{C}_{\\perp}$ and\n$\\left[ \\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_{L}\\right] ^{v}=\\mathcal{G}\\oplus\\mathcal{G}_{\\perp}$, where \n\\begin{eqnarray}\n\\mathcal{C}_{\\perp}:=\\bigg\\{ \\mathbf{X}\\in\\left[ \\mathbf{T}_{\\mathfrak{u}}\n \\mathbb{P}_{L}\\right] ^{q}\\ &\\vert&\\ \n\\left\\langle \\left. \\mathbf{\\Theta}_{q}^{(n)}\\right\\vert \\mathbf{X}\n\\right\\rangle =0,\\>\\> n=1,\\dots,N_{0} \\ \\bigg\\} ,\n\\ \\hbox{and}\n\\nonumber\n\\\\\n\\mathcal{G}_{\\perp}:=\\bigg\\{ \\mathbf{X}\\in\\left[\n \\mathbf{T}_{\\mathfrak{u}}\n \\mathbb{P}_{L}\\right] ^{v}\\ &\\vert&\\ \n\\left\\langle \\left. \\mathbf{\\Theta}_{v}^{(n)}\\right\\vert \\mathbf{X}\n\\right\\rangle =0, \\>\\> n=1,\\dots,N_{0} \\ \\bigg\\}.\n\\nonumber\n\\end{eqnarray}\nThe vectors that lie in $\\hbox{ker\n}\\mathbf{\\Omega}_L(\\mathfrak{u})$ can be determined by using the reduced\nmatrix $\\bar{F}_{nm}:=\\mathfrak{z}_{\\left( n\\right)\n}^{a}F_{ab}\\mathfrak{z}_{\\left( m\\right) }^{b}$ to define\n\\begin{equation}\n\\overline{\\mathcal{C}}:=\\left\\{ \\overline{\\mathbf{C}}\\in\\mathcal{C}\n\\ \\bigg\\vert\\ \\sum_{m=1}^{N_{0}}\\bar{F}_{nm}\\overline{C}^{\\left( m\\right)\n}=0\\right\\} \\subset\\mathcal{C}.\n\\nonumber\n\\end{equation}\nThen,\n\n\\begin{theorem}\n\\label{@NVGen}The vectors $\\mathbf{K=K}^{q}+\\mathbf{K}^{v}\\in\\ker\n\\mathbf{\\Omega}_{L}$ are given by\n\\begin{equation}\n\\mathbf{K}^{q}=\\overline{\\mathbf{C}},\\ \\ \\mathbf{K}^{v}=\\mathbf{G}+\\widehat{\\mathbf{C}},\n\\nonumber\n\\end{equation}\nwhere $\\overline{\\mathbf{C}}\\in\\overline{\\mathcal{C}}$, $\\mathbf{G}\\in\\mathcal{G}$, and $\\widehat{\\mathbf{C}}\\in\\mathcal{G}_{\\perp}$ is the unique solution of $M_{ab}\\widehat{C}^{b}=-F_{ab}\\overline{C}^{b}$.\n\\end{theorem}\nWe found in \\cite{ADS2020} that $\\dim\\> \\left(\\ker\n\\mathbf{\\Omega}_{L}\\left( \\mathfrak{u}\\right)\\right) \n =N_{0}+\\bar{D}$, where\n$\\bar{D} :=\\dim \\>\\overline{\\mathcal{C}}\\le N_0$ (see \\cite{ADS2020}\nfor proof). However, the results of \\textbf{Lemma \\ref{AllS}} show\nthat we can construct from any vector $\\mathbf{U}^q\\in \\mathcal{C}$ a\nvector that lies in $\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})$,\nand as dim $(\\mathcal{C})=N_0$, it follows that dim $(\\hbox{ker\n}\\mathbf{\\Omega}_L(\\mathfrak{u}))=2N_0$. \n\n\\subsection{First-order Lagrangian constraints\\label{&LCon}}\n\nFor singular Lagrangians, solutions of the energy equation\n$\\mathbf{X}_E$ are not unique. 
It is well known that they also do not,\nin general, exist throughout $\\mathbb{P}_L$, but are instead confined\nto a submanifold of the space given by Lagrangian constraints. \n\nWith $\\mathbf{X}_{E}=\\mathbf{X}^q_{E}+\\mathbf{X}^v_{E}$, it is\nconvenient to use the one-form $\\mathbf{\\Psi}$ constructed from the energy\nequation through\n\\begin{equation}\n \\Omega_{M}^{q\\flat}\\mathbf{X}_{E}^{v}=\\mathbf{\\Psi}.\n\\nonumber\n\\end{equation}\nThe \\textbf{first-order\n constraint functions} are then $\\gamma_{n}^{\\left[\n 1\\right] }:=\\left\\langle \n\\left. \\mathbf{\\Psi}\\right\\vert \\mathbf{U}_{\\left( n\\right)\n}^{q}\\right\\rangle$ for $n=1,\\ldots,N_{0}$. In terms of local coordinates,\n\\begin{equation}\n\\gamma_{n}^{\\left[ 1\\right] }= U_{\\left( n\\right) }^{qa}\\left( \\frac{\\partial E}{\\partial q^{a}}+F_{ab}v^{b}\\right).\n\\nonumber\n\\end{equation}\nThey may also be expressed \\cite{Got1979,\n Got1980} as $\\gamma^{[1]}_n = \\langle\n\\mathbf{d}E\\vert\\mathbf{P}_{(n)}\\rangle=\\mathbf{P}_{(n)}E$ for any\nbasis $\\{\\mathbf{P}_{(n)}\\}$ of ker $\\mathbf{\\Omega}_L(\\mathfrak{u})$\nfor which $\\langle\\mathbf{\\Theta}^{(m)}_q\\vert\n\\mathbf{P}_{(n)}\\rangle=\\delta^{(m)}_{(n)}$. \nIn general, $\\gamma_{n}^{\\left[ 1\\right] }\\ne0$ on\n$\\mathbb{P}_L$. Instead, the condition $\\gamma_{n}^{\\left[ 1\\right]\n}=0$ must be imposed, and this in turn defines a set of surfaces\nin $\\mathbb{P}_L$ given by the collection $\\hbox{C}_{L}^{\\left[ 1\\right] }:=\\left\\{\n\\gamma_{1}^{\\left[ 1\\right] },\\ldots,\\gamma_{N_{0}}^{\\left[ 1\\right]\n}\\right\\} $. The intersection of these\nsurfaces, \n$\\mathbb{P}_{L}^{\\left[ 1\\right] }:=\\left\\{ \n\\mathfrak{u}\\in\\mathbb{P}_{L}\\ \\vert\\ \\ \\gamma_{n}^{\\left[ 1\\right]\n}\\left( \\mathfrak{u}\\right) =0\\, , n=1,\\ldots,N_{0}\\ \\right\\}$ is\ncalled the \\textbf{first-order Lagrangian constraint submanifold}, and\nhas $\\dim \\mathbb{P}_{L}^{\\left[1\\right] } =2D-I_{\\left[\n 1\\right]}$. 
Here $I_{\\left[1\\right] }$ is the\nnumber of independent functions in $\\hbox{C}_{L}^{\\left[ 1\\right]\n}$ with $I_{\\left[ 1\\right] }=\\hbox{rank } \\left\\{\\mathbf{d}\\gamma_{n}^{\\left[\n 1\\right] }\\right\\} \\leq N_{0}$. \n\nThe \\textbf{constraint\n one-form} \n\\begin{equation}\n \\mathbf{\\beta}[\\mathbf{X}_E] :=\n \\mathbf{d}E-i_{\\mathbf{X}_E}\\mathbf{\\Omega}_L,\n\\nonumber\n\\end{equation}\nwas introduced in \\cite{ADS2020} with the condition \n$\\mathbf{\\beta}[\\mathbf{X}_E]=0$ giving both the \nsolution of the energy equation and the submanifold\n$\\mathbb{P}_L^{[1]}$. As $\\langle\\mathbf{\\beta} \\vert \\mathbf{U}^q_{(n)}\\rangle=\n\\gamma_n^{[1]}$, this $\\mathbf{\\beta}[\\mathbf{X}_E]$ can also\nbe expressed as \n\\begin{equation}\n \\mathbf{\\beta}[\\mathbf{X}_E]=\\sum_{n=1}^{N_0}\n \\gamma_n^{[1]}\\mathbf{\\Theta}^{(n)}_q.\n \\label{beta}\n\\end{equation}\n\n\\subsection{The Generalized Lie Symmetry Group for the Euler-Lagrange\n Equations of Motion\\label{&GenEL-Sym}} \n\nThe generalized Lie symmetry group for $\\mathcal{O}(\\mathfrak{u}_0)$\nis determined using\n\\begin{equation}\n \\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})} :=\n \\{\\mathbf{P} \\in \\hbox{ker } \\mathbf{\\Omega}_L(\\mathfrak{u})\\\n \\vert \\ [\\mathbf{G},\\mathbf{P}]\n \\in\\left[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^v \\>\\>\\forall\n \\>\\>\\mathbf{G}\\in\\mathcal{G}\\}, \n\\end{equation}\nalong with the following collection of functions on $\\mathbb{P}_L$, \n\\begin{equation}\n \\overline{\\mathcal{F}} := \\{f\\in C^\\infty \\hbox{ on } \\mathbb{P}_L\n \\ \\vert\\ \\ \\mathbf{G}f = 0 \\>\\>\\forall\\>\\> \\mathbf{G} \\in\n \\mathcal{G}\\}.\n\\nonumber\n\\end{equation}\nThis $\\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ is also\ninvolutive.\n\nThe following results were proved in \\cite{ADS2020}.\n\n\\begin{lemma} \\label{basic} Let \n $\\mathbf{X}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L$ \n and 
$\\mathbf{G}\\in\\mathcal{G}$ such that $[\\mathbf{G},\n \\mathbf{X}]\\in\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})$. Then \n $[\\mathbf{G},\n \\mathbf{X}]\\in\\left[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^v$\n iff $[\\mathbf{G}, \\mathbf{X}]\\in\\mathcal{G}$.\n\\end{lemma}\nIt then follows that $[\\mathbf{G},\\mathbf{P}]\\in\\mathcal{G}$ for all\n$\\mathbf{P}\\in \\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$. As $\\mathcal{G}$ is involutive\nand as $\\mathcal{G}\\subset \\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})$,\n$\\mathcal{G}\\subset \\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ as well, and thus \n$\\mathcal{G}$ is an ideal of $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$. \n\n\\begin{lemma}\n \\label{basis}\n There exists a choice of basis for ker\n $\\mathbf{\\Omega}_L(\\mathfrak{u})$ that is \n also a basis of $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$.\n\\end{lemma}\n\nAs $\\mathcal{G}$ is an ideal of $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$, we may define for\nany $\\mathbf{P}_1, \\mathbf{P}_2\\in\\overline{\\hbox{ker \n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ the equivalence relation:\n$\\mathbf{P}_1\\sim\\mathbf{P}_2$ iff\n$\\mathbf{P}_1-\\mathbf{P}_2\\in\\mathcal{G}$. The equivalence class,\n\\begin{equation}\n \\left[\\mathbf{P}\\right]:=\n \\{\\mathbf{Y}\\in \\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\\ \\vert \n \\ \\mathbf{Y}\\sim\\mathbf{P}\\}, \n\\end{equation}\ncan be constructed along with the quotient space\n$\\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}$. \n(For the sake of notational clarity we will suppress\nthe square brackets for equivalence classes when there\nis no risk of confusion.) 
This space is a collection of\nvectors that lie in the kernel of $\\mathbf{\\Omega}_L$, but with the\nvectors in $\\mathcal{G}$ removed; $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}$ thereby addresses the\nfirst two observations listed at the end of \\textbf{Section\n \\ref{&EL-Sym}}. \n\nWe now turn our attention to the third observation. Because the\nintegral flow $\\mathfrak{u}_{\\mathbf{X}}(t)$ of any solution \n$\\mathbf{X}$ of the energy equation must lie on $\\mathbb{P}_L^{[1]}$, a \nsymmetry transformation of $\\mathfrak{u}_{\\mathbf{X}}(t)$ must result\nin an integral flow $\\mathfrak{u}_{\\mathbf{Y}}(t)$ of another solution\n$\\mathbf{Y}$ of the energy equation, which must also lie on\n$\\mathbb{P}_L^{[1]}$. Implementing this condition is done through\n$\\mathbf{\\beta}[\\mathbf{X}_E]$. \n\nAs $\\langle\\mathbf{\\beta}[\\mathbf{X}_E]\\vert \\mathbf{G}\\rangle=\\langle\n\\mathbf{d}E\\vert\\mathbf{G}\\rangle= \\mathbf{G}E=0$ for all \n$\\mathbf{G}\\in\\mathcal{G}$ on $\\mathbb{P}_L^{[1]}$, the Lie derivative\n$\\mathfrak{L}_{\\mathbf{G}}$ of $\\mathbf{\\beta}$ along $\\mathbf{G}$ is, \n\\begin{equation}\n \\mathfrak{L}_{\\mathbf{G}}\\mathbf{\\beta}[\\mathbf{X}_E]= \\sum_{n=1}^{N_0}\n \\left(\\mathbf{G}\\gamma_n^{[1]}\\right) \\mathbf{\\Theta}^{(n)}_q.\n \\nonumber\n\\end{equation}\nGiven a\n$\\mathbf{P}_{(n)} \\in\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ such that $\\mathbf{P}_{(n)} = \n\\mathbf{U}_{(n)}^q + \\widehat{\\mathbf{U}}_{(n)} +\\mathbf{G}'$ with\n$\\mathbf{G}'\\in\\mathcal{G}$, $\\mathbf{G}\\gamma_n^{[1]}=\n [\\mathbf{G}, \\mathbf{P}_{(n)}]E + \n\\mathbf{P}_{(n)}\\mathbf{G}E$. But $\\mathcal{G}$ is an ideal of\n$\\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}$, and thus\n$\\mathbf{G}\\gamma_{n}^{[1]}=0$ on the first-order constraint\nmanifold. It follows that $\\mathfrak{L}_{\\mathbf{G}}\\mathbf{\\beta}\n=0$ on $\\mathbb{P}^{[1]}_L$. 
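As a concrete illustration of these constraint structures, consider a minimal example of our own (an assumption for illustration, not taken from \cite{ADS2020}): $L=\frac{1}{2}(v^{1})^{2}+q^{1}v^{2}$ on $\mathbb{Q}=\mathbb{R}^{2}$, for which $M_{ab}=\hbox{diag}(1,0)$, $N_{0}=1$, and $\mathfrak{z}_{(1)}=(0,1)$. With the energy $E=v^{a}\,\partial L/\partial v^{a}-L$:

```latex
% Illustrative example (our assumption): L = (1/2)(v^1)^2 + q^1 v^2, so that
% theta_L = v^1 dq^1 + q^1 dq^2 and Omega_L = dq^1 ^ dv^1 - dq^1 ^ dq^2,
% giving F_21 = 1 = -F_12.
E = v^{a}\frac{\partial L}{\partial v^{a}} - L = \tfrac{1}{2}(v^{1})^{2},
\qquad
\mathbf{U}^{q}_{(1)} = \frac{\partial\ }{\partial q^{2}},
\qquad
\mathbf{\Theta}^{(1)}_{q} = \mathbf{d}q^{2},
% so the single first-order constraint function and constraint one-form are
\gamma^{[1]}_{1}
  = U^{qa}_{(1)}\left(\frac{\partial E}{\partial q^{a}} + F_{ab}v^{b}\right)
  = v^{1},
\qquad
\mathbf{\beta}[\mathbf{X}_{E}]
  = \gamma^{[1]}_{1}\,\mathbf{\Theta}^{(1)}_{q}
  = v^{1}\,\mathbf{d}q^{2}.
```

The condition $\gamma^{[1]}_{1}=0$ reproduces the constraint $\dot{q}^{1}=0$ that follows directly from the $q^{2}$ Euler-Lagrange equation, and for any $\mathbf{G}=g\,\partial/\partial v^{2}\in\mathcal{G}$ one checks $\mathfrak{L}_{\mathbf{G}}\mathbf{\beta}=(\mathbf{G}v^{1})\,\mathbf{d}q^{2}=0$, consistent with the general argument above.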
The collection of vectors, \n\\begin{equation}\n \\mathcal{S}\\hbox{ym} := \\big\\{\\mathbf{P}\\in \\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}\n \\ \\vert\\ \\\n \\mathfrak{L}_{\\mathbf{P}}\\mathbf{\\beta}[\\mathbf{X}_E] =\n \\mathbf{d}\\langle \\mathbf{\\beta}[\\mathbf{X}_E]\\vert \\mathbf{P}\\rangle \\hbox{ on }\n \\mathbb{P}_L^{[1]}\\big\\}, \n \\nonumber\n\\end{equation}\nis therefore well defined, and is involutive. It follows that\n$\\mathbf{P}\\in\\mathcal{S}\\hbox{ym}$ iff \n$\\langle \\mathbf{d}\\mathbf{\\beta}[\\mathbf{X}_E]\\vert \\mathbf{P}\\otimes\n\\mathbf{X}\\rangle=0$ for all $\\mathbf{X}\\in\n\\mathbf{T}\\mathbb{P}_L$. We are then able to construct from each\n$\\mathbf{P}\\in\\mathcal{S}\\hbox{ym}$ a one-parameter subgroup\n$\\mathbf{\\sigma}_{\\mathbf{P}}(\\epsilon,\\mathfrak{u})$ defined as the solution\nto \n\\begin{equation}\n \\frac{d\\mathbf{\\sigma}_{\\mathbf{P}}}{d\\epsilon} :=\n \\mathbf{P}\\left(\\mathbf{\\sigma}_{\\mathbf{P}}\\right), \n \\nonumber\n\\end{equation}\nwhere $\\sigma_{\\mathbf{P}}(0,\\mathfrak{u}) = \\mathfrak{u}$ for\n$\\mathfrak{u}\\in\\mathbb{P}_L$. The collection of such \nsubgroups gives the Lie group\n$\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}}$.\n\n\\subsection{Euler-Lagrange Solutions of the Energy\n Equation\\label{&Sol}}\n\nWe denote the set of \\textbf{general solutions} to the energy equation as\n\\begin{equation}\n\\mathcal{S}\\hbox{ol}\n:=\\{\\mathbf{X}_{E}\\in\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\n\\ \\vert\\ \\ i_{\\mathbf{X}_{E}}\\mathbf{\\Omega}_L = \\mathbf{d}E\n\\hbox{ on } \\mathbb{P}_L^{[1]}\\}.\n\\nonumber\n\\end{equation}\nIf $\\mathfrak{u}(t)$ is the integral flow of a vector in\n$\\mathcal{S}\\hbox{ol}$ whose projection onto\n$\\mathbb{Q}$ corresponds to a solution of\nthe Euler-Lagrange equations of motion, then\n$\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}}$ must map any\nsuch flow into another one. 
However, while\n$\\mathfrak{L}_{\\mathbf{G}}\\mathbf{X}_{L} = \n[\\mathbf{G}, \\mathbf{X}_{L}]\\in \\hbox{ker\n}\\mathbf{\\Omega}_L(\\mathfrak{u})$, in general\n$\\mathfrak{L}_{\\mathbf{G}}\\mathbf{X}_{L} \\notin \\mathcal{G}$. The \naction of $\\sigma_{\\mathbf{P}}$ on the flow\n$\\mathfrak{u}_{\\mathbf{X}_{L}}$ will in general result in a flow\n$\\mathfrak{u}_{\\mathbf{Y}}$ generated by a $\\mathbf{Y}$ that is\n\\textit{not} a SOLVF. It need not even be a solution of the energy\nequation. By necessity, general solutions of the energy equation must\nbe considered, and we are thus led to the collection of solutions\n\\begin{equation}\n\\overline{\\mathcal{S}\\hbox{ol}}\n:=\\{\\overline{\\mathbf{X}}_{EL}\\in\\mathcal{S}\\hbox{ol} \n\\ \\vert \\ [\\mathbf{G},\n \\overline{\\mathbf{X}}_{EL}]\\in\n\\left[\\mathbf{T}_{\\mathfrak{u}}\\mathbb{P}_L\\right]^v \\>\\>\n\\>\\> \\forall\\ \\mathbf{G}\\in\n\\mathcal{G}\\}. \n\\nonumber\n\\end{equation}\nThis collection generates the family of integral flows \n\\begin{equation}\n \\mathcal{O}_{EL}(\\mathfrak{u}_0) := \\bigg\\{\\mathfrak{u}(t) \\ \\bigg\\vert \\\n \\frac{d\\mathfrak{u}}{dt}=\\overline{\\mathbf{X}}_{EL}(\\mathfrak{u}),\n \\overline{\\mathbf{X}}_{EL}\\in\\overline{\\mathcal{S}\\hbox{ol}},\n \\>\\>\\hbox{and }\n \\mathfrak{u}(t_0)=\\mathfrak{u}_0\\bigg\\}.\n \\nonumber\n\\end{equation}\nImportantly, if $\\mathbf{P}\\in\\mathcal{S}\\hbox{ym}$, then \n\\begin{equation}\n i_{[\\mathbf{X}_E,\n \\mathbf{P}]}\\mathbf{\\Omega}_L=i_{\\mathbf{P}}\\mathbf{d}\\mathbf{\\beta}[\\mathbf{X}_E]=0.\n \\nonumber\n\\end{equation}\nAs such, we find that \n\n\\begin{lemma} \\label{X_EL} $[\\overline{\\mathbf{X}}_{EL}, \\mathbf{P}] \\in\n \\overline{\\hbox{ker } \\mathbf{\\Omega}_L(\\mathfrak{u})}$ for all\n $\\mathbf{P}\\in \\mathcal{S}\\hbox{ym}$. 
\n\\end{lemma}\n\n\\noindent It then follows that\n\n\\begin{theorem}\\label{Group-EL} $\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}}$ forms a group of\n symmetry transformations of $\\mathcal{O}_{EL}(\\mathfrak{u}_0)$.\n\\end{theorem}\n\n\\noindent Proof of both assertions can be found in \\cite{ADS2020}.\n\nThe generators of the generalized Lie symmetry for\n$\\mathcal{O}_{EL}(\\mathfrak{u}_0)$ are \nthus given by $\\mathcal{S}\\hbox{ym}$. The corresponding solutions\nto the Euler-Lagrange equations that have this symmetry are given by\n$\\overline{\\mathcal{S}\\hbox{ol}}$, and a vector\n$\\overline{\\mathbf{X}}_{EL} \\in \\overline{\\mathcal{S}\\hbox{ol}}$ is\ncalled a \\textbf{second-order, Euler-Lagrange vector field (SOELVF)}. It has\nthe general form,\n\\begin{equation}\n \\overline{\\mathbf{X}}_{EL}=\\overline{\\mathbf{X}}_{L} +\n \\sum_{m=1}^{N_0} u^m(\\mathfrak{u}) \\left[\\mathbf{P}_{(m)}\\right],\n \\label{EL}\n\\end{equation}\nwhere $u^m(\\mathfrak{u})\\in\\overline{\\mathcal{F}}$ and\n$\\{[\\mathbf{P}_{(n)}], n=1, \\dots, N_0\\}$ is a choice of basis \nfor $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}$. The vector field\n$\\overline{\\mathbf{X}}_L$ is constructed from the second order\nLagrangian vector field $\\mathbf{X}_L$ and vectors in $\\hbox{ker\n}\\mathbf{\\Omega}_L(\\mathfrak{u})$ by requiring\n$\\overline{\\mathbf{X}}_L\\in\\overline{\\mathcal{S}\\hbox{ol}}$. 
This\nconstruction is described in \\cite{ADS2020}; we will only need the\nexistence of such a vector field in this paper.\n\n\\section{Generalized Lie Symmetries of the Action and its Impact on\n Dynamics\\label{&A-S}} \n\nWe now turn our attention to the generators of the\ngeneralized Lie symmetry of the action, and the impact this symmetry\nhas on the evolution of dynamical systems.\n\n\\subsection{The Generalized Lie Symmetry of the Action\\label{SandA}}\n\nIn determining the conditions (as listed in \\textbf{Lemma \\ref{Action-Sym}}) under\nwhich the action admits a generalized Lie symmetry, the understanding\nthat the action must have this symmetry for all possible paths on\n$\\mathbb{Q}$ played an essential role. By necessity, these\nconditions could only be placed on $\\rho_L$, and not on \n$\\dot{\\rho}_L$; unlike $\\rho_L$, $\\dot{\\rho}_L$ depends explicitly on the\nevolution of a particular path, while the symmetry must hold\nfor all paths. We note, however, that the family $\\mathcal{O}_{EL}$ of\ntrajectories determined by the Euler-Lagrange equations of motion also consists of\npaths on $\\mathbb{Q}$, and as such the generalized Lie symmetry of the\naction is also a symmetry of $\\mathcal{O}_{EL}$. Importantly, how these\ntrajectories evolve with time is known, and as such, the $\\dot{\\rho}_L$ for a\ngiven $\\rho_L$ is also known for these trajectories. With this\nunderstanding, and after comparing \\textbf{Lemma \\ref{Action-Sym}} and\nthe results of \\textbf{Lemma \\ref{AllS}} with \\textbf{Lemma \\ref{GS}},\nwe conclude that the generators of the generalized Lie symmetry of the\naction must also be generators of the generalized Lie symmetry of the\nEuler-Lagrange equations of motion. This leads us to consider the\nfollowing collection of vectors. 
\n\\begin{equation}\n \\mathcal{S}\\hbox{ym}\\mathcal{L} = \\{\\mathbf{P}\\in \\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}\\ \\vert\\\n \\gamma_{\\mathbf{P}}^{[1]}=\\langle \\mathbf{\\beta}\\vert\\mathbf{P}\\rangle\n =0 \\hbox{ on }\\mathbb{P}_L \\}.\n \\nonumber\n\\end{equation}\nWe will also need $N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}=\\hbox{dim\n}(\\mathcal{S}\\hbox{ym}\\mathcal{L})$ in the following. \n\n\\begin{lemma}\\label{subset}\n $\\mathcal{S}\\hbox{ym}\\mathcal{L}\\subset\\mathcal{S}\\hbox{ym}$.\n \n \\begin{proof}\n Let $\\{\\mathbf{P}_{(l)}, l = 1, \\dots, N_0 \\}$ be a basis of\n $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}$ such that\n $\\mathbf{P}_{(l)}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$ for\n $l= 1, \\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$. We may choose the basis of\n $\\mathcal{C}$ such that $\\langle \\mathbf{\\Theta}^{(m)}_q\\vert\n \\mathbf{P}_{(l)}\\rangle=\\delta^{(m)}_{(l)}$. Then for any\n $\\mathbf{P}_{(n)}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$, we see from\n Eq.~$(\\ref{beta})$ that \n \\begin{equation}\n \\langle\\mathbf{d}\\mathbf{\\beta}\\vert \\mathbf{P}_{(n)}\\otimes\\mathbf{Y}\\rangle= \\sum_{m=1}^{N_0}\n \\left(\n \\langle\\mathbf{d}\\gamma_m^{[1]}\\vert\n \\mathbf{P}_{(n)}\\rangle \\langle \\mathbf{\\Theta}^{(m)}_q\\vert\n \\mathbf{Y}\\rangle-\n \\langle\\mathbf{d}\\gamma_m^{[1]}\\vert\\mathbf{Y}\n \\rangle \\langle \\mathbf{\\Theta}^{(m)}_q\\vert\n \\mathbf{P}_{(n)}\\rangle+\n \\gamma_m^{[1]}\\langle\\mathbf{d}\\mathbf{\\Theta}^{(m)}_q\\vert\n \\mathbf{P}_{(n)}\\otimes\\mathbf{Y}\\rangle \n \\right),\n \\nonumber\n \\end{equation}\n for any $\\mathbf{Y}\\in \\mathbf{T}\\mathbb{P}_L$. 
The last term\n vanishes on the first-order constraint manifold\n $\\mathbb{P}_L^{[1]}$, while for the second term, \n $\\langle\\mathbf{d}\\gamma_m^{[1]}\\vert\\mathbf{Y} \\rangle\n \\langle \\mathbf{\\Theta}^{(m)}_q\\vert \\mathbf{P}_{(n)}\\rangle=\n \\langle\\mathbf{d}\\gamma_n^{[1]}\\vert\\mathbf{Y} \\rangle\n \\delta ^{(m)}_{(n)}$. But as\n $\\mathbf{P}_{(n)}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$,\n $\\gamma_n^{[1]}=0$ on $\\mathbb{P}_L$, and this term\n vanishes as well. Finally, for the first term,\n $\\langle\\mathbf{d}\\mathbf{\\gamma}_m^{[1]}\\vert\n \\mathbf{P}_{(n)}\\rangle= \\mathbf{P}_{(n)}\\mathbf{P}_{(m)}E=\n [\\mathbf{P}_{(n)},\\mathbf{P}_{(m)}]E+\\mathbf{P}_{(m)}\\mathbf{P}_{(n)}E$. But\n $\\gamma_n^{[1]}=\\mathbf{P}_{(n)}E=0$ on $\\mathbb{P}_L$, while\n $\\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ is\n involutive. There then exists a $\\mathbf{P}_{(nm)}\\in\n \\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ such that\n $\\mathbf{P}_{(nm)}=[\\mathbf{P}_{(n)}, \\mathbf{P}_{(m)}]$. As\n $\\mathbf{P}_{(nm)}E:=\\gamma_{(nm)}^{[1]}$, this\n $\\gamma_{(nm)}^{[1]}$ must be a linear\n combination of first-order constraint functions, and so it also vanishes\n on $\\mathbb{P}_L^{[1]}$. It then follows that\n $\\langle\\mathbf{d}\\mathbf{\\beta}\\vert\n \\mathbf{P}_{(n)}\\otimes\\mathbf{Y}\\rangle=0$ on\n $\\mathbb{P}_L^{[1]}$, and\n $\\mathbf{P}_{(n)}\\in\\mathcal{S}\\hbox{ym}$.\n \\end{proof}\n\\end{lemma}\n\nIf $\\mathbf{P}_1, \\mathbf{P}_2 \\in \\mathcal{S}\\hbox{ym}\\mathcal{L}$, then\n$\\gamma^{[1]}_{[\\mathbf{P}_1,\\mathbf{P}_2]} =\\mathbf{P}_1\\mathbf{P}_2E-\n\\mathbf{P}_2\\mathbf{P}_1E=\\mathbf{P}_1\\gamma^{[1]}_{\\mathbf{P}_2}-\n\\mathbf{P}_2\\gamma^{[1]}_{\\mathbf{P}_1}=0$, and thus\n$\\mathcal{S}\\hbox{ym}\\mathcal{L}$ is involutive. 
Then for \neach $\\mathbf{P}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$ we once again have the\none-parameter subgroup\n$\\sigma^{\\mathcal{S}\\hbox{ym}\\mathcal{L}}_{\\mathbf{P}}(\\epsilon,\\mathfrak{u})$\ndefine as the integral flow of\n\\begin{equation}\n \\frac{\\mathbf{d}\\sigma^{\\mathcal{S}\\hbox{ym}\\mathcal{L}}_{\\mathbf{P}}}{\\mathbf{d}\\epsilon}\n :=\\mathbf{P},\n \\nonumber\n\\end{equation}\nwith\n$\\sigma^{\\mathcal{S}\\hbox{ym}\\mathcal{L}}_{\\mathbf{P}}(0,\\mathfrak{u})=\\mathfrak{u}$\nfor $\\mathfrak{u}\\in\\mathbb{P}_L$. The collection of such subgroups\ngives the Lie group $\\hbox{Gr}_{\\mathcal{S}{ym}\\mathcal{L}}$. As\n$\\mathcal{S}{ym}\\mathcal{L}\\subset \\mathcal{S}{ym}$,\n$\\hbox{Gr}_{\\mathcal{S}{ym}\\mathcal{L}}$ is a Lie subgroup of\n$\\hbox{Gr}_{\\mathcal{S}{ym}}$. It then follows from \\textbf{Theorem \\ref{Group-EL}}\nthat $\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ also forms a group of\n symmetry transformations of $\\mathcal{O}_{EL}(\\mathfrak{u}_0)$. As\n the family $\\mathcal{O}_{EL}(\\mathfrak{u}_0)$ of trajectories are\n paths on $\\mathbb{Q}$, and as the symmetry transformation of the\n action must be the same for all paths on $\\mathbb{Q}$, it also follows that, \n\n\\begin{theorem}\\label{Group-L}\n $\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ forms the group of\n symmetry transformations of the action $S$.\n\\end{theorem}\n\n\\subsection{Symmetries and Dynamics\\label{SandD}}\n\nWhile $\\mathcal{O}_{EL}(\\mathfrak{u}_0)$ gives the family of integral\nflows on which both $\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}}$ and\n$\\hbox{Gr}_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ act, a general flow in\n$\\mathcal{O}_{EL}(\\mathfrak{u}_0)$ need not be confined to $\\mathbb{P}_{L}^{\\left[1\\right]\n}$, and yet this is the submanifold on which the solutions \n$\\overline{\\mathbf{X}}_{EL}\\in mathcal{S}{\\mathrm{ol}}$ of the energy\nequations exist. 
In such cases it is necessary to jointly choose a\nSOELVF $\\overline{\\mathbf{X}}_{EL}$ and a submanifold \nof $\\mathbb{P}_{L}^{\\left[ 1\\right] }$ on which the resultant flow\n$\\mathfrak{u}_{\\overline{\\mathbf{X}}_{EL}}$ will be confined. This is\ndone through the implementation of a constraint algorithm, such as the one\nproposed in \\cite{ADS2020}. In that paper, the output of this\nalgorithm was the most that could be said about the general structure \nof SOELVFs whose integral flow fields lie on\n$\\mathbb{P}_L^{[1]}$. Here, with the results obtained in\n\\textbf{Section \\ref{SandA}}, we can say much more, and we will see\nthat the presence of a generalized Lie symmetry of the action\ngreatly restricts the structure of the SOELVFs that such systems can have.\n\nFollowing \\cite{ADS2020}, we introduce for a \n$\\overline{\\mathbf{X}}_{EL}\\in\\overline{\\mathcal{S}\\hbox{ol}}$ the\nnotation \n\\begin{equation}\n\\overline{\\mathbf{X}}_{EL}^{[1]} := \\overline{\\mathbf{X}}_{EL}, \\>\n\\overline{\\mathbf{X}}_{L}^{[1]} := \\overline{\\mathbf{X}}_{L}, \\>\n\\mathbf{P}^{[1]}_{(n)} := \\mathbf{P}_{(n)}, \\> u_{[1]}^m := u^m, \\>\nN_0^{[1]} := N_0,\n\\nonumber\n\\end{equation}\nwhen the constraint algorithm is implemented, with the superscript $[1]$ denoting\nthe first iteration of this algorithm. (This notation is only used in\nthis section.) In addition, we choose \n$\\mathbf{P}^{[1]}_{(n)}\\in \\mathcal{S}\\hbox{ym}\\mathcal{L}$ for\n$n=1,\\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$.\n\nFor the integral flow field of $\\overline{\\mathbf{X}}_{EL}$ to lie on \n$\\mathbb{P}_L^{[1]}$,\n\\begin{equation}\n \\mathfrak{L}_{\\overline{\\mathbf{X}}_{EL}}\\mathbf{\\beta}=0,\n \\label{stable}\n\\end{equation}\nwhich reduces to\n$\\mathfrak{L}_{\\overline{\\mathbf{X}}_{EL}}\\gamma_n^{[1]}=0$ on $\\mathbb{P}_L^{[1]}$.\nThis is called the \\textbf{constraint condition}. 
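To illustrate the constraint condition, take again a minimal example of our own (an assumption for illustration, not taken from \cite{ADS2020}): $L=\frac{1}{2}(v^{1})^{2}+q^{1}v^{2}$ on $\mathbb{Q}=\mathbb{R}^{2}$, for which $N_{0}=1$ and $\gamma^{[1]}_{1}=v^{1}$, while, modulo $\mathcal{G}$, $\hbox{ker }\mathbf{\Omega}_{L}$ is spanned by $\mathbf{P}_{(1)}=\partial/\partial q^{2}+\partial/\partial v^{1}$:

```latex
% Illustrative example (our assumption): L = (1/2)(v^1)^2 + q^1 v^2, with
% gamma^[1]_1 = v^1, P_(1) = d/dq^2 + d/dv^1 (mod G), and, from the
% Euler-Lagrange equations, X_L-bar = v^1 d/dq^1 + v^2 (d/dq^2 + d/dv^1).
\mathfrak{L}_{\overline{\mathbf{X}}_{EL}}\gamma^{[1]}_{1}
  = \left(\overline{\mathbf{X}}_{L} + u^{1}\mathbf{P}_{(1)}\right)v^{1}
  = v^{2} + u^{1} = 0
\quad\Longrightarrow\quad
u^{1} = -v^{2},
\qquad
\overline{\mathbf{X}}_{EL}
  = v^{1}\,\frac{\partial\ }{\partial q^{1}}.
```

Here the single arbitrary function in the SOELVF is completely determined, no second-order constraints arise, and the iteration terminates immediately; since $\gamma^{[1]}_{1}\ne 0$ on $\mathbb{P}_{L}$, this example has $N_{\mathcal{S}\hbox{ym}\mathcal{L}}=0$.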
\nAs both $u_{[1]}^n,\n\\gamma^{[1]}_n\\in \\overline{\\mathcal{F}}$,\n$\\left[\\mathbf{P}_{(n)}^{[1]}\\right]\\gamma^{[1]}_m = \n\\mathbf{P}_{(n)}\\gamma^{[1]}_m$, and after making use of the general\nform of a SOELVF given in Eq.~$(\\ref{EL})$, Eq.~$(\\ref{stable})$ reduces to\n\\begin{equation}\n \\sum_{m=1}^{N_0} \\Gamma^{[1]}_{nm} u^m_{[1]} =\n -\\left\\langle \\mathbf{d} \\gamma^{[1]}_n\\Big\\vert\n \\overline{\\mathbf{X}}^{[1]}_{L}\\right\\rangle, \\>\\hbox{with }\n \\Gamma^{[1]}_{nm} := \\left\\langle\n \\mathbf{d}\\gamma^{[1]}_n\\Big\\vert \\mathbf{P}^{[1]}_{(m)}\\right\\rangle.\n \\label{first-order}\n\\end{equation}\nNow $\\left\\langle\n\\mathbf{d}\\gamma^{[1]}_n\\Big\\vert\n\\mathbf{P}^{[1]}_{(m)}\\right\\rangle=\\mathbf{P}^{[1]}_{(m)}\\mathbf{P}^{[1]}_{(n)}E=\n [\\mathbf{P}^{[1]}_{(m)}, \\mathbf{P}^{[1]}_{(n)}]E\n+\\Gamma^{[1]}_{mn}$.\nBut $\\overline{\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ is\ninvolutive, and thus $[\\mathbf{P}^{[1]}_{(m)},\n \\mathbf{P}^{[1]}_{(n)}]E$ is a linear combination of first-order\nLagrangian constraints. As these constraints vanish on\n$\\mathbb{P}_L^{[1]}$, \n$\\Gamma^{[1]}_{nm}=\\Gamma^{[1]}_{mn}$ on the first-order constraint\nmanifold.\n\nNext, when $n=1, \\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$,\n$\\mathbf{P}_{(n)}^{[1]}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$, and\n$\\gamma_n^{[1]}=0$. Thus, $\\Gamma^{[1]}_{nm}=0$ when $n=1, \\dots,\nN_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$, and as $\\Gamma^{[1]}_{nm}$ is a\nsymmetric matrix on $\\mathbb{P}_L^{[1]}$, $\\Gamma^{[1]}_{mn}=0$ for\nthese values of $n$ as well. 
Thus while\n$\\Gamma^{[1]}_{nm}$ is an $N_0\\times N_0$ matrix, \nthe only nonzero components of this matrix lie in the\n$\\left(N_0-N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}\\right)\\times\n\\left(N_0-N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}\\right)$ submatrix\n$\\bar{\\Gamma}^{[1]}_{\\overline{n}\\,\\overline{m}}:=\n\\left\\langle\\mathbf{d}\\gamma^{[1]}_{\\overline{n}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\\bigg\\vert\n \\mathbf{P}^{[1]}_{\\overline{m}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\\right\\rangle$\nwhere $\\overline{n}, \\overline{m} = 1, \\dots,\nN_0-N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$. As $\\left\\langle \\mathbf{d}\n\\gamma^{[1]}_n\\Big\\vert\n\\overline{\\mathbf{X}}^{[1]}_{L}\\right\\rangle=0$ as well when $n=1,\n\\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$, Eq.~$(\\ref{first-order})$\n reduces to \n\\begin{equation}\n \\sum_{\\overline{m}=1}^{N_0-N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\n \\bar{\\Gamma}^{[1]}_{\\overline{n}\\,\\overline{m}}\n u^{\\overline{m}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}_{[1]} = -\\left\\langle \\mathbf{d}\n \\gamma^{[1]}_{\\overline{n}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\\Big\\vert \n \\overline{\\mathbf{X}}^{[1]}_{L}\\right\\rangle.\n \\label{red-first-order}\n\\end{equation}\nIt is then readily apparent that the\n$N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ arbitrary functions $u^m_{[1]}$\nfor $m=1, \\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ are not\ndetermined at this iteration, while $r^{[1]} =\\hbox{rank }\n\\bar{\\Gamma}^{[1]}_{\\overline{n}\\,\\overline{m}}$ of the $u^m_{[1]}$ for\n$m>N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ are. 
There\nare then $N_0^{[2]} :=N_0^{[1]}-r^{[1]}$\n \\textbf{second-order Lagrangian constraint functions} \n\\begin{equation}\n \\gamma^{[2]}_{n_{[2]}} := \\left\\langle\n \\mathbf{d}\\gamma^{[1]}_{n_{[2]}}\\Big\\vert \n \\overline{\\mathbf{X}}_{L}^{[1]}\\right\\rangle, n_{[2]}=1, \\cdots,\n N_0^{[2]},\n \\nonumber\n\\end{equation}\nwith the conditions $\\gamma^{[2]}_{n_{[2]}}=0$ imposed if necessary. In general\nthere will be $I_{[2]}:= \n\\hbox{rank }\\left\\{\\mathbf{d}\\gamma^{[1]}_{n_{[1]}},\n \\mathbf{d}\\gamma^{[2]}_{n_{[2]}}\\right\\}$\nindependent functions in $\\hbox{C}^{[2]}_L :=\n\\hbox{C}^{[1]}_L\\cup \\left\\{\\gamma^{[2]}_{n_{[2]}}\\ \\vert\n\\ n_{[2]} = 1, \\dots, N_0^{[2]}\\right\\}$, and $\\mathbb{P}_L^{[1]}$ is\nreduced to the \\textbf{second-order constraint submanifold}, \n\\begin{equation}\n \\mathbb{P}_L^{[2]} := \\left\\{\\mathfrak{u}\\in\\mathbb{P}_L^{[1]}\n \\ \\Big\\vert \\ \\gamma^{[2]}_{n_{[2]}}(\\mathfrak{u})=0, n_{[2]} = 1, \\dots,\n N_0^{[2]} \\right\\}, \n \\nonumber\n\\end{equation}\nwhere dim $\\mathbb{P}^{[2]}_L = 2D-I_{[2]}$. At this point, there are two\npossibilities. If $I_{[2]}=I_{[1]}$ or\n$I_{[2]}=2D$, the iterative process stops, and no new Lagrangian\nconstraints are introduced. If not, the process continues. \n\nFor the second iteration in the constraint algorithm, we choose a basis\n$\\left\\{\\mathbf{P}_{(n)}^{[2]}\\right\\}$ for $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G}$ and the arbitrary functions\n$\\left\\{u_{[2]}^m\\right\\}$ such that for $m=1, \\dots, N_0^{[2]}$,\n$u_{[2]}^m$ are linear combinations of $u_{[1]}^m$ that lie in the kernel\nof $\\Gamma^{[1]}_{nm}$. We once again require that\n$\\mathbf{P}_{(n)}^{[2]}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$\nfor $n=1, \\dots, N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$. 
\nThen\n\\begin{equation}\n\\overline{\\mathbf{X}}_{EL}^{[2]} = \\overline{\\mathbf{X}}_{L}^{[2]} +\n\\sum_{m=1}^{N_0^{[2]}} u_{[2]}^m \\left[\\mathbf{P}_{(m)}^{[2]}\\right],\n\\nonumber\n\\end{equation}\nwith\n\\begin{equation}\n \\overline{\\mathbf{X}}_{L}^{[2]} = \\overline{\\mathbf{X}}_{L}^{[1]} +\n \\sum_{m=N_0^{[2]}+1} ^{N_0^{[1]}}u^m_{[2]}\\left[\\mathbf{P}^{[2]}_{(m)}\\right].\n \\nonumber\n\\end{equation}\nHere, the functions $u^m_{[2]}$ for $m = N_0^{[2]}+1, \\dots, N_0^{[1]}$ have been\ndetermined through the constraint analysis of $\\gamma^{[1]}_n$. \n\nAs shown in \\cite{ADS2020}, $\\mathbf{G}u_{[1]}^m=0$. Similarly,\n $\\mathbf{G}\\gamma^{[2]}_n = \\mathfrak{L}_{[\\mathbf{G},\n \\overline{\\mathbf{X}}_{EL}]}\\mathbf{d}\\gamma^{[2]}_n=0$. Clearly\n $\\gamma^{[2]}_n\\in\\overline{\\mathcal{F}}$ and we may require\n $u^m_{[2]}\\in\\overline{\\mathcal{F}}$ as well. It \n then follows that $\\left[\\mathbf{P}^{[2]}_{(n)}\\right]\\gamma^{[2]}_m \n=\\mathbf{P}^{[2]}_{(n)}\\gamma^{[2]}_m$, and imposing Eq.~$(\\ref{stable})$\non $\\gamma^{[2]}_n $, gives\n\\begin{equation}\n \\sum_{m=1}^{N_0^{[2]}}\\Gamma^{[2]}_{nm} u^m_{[2]} =\n -\\left\\langle \\mathbf{d}\\gamma^{[2]}_n\\Big\\vert\n \\overline{\\mathbf{X}}^{[2]}_{L}\\right\\rangle,\\>\\hbox{where }\n \\Gamma^{[2]}_{nm} := \\left\\langle\n \\mathbf{d}\\gamma^{[2]}_n\\Big\\vert \\mathbf{P}^{[2]}_{(m)}\\right\\rangle,\n\\>\\> n=1, \\dots,\nN_0^{[2]}.\n\\label{second-order}\n\\end{equation}\nOnce again, $\\Gamma^{[2]}_{nm}=\\Gamma^{[2]}_{mn}$, but now on the constraint\nmanifold $\\mathbb{P}^{[2]}_L$. Moreover, since\n$\\gamma^{[2]}_n=\\gamma^{[1]}_n=0$ for $n=1, \\dots,\nN_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$, \n$\\Gamma^{[2]}_{nm}=0=\\Gamma^{[2]}_{mn}$, and $\\left\\langle\n\\mathbf{d}\\gamma_n^{[2]}\\Big\\vert\\overline{\\mathbf{X}}^{[2]}_L\\right\\rangle\n=0$. 
There is once again a reduction of Eq.~$(\\ref{second-order})$,\nand we are left with \n\\begin{equation}\n \\sum_{\\overline{m}=1}^{N_0^{[2]}-N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\n \\overline{\\Gamma}^{[2]}_{\\overline{n}\\,\\overline{m}} u^{\\overline{m}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}_{[2]} =\n -\n \\langle \\mathbf{d}\\gamma^{[2]}_{\\overline{n}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\\vert\n \\overline{\\mathbf{X}}^{[2]}_{L} \\rangle.\n\\nonumber\n\\end{equation}\nwhere\n$\\bar{\\Gamma}^{[2]}_{\\overline{n}\\,\\overline{m}}:=\n\\left\\langle\\mathbf{d}\\gamma^{[2]}_{\\overline{n}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}}\n\\bigg\\vert\\mathbf{P}^{[2]}_{(\\overline{m}+N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}})}\\right\\rangle$. As\nbefore, the $N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ arbitrary functions\n$u^{m}_{[2]}$ are not determined, while $r^{[2]} := \\hbox{rank\n}\\bar{\\Gamma}^{[2]}_{\\overline{n}\\,\\overline{m}}$ of the\nremaining $u^m_{[2]}$ for $m>N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$\nare. There are now \n$N^{[3]}_0=N_0^{[2]}-r^{[2]}$ \\textbf{third-order Lagrangian\n constraint functions},\n\\begin{equation}\n \\gamma^{[3]}_{n_{[3]}} =\n \\left\\langle\\mathbf{d}\\gamma^{[2]}_{n_{[3]}}\\Big\\vert\n \\overline{\\mathbf{X}}^{[2]}_{L}\\right\\rangle, \\> n_{[3]} = 1, \\dots,\n N_0^{[3]},\n \\nonumber\n\\end{equation}\nwith the conditions $\\gamma^{[3]}_{n_{[3]}}=0$ \nimposed if necessary. 
With \n\\begin{equation}\nI_{[3]} := \\hbox{rank } \\left\\{ \n \\mathbf{d}\\gamma_{n_{[1]}}^{[1]},\n \\mathbf{d}\\gamma_{n_{[2]}}^{[2]},\n \\mathbf{d}\\gamma_{n_{[3]}}^{[3]}\n \\right\\},\n \\nonumber\n\\end{equation}\nindependent functions in $\\hbox{C}^{[3]}_L :=\n\\hbox{C}_L^{[2]}\\cup\\left\\{\\gamma_{n_{[3]}}^{[3]}, n_{[3]}=1,\n\\dots, N_0^{[3]}\\right\\}$, we now have the \\textbf{third-order\n constraint submanifold}, \n\\begin{equation}\n \\mathbb{P}_L^{[3]}:=\\left\\{\\mathfrak{u}\\in \\mathbb{P}^{[2]}_L\n \\ \\Big\\vert \\ \\gamma^{[3]}_{n_{[3]}}(\\mathfrak{u})=0, n_{[3]}=1,\n \\dots, N_0^{[3]}\\right\\}.\n \\nonumber\n\\end{equation}\nOnce again, the process stops when\n$I_{[3]}=I_{[2]}$ or $I_{[3]}=2D$. However, if $I_{[3]}\\ne I_{[2]}$ and\n$I_{[3]}<2D$, the process continues, terminating at some final order\n$n_F$ at which no new independent Lagrangian constraints are\nintroduced. The end result is the final constraint submanifold\n$\\mathbb{P}_L^{[n_F]}$, along with the space of solutions\n\\begin{equation}\n \\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L} :=\n \\{\\overline{\\mathbf{X}}_{EL}\\>\\vert\\>\n \\mathfrak{L}_{\\overline{\\mathbf{X}}_{EL}}\\mathbf{\\beta}=0\\}.\n \\nonumber\n\\end{equation}\nImportantly, $\\hbox{dim }\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}\\ge\nN_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$. \n\n\\section{The Generalized Lie Symmetries of Three Dynamical\n Systems\\label{&Exam}}\n\nThree examples of dynamical systems with almost regular Lagrangians\nwere introduced in \\cite{ADS2020}. In that paper the focus of these\nexamples was on the explicit construction of the dynamical structures \nneeded to describe and predict motion in the Lagrangian phase space,\nand to show that these structures are projectable to \nthe Hamiltonian phase space. We return to these examples here, but\nwith the focus now being on the generalized Lie symmetries of each, and the\napplication of the results we have found in this paper. In particular,\nwe are interested in the dimensionality of the symmetry groups for\neach of the systems as compared to the dimensionality of\n$\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}$ of each. 
A\nsummary of our results can be found in Table $\\ref{Table}$.\n\n\\subsection{A Lagrangian With and Without a Generalized Lie Symmetry\\label{&special}} \n\nWhether the action\n\\begin{equation}\n S_{1}:=\\int\\left[\\frac{1}{2}m\\left(\\frac{d\\widehat{q}}{dt}\\right)^2\n -V(q^a)\\right]dt, \n \\nonumber\n\\end{equation}\nwith $\\vert q\\vert=\\sqrt{q^a q_a}$ and $\\widehat{q}^a := q^a\/\\vert\nq\\vert$, $a=1, \\dots, D$, has a generalized Lie symmetry depends on the\nchoice of potential $V(q)$. With one choice both the Lagrangian and\nthe Euler-Lagrange equations of \nmotion have a generalized gauge symmetry; with a second choice the equations\nof motion have a generalized Lie symmetry while the Lagrangian\ndoes not; and with a third choice neither the action nor the equations of motion have\na symmetry. Irrespective of the choice of $V(q)$, however, $L$ is\nsingular, demonstrating that while all actions with a \ngeneralized Lie symmetry have a singular Lagrangian, not all singular\nLagrangians have a generalized Lie symmetry. 
\n\nDefining $\\Pi_{ab}(q):= \\delta_{ab} - \\widehat{q}_a\\widehat{q}_b$, we find\n\\begin{equation}\n \\mathbf{\\Omega}_M = \\frac{m}{\\vert q\\vert^2}\\Pi_{ab}(q)\n \\mathbf{d}q^a\\wedge\\mathbf{d}v^b,\\qquad\n \\mathbf{\\Omega}_F=\\frac{m}{\\vert q\\vert^3}\n \\left(\\widehat{q}\\cdot\\mathbf{d}q\\right)\\wedge\n \\left(v\\cdot\\Pi(q)\\cdot\\mathbf{d}q\\right).\n\\nonumber\\end{equation}\nThen $\\mathcal{C}$ and $\\mathcal{G}$ are spanned by $\\mathbf{U}^q_{(1)}\n= \\widehat{q}\\cdot\\mathbf{\\partial}\/\\mathbf{\\partial} q$ and \n $\\mathbf{U}^v_{(1)} =\n \\widehat{q}\\cdot\\mathbf{\\partial}\/\\mathbf{\\partial} v$, \nrespectively, while $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ is spanned by\n$\\mathbf{U}^v_{(1)}$ and \n\\begin{equation}\n P_{(1)}=\\widehat{q}\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q} +\n \\frac{1}{\\vert q\\vert}v\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}\n v}.\n\\nonumber\\end{equation}\nThat $\\hbox{dim }(\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}\/\\mathcal{G})=1$ then follows.\n\nThe energy is\n\\begin{equation}\n E=\\frac{1}{2}\\frac{m}{\\vert q\\vert^2}v\\cdot\\Pi(q)\\cdot v + V(q),\n \\nonumber\n\\end{equation}\nand there is only one first-order Lagrangian constraint,\n\\begin{equation}\n \\gamma^{[1]}=\\mathbf{U}^q_{(1)} V,\n \\label{g-con}\n\\end{equation}\nso that $\\mathbf{\\beta}[\\mathbf{X}_{EL}] =\n\\gamma^{[1]}\\mathbf{\\Theta}^{(1)}_q$, where $\\mathbf{\\Theta}^{(1)}_q =\n\\widehat{q}\\cdot \\mathbf{d} q$. 
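These kernel vectors can be verified directly for $D=2$ with a short SymPy sketch (our own illustrative addition, not from the paper; the convention for contracting the two-form with a vector is written as an antisymmetric coefficient matrix, and the kernel is insensitive to its overall sign):

```python
import sympy as sp

m = sp.Symbol('m', positive=True)
q1, q2, v1, v2 = sp.symbols('q1 q2 v1 v2', real=True)
q = sp.Matrix([q1, q2]); v = sp.Matrix([v1, v2])
r = sp.sqrt(q1**2 + q2**2)
Pi = sp.eye(2) - q*q.T/r**2        # Pi_ab(q) = delta_ab - qh_a qh_b
w = Pi*v                           # (v . Pi)_a

# Omega_L = Omega_M + Omega_F as an antisymmetric coefficient matrix
# in the coordinate basis (q1, q2, v1, v2):
O = sp.zeros(4, 4)
for a in range(2):
    for b in range(2):
        O[a, 2 + b] = m/r**2*Pi[a, b]       # Omega_M block (dq^a ^ dv^b)
        O[2 + b, a] = -m/r**2*Pi[a, b]
        O[a, b] = m/r**3*(q[a]/r*w[b] - q[b]/r*w[a])   # Omega_F block

Uv = sp.Matrix([0, 0, q1/r, q2/r])          # U^v_(1)
P1 = sp.Matrix([q1/r, q2/r, v1/r, v2/r])    # P_(1)
assert sp.simplify(O*Uv) == sp.zeros(4, 1)  # both lie in ker Omega_L
assert sp.simplify(O*P1) == sp.zeros(4, 1)
```

The first check uses $\Pi(q)\cdot\widehat{q}=0$; the second relies on the cancellation between the $\mathbf{\Omega}_M$ and $\mathbf{\Omega}_F$ contributions.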
Using Eq.~$(\\ref{g-con})$,\n\\begin{equation}\n \\mathfrak{L}_{\\mathbf{P}_{(1)}} \\mathbf{\\beta} =\n \\mathbf{d}\\left[\\mathbf{U}^q_{(1)}V\n \\right]-\\frac{1}{\\vert\n q\\vert^2}\\widehat{q}\\cdot\\frac{\\partial\\>\\>\\>}{\\partial \n q}\\left(\\Pi_a^{\\>\\>b}(q)\\frac{\\partial V}{\\partial\\widehat{q}^b}\\right)\n \\mathbf{d}q^a.\n \\label{e84}\n\\end{equation}\nWhether or not $\\mathcal{S}\\hbox{ym}$ or\n$\\mathcal{S}\\hbox{ym}\\mathcal{L}$ is empty therefore depends on the\nsymmetries of $V(q)$, as we would expect. \n\nIt was found in \\cite{ADS2020} that\n\\begin{equation}\n \\overline{\\mathbf{X}}_{L} = v\\cdot{\\Pi(q)}\\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q} +\n \\frac{(\\widehat{q}\\cdot v)}{\\vert q\\vert}\n v\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\partial v} -\n \\frac{\\vert q\\vert^2}{m}\\frac{\\partial V}{\\partial\n q}\\cdot\\Pi(q)\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v},\n\\nonumber\\end{equation}\nand a general SOELVF is given by\n$\\overline{\\mathbf{X}}_{EL}=\\overline{\\mathbf{X}}_{L} + u(\\mathfrak{u})\n\\left[\\mathbf{P}_{(1)}\\right]$, where\n$u(\\mathfrak{u})\\in\\overline{\\mathcal{F}}$.\nAs the constraint \nalgorithm gives \n\\begin{equation}\n \\mathfrak{L}_{\\overline{\\mathbf{X}}_{EL}}\\gamma^{[1]} = v\\cdot\\Pi\n \\cdot\\frac{\\partial \\gamma^{[1]}}{\\partial q} + u(\\mathfrak{u})\n \\mathbf{U}^q_{(1)}\\gamma^{[1]},\n \\label{stab}\n\\end{equation}\nwhether or not $u(\\mathfrak{u})$ (which in turn determines the\ndimensionality of\n$\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}_L^{[n_f]}}$) is determined\nby the constraint condition also depends on the symmetries of $V(q)$. 
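The dependence on the symmetries of $V(q)$ can be made concrete with a quick SymPy check (our own addition; the potential below is a hypothetical instance of the radial-plus-angular split analyzed next): any part of $V$ that is homogeneous of degree zero in $q$ drops out of $\gamma^{[1]}=\mathbf{U}^q_{(1)}V$.

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
r = sp.sqrt(q1**2 + q2**2)

# Hypothetical split V = V_Sph(|q|) + V_AS(qhat), with V_Sph = |q|^2 and
# V_AS = qhat_1 = q1/|q| (a function of qhat alone, degree zero in q)
V = r**2 + q1/r

# gamma^[1] = U^q_(1) V = qhat . dV/dq
gamma1 = (q1*sp.diff(V, q1) + q2*sp.diff(V, q2))/r
assert sp.simplify(gamma1 - 2*r) == 0   # only dV_Sph/d|q| = 2|q| survives
```

This is Euler's theorem at work: the angular part $V_{AS}(\widehat{q})$ contributes nothing to the radial derivative, so $\gamma^{[1]}$ sees only $V_{Sph}$.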
\n\nThere are three cases to consider.\n\n\\bigskip\n\\noindent{\\textit{The symmetric potential}}\n\\bigskip\n \nFor $\\mathbf{P}_{(1)}$ to generate a generalized Lie symmetry of the\nEuler-Lagrange equations of motion, \n\\begin{equation}\n 0=\\frac{1}{\\vert q\\vert^2} \\widehat{q}\\cdot\\frac{\\partial \\>\\>\\>}{\\partial q} \\left(\\Pi_a^{\\>\\>b}(q)\\frac{\\partial\n V}{\\partial \\widehat{q}^b}\\right),\n\\nonumber\\end{equation}\nand as such the potential must satisfy\n\\begin{equation}\n \\frac{\\partial V}{\\partial\\widehat{q}^a} = \\frac{\\partial\n V_{AS}(\\widehat{q}^a)}{\\partial \\widehat{q}^a},\n\\nonumber\\end{equation}\nwhere $V_{AS}$ is a function of $\\widehat{q}^a$ only. It follows that\n$\\mathbf{P}_{(1)}$ generates a generalized Lie symmetry iff $V(q^a) =\nV_{Sph}(\\vert q\\vert)+V_{AS}(\\widehat{q}^a)$, where $V_{Sph}$ is a function of\n$\\vert q\\vert$ only. For this potential, $\\mathcal{S}\\hbox{ym}$ is \none-dimensional, and is spanned by $\\mathbf{P}_{(1)}$. \n\nThe constraint condition Eq.~$(\\ref{stab})$ for this potential\nreduces to \n\\begin{equation}\n 0=u(\\mathfrak{u})\\frac{d^2V_{Sph}(q)}{d\\vert q\\vert^2},\n\\nonumber\\end{equation}\nwhich must be satisfied on $\\mathbb{P}_L^{[1]}$. There are two possibilities.\n\n\\bigskip\n\\textit{Case 1:} $\\frac{d^2V_{Sph}}{d\\vert q\\vert^2}=0$.\n\\bigskip\n\n\\noindent Then $V_{Sph}(\\vert q\\vert) = a\\vert q\\vert+b$, but since \n\\begin{equation}\n \\gamma^{[1]}=\\frac{dV_{Sph}}{d\\vert q\\vert} =a,\n\\nonumber\\end{equation}\nthe condition $\\gamma^{[1]}=0$ requires $a=0$. It then follows that\n$\\gamma^{[1]} =0$ on $\\mathbb{P}_L$, and thus\n$\\mathcal{S}\\hbox{ym}\\mathcal{L}$ is one-dimensional; it also is\nspanned by $\\mathbf{P}_{(1)}$. The potential is then $V(q) =\nb+V_{AS}(\\widehat{q}^a)$, and the Lagrangian is \ninvariant under the transformation $q^a\\to \\alpha q^a$,\nwhere $\\alpha$ is an arbitrary, nonvanishing function on\n$\\mathbb{P}_L$. 
This Lagrangian therefore has a local conformal symmetry.\nImportantly, the function $u(\\mathfrak{u})$ is\nnot determined, and thus the dynamics of the \nparticle is given only up to an arbitrary function. Then \n$\\hbox{dim\n}(\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L})=1$ as well,\nand is also spanned by $\\mathbf{P}_{(1)}$.\n\n\\bigskip\n\\textit{Case 2:} $\\frac{d^2V_{Sph}}{d\\vert q\\vert^2}\\ne0$.\n\\bigskip\n\n\\noindent In this case $u(\\mathfrak{u})=0$, and the dynamics of the\nparticle is completely determined by its initial data;\n$\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}=\\{\\overline{\\mathbf{X}}_{L}\\}$.\nThe first-order Lagrangian constraint $\\gamma^{[1]}$ does not vanish\nautomatically, but instead defines a surface\non $\\mathbf{P}_L$, and it follows that\n$\\mathcal{S}\\hbox{ym}\\mathcal{L}=\\emptyset$. Indeed, the action's lack\nof a local gauge symmetry in this case can be seen explicitly.\n\n\nEquation $(\\ref{g-con})$ reduces to\n\\begin{equation}\n 0=\\widehat{q}\\cdot\\frac{\\mathbf{\\partial}V_{sph}}{\\mathbf{\\partial}q},\n\\nonumber\\end{equation}\nand for dynamics to be possible the set of solutions \n\\begin{equation}\n \\left\\{R_i\\in\\mathbb{R} \\ \\Bigg\\vert\n \\ \\frac{dV_{Sph}}{d\\vert q\\vert}\\Bigg\\vert_{R_i} =0\\right\\},\n\\nonumber\\end{equation}\nmust be non-empty. Dynamics are on the surfaces $\\vert q\\vert\n-R_i =0$ where the potential reduces to $V(q)=V_{Sph}(R_i) +\nV_{AS}(\\widehat{q}^a)$. This reduced potential has the same symmetry\nas the potential $V_{AS}(\\widehat{q}^a)$ in \\textit{Case 1}, and it is for\nthis reason that the Euler-Lagrange equations of motion have the same\ngeneralized Lie symmetry for the two cases. This is explicitly shown\nin the appendix.\n\nIn \\textit{Case 1} the action has a local conformal symmetry, while in\n\\textit{Case 2} it does not. (In \\cite{ADS2020} it was\nerroneously stated that in this case the action has a global rotational\nsymmetry.) 
The Lagrangians for the two cases do not have the same \ninvariances. As a result, in one case the dynamics are determined\nonly up to \nan arbitrary $u(\\mathfrak{u})$, while in the other\n$u(\\mathfrak{u})=0$ and the dynamics\nare instead completely determined by the choice of initial data. \n\n\\bigskip\n\\noindent{\\textit{The asymmetric potential}}\n\\bigskip\n\nFor a general $V$ the second term in\nEq.~$(\\ref{e84})$ does not vanish, $\\mathbf{P}_{(1)}$ does not\ngenerate a symmetry of the equations of motion, and \n$\\mathcal{S}\\hbox{ym}=\\emptyset$. As before, $\\gamma^{[1]}$ does\nnot vanish, and thus $\\mathcal{S}\\hbox{ym}\\mathcal{L}=\\emptyset$\nas well. Furthermore, as Eq.~$(\\ref{stab})$ results in\n\\begin{equation}\n \\overline{\\mathbf{X}}_{EL} =\\overline{\\mathbf{X}}_L-\\frac{v\\cdot\\Pi\n \\cdot\\frac{\\partial \\gamma^{[1]}}{\\partial q}}\n {q^2\\mathbf{U}^q_{(1)}\\gamma^{[1]}}[\\mathbf{P}_{(1)}],\n\\nonumber\\end{equation}\nthe dynamics of the particle is uniquely determined by its initial\ndata, and\n$\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}=\\{\\overline{\\mathbf{X}}_{EL}\\}$\nonce again consists of a single point.\n\n\\subsection{A Lagrangian with Local\n Conformal Symmetry}\n\nThe action,\n\\begin{equation}\n S_{2} := \\int\\Bigg\\{\\frac{1}{2}m\n \\left(\\frac{d\\widehat{q}_1}{dt}\\right)^2+\\frac{1}{2}m\n \\left(\\frac{d\\widehat{q}_2}{dt}\\right)^2+\n \\frac{\\lambda}{2}\\left[\\frac{q_1^a}{\\vert q_2\\vert}\n \\frac{d\\>\\>}{dt}\\left(\\frac{q_{2a}}{\\vert q_1\\vert}\\right) -\n \\frac{q_2^a}{\\vert q_1\\vert}\n \\frac{d\\>\\>}{dt}\\left(\\frac{q_{1a}}{\\vert q_2\\vert}\\right)\n \\right]\n \\Bigg\\} dt,\n\\nonumber\\end{equation}\nwhere $a=1, \\dots, d$, $D=2d$, describes an interacting two-particle\nsystem that is invariant under the local conformal transformation\n$q_1^a \\to \\alpha(\\mathfrak{u}) q^a_1$ and $q_2^a \\to\n\\alpha(\\mathfrak{u}) q^a_2$. 
\n\nWith\n\\begin{widetext}\n\\begin{eqnarray}\n \\mathbf{\\Omega}_M &=& \\frac{m}{\\vert q_1\\vert^2} \\Pi_{ab}(q_1)\n \\mathbf{d}q_1^a\\wedge \\mathbf{d} v_1^b + \\frac{m}{\\vert q_2\\vert^2}\n \\Pi_{ab}(q_2) \\mathbf{d}q_2^a\\wedge \\mathbf{d} v_2^b, \\hbox{ and}\n \\nonumber\n \\\\\n \\mathbf{\\Omega}_F &=& \\frac{m}{\\vert q_1\\vert^3}\n \\left(\\widehat{q}_1\\cdot\\mathbf{d}\n q_1\\right)\\wedge\\left(v_1\\cdot\\Pi(q_1)\\cdot \\mathbf{d} q_1\\right) + \n \\frac{m}{\\vert q_2\\vert^3} \\left(\\widehat{q}_2\\cdot\\mathbf{d} \n q_2\\right)\\wedge\\left(v_2\\cdot\\Pi(q_2)\\cdot \\mathbf{d} q_2\\right)- \n \\nonumber\n \\\\\n &{}&\n \\frac{\\lambda}{\\vert q_1\\vert \\vert\n q_2\\vert}\\left[\\mathbf{d}q_1^a\\wedge\\left(\\Pi(q_2)\\cdot \n \\mathbf{d} q_2\\right)_a+ \\left(\\Pi(q_1)\\cdot\n \\mathbf{d} q_1\\right)_a\\wedge \\mathbf{d}q_2^a -\\left(\\Pi(q_1)\\cdot\n \\mathbf{d} q_1\\right)^a\\wedge \\left(\\Pi(q_2)\\cdot\n \\mathbf{d} q_2\\right)_a\\right]-\n \\nonumber\n \\\\\n &{}& \\frac{\\lambda}{\\vert q_1\\vert^2}\n \\left(\\widehat{q}_1\\cdot\\mathbf{d}q_1\\right)\\wedge\n \\left(\\widehat{q}_2\\cdot\\Pi(q_1) \\cdot \\mathbf{d}q_1\\right) +\n \\frac{\\lambda}{\\vert q_2\\vert^2} \n \\left(\\widehat{q}_2\\cdot\\mathbf{d}q_2\\right)\\wedge\n \\left(\\widehat{q}_1\\cdot\\Pi(q_2) \\cdot \\mathbf{d}q_2\\right),\n \\nonumber\n\\end{eqnarray}\n\\end{widetext}\n$\\mathcal{C}$ and $\\mathcal{G}$ are two-dimensional, and\nare spanned by\n\\begin{equation}\n \\mathbf{U}^q_{(1)} =\n \\widehat{q}_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_1},\n \\quad\n \\mathbf{U}^q_{(2)} =\n \\widehat{q}_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_2},\n \\quad \\hbox{and}\\quad \n \\mathbf{U}^v_{(1)} =\n \\widehat{q}_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v_1},\n \\quad\n \\mathbf{U}^v_{(2)} =\n \\widehat{q}_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} 
v_2},\n\\nonumber\\end{equation}\nrespectively. The reduced $\\bar{F}=0$, and $\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L(\\mathfrak{u})}$ is spanned by $\\mathbf{U}^v_{(1)},\n\\mathbf{U}^v_{(2)}$, \n\\begin{eqnarray}\n \\mathbf{P}_{(+)} &=&\n q_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_1}+q_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_2}\n +\n v_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v_1} \n +\n v_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v_2},\n \\nonumber\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n \\mathbf{P}_{(-)} &=&\n q_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_1}-q_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q_2}\n +\n v_1\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v_1} \n -\n v_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}v_2} \n -\n \\nonumber\n \\\\\n &{}&\n 2\\frac{\\lambda}{m}\n \\left[\\frac{\\vert q_1\\vert}{\\vert\n q_2\\vert}q_2\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} \n v_1}\n +\\frac{\\vert q_2\\vert}{\\vert q_1\\vert}q_1\\cdot\n \\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v_2}\\right].\n \\nonumber\n\\end{eqnarray}\nAs such, $\\hbox{dim }\\overline{(\\hbox{ker }\\mathbf{\\Omega}_L)}\/\\mathcal{G}=2$. 
\n\nThe energy is\n\\begin{equation}\n E=\\frac{1}{2}\\frac{m}{\\vert q_1\\vert^2}v_1\\cdot\\Pi(q_1)\\cdot v_1 +\n \\frac{1}{2}\\frac{m}{\\vert q_2\\vert^2}v_2\\cdot\\Pi(q_2)\\cdot v_2.\n\\nonumber\n\\end{equation}\nWe find that $\\gamma^{[1]}_{(+)} =0$ while\n\\begin{equation}\n \\gamma^{[1]}_{(-)} =-\\frac{2\\lambda}{\\vert q_1\\vert \\vert q_2\\vert}\\left(\n q_2\\cdot \\Pi(q_1)\\cdot v_1 +\n q_1\\cdot \\Pi(q_2) \\cdot v_2\\right),\n\\nonumber\\end{equation}\ngiving,\n\\begin{eqnarray}\n \\mathbf{\\beta}[\\mathbf{X}_{EL}] &=& \\frac{1}{2}\\gamma^{[1]}_{(-)}\n \\left(\\frac{\\mathbf{\\Theta}_q^{(1)}}{\\vert q_1\\vert} -\n \\frac{\\mathbf{\\Theta}_q^{(2)}}{\\vert q_2\\vert}\\right).\n \\nonumber\n\\end{eqnarray}\nThen $\\mathcal{S}\\hbox{ym}\\mathcal{L}$ is\none-dimensional and spanned by $\\mathbf{P}_{(+)}$. \nAs expected, $\\mathfrak{L}_{\\mathbf{P}_{(+)}}\n\\mathbf{\\beta} = 0$. Because\n\\begin{equation}\n \\mathfrak{L}_{\\mathbf{P}_{(-)}} \\mathbf{\\beta} =\n -\\frac{4\\lambda}{m}\\left[1-(\\widehat{q}_1\\cdot\\widehat{q}_2)^2\\right]\\left(\\frac{\\mathbf{\\Theta}_q^{(1)}}{\\vert q_1\\vert} -\n \\frac{\\mathbf{\\Theta}_q^{(2)}}{\\vert q_2\\vert}\\right),\n \\nonumber\n\\end{equation}\n$\\mathcal{S}\\hbox{ym}$ is also one-dimensional, and\nis also spanned by $\\mathbf{P}_{(+)}$. 
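Both statements about the first-order constraints can be checked mechanically for $d=2$ with a SymPy sketch (our own addition; reading $\gamma^{[1]}_{(\pm)}$ as $\mathbf{P}_{(\pm)}E$ is our interpretation of the construction):

```python
import sympy as sp

m, lam = sp.symbols('m lambda', positive=True)
q1 = sp.Matrix(sp.symbols('q11 q12', real=True))
q2 = sp.Matrix(sp.symbols('q21 q22', real=True))
v1 = sp.Matrix(sp.symbols('v11 v12', real=True))
v2 = sp.Matrix(sp.symbols('v21 v22', real=True))

def Pi(q):                        # Pi_ab(q) = delta_ab - qh_a qh_b
    return sp.eye(2) - q*q.T/(q.T*q)[0]

r1 = sp.sqrt((q1.T*q1)[0]); r2 = sp.sqrt((q2.T*q2)[0])
E = (m/(2*r1**2))*(v1.T*Pi(q1)*v1)[0] + (m/(2*r2**2))*(v2.T*Pi(q2)*v2)[0]

def act(cq1, cq2, cv1, cv2, f):   # X = cq1.d/dq1 + cq2.d/dq2 + cv1.d/dv1 + cv2.d/dv2
    return sum(c[i]*sp.diff(f, x[i])
               for c, x in [(cq1, q1), (cq2, q2), (cv1, v1), (cv2, v2)]
               for i in range(2))

# P_(+) E = 0: E is homogeneous of degree zero in the scaling direction
assert sp.simplify(act(q1, q2, v1, v2, E)) == 0

# P_(-) E reproduces gamma^[1]_(-) as given above
PmE = act(q1, -q2, v1, -v2, E) - (2*lam/m)*(
    (r1/r2)*sum(q2[i]*sp.diff(E, v1[i]) for i in range(2)) +
    (r2/r1)*sum(q1[i]*sp.diff(E, v2[i]) for i in range(2)))
gamma_minus = -(2*lam/(r1*r2))*((q2.T*Pi(q1)*v1)[0] + (q1.T*Pi(q2)*v2)[0])
assert sp.simplify(PmE - gamma_minus) == 0
```

The first assertion is just Euler's theorem for the conformal scaling; the second shows that the entire $\gamma^{[1]}_{(-)}$ comes from the velocity-shift terms in $\mathbf{P}_{(-)}$.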
\n\nA general SOELVF is \n\\begin{equation}\n \\overline{\\mathbf{X}}_{EL} = \\overline{\\mathbf{X}}_{L} -\n \\frac{m}{8\\lambda^2}\\frac{\\overline{\\mathbf{X}}_{L}\\gamma^{[1]}_{(-)}}{ \n \\left[1-(\\widehat{q}_1\\cdot\\widehat{q}_2)\\right]}\\left[\\mathbf{P}_{(-)}\\right] +\n u^{(+)}(\\mathfrak{u}) \\left[\\mathbf{P}_{(+)}\\right],\n \\label{103}\n\\end{equation}\nwhere $u^{(+)}(\\mathfrak{u})\\in\\overline{\\mathcal{F}}$, and from \\cite{ADS2020},\n\\begin{eqnarray}\n\\overline{\\mathbf{X}}_{L} &=&\nv_1\\cdot\\Pi(q_1)\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}\n q_1} + v_2\\cdot\\Pi(q_2)\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial}\n q_2} +\n\\nonumber\n\\\\\n&{}&\n\\left(\\frac{\\widehat{q}_1\\cdot v_1}{\\vert q_1\\vert}\\right)\nv_1\\cdot\\Pi(q_1)\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v_1} +\n\\left(\\frac{\\widehat{q}_2\\cdot v_2}{\\vert q_2\\vert }\\right)\nv_2\\cdot\\Pi(q_2)\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v_2} \n+\n\\nonumber\n\\\\\n&{}&\n\\frac{\\lambda}{m} \\left(\\frac{\\vert q_1\\vert}{\\vert q_2\\vert}\n v_2\\cdot\\Pi(q_2)\\cdot\\Pi(q_1)\\cdot \\frac{\\mathbf{\\partial}\n \\>\\>\\>}{\\mathbf{\\partial} v_1}-\\frac{\\vert q_2\\vert}{\\vert q_1\\vert}\n v_1\\cdot\\Pi(q_1)\\cdot\\Pi(q_2)\\cdot \\frac{\\mathbf{\\partial}\n \\>\\>\\>}{\\mathbf{\\partial} v_2} \\right),\n \\nonumber\n\\end{eqnarray}\nafter the constraint algorithm is applied. 
Equation $(\\ref{103})$ is\na consequence of the identity\n$\\langle\\mathbf{d}\\gamma^{[1]}_{(+)}\\vert \\overline{\\mathbf{X}}_{L}\\rangle=0$ and \n\\begin{eqnarray}\n -\\frac{1}{2\\lambda} \\langle\\mathbf{d}\\gamma^{[1]}_{(-)}\\vert \\overline{\\mathbf{X}}_{L}\\rangle&=& -\n 2(\\widehat{q}_1\\cdot\\widehat{q}_2)\\frac{E}{m}+ \\frac{2}{\\vert\n q_1\\vert \\vert q_2\\vert}\n v_1\\cdot\\Pi(q_1)\\cdot\\Pi(q_2)\\cdot v_2\n -\n \\nonumber\n \\\\\n &{}&\n \\frac{\\lambda}{m}(\\widehat{q}_1\\cdot\\widehat{q}_2)\\left[v_2\\cdot\\Pi(q_2)\\cdot\\widehat{q}_1\n - v_1\\cdot\\Pi(q_1)\\cdot \\widehat{q}_2\\right].\n\\nonumber\n\\end{eqnarray}\nWe see that $\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}$ is also\none-dimensional, and is also spanned by $\\mathbf{P}_{(+)}$. \n\n\\subsection{A Lagrangian with Local Conformal and\n Time-reparametrization Invariance}\n\nThe action\n\\begin{equation}\n S_{3} := sm\\int\\left[s\\left(\\frac{d\\widehat{q}}{dt}\\right)^2\\right]^{1\/2} dt,\n\\nonumber\\end{equation}\nwhere $s=\\pm1$, is invariant under both the local conformal\ntransformations, $q^a\\to \\alpha(\\mathfrak{u}) q^a$, and the\nreparametrization of time $t\\to \\tau(t)$, where $\\tau$ is a\nmonotonically increasing function of $t$. Then\n\\begin{equation}\n \\mathbf{\\Omega}_L = \\frac{m}{\\vert q\\vert}\n \\frac{P_{ab}(u)}{\\sqrt{sv\\cdot \\Pi(q)\\cdot\n v}}\\mathbf{d}q^a\\wedge\\mathbf{d}v^b,\n\\nonumber\\end{equation}\nand $\\mathbf{\\Omega}_F=0$. Here, $a=1, \\dots, D$,\n\\begin{equation}\n u_a=\\frac{\\Pi_{ab}(q)v^b}{\\sqrt{sv\\cdot\\Pi(q)\\cdot v}},\n\\nonumber\\end{equation}\nso that $u^2 = s$, while $P_{ab}(u) = \\Pi_{ab}(q) -su_a u_b$. As such,\n$\\hbox{ker }\\mathbf{\\Omega}_L(\\mathfrak{u}) = $ ker \n$\\mathbf{\\Omega}_M(\\mathfrak{u})$. 
Both $\\mathcal{C}$ and\n$\\mathcal{G}$ are two-dimensional, and are spanned by\n\\begin{equation}\n \\mathbf{U}^q_{(1)} = \n \\widehat{q}\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q}, \\quad\n \\mathbf{U}^q_{(2)} =\n u\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} q}, \\quad\n \\hbox{and}\\quad \n \\mathbf{U}^v_{(1)} =\n \\widehat{q}\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v}, \\quad\n \\mathbf{U}^v_{(2)} =\n u\\cdot\\frac{\\mathbf{\\partial}\\>\\>\\>}{\\mathbf{\\partial} v},\n\\nonumber\\end{equation}\nrespectively. It follows that $\\hbox{dim }(\\overline{\\hbox{ker\n }\\mathbf{\\Omega}_L}\/\\mathcal{G})=2$. \n\nBecause this system is fully constrained, $E=0$. As\n$\\mathbf{\\Omega}_F=0$ as well, there are no\nLagrangian constraints. It follows that $\\mathcal{S}\\hbox{ym}\\mathcal{L}$ is two\ndimensional and spanned by $\\mathbf{U}_{(1)}^q$ and\n$\\mathbf{U}_{(2)}^q$. As $\\mathbf{\\beta}=0$ as well, $\\mathcal{S}\\hbox{ym}$\nis also two dimensional, and is also spanned by \n$\\mathbf{U}^q_{(1)}$ and $\\mathbf{U}^q_{(2)}$. \n\nWe found in \\cite{ADS2020} that $\\overline{\\mathbf{X}}_{L} = 0$. A\ngeneral SOELVF is then $\\overline{\\mathbf{X}}_{EL} = u^{1}(\\mathfrak{u}) \n\\left[\\mathbf{U}^q_{(1)}\\right] + u^{2}(\\mathfrak{u})\n\\left[\\mathbf{U}^q_{(2)}\\right]$, with\n$u^{n}(\\mathfrak{u})\\in\\overline{\\mathcal{F}}$ for $n=1,2$. It follows\nthat $\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L}$ \nis also two-dimensional, and is spanned by $\\mathbf{U}^{q}_{(1)}$ and\n$\\mathbf{U}^q_{(2)}$ as well. 
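That $E=0$ for this fully constrained system follows from the degree-one homogeneity of $L$ in the velocities, and can be confirmed with a short SymPy sketch for $D=2$, $s=+1$ (our own addition, with $m$ kept symbolic):

```python
import sympy as sp

m = sp.Symbol('m', positive=True)
q = sp.Matrix(sp.symbols('q1 q2', real=True))
v = sp.Matrix(sp.symbols('v1 v2', real=True))

r2 = (q.T*q)[0]
Pi = sp.eye(2) - q*q.T/r2                      # Pi_ab(q)
# (d qhat/dt)^2 = v.Pi(q).v/|q|^2, so with s = +1 the Lagrangian is
L = m*sp.sqrt((v.T*Pi*v)[0])/sp.sqrt(r2)

E = sum(v[i]*sp.diff(L, v[i]) for i in range(2)) - L   # E = v.(dL/dv) - L
assert sp.simplify(E) == 0                              # fully constrained
```

Since $L$ is homogeneous of degree one in $v$, Euler's theorem gives $v\cdot\partial L/\partial v = L$, so the energy vanishes identically rather than merely on a constraint surface.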
\n\n\\begin{table}\n \\begin{tabular}{c|c|ccccc}\n \\hline\n Action & Potential & $\\>\\>\\>\\overline{\\hbox{ker }\\mathbf{\\Omega}_L}\/\\mathcal{G}\\>\\>\\>$ &\n $\\>\\>\\>\\mathcal{S}\\hbox{ym}\\>\\>\\>$ & $\\>\\>\\>\\mathcal{S}\\hbox{ym}\\mathcal{L}\\>\\>\\>$\n & $\\>\\>\\> I_{\\left[1\\right] }\\>\\>\\>$ &\n $\\>\\>\\>\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}^{[n_F]}_L} \\>\\>\\>$ \\\\\n \\hline\n {} & $V_{AS}(\\hat{q}^a)$ & 1 & 1 & 1 & 0 & 1 \\\\\n \n $S_{1}$ & $V_{Sph}(\\vert q\\vert)+V_{AS}(\\hat{q}^a)$ & 1 & 1 & 0 & 1 & 0 \\\\\n \n {} & $V(q^a)$ & 1 & 0 & 0& 1& 0 \\\\\n \\hline\n $S_{2}$ & $\\frac{\\lambda}{2}\\left[\\frac{q_1^a}{\\vert q_2\\vert}\n \\frac{d\\>\\>}{dt}\\left(\\frac{q_{2a}}{\\vert q_1\\vert}\\right) -\n \\frac{q_2^a}{\\vert q_1\\vert}\n \\frac{d\\>\\>}{dt}\\left(\\frac{q_{1a}}{\\vert q_2\\vert}\\right)\n \\right]$ & 2 & 1 & 1& 1& 1 \\\\\n \\hline\n $S_{3}$ & $0$ & 2 & 2& 2& 0& 2\\\\\n \\hline\n \\end{tabular}\n \\caption{A summary of the symmetries of the three examples\n considered in this paper. With the \n exception of the $I_{[1]}$ column, the numerical entries are the\n dimensionality of the vector spaces listed along the first row. Notice\n the case where the Euler-Lagrange equations of motion have a\n generalized Lie symmetry while the action itself does not. In all\n three examples, $\\hbox{dim\n }(\\mathcal{S}\\hbox{ym}\\mathcal{L})=\\hbox{dim\n }(\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}_L^{[n_F]}})$. \n }\n \\label{Table}\n\\end{table}\n\n\n\\section{Concluding Remarks\\label{&Conc}}\n\nThat each generalized Lie symmetry of the action\ncontributes one arbitrary function to the SOELVF for a dynamical\nsystem is known anecdotally, and is a result expected on physical\ngrounds. 
For almost regular Lagrangians, the\nappearance in physics of a generalized Lie symmetry is due to a local\ngauge symmetry of the \ndynamical system, and thus to the absence of a gauge\\textemdash the\nlength of vectors for local conformal invariance, or a measure\nfor time for time-reparametrization invariance\\textemdash for some\ndynamical property of the system. As the generalized Lie symmetries of\nthe action for an almost regular Lagrangian would have\n$N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ of these gauge freedoms, it is\nreasonable that the absence of these gauges will result in an \nequal number of arbitrary functions in the SOELVF. An equal\nnumber of terms to fix these gauges would then be needed to determine the\ndynamics of the system uniquely. But while these expectations are\nreasonable, up to now they have been fulfilled only on a case-by-case\nbasis. This is in great part because the analysis of dynamical systems\nwith a local gauge symmetry has traditionally been done using\nconstrained Hamiltonian mechanics. Such analysis relies on the\ncanonical Hamiltonian, however, and the connection\nbetween the canonical Hamiltonian and the symmetries of the Lagrangian\nis indirect at best, in contrast to the Lagrangian approach followed\nhere. Moreover, the process of determining the total Hamiltonian\nfor the system is often prescriptive, with results that are specific to\nthe system at hand. By focusing on the Lagrangian and on the Lagrangian\nphase space, we have been able to show, for all systems with an almost\nregular Lagrangian whose Lagrangian two-form has constant rank,\na direct link between local gauge symmetries and dynamics. In\nparticular, this establishes a link between the number of gauge\nsymmetries of the action and the number of arbitrary functions that\nnaturally appear in the evolution of such dynamical systems. 
\n\nAs $\\gamma_{\\mathbf{P}}^{[1]}=0$ for any choice of\n$\\mathbf{P}\\in\\mathcal{S}\\hbox{ym}\\mathcal{L}$, the vectors in\n$\\mathcal{S}\\hbox{ym}\\mathcal{L}$ do not contribute to the\nfirst-order constraint manifold $\\mathbb{P}_L^{[1]}$, and as such do\nnot contribute to the Lagrangian constraint algorithm at this order, or at\nany higher orders. It is for this reason that the\n$N_{\\mathcal{S}\\hbox{ym}\\mathcal{L}}$ arbitrary functions $u^m_{[1]}$\nare not determined by the algorithm, and why these functions will\nstill contribute to $\\overline{\\mathbf{X}}_{EL}$ even after the algorithm has been\ncompleted. It also means that if second- and higher-order Lagrangian constraints\nare introduced, they are accidental and cannot be due to the local\ngauge symmetries of the action. Interestingly, we have yet to find a dynamical system\nwith a Lagrangian that is both almost-regular and has a Lagrangian\ntwo-form with constant rank where second- or higher-order Lagrangian\nconstraints are introduced. \n\nThis impact of generalized Lie symmetries on the dynamics of particles\nillustrates the inherent differences between the analysis of the\nsymmetries of regular Lagrangians and that of almost regular\nLagrangians. For regular Lagragians, the generator of the generalized\nLie symmetry (at times referred to as a global symmetry) gives rise to\na prolongation vector, and the action of this prolongation on the Lagrangian gives\nthe variation of the action, $\\delta S$, under this symmetry. When the\nEuler-Lagrange equations of motion are thenimposed, the conserved\nquantity for this symmetry along the path given by the solution of these equations of\nmotion is then obtained. 
While the generator of the generalized \nLie symmetry for the almost regular Lagrangian $\\mathbf{g}_L$ does\ngive a prolongation vector \\textbf{pr} $\\mathbf{g}_L$, Eq.~$(\\ref{prog})$, and\nwhile the action of \\textbf{pr} $\\mathbf{g}_L$ on $L$ does give\n$\\delta S$, imposing the Euler-Lagrange equations of motion on $\\delta\nS$ in Eq.~$(\\ref{Ae1})$ gives the vacuous statement $\\delta\nS=0$. Instead, the requirement that $\\delta S=0$ for all paths on\n$\\mathbb{Q}$ gives the conditions that the \ngenerators of the symmetry must satisfy. This in turn shows that the\nexistence of these generators is due solely to the Lagrangian being\nsingular. These conditions then \naffect the dynamics of the system through\n$\\gamma_{\\mathbf{P}}^{[1]}=0$, and in doing so set a lower bound to\nthe dimensionality of \n$\\overline{\\mathcal{S}\\hbox{ol}}_{\\mathbb{P}_L^{[n_F]}}$. \n\nWe have found it quite difficult to construct more than one example of a\ndynamical system that has an almost regular Lagrangian with both a\ngeneralized Lie symmetry and a \nLagrangian two-form with constant rank on $\\mathbb{P}_L$. We have,\non the other hand, found it quite easy to construct examples of dynamical\nsystems that have an almost regular Lagrangian with a generalized Lie\nsymmetry and a Lagrangian two-form whose rank varies across\n$\\mathbb{P}_L$. Indeed, it is the latter case that is the more\nprevalent one, and yet many of the results of this \npaper and a good portion of the results of our previous one \\cite{ADS2020} rely\non the condition that the rank of the Lagrangian two-form be constant on\n$\\mathbb{P}_L$. 
This is even more concerning when we realize that these\nmore prevalent systems are expected, by their nature, to have much\nricher dynamics and mathematical structures (indeed, we have found that\nsuch systems often require the introduction of second- or higher-order\nLagrangian constraints), and yet it is not known\nwhich of the results that have been shown to hold for systems with\nconstant rank Lagrangian two-forms will still hold when the rank varies\nacross $\\mathbb{P}_L$. Determining the generalized Lie symmetries of\nthese systems; showing that the passage from the Lagrangian to the\nHamiltonian phase space is possible; and finding the links between\nsymmetry and dynamics is a necessity for future research. \n\n\\begin{acknowledgments}\n This paper would not have been possible without the contributions by\n John Garrison, who provided much of the underlying symmetry analysis\n of the action used in \\textbf{Section \\ref{&A-Sym}}, and most\n of the essential mathematics in \\textbf{Section\n \\ref{&review}}. Publication made possible in \n part by support from the Berkeley Research Impact Initiative (BRII)\n sponsored by the UC Berkeley Library. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nThe simple loop conjecture, proved by Gabai in \\cite{Ga}, states that any non-injective homomorphism from a closed surface group to another closed surface group has an element represented by a simple closed curve in the kernel. It has been conjectured that the result still holds if the target is replaced by the fundamental group of an orientable 3-manifold (see Kirby's problem list in \\cite{Ki}). Although special cases have been proved (e.g. \\cite{Ha}, \\cite{RW}), the general hyperbolic case is still open. \n\nRecently, Cooper and Manning showed that if instead of a 3-manifold group the target group is $\\SL(2, \\mathbb{C})$, then the conjecture is false. 
Precisely, they show: \n\n\\begin{theorem}[Cooper-Manning \\cite{CM}] \\label{cmthm} Let $\\Sigma$ be a closed orientable surface of genus $g \\geq 4$. Then there is a homomorphism $\\rho: \\pi_1(\\Sigma) \\to \\SL(2, \\mathbb{C})$ such that \n\\begin{enumerate} \n\\item $\\rho$ is not injective\n\\item If $\\rho(\\alpha) = \\pm I$, then $\\alpha$ is not represented by a simple closed curve\n\\item If $\\rho(\\alpha)$ has finite order, then $\\rho(\\alpha) = I$\n\\end{enumerate}\nThe third condition implies in particular that no \\emph{power} of a simple closed curve lies in the kernel. \n\\end{theorem}\n\nInspired by this, we asked whether a similar result holds for $\\PSL(2,\\mathbb{R})$, this being an intermediate case between Gabai's result for surface groups and Cooper and Manning's for $\\SL(2,\\mathbb{C})$. Cooper and Manning's proof uses a dimension count on the $\\SL(2,\\mathbb{C})$ character variety and a proof that a specific subvariety is irreducible and smooth on a dense subset, much of which does not carry over to the $\\PSL(2, \\mathbb{R})$ case. In general, complex varieties and their real points can behave quite differently. However, we show here with different methods that an analogous result does hold. \n\nWhile this note was in progress, we learned of work of Louder and Calegari (independently in \\cite{Lo} and \\cite{Ca}) that can also be applied to answer our question in the affirmative. Louder shows the simple loop conjecture is false for representations into limit groups, and Calegari gives a practical way of verifying no simple closed curves lie in the kernel of a non-injective representation using stable commutator length and the Gromov norm. \n\nThe difference here is that our construction is entirely elementary. We use an explicit representation from DeBlois and Kent in \\cite{DK} and verify by elementary means that this representation is non-injective and kills no simple closed curve. 
Our end result parallels that of Cooper and Manning but also includes surfaces with boundary and all genera at least 1: \n\n\\begin{theorem}\\label{main}\nLet $\\Sigma$ be a surface of negative Euler characteristic and of genus $g \\geq 1$\n, possibly with boundary. \nThen there is a homomorphism $\\rho: \\pi_1(\\Sigma) \\to \\SL(2, \\mathbb{R})$ such that \n\\begin{enumerate} \n\\item $\\rho$ is not injective\n\\item If $\\rho(\\alpha) = \\pm I$, then $\\alpha$ is not represented by a simple closed curve\n\\item In fact, if $\\alpha$ is represented by a simple closed curve, then $\\rho(\\alpha^k) \\neq 1$ for any $k \\in \\mathbb{Z}$. \n\\end{enumerate}\nMoreover, there are uncountably many non-conjugate representations satisfying 1. through 3. \n\\end{theorem}\n\n\n\n\n\n\n\n\n \n\n\n\n\n\\section{Proof of Theorem \\ref{main}} \n\nWe first present a construction of a (non-injective) representation from DeBlois and Kent in \\cite{DK}, and then show that no power of a simple closed curve lies in the kernel of this representation. The full construction appears in \\cite{DK}; we describe it here for convenience. \n\nLet $\\Sigma$ be a surface of genus $g \\geq 1$ and negative Euler characteristic, possibly with boundary. Assume for the moment that $\\Sigma$ is not the once-punctured torus -- Theorem \\ref{main} for this case will follow easily later on. \n\nLet $c \\subset \\Sigma$ be a simple closed curve separating $\\Sigma$ into a genus 1 subsurface with a single boundary component $c$, and a genus ($g-1$) subsurface with one or more boundary components. Let $\\Sigma_A$ denote the genus $(g-1)$ subsurface and $\\Sigma_B$ the genus 1 subsurface. See Figure \\ref{setup} below. Finally, we let $A= \\pi_1(\\Sigma_A)$ and $B = \\pi_1(\\Sigma_B)$, so that $\\pi_1(\\Sigma) = A \\ast_C B$, where $C$ is the $\\mathbb{Z}$-subgroup generated by the element $[c]$ represented by the curve $c$. We assume that the basepoint for $\\pi_1(\\Sigma)$ lies on $c$. 
\n\n \\begin{figure*}[h]\n \\centerline{\n \\mbox{\\includegraphics[width=2.5in]{simpleloopsetup.pdf}}}\n \\caption{The setup: decomposition of $\\Sigma$ and generators $x$ and $y$ for $B$}\n \\label{setup}\n \\end{figure*}\n\nLet $x \\in B$ and $y \\in B$ be generators such that $B = \\langle x, y \\rangle$, and that $c$ represents the commutator $[x,y]$. Fix $\\alpha$ and $\\beta$ in $\\mathbb{R} \\setminus \\{0,\\pm1\\}$, and following \\cite{DK} define $\\phi_B: B \\to \\SL(2,\\mathbb{R})$ by \n\n$$\\phi_B(x) = \n \\begin{pmatrix}\n \\alpha & 0 \\\\\n 0 & \\alpha^{-1}\n \\end{pmatrix}$$\n $$\\phi_B(y) = \n \\begin{pmatrix}\n \\beta & 1 \\\\\n 0 & \\beta^{-1} \n \\end{pmatrix}$$ \n We then have \n $$\\phi_B([x, y]) = \\begin{pmatrix}\n 1 & \\beta(\\alpha^2 - 1) \\\\\n 0 & 1 \n \\end{pmatrix}$$\nso that $\\phi_B([x, y])$ is invariant under conjugation by the matrix $\\lambda_t := \\bigl( \\begin{smallmatrix} \n 1 & t \\\\\n 0 & 1 \n\\end{smallmatrix} \\bigr)$. \n \nProjecting this representation to $\\PSL(2,\\mathbb{R})$ gives a representation which is upper triangular, hence solvable and therefore non-injective. Abusing notation, let $\\phi_B$ denote the representation to $\\PSL(2,\\mathbb{R})$.\n \nNow let $\\phi_A : A \\to \\PSL(2, \\mathbb{R})$ be Fuchsian and such that the image of the boundary curve $c$ under $\\phi_A$ agrees with $\\phi_B([x,y])$. Such a representation exists for the following reasons. First, if $\\Sigma$ has negative Euler characteristic, genus $g \\geq 1$, and is not the once-punctured torus, then $\\Sigma_A$ will have negative Euler characteristic as well and admit a hyperbolic structure. Secondly, the Fuchsian representation coming from the hyperbolic structure will send the element $[c]$ representing the boundary curve to a parabolic, so after conjugation we may assume that it is equal to $\\phi_B([x,y])$, provided $\\phi_B([x,y])$ is parabolic, i.e. $\\beta(\\alpha^2 - 1) \\neq 0$. 
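The matrix identities above are easy to check symbolically. The following sketch (ours, not from \cite{DK}) verifies the commutator identity and its invariance under conjugation by $\lambda_t$, writing $\phi_B(x) = \mathrm{diag}(\alpha, \alpha^{-1})$ as required for $\phi_B$ to take values in $\SL(2,\mathbb{R})$:

```python
# Symbolic check (ours, not from the paper) of the two matrix facts used
# above: phi_B([x, y]) equals [[1, beta*(alpha**2 - 1)], [0, 1]], and this
# commutator is fixed by conjugation by lambda_t, so phi_t is well defined
# on the amalgamated product A *_C B.
import sympy as sp

alpha, beta, t = sp.symbols('alpha beta t', nonzero=True)

X = sp.Matrix([[alpha, 0], [0, 1/alpha]])   # phi_B(x)
Y = sp.Matrix([[beta, 1], [0, 1/beta]])     # phi_B(y)
L = sp.Matrix([[1, t], [0, 1]])             # lambda_t

comm = sp.simplify(X * Y * X.inv() * Y.inv())           # phi_B([x, y])
target = sp.Matrix([[1, beta*(alpha**2 - 1)], [0, 1]])

assert sp.simplify(comm - target) == sp.zeros(2, 2)              # commutator identity
assert sp.simplify(L * comm * L.inv() - comm) == sp.zeros(2, 2)  # lambda_t-invariance
```

Both assertions hold identically in $\alpha$, $\beta$ and $t$, which is exactly what makes $\phi_t$ well defined on $A \ast_C B$.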
\n \nFinally, combine $\\phi_A$ and $\\phi_B$ to get a one-parameter family of representations $\\phi_t$ of $\\pi_1(\\Sigma) = A \\ast_C B$ to $\\PSL(2,\\mathbb{R})$ as follows. For $t \\in \\mathbb{R}$ and $g \\in A \\ast_C B$, let\n\n$$\\phi_t(g) = \\left\\{ \\begin{array}{rcl} \\phi_A(g) & \\mbox{if} & g \\in A \\\\ \\lambda_t \\circ \\phi_B(g) \\circ (\\lambda_t)^{-1} & \\mbox{if} & g \\in B\n\\end{array}\\right.$$\n\nThis representation is well defined because $\\phi_B([x, y]) = \\phi_A([x,y])$ and $\\phi_B([x, y])$ is invariant under conjugation by $\\lambda_t$. \n\nOur next goal is to show that for appropriate choice of $\\alpha, \\beta$ and $t$, the representation $\\phi_t$ satisfies the criteria in Theorem \\ref{main}. The main difficulty will be checking that no element representing a simple closed curve is of finite order. To do so, we employ a stronger form of Lemma 2 from \\cite{DK}. This is:\n\n\\begin{lemma} \\label{transcendental}\nSuppose $w \\in A \\ast_C B$ is a word of the form $w = a_1b_1a_2b_2 ... a_lb_l$ with $a_i \\in A$ and $b_i \\in B$ for $1 \\leq i \\leq l$. Assume that for each $i$, the matrix $\\phi_t(a_i)$ has a nonzero 2,1 entry and $\\phi_t(b_i)$ is hyperbolic. If $t$ is transcendental over the entry field of $\\phi_0(A \\ast_C B)$, then $\\phi_t(w)$ is not finite order. \n\\end{lemma}\n\n\\noindent By \\emph{entry field} of a group $\\Gamma$ of matrices, we mean the field generated over $\\mathbb{Q}$ by the collection of all entries of matrices in $\\Gamma$.\n\n\\begin{remark} \nLemma 2 of \\cite{DK} is a proof that $\\phi_t(w)$ is not the \\emph{identity}, under the assumptions of Lemma \\ref{transcendental}. We use some of their work in our proof. 
\n\\end{remark}\n\n\\begin{proof}[Proof of Lemma \\ref{transcendental}]\nIn \\cite{DK}, DeBlois and Kent show by a straightforward induction that the entries of $\\phi_t(w)$ are polynomials in $t$, where the degree of the 2,2 entry is $l$, the degree of the 1,2 entry is at most $l$, and the other entries have degree at most $l-1$. Now suppose that $\\phi_t(w)$ is finite order. Then it is conjugate to a matrix of the form \n$ \\bigl( \\begin{smallmatrix} \n u & v \\\\\n -v & u \n\\end{smallmatrix} \\bigr)$, \nwhere $u = \\cos(\\theta)$ and $v = \\sin(\\theta)$ for some rational angle $\\theta$. In particular, it follows from the de Moivre formula for sine and cosine that $u$ and $v$ are algebraic. \n\nNow suppose that the matrix conjugating $\\phi_t(w)$ to \n$ \\bigl( \\begin{smallmatrix} \n u & v \\\\\n -v & u \n\\end{smallmatrix} \\bigr)$ has entries $a_{ij}$. Then we have\n$$ \\phi_t(w) = \\begin{pmatrix}\nu - (a_{12}a_{22} + a_{11}a_{21})v & (a_{12}^2 + a_{11}^2) v \\\\\n -(a_{22}^2 + a_{21}^2)v & u + (a_{12}a_{22} + a_{11}a_{21})v\n \\end{pmatrix}$$\nLooking at the 2,2 entry we see that $a_{12}a_{22} + a_{11}a_{21}$ must be a polynomial in $t$ of degree $l$. But this means that the 1,1 entry is also a polynomial in $t$ of degree $l$, contradicting DeBlois and Kent's calculation. This proves the lemma.\n\\end{proof} \n\n\nTo complete our construction, choose $t$ to be transcendental over the entry field of $\\phi_0(A\\ast_C B)$. We want to show that no power of an element representing a simple closed curve lies in the kernel of $\\phi_t$. To this end, consider any word $w$ in $A \\ast_C B$ that has a simple closed curve as a representative. There are three cases to check. First, if $w$ is a word in $A$ alone, then $\\phi_t(w)$ is not finite order, since $\\phi_t(A)$ is Fuchsian and therefore injective. 
Secondly, if $w$ is a word in $B$, then an elementary geometric argument shows that $w$ can only be represented by a simple closed curve if it has one of the following forms: \n\\begin{enumerate} \n\\item $w=x ^{\\pm 1}$ or $w= y^{\\pm 1}$\n\\item $w = [x^{\\pm1},y^{\\pm1}]$\n\\item Up to replacing $x$ with $x^{-1}$, $y$ with $y^{-1}$ and interchanging $x$ and $y$, there is some $n \\in \\mathbb{Z}^+$ such that $w = x^{n_1}y x^{n_2} y ... x^{n_s}y$ where $n_i \\in \\{n, n+1\\}$. \n\\end{enumerate}\nWe leave this as an exercise for the reader. \nThis classification of words representing simple closed curves in $\\Sigma_B$ also follows from a much more general theorem in \\cite{BS}.\n\nBy construction, no word of type 1, 2 or 3 is finite order provided that $\\alpha^s \\beta^k \\neq 1$ for any integers $s$ and $k$ other than zero -- indeed, we only need to check words of type 3, and these necessarily have trace $\\alpha^s\\beta^k + \\alpha^{-s}\\beta^{-k}$ for some $s, k \\neq 0$. Note that in particular, under the condition that $\\alpha^s \\beta^k \\neq 1$ for $s, k \\neq 0$, all type 3 words are hyperbolic. We will use this fact again later on. \n\nFor the remaining case where $w$ is a word with letters in both $A$ and $B$, we claim that it can be written in a form where Lemma \\ref{transcendental} applies. To write it this way, use the following procedure: First take a simple representative $\\gamma$ for $w$ and apply an isotopy so that each crossing of $\\gamma$ with $c$ occurs in some small neighborhood of the basepoint $p$. This gives us a well defined shortest path along $c$ to $p$ from each crossing. After further isotopy, we may assume additionally that no segment of $\\gamma$ together with the shortest path along $c$ from its endpoints to $p$ bounds a disc, and that $\\gamma$ is transverse to $c$. All this can be done without introducing any self-crossings in $\\gamma$. Now $\\gamma$ is of the form $\\gamma_1 \\delta_1 \\gamma_2 \\delta_2 ... 
\\gamma_l \\delta_l$ where $\\gamma_i$ is a simple arc in $\\Sigma_A$ and $\\delta_i$ a simple arc in $\\Sigma_B$. Close each $\\gamma_i$ and $\\delta_i$ into a simple loop by connecting its endpoints to $p$ using the shortest path along $c$ and let $a_i \\in A$ (respectively $b_i \\in B$) be the corresponding element of the fundamental group. See Figure \\ref{fig1}. This gives us a word $a_1b_1a_2b_2 ... a_lb_l$ equivalent to $w$ after cyclic reduction, and each $a_i$ is represented by a simple closed curve in $\\Sigma_A$ and each $b_i$ by a simple closed curve in $\\Sigma_B$. The elimination of discs bounded between segments of $\\gamma$ and short segments of $c$ ensures that each $a_i$ and $b_i$ is nontrivial. \n\n \\begin{figure*}\n \\centerline{\n \\mbox{\\includegraphics[width=3.3in]{simpleloop.pdf}}}\n \\caption{$a_1$ and $b_1$ in $w$, represented by $\\gamma_1$ and $\\delta_1$ joined to $p$}\n \\label{fig1}\n \\end{figure*}\n\nWe can also show that each $a_i$ either has a non-zero 2,1 entry or is represented by the curve $c$ or its inverse. This is because $\\phi_A$ is Fuchsian, so the only elements fixing infinity -- that is, with 2,1 entry equal to zero -- are powers of $c$, and no powers of $c$ other than $c^{\\pm1}$ have a simple closed curve representative. Similarly, the classification of words representing simple closed curves in $\\Sigma_B$ shows that each $b_i$ is either hyperbolic or represented by $c$ or $c^{-1}$. We claim that we may now rewrite $w$ to eliminate all appearances of $c$, keeping each $a_i$ with a non-zero 2,1 entry and each $b_i$ hyperbolic. After doing so, we will have $w$ in a form where we can apply Lemma \\ref{transcendental}. \n\nTo rewrite $w$ in the desired form, first note that all $\\gamma_i$ such that $a_i$ is represented by $c$ may be homotoped (simultaneously) into $\\Sigma_B$ without introducing any self intersections of $\\gamma$. 
Thus, we can replace each such $\\delta_{i-1} \\gamma_i \\delta_i$ with a simple loop $\\delta'_i$ in $\\Sigma_B$ alone, and rewrite $w = a_1b_1... a_{i-1} b'_{i} a_{i+1} ... a_lb_l$. Reindex so that $w = a_1b_1a_2b_2 ... a_kb_k$ for $k < l$, and reindex the corresponding $\\delta_i$ and $\\gamma_i$ as well. \nNow repeat the procedure on this new word with each $b_i$: homotope all $\\delta_i$ such that $b_i$ is represented by $c$ over to $\\Sigma_A$ without introducing any self intersections of $\\gamma$, and then replace each such $\\gamma_i \\delta_i \\gamma_{i+1}$ with a simple loop $\\gamma_i'$ in $\\Sigma_A$ alone. Then rewrite $w$ so that, after reindexing, $w = a_1b_1a_2b_2 ... a_mb_m$ with $m2$\\,s and are attributed to core collapse supernovae \\citep[e.g.][]{galama98,bloom98}.\nIt was therefore the near-simultaneous detection of GW170817 and GRB 170817A \\citep[the latter of which was an SGRB detected by {\\em Fermi}{};][]{abbott17b} and the late-time radio and X-ray follow-up confirming the presence of an off-axis jet \\citep[e.g.][]{mooley18b,ghirlanda19,troja19gw} that strongly supported the link between BNS mergers and SGRBs.\n\nThe detection of the electromagnetic counterpart to an aLIGO\\slash Virgo-detected BNS merger is of great importance as it enables the localisation of the source, along with providing complementary information such as an independent distance measurement, insight into the central engine, the energy released, and the final merger remnant. However, the initial localisation of a GW event by aLIGO\/Virgo is tens to hundreds of square degrees, making it difficult to search for counterparts. \nWe therefore introduce a method designed to exploit the established link between GW-detected BNS mergers and SGRBs by using the Australia Telescope Compact Array (ATCA) rapid-response mode to trigger on {\\em Swift}{}-detected SGRBs. 
\n\nWhile the radio emission from SGRBs is usually short-lived \\citep[$\\lesssim2$\\,days;][]{fong15}, the ATCA rapid-response mode is capable of being on-source within $10$\\,minutes. \nBy rapidly responding to {\\em Swift}{} SGRB triggers, ATCA can become a new diagnostic tool for\nuncovering the range of radio behaviour shown by SGRBs to help interpret what to look for from GW events that have off-axis gamma-ray jets. \nAs targeted observations can usually reach deeper sensitivities than wide-field surveys, ATCA observations can provide\na template of the radio brightness and timing properties of BNS mergers, which will \nin turn inform the follow-up strategies of the next era of aLIGO\/Virgo GW events by wide-field radio telescopes, such as Australian instruments like the Murchison Widefield Array \\citep[MWA;][]{tingay13} and the Australian Square Kilometre Array Pathfinder \\citep[ASKAP;][]{johnston08}.\n\nThe jet launched during an SGRB is expected to produce a radio afterglow as predicted by the fireball model \\citep{cavallo78,rees92}. \nIn this model, the relativistic ejecta interact with the circumstellar medium (CSM) producing a forward shock that accelerates electrons and generates synchrotron emission. Reverse shock synchrotron emission, produced by the shock that propagates back into the post-shock ejecta, may also be observed \ndepending on the density of the CSM and the ejecta. \nThe broadband spectrum produced by the jet interactions in the GRB afterglow is described by the peak flux and 3 characteristic frequencies ($\\nu_m$, the minimum electron energy frequency; $\\nu_{sa}$, the synchrotron self-absorption frequency; and $\\nu_c$, the electron cooling frequency), which evolve over time \\citep{sari98,wijers99,granot99}. Only early-time radio observations are able to properly constrain 2 of these 3 frequencies ($\\nu_m$ and $\\nu_{sa}$), and also disentangle the reverse and forward shock components. 
\nBy combining ATCA observations with \nmulti-wavelength observations to perform SED modelling, these parameters can be derived, thus providing information about the blast wave kinetic energy, the CSM density, the magnetic field energy density and the power law electron energy distribution \\citep{sari98,wijers99,granot99}. \nLimits on the linear polarisation of the reverse shock can also provide information on the jet magnetic field structure \\citep{granot14}.\nEarly-time radio observations of SGRBs are also sensitive to temporal steepening from the jet-break \\citep{sari99},\nwhich constrains the jet opening-angle used to calculate the true energy released \\citep[and therefore BNS merger\/GW event rates, e.g.][]{fong14,fong15,deugartepostigo14}. \nEven early-time non-detections in the radio band can allow us to make predictions about when the forward-shock emission may peak, which can inform the cadence and duration of follow-up radio observations, potentially optimising the success of a late-time detection as we demonstrate in this paper. \nIn addition, sensitive, multi-frequency, high-cadence radio observations may allow us to distinguish between more exotic emission models caused by the ejection of neutron star material or the propagation of shocks caused by the merger event, which may produce non- to ultra-relativistic omnidirectional radio emission \\citep[e.g.][]{nakar11,kyutoku14}.\nIt is therefore crucial to obtain early-time radio observations (within minutes to days) of a larger sample of SGRBs to better characterise the timescales and frequencies necessary for understanding the range of behaviours we might expect from GW radio counterparts. 
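To make the role of the characteristic frequencies concrete, the following minimal sketch (ours, using the standard slow-cooling segment slopes of \citet{sari98} and assuming the ordering $\nu_{sa} < \nu_m < \nu_c$; the break frequencies in the example are arbitrary illustrative values, not fits to any data in this paper) returns the local spectral index of a forward-shock synchrotron spectrum:

```python
# Minimal sketch of the segment slopes of a slow-cooling forward-shock
# synchrotron spectrum, assuming the ordering nu_sa < nu_m < nu_c
# (slopes from Sari, Piran & Narayan 1998).  Not a fit to any data here.
def spectral_index(nu, nu_sa, nu_m, nu_c, p=2.2):
    """Local spectral index d(log F_nu)/d(log nu) at frequency nu [Hz]."""
    if nu < nu_sa:
        return 2.0               # optically thick (self-absorbed) segment
    elif nu < nu_m:
        return 1.0 / 3.0         # optically thin, below nu_m
    elif nu < nu_c:
        return -(p - 1) / 2.0    # between nu_m and the cooling break
    else:
        return -p / 2.0          # above the cooling break

# Example: where a 10 GHz band sits for illustrative break frequencies.
print(spectral_index(10e9, nu_sa=1e9, nu_m=1e11, nu_c=1e15))  # rising as nu^(1/3)
```

Early-time multi-frequency data constrain where the observing band sits relative to $\nu_{sa}$ and $\nu_m$, which is why the sign of the measured slope is already informative.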
\n\nThere are also several BNS merger models that suggest \na short-lived, supramassive and highly magnetised neutron star (NS) or ``magnetar'', supported by rotation, can exist for a short time ($<10^{4}$\\,s) before finally forming a stable magnetar or further collapsing into a black hole \\citep[BH, e.g.][]{usov92,zhang01,falcke14,zhang14}. \nEvidence for such merger products comes from the detection of a ``plateau phase'' in some SGRB X-ray light curves between $\\sim10^{2}-10^{4}$\\,s post-burst, where this departure from power-law decay indicates ongoing energy injection \\citep{rowlinson13}. \nSuch merger remnant scenarios may be sources of prompt, coherent radio emission \\citep[see][for a review]{rowlinson19}. However, no continuous monitoring of the radio behaviour has yet been performed at GHz frequencies during the plateau phase. Such detections or upper limits could constrain different central engine models as has been done at late times \\citep[e.g.][]{fong16}. \n\nOnly eight SGRBs have published detections in the radio band to date: GRB 050724A, 051221A, 130603B, 140903A, 141212A, 150424A, 160821B and 200522A\n\\citep{berger05,soderberg06,fong14,fong15,fong17,troja16,zhang17,troja19,lamb19,fong21}.\nNote that this does not include GW170817 as it had a far more off-axis outflow \nthan standard cosmological SGRBs, so the corresponding radio afterglow was detected much later when the ejecta had moved into our line-of-sight \\citep{mooley18b}. \nOut of a sample of $>70$ radio-observed SGRBs, only $\\sim10$\\% have been detected in the radio band at GHz frequencies \\citep{fong21}.\nThis low detection rate may be due to an observed fast rise in radio emission with a potentially short radio afterglow lifetime. \nFor example, 7 of the 8 radio-detected SGRBs were detected within 1\\,day post-burst, at least half of which faded below detectability within $\\sim2$ days (see Figure~\\ref{fig:lc}). 
\nGiven these short timescales, it is possible the radio emission is frequently dominated by the reverse-shock \\citep[as was the case for GRB 051221A;][]{soderberg06} since simulations of BNS mergers demonstrate that forward-shock radio emission may evolve over days to weeks \\citep{hotokezaka16} as is also the case for many LGRBs \\citep[e.g.][]{vanderhorst08,vanderhorst14}. \nIf we instead compare the radio-detected sample to those SGRBs that were initially observed at radio wavelengths $<1$\\,day post-burst, this gives a much higher radio detection rate of $\\sim30$\\% \\citep{fong15}. \nHowever, while the first four radio-detected SGRBs showed initial flux densities of $>0.1$ mJy\/beam at GHz frequencies, few of the other $<1$\\,day post-burst pre-2016 observations had sufficient sensitivity to detect a predicted peak flux density of $\\sim40\\,\\mu$Jy\/beam at 10\\,GHz for an SGRB at an average redshift of $z=0.5$ with an expected CSM density of $n_{0} \\sim0.1$\\,cm$^{-3}$ \\citep{berger14}. In fact, the four most recent radio-detected SGRBs peak at $\\lesssim40\\,\\mu$Jy\/beam.\n\nThe small sample of radio-detected SGRBs therefore provides limited knowledge of their radio afterglow brightnesses and timescales, and is insufficient for deriving the energy outputs and environmental properties of the population through multi-wavelength modelling. It is therefore vital to perform both rapid and sensitive radio follow-up observations of SGRBs to capture these short-lived and faint events. The key to achieving this is through the use of rapid-response (also known as triggering) systems, where a telescope has the ability to automatically respond to a transient alert, and either repoint at the event or update its observing schedule to begin observations when the source has risen above the horizon. 
Rapid-response radio telescopes have been in use since the 1990s \\citep[for example see][]{green95,dessenne96,bannister12,palaniswamy14,kaplan15} \nbut predominantly at low radio frequencies (100\\,MHz to 2.3\\,GHz), with the majority of experiments being designed to search for prompt, coherent radio emission. However, until recently, the only high frequency ($>5$\\,GHz) rapid-response program designed to target incoherent (synchrotron) radio emission from GRBs has been run on the Arcminute Microkelvin Imager (AMI) Large Array (LA), known as ALARRM (the AMI-LA Rapid Response Mode), which has been active since 2012 \\citep[][]{staley13,anderson18}. It was only through ALARRM that it was possible to be on-source fast enough to detect the rise and peak in the reverse-shock radio emission at 15\\,GHz from GRB 130427A within 1\\,day post-burst, which also represents one of the earliest radio detections of a GRB to date \\citep{anderson14}. In addition, the radio catalogue of AMI observations of 139 GRBs (12 were short GRBs, all non-detections), the majority of which were automatically triggered on using the rapid-response mode within $1$\\,day post-burst, was the first representative sample of GRB radio properties that was unbiased by multi-wavelength selection criteria \\citep{anderson18}. This work revealed that possibly up to $\\sim44-56$\\% of {\\em Swift}{}-detected LGRBs have a radio counterpart (down to $\\sim0.1-0.15$\\,mJy\/beam), with the increase in detection rate from previous studies \\citep[$\\sim30$\\%;][]{chandra12} likely being due to the AMI rapid-response mode, which allows observations to begin while the reverse-shock is contributing to the radio afterglow. This program has motivated the installation of a rapid-response mode on the ATCA. \n\nHere we present the first triggered observation of an SGRB using the new ATCA rapid-response mode. 
\nATCA is an ideal instrument for performing triggered radio follow-up of {\\em Swift}{} SGRBs due to its high sensitivity and broadband receivers that provide simultaneous multi-frequency coverage. The ATCA response times (which can be as short as minutes) \nhave the potential to be \nmuch faster than the current median SGRB response of the Karl G. Jansky Very Large Array (VLA; $\\sim24.7$\\,hrs), which relies on manually scheduling target-of-opportunity observations \\citep{fong15}.\nIn Section 2, we describe the ATCA rapid-response system from the observer interaction (front-end) level and the observatory (back-end) level.\nIn Section 3, we describe the triggered ATCA observation and data reduction of GRB 181123B, and corresponding results. \nThis is followed by a comparison of our radio limits for GRB 181123B to the sample of radio-detected SGRBs and a discussion of the parameter space that the triggered ATCA observations are probing in Section 4. \nFinally, we perform modelling of the GRB 181123B afterglow \nand thus demonstrate the usefulness of obtaining early-time (within 1 day) radio observations of an SGRB (regardless of whether or not there is a detection) to place constraints on the GRB physics. \n\n\\section{ATCA rapid-response mode}\n\nATCA is a six-element, 22\\,m dish, East-West interferometer based in New South Wales in Australia. Its maximum baseline length is 6\\,km and it is capable of observing in multiple, broad frequency bands \nwith full polarisation, and in a variety of array configurations. \nATCA is currently equipped with the Compact Array Broadband Backend \\citep[CABB;][]{wilson11}, which has a 2\\,GHz bandwidth that is capable of observing in two frequency bands simultaneously with tunable receivers that operate between 1.1 and 105\\,GHz. \n\nSince 2017 April 18, ATCA has been capable of rapidly responding to transient alerts. 
\nThe rapid-response mode can trigger using the 16\\,cm, 4\\,cm and 15\\,mm receivers, corresponding to a usable frequency range of $1.1-25$\\,GHz, and can observe in any CABB mode. \nIn the following, we describe both the observer front-end and the observatory back-end of this new triggering system.\n\n\\subsection{VOEvent parsing\/front-end}\\label{sec:front-end}\n\nThe front-end software we use to interface with the ATCA rapid-response system ({\\sc vo\\_atca})\\footnote{https:\/\/github.com\/mebell\/vo\\_atca} is designed to trigger on Virtual Observatory Events (VOEvent; \\citealt{seaman11}), which are the standard format for broadcasting machine-readable astronomical alerts related to transient events. \nA VOEvent package contains all the required data (in {\\sc xml} format) that allow automated decisions to be made in real-time given certain keywords and parameters. \nVOEvents are brokered via the 4 Pi Sky VOEvent Broker \\citep{staley16pp} and the {\\sc comet} VOEvent client \\citep{swinbank14}. \nThese packages allow us to listen to multiple VOEvent streams, including those broadcast by {\\em Swift}{}. \nWe use the {\\sc Python} package {\\sc voevent-parse} \\citep{staley_voevent-parse_2014} as the main tool to read the VOEvents \nand to extract the required information to be assessed by the triggering algorithm. \n\nUpon receiving a {\\em Swift}{} VOEvent, the ATCA VOEvent parser uses the keyword {\\sc grb\\_identified = true} to initially identify a GRB packet. \nPackets containing {\\sc startrack\\_lost\\_lock=true} are ignored as it means that {\\em Swift}{} has lost its celestial position-lock, so such an alert is unlikely to be from a real transient. 
\nWhile the observatory back-end prevents the telescope from overriding for sources that are too far north (see Section~\\ref{sec:back-end}), we impose an additional declination cut-off for all SGRBs north of +15$^{\\circ}$ to \nensure the potential for $>8$\\,hr integrations for the triggered observations. \n\nOn passing these stages, the parser then assesses the duration of the trigger so that SGRB candidates can be identified. However, on the short timescales following the alert, and with growing uncertainty as the GRB burst duration increases, it is difficult to classify {\\em Swift}{} GRBs as short or long in an automated way.\nA rigorous classification of the GRB requires human inspection of the data, which is only published online on the Gamma-ray Coordinates Network Circulars (GCN) Archive,\\footnote{https:\/\/gcn.gsfc.nasa.gov\/gcn3\\_archive.html} usually between 10\\,mins and 1\\,hr post-burst and therefore not via a VOEvent. \nTo account for this, we implemented a three-tiered system to flexibly respond to different GRB durations and therefore filter for those events more likely to be SGRBs. \nThe {\\sc integ\\_time} parameter (the length of time taken for the transient signal to reach a significance threshold) is used as an estimator of the incoming GRB's true duration. \n\n\\begin{itemize}\n\\item GRBs with {\\sc integ\\_time}$<$0.257\\,s have a high probability of being SGRBs, so the VOEvent parser will automatically submit these triggers to the observatory and alert team members via text and email of the override observation. \n\n\\item With durations $0.257$\\,s\\,$<${\\sc integ\\_time}\\,$<1.025$\\,s, we have implemented a ``wait-to-proceed'' algorithm as the probability of the GRB being an SGRB decreases with increasing {\\sc integ\\_time}. In this case, we issue email and text alerts so that team members can check the GCN Archive for adequate verification of the GRB classification. 
If the GRB is confirmed to be short, then the duty team member responds ``YES'' to the detection email, and this email reply is read by an algorithm (via the Google email Application Programming Interface\\footnote{https:\/\/developers.google.com\/gmail\/api}) \nthat then proceeds with submitting the trigger to ATCA, resulting in an override observation. This provides an easy interface to assess and submit triggers via a mobile phone, which can receive SMS alerts and allow responding to emails away from a computer. \n\n\\item If {\\sc integ\\_time}$>$1.025\\,s then we presume that the GRB is long and we do not proceed with submitting a trigger to override the telescope. \n\\end{itemize}\n\nAfter the parser (or duty team member) has successfully identified the event as an SGRB, \nour algorithm then searches the ATCA calibrator database for a nearby and suitable phase calibrator. \nIt then automatically builds a schedule file (we use the ATCA scheduler software {\\sc cabb-schedule-api})\\footnote{https:\/\/github.com\/ste616\/cabb-schedule-api} for a 12-hour observation of the GRB in the requested frequency band (for GRB triggering we currently use the 4\\,cm receiver), which has interleaved \nphase calibrator observations every 20 minutes. \nNote that the total exposure time is also limited by how far the GRB is above the horizon at the time of the trigger. \nThe schedule file and override request are then submitted to the observatory, where they are assessed for submission to the observing queue by the ATCA back-end.\n\n\\subsection{Observatory back-end}\\label{sec:back-end}\n\nTime on the ATCA is scheduled into two 6-month-long semesters, and the order of observations in each semester is set months in advance. This is done to allow the project investigators, who are also responsible for conducting the observations, to plan their activities. 
A rapid-response system is not easily compatible with this mode of operation.\n\nNevertheless, demand for the telescope to respond quickly to events has been steadily rising. In 2016, roughly 10\\% of telescope time was given to \nNAPA (Non A-priori Assignable) or ToO (Target of Opportunity) projects, while in 2019 this figure had risen to 19\\%. \nFor a NAPA project, a science case is given to the time assignment committee (TAC), which ranks its significance against the other projects for that semester. Provided the science is considered compelling, these projects are allowed to displace time from other projects during the semester, the philosophy being that had we known during the scheduling process when an event would happen, a compelling project would have been scheduled to observe it.\n\nRapid-response NAPAs operate in the same way. A scientific justification must be supplied to the TAC, who must agree that rapid response is warranted. The observatory then supplies an authentication JSON Web Token (JWT) to the project, and assists the investigators in testing their automatic triggering system.\n\nA web service is provided so that the trigger to start observations can be sent from any internet-connected device. A Python library ({\\sc atca-rapid-response-api}) is also available to make it easier to send requests to this service.\\footnote{https:\/\/github.com\/ste616\/atca-rapid-response-api} All requests must contain a valid schedule file, and must nominate the target coordinates and a nearby phase calibrator.\n\nUpon receipt of a trigger, the web service tries to schedule the observation as soon as possible. If the source is above the horizon and the user-nominated minimum useful observing time can be obtained before the source sets, the current and subsequent observations can be displaced \nand the system can start the observations within 2 seconds of the trigger's arrival. 
Within that time, emails are sent to the projects that will be displaced, and to the triggering team, describing the new order of the scheduling. The schedule is also altered as necessary to add a scan on a flux density calibrator at an opportune time, and potentially to shorten the observations to fit the available time. At all times, the emphasis is to move the telescope to the target coordinates as quickly as possible.\n\nThe service can also provide an immediate rejection should no suitable time be found for the observation. For example, if no available time can be found up to 100 hours in the future (generally because the request was made during a time when the array is shutdown for maintenance or participating in VLBI observing), the observations are rejected and the proposal team are notified. \nWhile no explicit limit is set for the source declination, sources too far north may not be available for the user-nominated minimum useful observing time, and will thus be rejected.\n\nIf the web service can schedule the observations, a separate service then takes over, and takes control of the observing control software. Some more checks are made to see if the array can be used for observing, and will delay the start of the observations if the weather conditions are unsuitable. This service also monitors the observations for interruptions due to weather, equipment failure and human intervention. Rudimentary responses are pre-programmed for any such eventuality. The service stops once the observations have finished, the target sets, or the observations are cancelled, whichever comes first. 
Control of the telescope then goes to the investigators whose project was scheduled to be running at this end time.\n\nA more complete guide to the operation of the rapid-response system is provided in the ATCA Users Guide.\\footnote{https:\/\/www.narrabri.atnf.csiro.au\/observing\/users\\_guide\/html\/atug.html}\n\n\\subsection{Triggering performance} \\label{sec:trig_per}\n\nSince the commencement of the program, we have worked with the observatory to improve the success of SGRB triggered observations with ATCA, which involved extensive system and software debugging. Many SGRBs were missed due to the telescope being in uninterruptible modes such as maintenance, reconfiguration, participating in VLBI or operating in an incompatible correlator mode (the latter has since been resolved). \n\nOur original override strategy involved triggering on all {\\em Swift}{} GRBs with {\\sc integ\\_time}$<1.025$\\,s as SGRBs have been detected with {\\sc integ\\_time} up to $1.024$\\,s. However, as mentioned in Section~\\ref{sec:front-end}, the majority of events within this timescale are LGRBs. \n{\\em Swift}{} data requires a human in the loop to classify the event as long or short, which is usually based on the duration and the hardness of the event (note that SGRBs often produce higher energy prompt emission than LGRBs) and are only published on the GCN Archive up to an hour post-burst \\citep[also note that the distinction between events with durations between $1-2$\\,s can be tenuous and has led to discussions regarding intermediate GRB classes; e.g.][]{mukherjee98,horvath98,huja09,deugartepostigo11}. \nThis original strategy therefore resulted in several false ATCA triggers, most of which were identified and cancelled before the telescope was overridden as there was additional lead time before the event in question had risen above the horizon. However, \nthere were a few instances where some data was collected on LGRBs. 
\nRecent edits to the VOEvent parsing of event timescales using the keyword {\\sc integ\\_time}, described in Section~\\ref{sec:front-end}, have resulted in a significant reduction in ATCA triggers on LGRB contaminants. \n\nWhen ATCA receives a trigger on an event that is above the horizon, the main limitation to the response time is the telescope slew speed. On receiving the VOEvent via the parsing code, it takes $2-3$\\,s for the observation to be queued and the subsequent maximum observing time calculated. Following a {\\em Swift}{} alert on the long GRB 190519A \\citep{ukwatta19}, ATCA was on target and observing the event in 2\\,min and 39\\,s. Other response times range between $3-6$\\,min post-burst, making the ATCA rapid-response system competitive with other triggering facilities such as AMI \\citep[e.g.][]{anderson18}, while also being based in the Southern Hemisphere, and having more collecting area, a larger number of frequency bands, polarisation capabilities, and (in some configurations) better angular resolution.\n\n\\subsection{Short GRB experimental design}\n\nThe majority of GRBs detected by {\\em Swift}-BAT are LGRBs, with SGRBs (in this case events with $T_{90} \\leq 2$\\,s, including those found in ground analysis) only accounting for $\\sim7-8$\\% \n(based on event numbers between 2017 and 2019 from the {\\em Swift}{} GRB Look-up Table,\\footnote{https:\/\/swift.gsfc.nasa.gov\/archive\/grb\\_table\/} where $T_{90}$ is the time between 5 and 95\\% of the fluence being emitted). We therefore expect $\\sim5-10$ SGRBs to be detected by {\\em Swift}{} per year, and predict $\\lesssim2$ will be observable with ATCA (below a declination cut-off of +15\\,deg) during an observing semester. \n\nOur rapid-response observations are performed using the $4$\\,cm receiver, which has dual 2\\,GHz windows usually centered at 5.5 and 9\\,GHz; this is the most sensitive ATCA band. 
This choice is based on several factors: the full-width half-maximum of the primary beam encompasses the initial {\\em Swift}{}-BAT positional uncertainty of newly detected GRBs \\citep[1-4 arcmin;][]{barthelmy05}, it is largely immune to atmospheric instabilities, and is less disrupted by RFI than other ATCA bands.\nIn addition, as synchrotron emission from GRB reverse and forward shocks peaks earlier and brighter with increasing frequency, the 4\\,cm band (5.5\/9 GHz) is optimal for ensuring the source will be bright but not peaking before the telescope is on-target.\n\nAs mentioned in Section~\\ref{sec:intro}, the radio afterglows from SGRBs are usually detected within 1\\,day post-burst \\citep[e.g.][]{fong15}, which strongly motivates our need for the ATCA rapid-response mode. The triggered observations are designed to observe between $2-12$\\,h (depending on how long the source is above the horizon following the trigger). \nAs previous SGRB radio studies have shown that the radio afterglow has already switched-on within $4-16$\\,h post-burst \\citep[e.g.][]{anderson18}, a $\\leq12$\\,hr observation allows us to track the rapid rise in emission with a sensitivity of $\\sim60 \\, \\mu$Jy ($3\\sigma$) on one hour timescales.\\footnote{https:\/\/www.narrabri.atnf.csiro.au\/myatca\/interactive\\_senscalc.html} This means that any delays of $\\leq1$\\,hr related to waiting for the GRB classification does not affect the rapid-response science goal (see Section~\\ref{sec:front-end}). \nA $\\leq12$\\,hr track also ensures some periods of simultaneous {\\em Swift}{} X-ray Telescope \\citep[XRT, observing band between $0.3-10$\\,keV;][]{lien18} observations, \nwhich is essential for modelling the spectral energy distribution (SED), and for exploring the radio properties associated with the plateau phase (e.g. see our modelling in Section~\\ref{sec:mod}). 
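The sensitivity figures quoted here and for the follow-up observations follow the standard radiometer scaling, with image noise decreasing as the square root of the integration time. A minimal sketch of this scaling, anchored on the $\\sim60\\,\\mu$Jy ($3\\sigma$, 1\\,hr) figure quoted in the text (the function name and interface are illustrative only; the ATCA sensitivity calculator should be used for real planning):

```python
import math

def three_sigma_limit_ujy(t_hours, ref_3sigma_ujy=60.0, ref_t_hours=1.0):
    """Approximate 3-sigma thermal-noise limit in microJansky.

    Assumes noise ~ t^(-1/2) (radiometer equation), anchored on the
    ~60 uJy (3 sigma) in 1 h quoted for the ATCA 4 cm band; ignores
    confusion, RFI flagging and elevation-dependent effects.
    """
    return ref_3sigma_ujy * math.sqrt(ref_t_hours / t_hours)
```

Under this scaling, a 4\\,hr integration reaches $\\sim30\\,\\mu$Jy ($3\\sigma$), consistent with the sensitivity quoted for the $\\sim4$\\,hr follow-up observations.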
\n\nFollowing the triggered, rapid-response observation, we also request three $\\sim4$\\,hr follow-up observations in the $4$\\,cm band to occur between $1-3$, $4-6$, and $8-12$\\,days post-burst, which can reach a sensitivity of $30\\,\\mu$Jy ($3\\sigma$). \nWhile 3 of the previous radio-detected SGRBs faded below detectability within 2 days post-burst, the other 2 were detected up to 10\\,days post-burst (see Figure~\\ref{fig:lc}), thus motivating this more long-term monitoring of any triggered candidate. \n\n\\begin{figure*}\n \\begin{center}\n \\includegraphics[width=0.49\\textwidth]{figures\/SGRBs_flux_lc_v3.pdf}\\label{fig:flux}\n \\includegraphics[width=0.49\\textwidth]{figures\/SGRBs_lum_lc_v3.pdf}\\label{fig:lum}\n \\caption{Radio light curves of SGRB radio detections ($1\\sigma$ error bars) and $3\\sigma$ upper-limits observed at frequencies between 6 and 10\\,GHz. \n Left: Radio flux density vs days post-burst and Right: $k$-corrected spectral luminosity vs days post-burst in the rest-frame. The 9\\,GHz upper-limit of GRB 181123B is depicted as a large white triangle. For those GRBs without a known redshift we assume $z=0.5$. The $3\\sigma$ upper limits of those SGRBs that were observed but not detected in the radio band are depicted as grey triangles. \n References for all radio flux densities and redshifts for radio-detected SGRBs: \\citet{berger05}, \\citet{fox05}, \\citet{prochaska05}, \\citet{soderberg06}, \\citet{cucchiara13}, \\citet{deugartepostigo14}, \\citet{cucchiara14}, \\citet{chornock14}, \\citet{fong14}, \\citet{fong15}, \\citet{troja19}, \\citet{lamb19}, \\citet{paterson20}, \\citet{fong21}. 
All radio upper limits shown in grey were taken from \\citet[][see references therein]{fong15}.}\n \\label{fig:lc}\n \\end{center}\n\\end{figure*}\n\n\\section{ATCA observations of GRB 181123B}\n\n{\\em Swift}{}-BAT detected the short GRB 181123B at 05:33:03 UT (trigger=873186), which was rapidly detected in the X-rays by the {\\em Swift}{}-XRT \nand localised to the position $\\alpha \\mathrm{(J2000.0)} = 12^{\\mathrm{h}}17^{\\mathrm{m}}28\\overset{\\mathrm{s}}{.}05$ and $\\delta (\\mathrm{J2000.0}) = +14^{\\circ}35'52\\overset{''}{.}4~$ with a 90\\% confidence of $1\\overset{''}{.}8$ \\citep{osborne18}. Further optical and near-infrared follow-up detected a source coincident with the {\\em Swift}{}-XRT position \\citep{fong18,paterson18a,paterson18b}, \nresulting in the identification of the host galaxy at redshift $z=1.754$ and the detection of the optical afterglow to GRB 181123B \\citep[$i=25.1$\\,mag at $9.1$\\,h post-burst;][]{paterson20}. This makes GRB 181123B one of only three SGRBs at $z>1.5$ \\citep{paterson20}. \n\nOn receiving the VOEvent trigger, ATCA was automatically scheduled to begin observations on 2018 Nov 23 at 18:07:24.9 UT (12.6\\,h post-burst) for 8.3\\,h \\citep{anderson18gcn}, when the GRB had risen above the horizon (minimum elevation of 12\\,deg). On this date, ATCA was in the 6B array configuration, and the triggered observations were taken in the 4\\,cm band, with the dual 2\\,GHz bandwidth windows centered at 5.5 and 9\\,GHz. The observation pointing was at the initial BAT position, which was $1.2$\\,arcmin offset from the final {\\em Swift}{}-XRT position of GRB 181123B. \nNote that we requested no follow-up ATCA observations due to the imminent reconfiguration and correlator reprogramming, with many subsequent programmes having priority.\n\nThe ATCA rapid-response observation was reduced and analysed with the radio reduction software {\\sc MIRIAD} \\citep{sault95} using standard techniques. 
Flux and bandpass calibration were conducted using PKS 1934-638 and phase calibration with PKS 1222+216. Several rounds of phase and amplitude self calibration were also applied \\citep[this was possible due to the nearby bright field source FIRST~J121731.7+143953;][]{helfand15}. In order to obtain the most robust flux density upper limits at the position of the GRB, we used {\\sc mfclean} to create a clean model of the sources in the field (manually drawing clean boxes) and subtracted this model from the visibilities. A primary beam correction was then applied due to the 1.2\\,arcmin offset between the pointing centre and the best known GRB position from the {\\em Swift}{}-XRT. GRB 181123B was not detected, and the final $3\\sigma$ upper-limits can be found in Table~\\ref{tab:obs}. \n\nAs we know the precise location of GRB 181123B to within the ATCA beam, we also report the peak force-fitted flux density at both 5.5 and 9\\,GHz in Table~\\ref{tab:obs}.\nThese were calculated using the task {\\sc imfit} to force-fit a Gaussian to the beam that was fixed at the {\\em Swift}{}-XRT position of the GRB (errors are the $1\\sigma$ rms).\nThe advantage of quoting the force-fitted flux density over an upper-limit is that such a measurement also accounts for the presence of nearby sources, as well as variations in the noise across the image.\nThe data were also divided into 3\\,h and 1\\,h timescales and then re-imaged to search for evidence of emission that may have switched on nearer the end of the observation; however none was detected. 
\n\n\\begin{center}\n\\begin{table}\n\\caption{ATCA observations of GRB 181123B at 5.5 and 9 GHz, which began on 2018 Nov 23 at 18:07:24.9 UT (12.6\\,h post-burst) for 8.3\\,h.}\n\\label{tab:obs}\n\\begin{tabular}{lcc}\n\\\\\n\\hline\nFrequency & $3\\sigma$ Upper-limit & Forced-fit flux density\\\\\n(GHz) & ($\\mu$Jy\/beam) & ($\\mu$Jy\/beam)\\\\\n\\hline\n 5.5 & 34 & $7 \\pm 12$ \\\\\n 9.0 & 32 & $15 \\pm 11$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\end{center}\n\n\\section{Discussion}\n\nIn this section, we first demonstrate that our radio flux density limits for GRB 181123B are consistent and competitive with previous studies of the radio-detected SGRB population. This is followed by afterglow modelling to demonstrate the importance of obtaining early-time radio observations (regardless of whether there is a detection) to better constrain the properties of the blast wave.\n\nIn Figure~\\ref{fig:lc}, we show the light curves of SGRBs observed in the radio band between 6 and 10\\,GHz. The 8 radio-detected SGRBs are colour-coded with $3\\sigma$ upper-limits represented by triangles. The $3\\sigma$ upper limits of those SGRBs observed but not detected in the radio band have been plotted as grey triangles. The ATCA 9\\,GHz $3\\sigma$ upper-limit of GRB 181123B is shown as a large white triangle. In the left panel of Figure~\\ref{fig:lc}, we have plotted the observed radio flux density vs days post-burst, whereas in the right panel we have plotted the spectral luminosity vs days post-burst in the rest frame, assuming a redshift of $z=0.5$ \\citep{berger14} for those events with no known redshift. 
\nWhen converting the flux ($F$) to luminosity ($L$), a $k$-correction was also applied such that $L=4 \\pi F d_{L}^{2} (1+z)^{\\alpha - \\beta -1}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$, where $d_{L}$ is the luminosity distance for the redshift $z$ \\citep[assuming $\\Lambda$CDM cosmology with $H_{0}=68$\\,km\\,s$^{-1}$\\,Mpc$^{-1}$ and $\\Omega_m=0.3$;][]{planck16}, and $\\alpha$ and $\\beta$ are the temporal and spectral indices defined as $F \\propto t^{\\alpha}\\,\\nu^{\\beta}$ \\citep{bloom01}. We assume $\\alpha=0$ and $\\beta=1\/3$, which are appropriate for an optically thin, post-jet-break light curve \\citep[see][]{chandra12}.\n\n\nFrom Figure~\\ref{fig:lc} we can see that the ATCA flux limit for GRB 181123B is extremely competitive, and consistent with the most constraining upper limits. \nUsing the formalism of \\citet{granot02}, \\citet{berger14} showed that if we assume fiducial parameters for SGRBs, along with typical microphysical parameters for LGRBs, the expected peak flux density at a redshift of $z=0.5$ is $F_{\\nu}\\sim40\\,\\mu$Jy at $\\sim10$\\,GHz for an ambient medium density of $n_{0}=0.1$\\,cm$^{-3}$. \nOur $3\\sigma$ sensitivity at 9\\,GHz was $32\\,\\mu$Jy, which was therefore sufficient to detect emission from a GRB with the above properties. However, it is important to note that some GRB microphysical and macrophysical parameters, such as the kinetic energy and the CSM density, can vary by several orders of magnitude \\citep{granot14}.\n\nThe luminosity light curves in Figure~\\ref{fig:lc} show the $3\\sigma$ upper-limit for GRB 181123B at $\\sim1$\\,day post-burst (in the rest frame). 
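The $k$-corrected luminosity conversion can be sketched directly. The following is a minimal illustration of $L=4\\pi F d_{L}^{2}(1+z)^{\\alpha-\\beta-1}$; the luminosity distance is taken as an input (computed externally from the adopted $\\Lambda$CDM cosmology), and the function name and unit choices are our own, not part of any published pipeline:

```python
import math

MPC_TO_CM = 3.0857e24   # 1 Mpc in cm
UJY_TO_CGS = 1e-29      # 1 microJansky in erg s^-1 cm^-2 Hz^-1

def k_corrected_luminosity(flux_ujy, d_l_mpc, z, alpha=0.0, beta=1.0 / 3.0):
    """Spectral luminosity L = 4*pi*F*d_L^2*(1+z)^(alpha - beta - 1) in erg/s/Hz.

    d_l_mpc: luminosity distance in Mpc for redshift z (from the adopted
    cosmology); alpha and beta follow F ~ t^alpha * nu^beta as in the text.
    """
    f_cgs = flux_ujy * UJY_TO_CGS
    d_l_cm = d_l_mpc * MPC_TO_CM
    return 4.0 * math.pi * f_cgs * d_l_cm**2 * (1.0 + z) ** (alpha - beta - 1.0)
```

With $\\alpha=0$ and $\\beta=1\/3$, the $k$-correction factor reduces to $(1+z)^{-4\/3}$.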
\nGiven the very high redshift of GRB 181123B, even these sensitive ATCA observations would not have detected the radio counterpart to the seven SGRBs detected at early times (within a day post-burst in the rest frame) if they were placed at $z=1.754$.\nWe therefore cannot draw any further comparisons between the physical properties of the radio-detected GRB sample and GRB 181123B based on luminosity alone and require more detailed multi-wavelength light curve modelling (see Section~\\ref{sec:mod}).\n\n\\subsection{Modelling constraints} \\label{sec:mod}\n\nIn this section, we model the afterglow of GRB 181123B in order to explore how early-time ($<1$\\,day) radio observations of SGRBs can help to constrain the dynamical and microphysical parameters of such blast waves in the context of the fireball model. \nUsing the redshift derived from the identification of the host galaxy of GRB 181123B \\citep[$z=1.754$;][]{paterson20}, \nwe model the force-fitted flux density values at the {\\em Swift}{}-XRT position of the GRB from our ATCA observations together with the {\\em Swift}-XRT light curve \\citep{evans09,evans10}. \nWe have chosen to use the force-fitted flux measurements plus errors in our modelling as it allows us to assign a likelihood to a predicted model flux for a set of model parameters, which is not possible with an upper limit \n\\citep[for some examples of where radio force-fitted flux measurements are quoted and used in afterglow modelling see][]{galamawijers98,kulkarni99,vanderhorst11,vanderhorst15}. \n\nFor this modelling, we have chosen to only consider the forward-shock component to minimise complexity, particularly as we are dealing with a small number of data points. \nAs previously mentioned, \nthe reverse-shock could be dominant at early times ($\\lesssim1$\\,day) in the radio band as has been observed for some SGRBs \\citep[e.g.][]{soderberg06,lamb19,troja19}. 
\nGiven that the reverse-shock evolves to lower frequencies more rapidly than the forward-shock and we have no radio detection, our modelling depends primarily on the X-ray detections, which are always dominated by the forward-shock, thus motivating our model choice. \nOur afterglow fitting also does not rule out a reverse-shock contribution. \nWe therefore assume a spherical, relativistic blast wave interacting with the circumburst medium and generating synchrotron emission. Since SGRBs are known to occur in homogeneous, low-density environments \\citep[median densities of $n_{0}\\approx (3-15) \\times 10^{-3}$\\,cm$^{-3}$, with $\\approx80-95\\%$ of events being situated in environments of $n_{0}<1$\\,cm$^{-3}$;][]{fong15}, \nwe assume a constant-density circumburst medium. \n\nWe use the \\texttt{boxfit} code to model the afterglow emission \\citep{vanEerten2012}. \\texttt{boxfit} makes use of pre-calculated hydrodynamics data to calculate the dynamics of the blast wave, and solves the radiative transfer equations on the fly. \nSince in this work we assume a spherical blast wave, we fix the opening angle ($\\theta_0$) to $\\pi \/ 2$. \nWe then use the C++ implementation of the \\texttt{MultiNest} nested sampling algorithm, a Bayesian inference tool, to determine the posterior distributions of the free parameters \\citep{feroz09}. 
The free parameters of our model are defined as:\n\n\\begin{itemize}\n \\item $E_{K, \\mathrm{iso}}$: Isotropic-equivalent kinetic energy in units of erg.\n \\item $n_0$: Circumburst medium number density in units of $\\mathrm{cm}^{-3}$.\n \\item $p$: Power-law index of the accelerated electron distribution, such that $N(\\gamma) \\propto \\gamma^{-p}$, with some minimum Lorentz factor $\\gamma_{m}$ \\citep{wijers99}.\n \\item $\\epsilon_B$: Fraction of thermal energy in the magnetic fields.\n \\item $\\epsilon_e$: Fraction of thermal energy in the electrons.\n\\end{itemize}\n\nIn order to demonstrate how the inclusion of early-time radio data helps to further constrain \nthe dynamical and microphysical parameters (when combined with {\\em Swift}{}-XRT observations, and regardless of whether or not there is a radio detection), we model the afterglow of GRB 181123B with and without the ATCA force-fitted fluxes and compare the posterior distributions of the free parameters. \nIn both fits, we use the same priors for the free parameters (Table~\\ref{tab:prior}), and the best-fit values reported in Table~\\ref{tab:post_param} correspond to the lowest chi-squared value in the posterior. \n\nLight curves for the posterior predictive distribution when the ATCA force-fitted flux density values are included in the modelling, together with the best fit, can be found in Figure~\\ref{fig:lc_post}. \nGiven that the modelling of the X-ray detections of GRB 181123B alone suggests an energetic solution, the inclusion of radio information helps to pull down the overall fit so that, at both 5.5 and 9\\,GHz, the best-fit light curves are clustered around the ATCA force-fitted flux densities. \nWhile the resulting model is consistent with the {\\em Swift}{} Ultraviolet\/Optical Telescope \\citep[UVOT;][]{roming05} upper-limits \\citep{oates18}, it over-predicts the Galactic extinction-corrected $i$-band flux reported by \\citet{paterson20} by a factor of $\\sim3$, or 1.2 magnitudes. 
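The quoted factor of $\\sim3$ corresponds to the quoted 1.2 magnitudes via the usual flux-to-magnitude relation, $\\Delta m = 2.5\\log_{10}$ of the flux ratio. A one-line check (the function name is ours, for illustration only):

```python
import math

def flux_ratio_to_mag(ratio):
    """Magnitude offset for a flux over-prediction factor: dm = 2.5 * log10(ratio)."""
    return 2.5 * math.log10(ratio)
```

A factor of 3 gives $\\Delta m \\approx 1.19$\\,mag, matching the $\\sim$1.2\\,mag quoted above.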
\nAt this high redshift, an $i$-band detection indicates the afterglow emission was produced at ultraviolet wavelengths in the rest frame, and would therefore be quite prone to extinction by dust.\nGiven that our model does not consider extinction, intrinsic or otherwise, this over-prediction may therefore not be unreasonable. \nHowever, our $i$-band over-prediction is much larger than the host optical extinction calculated by \\citet{paterson20} from photometric observations ($A_{V}=0.23$), or that predicted ($A_{V}=0.38$) from their observed excess hydrogen column density ($N_{H}$; derived from X-ray afterglow spectral modelling), which is known to scale with optical extinction \\citep{guver09}.\nThere are also other potential sources of optical and infrared emission from SGRBs, such as a kilonova from r-process radiative decay \\citep[e.g.][]{metzger10}, which our model does not include. However, such emission usually does not dominate over the afterglow until $>1$\\,d post-burst \\citep[e.g.][]{tanvir13}. \n\nAs can be seen in Table~\\ref{tab:post_param}, the inclusion of the ATCA force-fitted fluxes in our modelling allows for much better constraints to be placed on $\\epsilon_e$ (see also Figure~\\ref{fig:marge_comp}, which compares the marginal distributions of the parameters for both cases: modelling with and without the ATCA data). \nThe rest of the parameters are consistent between both modelling experiments, but the $E_{K,\\mathrm{iso}}$ is at the higher end of the distribution for known SGRBs \\citep{fong15}. \nOur findings are consistent with those of \\citet{beniamini17}, who have shown that the flux density and time of the GRB radio light curve peak can be used to constrain $\\epsilon_e$ in particular. 
We also note that our constraint on $\\epsilon_e$ is also consistent (within the 95\\% credible interval) with the distribution of $\\epsilon_e$ ($0.13-0.15$) found through the analysis of 36 GRB radio afterglows performed by \\citet{beniamini17}.\nThe predicted radio peak also suggests that at later times ($\\gtrsim3-4$\\,days post-burst), the forward shock radio emission from GRB 181123B may have been detectable at 5.5 and 9\\,GHz with $\\geq4$\\,hr ATCA integrations (see Figure~\\ref{fig:lc_post}). \nTherefore, the inclusion of early-time radio data in GRB afterglow modelling (regardless of whether or not there is a detection), \ntogether with an X-ray light curve, allows us to predict the forward shock peak radio flux density, \nthus constraining the fraction of shock energy in the relativistic electrons ($\\epsilon_e$). \n\\citet{paterson20} also derived these same afterglow parameters for GRB 181123B but assumed fixed values of $\\epsilon_e=0.1$ and $\\epsilon_B=0.1$ or 0.01. While our parameters are far less constrained, our values for $E_{K,iso}$ and $n_{0}$ (as well as $\\epsilon_e$ and $\\epsilon_B$) are consistent with \\citet{paterson20} within the 95\\% credible intervals. However, our value range for $p$ was higher and did not overlap with the range derived by \\citet{paterson20}. Note that our value range for $p$ is more consistent with those calculated for radio-detected SGRBs (see Section~\\ref{sec:comp}).\n\n\\begin{table}\n \\setlength{\\tabcolsep}{5pt}\n \\def1.25{1.25}\n \\caption{Assumed priors for the free parameters for all modelling efforts.}\n \\begin{tabular}{l|r}\n \\hline\n \\multicolumn{1}{l}{Parameter range} & \\multicolumn{1}{r}{Prior distribution} \\\\\n \\hline\n $10^{49}