diff --git a/data_all_eng_slimpj/shuffled/split2/finalzxbg b/data_all_eng_slimpj/shuffled/split2/finalzxbg new file mode 100644 index 0000000000000000000000000000000000000000..19b35cbc5de7bd7b43325a632d87281a726c7b52 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzxbg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nDuring the last two years, industrial and research organizations have been racing to develop a ready-to-implement engineering solution for quantum computing (QC). As a result, the QC market closely resembles the early days of the classical computing industry.\nNamely, there are many underdeveloped computing architectures that, being incompatible with each other, require significant effort to port software and algorithmic solutions between them. Given the broadly supported opinion that in the near term we are unlikely to see flexible large-scale quantum architectures, there is a critical need to develop portable, architecture-agnostic hybrid quantum-classical frameworks that will allow solving large-scale computational problems on small-scale quantum architectures. \n\n\nThere are multiple emerging quantum computation paradigms, and comparing their performance is an important research topic. In this paper, we present for the first time a performance comparison of two leading quantum computation paradigms - D-Wave quantum annealing and gate-based universal quantum computation. Both approaches have great potential for achieving quantum speedup for a number of important problems~\\cite{king2018observation,romero2018strategies,ambainis2018quantum,dunjko2018computational}.\n\n\n\nThe first approach, quantum annealing (QA), is based on adiabatic quantum computation (AQC)~\\cite{mcgeoch2014adiabatic}. QA solves computational problems by using a guided quantum evolution~\\cite{yang2017optimizing}. 
\nThe evolution starts with an initial Hamiltonian with an easy-to-prepare ground state and ends in the ground state of the problem Hamiltonian. QA is based on the adiabatic theorem, which guarantees that if the Hamiltonian is evolved slowly enough then transitions to excited states are suppressed during the adiabatic evolution~\\cite{yang2017optimizing}.\nThe D-Wave quantum annealer uses superconducting flux qubits \\cite{amin2004,dwave2018} and has been applied to optimization problems on graphs \\cite{ushijima2017graph}, machine learning \\cite{omalley2017}, traffic flow optimization \\cite{neukart2017}, and simulation problems \\cite{harris2018}.\nBoth purely quantum and hybrid quantum-classical approaches have been employed on this hardware.\n\nThe second approach is often referred to as gate-based or universal QC. This mode of QC was theoretically demonstrated to have great potential for exponential speedups over the best known classical algorithms~\\cite{nielsen2002quantum}. In the near term, the capability of quantum devices is limited by the number of qubits, low fidelity of gates, and lack of error correction. These limitations constrain us to using low-depth quantum circuits (i.e., quantum circuits with few gates) on a small number of qubits. Within the constraints of noisy intermediate-scale quantum (NISQ) technology~\\cite{preskill2018quantum}, a number of hybrid quantum-classical algorithms were developed and experimentally demonstrated to solve small problems. One of the most promising of these algorithms is the Quantum Approximate Optimization Algorithm (QAOA)~\\cite{farhi2014quantum,farhi2016quantum}. QAOA is inspired by adiabatic quantum computation. Similarly to AQC and QA, the evolution starts in the easy-to-prepare ground state of an initial Hamiltonian and, remaining in the ground state, evolves to a final Hamiltonian that encodes the solution of the problem. 
However, unlike in QA, in QAOA the evolution is performed by applying a series of gates called an ansatz~\\cite{mcclean2016theory}, which is parametrized by a set of variational parameters. This is accomplished by a hybrid approach that combines quantum evolution and classical variational optimization over the QAOA parameters~\\cite{yang2017optimizing}, with the goal of finding the evolution path that prepares the ground state of the problem Hamiltonian. \n\n\\section{Methodology}\n\nThis work addresses three main challenges. First, we show how to use quantum computing to solve the community detection problem, a well-known NP-hard problem. Second, we present an approach to solving realistic large problems using the NISQ hardware with a limited number of noisy qubits. Third, we demonstrate a method that is portable across two leading quantum computation paradigms and can be easily extended to future hardware.\n\nThe community detection problem (or modularity graph clustering) has a variety of applications ranging from biology to social network analysis~\\cite{palla2005uncovering,su2010glay,bardella2016hierarchical,nicolini2017community}. \nIts complexity \\cite{brandes2006maximizing} and practical importance justify an attempt to solve it using QC. The goal of community detection is to split the nodes of a graph $G=(V,E)$ into communities by maximizing its modularity~\\cite{2006PNAS..103.8577N}:\n\n\\begin{equation}\nH = \\frac{1}{4|E|}\\sum_{ij}(A_{ij} - \\frac{k_ik_j}{2|E|})s_is_j = \\frac{1}{4|E|}\\sum_{ij}B_{ij}s_is_j,\n\\label{eq:mod}\n\\end{equation}\n\n\\noindent where $s_i\\in\\{-1,+1\\}$ are variables indicating the community assignment of node $i$, $k_i$ is the degree of node $i\\in V$, and $A$ is the adjacency matrix of $G$.\nIn this paper, we will focus on clustering the graph into two communities. 
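For concreteness, the modularity objective in \eqref{eq:mod} can be evaluated directly for a given bipartition. The following NumPy sketch is our illustration, not the paper's implementation; the example graph and function name are ours:

```python
import numpy as np

def modularity(A, s):
    """Modularity H of a two-community assignment s in {-1,+1}^n,
    following Eq. (1): H = (1/4|E|) * sum_ij (A_ij - k_i k_j / 2|E|) s_i s_j."""
    k = A.sum(axis=1)                 # node degrees k_i
    m = k.sum() / 2                   # number of edges |E|
    B = A - np.outer(k, k) / (2 * m)  # modularity matrix B
    return float(s @ B @ s) / (4 * m)

# Example: two triangles joined by a single edge, split along that edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
s = np.array([1, 1, 1, -1, -1, -1])
print(modularity(A, s))  # ≈ 0.357
```

The positive score for the natural split of the two triangles reflects that within-community edges exceed what the null model predicts; the score is invariant under flipping the sign of $s$.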
There are several approaches to generalize the problem for cases when the number of communities is greater than 2.\n\nThe clustering of large networks is currently impossible with existing quantum computers because of the small number of available qubits.\nThis limitation applies both to quantum annealing~\\cite{ushijima2017graph} and universal quantum computing~\\cite{otterbach2017unsupervised}. To tackle large problems using available quantum hardware, we use a hybrid quantum-classical local-search approach. Our approach is inspired by numerous existing local-search heuristics (see~\\cite{rotta2011multilevel} for a review). Our algorithm finds a solution to the global community detection problem by selecting subproblems small enough to fit on the target quantum computer, solving them using a quantum algorithm, and iterating until the solution to the global problem is found. The outline is presented in Algorithm~\\ref{alg:outline}.\n\n\\begin{algorithm}\n \\caption{Community Detection}\\label{alg:outline}\n\\begin{algorithmic}\n\\Procedure{Community detection}{Graph $G$}\n\\State solution = initial\\_guess($G$)\n \\While{not converged}\n \t\\State $X$ = populate\\_subset($G$)\n \t\\State \/\/ \\textit{using QAOA or D-Wave QA}\n \t\\State candidate = solve\\_subproblem($G$, $X$)\n \t\\If{$\\mbox{candidate} > \\mbox{solution}$}\n \t\\State solution = candidate\n \\EndIf\n \\EndWhile\n \\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\nIn particular, we start with a random community assignment. At each step we select a subproblem (subset of vertices $X\\subset V$) by taking the vertices with the highest potential gain from moving them from one community to another. The gain for each vertex can be computed efficiently~\\cite{2006PNAS..103.8577N}. 
Then we fix the community assignment of all $i\\not\\in X$, encode them into the problem as boundary conditions (denoted by $\\tilde{s}_j$, a typical technique in many heuristics \\cite{leyffer2013fast,hager2018multilevel}) and maximize\n\n\\begin{equation}\n\\label{eq:subproblem}\n\\arraycolsep=1.4pt\n\\begin{array}{r c l}\nQ_{s} & = & \\sum_{i>j | i,j\\in X}2B_{ij}s_is_j + \\sum_{i\\in X}\\sum_{j\\not\\in X}2B_{ij}s_i\\tilde{s}_j \\\\\n & = & \\sum_{i>j| i,j\\in X}2B_{ij}s_is_j + \\sum_{i\\in X}C_{i}s_i. \n\\end{array}\n\\end{equation}\n\nThe subproblems are solved using QC. To satisfy the constraints of available hardware, we fix the subproblem size to some small number (in our experiments, it was 25). \n\n\n\\section{Implementation details and Results}\n\nWe implement our local search algorithm in Python using the graph methods provided by NetworkX~\\cite{hagberg2008}. The novelty of our approach is that it allows us to use D-Wave QA, QAOA, and classical Gurobi \\cite{optimization2014inc} solvers interchangeably simply by passing different flags, enabling rapid prototyping and direct comparison of different methods as the hardware and its capabilities evolve. Additionally, Gurobi was used as a global optimization solver for the sake of quality comparison. To our knowledge, this is the first attempt to directly compare universal quantum computing and quantum annealing. Our framework is also easily extendable, making it possible for researchers to add new backends as they become available. We plan to release the framework as an open-source project.\n\nOur results are presented in Figure \\ref{fig:results}. In these experiments, we used the Intel-QS~\\cite{smelyanskiy2016qhipster} simulator for QAOA (at the time our group did not have access to a universal quantum computer of sufficient size). We use six real-world networks from the KONECT dataset~\\cite{kunegis2013konect} with up to 400 nodes as our benchmark. 
For each network, we ran 30 experiments with different random seeds. The same set of seeds is used across the three backend solvers, making the results directly comparable. The subproblem size is fixed at 25 (i.e., 25 qubits are used). Our results demonstrate that the quantum local search approach with both quantum methods is capable of achieving results comparable to the state of the art, with the potential to outperform it as hardware evolves.\n\n\n \n\n\n\n\n \\vspace{-0.78cm}\n \n\\begin{figure}[htb]\n \\begin{tikzpicture} \n \\node (img) {\\includegraphics[width=0.8\\linewidth]{.\/fig\/mod_iter.pdf}};\n \\node (temp) [left=of img]{};\n \\node[below=of img, node distance=0cm, yshift=1.1cm,font=\\color{black}] {Network Name};\n \\node[below=of temp, node distance=2cm, anchor=center,yshift=-1.0cm,xshift=1.2cm, rotate=90,font=\\color{black}] {Num. Solver Calls};\n \\node[above=of temp, node distance=2cm, xshift=1.2cm,yshift=0.8cm, rotate=90, anchor=center,font=\\color{black}] {Modularity};\n \\end{tikzpicture}\n \\vspace{-0.4cm}\n \\caption{Box-plots comparing modularity scores (greater is better) and number of solver calls (fewer is better), respectively, for the three different solvers. For the graph {\\tt oz}, Gurobi and D-Wave returned a modularity score greater than the Global Solver (best known value).}\n \\label{fig:results}\n\\end{figure}\n \\vspace{-0.6cm}\n\\section{Discussion}\n\n\n\n\nIn the near term, quantum hardware will be in a constant state of change. Many different NISQ-era hardware solutions will appear and some will be abandoned. In the midst of such evolutionary times, we want to be able to continue research in quantum algorithms and head towards solving real-world problems. To accomplish this, we need portable, architecture-agnostic hybrid quantum-classical frameworks that will allow solving large-scale computational problems on small-scale quantum architectures. Moreover, these frameworks need to be robust and future-proof. 
In this work, we have presented a prototype of such a framework for solving the problem of community detection in networks on two distinctively different architectures: the D-Wave quantum annealer and a universal quantum computer. We suggest extending this approach for solving other types of problems in science.\n\nThe constant change of hardware and the overall immaturity of the existing technology lead to many risks in QC. Despite major effort, the ability to achieve speedups over state-of-the-art classical supercomputers has not yet been experimentally demonstrated, and there are valid concerns about the scalability of existing implementations~\\cite{brugger2018quantum}. Advances in material design and engineering will allow the community to overcome those hurdles. We expect QC to eventually become a part of the HPC ecosystem with an initial role as an accelerator providing a new layer of parallelism. Our approach will enable co-design exploration toward the best QC accelerator choice for a given application mix.\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\appendices\n\n\\section*{Acknowledgment}\nThis research used the resources of the Argonne Leadership Computing Facility, which is a U.S. Department of Energy (DOE) Office of Science User Facility supported under Contract DE-AC02-06CH11357. We gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory. The authors would also like to acknowledge the NNSA's Advanced Simulation and Computing (ASC) program at Los Alamos National Laboratory (LANL) for use of their Ising D-Wave 2X quantum computing resource and D-Wave Systems Inc. for use of their 2000Q resource. The LANL research contribution has been funded by LANL Laboratory Directed Research and Development (LDRD). LANL is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the U.S. DOE under Contract DE-AC52-06NA25396. 
Clemson University is acknowledged for a generous allotment of compute time on the Palmetto cluster.\n\n\n\n\\bibliographystyle{.\/IEEEtranS}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Motivation}\n\n\n This paper is motivated by work of Szlam, Maggioni, Coifman \\& Bremer \\cite{szlam2} and an observation made explicit by\nSzlam \\cite{szlam}: the domains produced by iterated spectral cuts induced by the nodal set of the first non-constant eigenfunction of the Neumann $p$-Laplacian seem to converge to rectangles in shape. \nIt is already observed in \\cite{szlam} that starting with an isosceles right triangle will lead to a spectral cut along the symmetry axis and produce two smaller isosceles right triangles: no convergence to rectangles takes place. However, this is unstable under small perturbations of the initial domain. \n\\begin{center}\n\\begin{figure}[ht!]\n\\includegraphics[width=0.85\\textwidth]{Fig1}\n\\caption{Iterated spectral cuts of the standard graph Laplacian seem to lead to rectangles in a generic, non-convex, non-smooth domain.}\n\\end{figure}\n\\end{center}\n\\vspace{-20pt}\n\nWe believe this to be a fascinating question in itself, but an affirmative answer would also be useful in guaranteeing that iterative spectral partitioning is an effective method to partition domains and, ultimately, graphs and data (see Irion \\& Saito \\cite{saito}, where this phenomenon is exploited). More importantly, a better understanding of this problem will shed light on the more general data analysis algorithms based on the $p$-Laplacian. More precisely, let $\\Omega \\subset \\mathbb{R}^2$ be an open, bounded, and connected domain. 
We now propose an iterative subdivision of $\\Omega$ as follows.\nFor any $1 \\leq p < \\infty$, the ground state of the $p-$Laplacian can be written as\n\\begin{equation} \\label{definition p Lap ground state}\n\\lambda_{1,p}(\\Omega) = \\inf_{\\int_{\\Omega}{f dx} = 0} \\frac{\\int_{\\Omega}{|\\nabla f|^p dx}}{ \\int_{\\Omega}{|f|^p dx}}.\\end{equation}\nIt is known that the function $f$ minimizing this functional exists and we can use it to iteratively define\n\\begin{equation}\n\\label{speccut}\n \\Omega_{n+1} = \\left\\{ x \\in \\Omega_n: f(x) \\geq 0\\right\\}\\,,\n \\end{equation}\nwhere $n=0,1,2,\\ldots$ and $\\Omega_0:=\\Omega$. \nThe function $f$ is only defined up to sign, so restricting to the part of the domain where it is positive is without loss of generality. We raise the following conjecture (and refer to the subsequent paragraphs for clarification and obvious obstacles).\n\\begin{quote} \\textbf{Main Conjecture.} If $\\Omega_0$ is not the isosceles right triangle (having angles 45-90-45), then the sequence of sets $(\\Omega_n)_{n=1}^{\\infty}$ converges to the set of rectangles with eccentricity\nbounded by 2 in the Gromov-Hausdorff distance.\n\\end{quote}\nIt is clear that one cannot expect convergence to a fixed rectangle: in general, the spectral cut of an $a \\times b$ rectangle with $a > b$ will be given by two $(a\/2) \\times b$ rectangles and, as long as $a \\leq 2b$,\nthe next cut would then yield two $(a\/2) \\times (b\/2)$ rectangles. 
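The rectangle-halving dynamics just described reduce to a one-dimensional map on aspect ratios: an $r:1$ rectangle ($r \geq 1$) becomes an $(r/2):1$ rectangle, re-oriented whenever the halves become taller than wide. A short Python sketch (ours, purely illustrative) shows that iterates of this map are eventually trapped in $[1,2]$, consistent with the eccentricity bound in the conjecture:

```python
def cut(r):
    """One spectral cut of an r:1 rectangle (r >= 1): halve the long side
    and return the aspect ratio (long/short) of the two congruent halves."""
    half = r / 2.0
    return half if half >= 1.0 else 1.0 / half

# Iterate from an arbitrary starting aspect ratio.
r = 13.7
ratios = []
for _ in range(12):
    r = cut(r)
    ratios.append(r)
print(ratios)
```

The map has the fixed point $\sqrt{2}$ and generically oscillates between some ratio $r^*$ and $2/r^*$, which is why one can only expect the even and odd subsequences to converge separately.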
This motivates a refined question.\n\n\\begin{quote} \\textbf{Question.} Do $(\\Omega_{2n})_{n=1}^{\\infty}$ and $(\\Omega_{2n+1})_{n=1}^{\\infty}$ converge in shape to a fixed rectangle?\n\\end{quote}\n\nThere is one obvious obstruction: if $\\Omega_{0}$ is not already a rectangle, then while performing iterated subdivisions, there is always a sequence of choices for the sign of the\neigenfunction that ensures that part of the boundary of $\\Omega_{n}$ coincides with part of the boundary of $\\Omega$ which could possibly be quite ill-behaved (say, fractal).\nHowever, for a sequence of choices of signs that leads to domains bounded away from $\\partial \\Omega_0$ ('deep' inside the domain) this should not be the case.\n\n\n\\begin{figure} \n\\centering\n\\includegraphics[width=.4\\textwidth]{Fig2-1}\\hspace{60pt}\n\\includegraphics[width=.4\\textwidth]{Fig2-2}\n\\caption{Left: the spectral cut of a quadrilateral determined by four corners, $(0, 0)$, $(\\pi\/25, 3\/5)$, $(1,0)$, and $(1, 3\/5-\\exp(1)\/100)$ provided by the graph 1-Laplacian. Right: the spectral cut of a shape close to a quadrilateral provided by the graph 1-Laplacian.}\\label{Fig:One}\n\\end{figure}\n\n\nWhile this particular question seems to be novel, the problem of trying to understand the geometry of nodal cuts induced by the $p-$Laplacian or general nonlinear operators has\nbeen studied for a long time. We refer to \\cite{plap,plap2,plap3,plap4,plap5,plap6,plap7,plap8,plap9,plap0,plap10,plap11} and references therein. 
We especially\nemphasize the works of Gajewski \\& G\\\"artner \\cite{plap7, plap8}, who study the behavior of the cut as $p \\rightarrow 1$ as a means of finding effective ways of separating the\ndomain into two roughly equally sized domains, as well as the work of Parini \\cite{plap11} studying the limit of the cut as $p \\rightarrow 1$ under Dirichlet boundary conditions.\nMany of these results are posed for Dirichlet conditions, where the effective functional as $p \\rightarrow 1$ is given by\n$$ \\inf_{E \\subset \\Omega}{ \\frac{ \\mathcal{H}^{n-1}(\\partial E)}{\\mathcal{H}^{n}(E)}} \\quad \\mbox{while the Neumann case induces} \\quad \\inf_{E \\subset \\Omega}{ \\frac{ \\mathcal{H}^{n-1}(\\partial E)}{\\mathcal{H}^{n}(E)\\mathcal{H}^{n}(\\Omega \\setminus E)}}.$$\nWhen the domain has Neumann boundary conditions, the quantity above is referred to as the {\\it Ratio Cut}.\nNew phenomena arise as a consequence. One interesting problem, which may also be of interest in the Dirichlet case, is the stability of the nodal set of the $p-$Laplacian\nas a function of $p$. \n\nIn the results presented here, we focus on the $1$-Laplacian, for which we can establish some preliminary results that suggest the stability of rectangular domains as attractors for the proposed spectral dynamical system on domains. In particular, henceforth, we refer to the {\\it $1$-spectral cut iteration} of a domain as the operation defined in \\eqref{speccut} with $p=1$ and the {\\it $1$-spectral cut} as the set $C_{n} := \\left\\{ x \\in \\Omega_n: f(x) = 0\\right\\}$. To simplify the terminology, we use {\\it spectral cut} and {\\it $1$-spectral cut} interchangeably. \n\n\\textbf{Numerics.} For the reader's convenience, we briefly recall here the numerical implementation of the $p$-Laplacian and its related spectral cut \\cite{Buhler_Hein:2009,Hein_Buhler:2010}.\nTake a point cloud $\\mathcal{X}:=\\{x_i\\}_{i=1}^N\\subset (\\mathcal M,d)$, where $\\mathcal M$ is a metric space with metric $d$. 
Construct an affinity matrix ${\\bf W}\\in \\mathbb{R}^{N\\times N}$, where ${\\bf W}_{ij}$ is the affinity between $x_i$ and $x_j$. It is associated with an undirected affinity graph with $N$ vertices and edge weights $w_{ij}={\\bf W}_{ij}$ for $i,j=1,\\ldots,N$. The {\\em graph $p$-Laplacian} is defined by\n\\begin{equation}\n\\Delta_p f(i)=\\sum_{j=1}^N w_{ij}\\phi_p(f(i)-f(j)),\n\\end{equation}\nwhere $f\\in\\mathbb{R}^N$ is a function defined on the vertices and $\\phi_p(x)=\\texttt{sign}(x)|x|^{p-1}$ for $x\\in \\mathbb{R}$. When $p=2$, this gives the bilinear form defined by the standard graph Laplacian ${\\bf D} - {\\bf W}$, where ${\\bf D}=\\texttt{diag}({\\bf W}{\\bf 1})$ and ${\\bf 1}$ is an $N$-dimensional vector with all entries $1$. Clearly, in general the graph $p$-Laplacian is nonlinear. We have\n\\begin{equation}\n\\langle f, \\Delta_p f \\rangle = \\frac12 \\sum_{i,j=1}^N w_{ij} |f_i - f_j|^p\n\\end{equation}\nfor $1 \\leq p < \\infty$. See for instance \\cite{Buhler_Hein:2009} for a discussion of the relationship between the variational formulation and the discrete operator formulation.\n\nA real number $\\lambda$ is called an eigenvalue of the graph $p$-Laplacian if there exists a non-zero vector $f\\in\\mathbb{R}^N$ so that \\cite[Definition 3.1]{Buhler_Hein:2009}\n\\begin{equation}\n(\\Delta_p f)_i=\\lambda \\phi_p(f_i)\\,,\n\\end{equation}\nwhere $i=1,\\ldots,N$. We call $f$ the $p$-eigenfunction of the graph $p$-Laplacian associated with the eigenvalue $\\lambda$. 
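As a quick sanity check of the definition above (our own snippet; this is not the inverse power method of \cite{Hein_Buhler:2010}), one can apply $\Delta_p$ entrywise and confirm that for $p=2$ it reduces to the linear operator ${\bf D}-{\bf W}$:

```python
import numpy as np

def graph_p_laplacian(W, f, p):
    """Apply the graph p-Laplacian: (Delta_p f)(i) = sum_j w_ij phi_p(f_i - f_j),
    with phi_p(x) = sign(x) |x|^(p-1)."""
    diff = f[:, None] - f[None, :]                 # matrix of differences f_i - f_j
    phi = np.sign(diff) * np.abs(diff) ** (p - 1)  # phi_p applied entrywise
    return (W * phi).sum(axis=1)

# Small symmetric affinity matrix (unweighted path graph on 4 vertices).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = np.array([0.0, 1.0, 3.0, 2.0])

# For p = 2, Delta_2 reduces to the standard graph Laplacian D - W.
D = np.diag(W.sum(axis=1))
print(graph_p_laplacian(W, f, 2))  # both lines print the same vector
print((D - W) @ f)
```

For $p=1$ the same routine returns weighted sums of signs of the differences, which is the degeneracy that produces piecewise-constant minimizers in the continuum limit.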
We know that ${\\bf 1}$ is the trivial eigenvector with eigenvalue $0$, and we have \\cite[Lemma 3.2]{Buhler_Hein:2009}\n\\begin{equation}\n{\\bf 1}^T\\phi_p(f)=0\n\\end{equation} \nif the eigenvector $f$ is non-trivial.\nIt is shown in \\cite{Buhler_Hein:2009} that a non-zero function $f$ is an eigenvector if and only if it is a critical point (e.g., a local minimum) of the functional \n\\begin{equation}\nF_p(f)=\\frac{\\langle f, \\Delta_p f \\rangle}{\\|f\\|_{\\ell^p}^p}\\,.\n\\end{equation}\nNote that $F_p(f)$ is the discretization of the functional shown in \\eqref{definition p Lap ground state}.\nIn this work, our main interest is the second $p$-eigenvector of the graph $p$-Laplacian, which we use for spectral clustering. In \\cite{Buhler_Hein:2009}, the second eigenvalue is shown to be the global minimum of the functional\n\\begin{equation}\nF^{(2)}_p(f):=\\frac{\\langle f, \\Delta_p f \\rangle}{\\texttt{var}_p(f)},\n\\end{equation}\nwhere \n\\begin{equation}\n\\texttt{var}_p(f)=\\min_{c\\in\\mathbb{R}}\\sum_{i=1}^N |f_i-c|^p\\,,\n\\end{equation} \nand the corresponding eigenvector is then given by \n\\begin{equation}\nf_p^{(2)}=f^*-c^*{\\bf 1}, \n\\end{equation}\nwhere $f^*$ is any global minimizer of $F^{(2)}_p$ and $c^*=\\arg\\min_{c\\in \\mathbb{R}} \\sum_{i=1}^N |f^*_i-c|^p$. \nAs in the usual spectral clustering, once we have $f_p^{(2)}$, we cluster the point cloud by the signs of its entries. \nFor $p=1$, \nthe consistency of the spectral cut with the graph $1$-Laplacian was studied in \\cite{trillos2016consistency}.\nNumerically, we apply the nonlinear inverse power method proposed in \\cite{Hein_Buhler:2010} to compute the iterative bipartition of a given two-dimensional domain.\nIn Figure \\ref{Fig:One}, we present some numerically computed Ratio Cuts for nearly rectangular domains. \n\n\n\\textbf{Outline.} The paper proceeds as follows. In Section \\ref{sec:results}, we highlight the two main theorems we can prove on the stability of near-rectangular domains. 
We also present some open problems that arise naturally as generalizations of these theorems. In Section \\ref{sec:pf1}, we give the full proof that the spectral cut algorithm converges to a rectangle with bounded aspect ratio if the initial domain is near a rectangle in the Gromov-Hausdorff sense, as will be carefully laid out below. Section \\ref{sec:pf3} provides the details for the proof that quadrilaterals near the rectangle of aspect ratio $2$, in the sense of small angle deviations from $90$ degrees, will converge under the spectral cut algorithm to rectangles with bounded aspect ratio. In the appendix we gather some long calculations that are useful in analyzing the Ratio Cut in a neighborhood of a quadrilateral.\n\n\n\n\n\\section*{Acknowledgements} \nWe thankfully acknowledge the generous support of NSF CAREER Grant DMS-1352353 (J.L. Marzuola \\& W. Hamilton). We thank Stefan Steinerberger for starting this project with us and for many helpful conversations along the way. \n\n\n\\section{results} \n\\label{sec:results}\nWe prove two results that, while not establishing the main conjecture, do seem to suggest a mechanism by which this procedure happens.\n$$ \\mbox{certain shapes} \\underbrace{\\implies}_{\\mbox{Theorem 1}} \\mbox{nearly straight cuts} \\implies \\mbox{curved quadrilaterals} \\underbrace{\\implies}_{\\mbox{Theorem 2}} \\mbox{rectangles.}$$\nThe missing steps are as follows: (1) we do not know whether a generic $\\Omega_0 \\subset \\mathbb{R}^2$ will ever produce domains $\\Omega_n$ for which Theorem 1 becomes applicable, for example, when $\\Omega_0$ has a fractal boundary; \nand (2) we do not know whether the dynamical system on the space of rectangles ever produces a quadrilateral sufficiently close to the set of rectangles for Theorem 2 to become applicable.\n\n\n\\subsection{Rectangular stability.} The first of the two main results states that domains that 'roughly' look like rectangles are cut\nin the middle. 
More formally, let us carefully describe the domains we will consider.\n\\begin{assumption}\n\\label{rectangle_assumptions}\nWe will work with domains that are small perturbations of a rectangle in the following sense:\n\\begin{itemize} \n\\item The domain $Q$ is a perturbed\nrectangle that is close in the Gromov-Hausdorff distance to a reference rectangle $R$: \n$$d_{GH}(Q, R) \\leq \\varepsilon \\left( \\mbox{length of the shorter side of}~R\\right)\\,,$$\nwhere $\\varepsilon>0$ is sufficiently small.\n\\item\nIn a roughly $10\\sqrt{\\varepsilon}-$neighborhood of the two intersection points of the $1$-spectral cut of $R$ with the boundary, the boundary of $Q$ \ncan be written as graphs of functions of the associated boundary segments of $R$ (see Fig. \\ref{fig:cut}). Moreover, each of these functions can be well approximated by a parabola with bounded, small curvature and small Lipschitz constant.\n\\end{itemize}\n\\end{assumption}\n\n\\begin{center}\n\\begin{figure}[ht!]\n\\begin{tikzpicture}[scale = 2]\n\\draw [] (0,0) -- (1.61,0) -- (1.61, 1) -- (0, 1) -- (0,0);\n\\draw [dashed] (0.8,-0.2) -- (0.8, 1.2);\n\\draw [ultra thick] (0,0) to[out=10,in=170] (1, 0);\n\\draw [ultra thick] (1,0) to[out=340,in=200] (1.61, 0);\n\\draw [ultra thick] (1.61,0) to[out=80,in=280] (1.61, 1);\n\\draw [ultra thick] (1.61,1) to[out=170,in=10] (0, 1);\n\\draw [ultra thick] (0,1) to[out=260,in=100] (0, 0);\n\\draw [ultra thick, dashed] (0.8, -0.2) to[out=85,in=277] (0.81, 1.2);\n\\end{tikzpicture}\n\\caption{A perturbed rectangle $Q$ close to a reference rectangle $R$: the longer part of the boundary of $Q$ can be written as the graph of a Lipschitz function. The goal is to show that the spectral cut of $Q$ is close to that of $R$.}\n\\label{fig:cut}\n\\end{figure}\n\\end{center}\n\n\\begin{theorem}\n\\label{thm1} \nFor a domain satisfying Assumption \\ref{rectangle_assumptions}, the spectral cut occurs in a $\\sqrt{\\varepsilon}-$neighborhood of the midpoint of the long axis. 
Moreover, there is a function $c(\\varepsilon)$ tending to\n0 as $\\varepsilon \\rightarrow 0$ such that, if said part of the boundary can be written as a Lipschitz function with Lipschitz constant $L \\leq \\sqrt{2} - c(\\varepsilon)$ in the $\\sqrt{\\varepsilon}-$neighborhood of the spectral cut of $R$, then the spectral cut of $Q$\nis a circular arc near a straight line.\n\\end{theorem}\n\nNumerical examples suggest that this result starts becoming applicable rather quickly: already a small number of cuts seems to suffice to produce roughly\nrectangular shapes. As soon as Theorem 1 becomes applicable, the spectral cuts will start being closer and closer to straight lines, which immediately implies that many shapes will end\nup approaching quadrilaterals after several additional steps. The next result shows that as soon as we are dealing with quadrilaterals close to a rectangle, the procedure\nis smoothing and produces quadrilaterals closer to the rectangle; this has to be understood in the usual sense of 'cuts' away from the boundary: a quadrilateral\nhaving a $91^{\\circ}$ angle will always pass this angle on to one of its descendants. \nWe note a certain similarity of Theorem \\ref{thm1} to the result of Grieser-Jerison \\cite{GrieserJerison} for the standard Laplacian on rectangular domains of high aspect ratio with one side described by a curve. See also the recent work \\cite{BCM} where general curvilinear quadrilaterals were considered. 
\n\n\\subsection{Near Curvilinear Quadrilaterals} We settle our conjecture in the case of $\\Omega$ being near a quadrilateral that is close to a rectangle: we prove convergence to a rectangular shape in the Gromov-Hausdorff distance 'away from the boundary' (in the sense\ndiscussed above: rectangles incorporating parts of the initial boundary or boundaries created by very early initial spectral cuts need never be regular).\nWe introduce a qualitative way of measuring distance to quadrilaterals as follows: given a curvilinear quadrilateral $Q$ with angles $\\alpha, \\beta, \\gamma, \\delta$ and sides described by the curves $y = \\gamma_1 (x)$, $y = \\gamma_2(x)$, $x = \\gamma_3(y)$ and $x = \\gamma_4 (y)$, we\ndefine the functional\n\\begin{equation}\\label{Definition I(Q)}\nI(Q) = \\left| \\alpha- \\frac{\\pi}{2} \\right| + \\left| \\beta- \\frac{\\pi}{2} \\right| + \\left| \\gamma- \\frac{\\pi}{2} \\right| + \\left| \\delta- \\frac{\\pi}{2} \\right| + \\max_{1 \\leq j \\leq 4} \\sup_s (|\\gamma_j' |(s) + |\\gamma_j'' |(s)+ |\\gamma_j''' |(s)) .\n\\end{equation}\nWe particularly want our domains to be well approximated by parabolic curves on the sides intersecting the ratio cut, and will refer to such domains as {\\it approximately parabolic curvilinear quadrilaterals}. \n\n\\begin{remark}\nThis is actually quite a bit more restrictive than one needs, though is certainly sufficient; otherwise, the theorem becomes harder to frame as one must introduce a large number of cases for behaviors of each boundary component of the domain. However, in the calculations below, we will work with domains that are perturbations of a base rectangle with a reasonable aspect ratio, and whose top and bottom boundary components can be well-approximated by parabolas in a neighborhood near the axis of symmetry of the base rectangle. 
As a result, we can dramatically change regularity requirements for the left and right curves in our model calculations as long as the perturbations are nearly symmetric in area and bounded by a sufficiently small constant. \n\\end{remark}\n\n\\begin{theorem} \n\\label{thm2}\nFor an approximately parabolic curvilinear quadrilateral of aspect ratio near $1:2$, there exists $\\varepsilon_1 > 0$ such that if $I(Q) \\leq \\varepsilon_1$, then the $1$-spectral cut induced\nby the $L^1-$Laplacian is a circular arc with opening angle bounded above by $\\varepsilon_1\/8$. \n\\end{theorem}\nThe proof also shows that the constant $1\/8$ cannot be further improved.\nThis result implies that near-quadrilateral regions have $1$-spectral cuts that are closer to a quadrilateral in Gromov-Hausdorff distance. This implies exponentially fast convergence to rectangles in shape. The way the result is obtained actually allows for a fairly precise understanding of what happens (in particular, it can be used to show that there is a choice of signs such that $I(Q_n)$ grows substantially along the subsequence). We refer to the proof for details. \n\n\n\\subsection{Open problems} These results naturally suggest several open problems; we only name a few that seem \nparticularly natural.\n\n\\begin{enumerate}\n\n\\item Prove the main conjecture for $p=1$; that is, show that a generic $\\Omega_0 \\subset \\mathbb{R}^2$ will produce domains $\\Omega_n$ for which Theorem 1 can be applied, and show that the dynamical system on the space of rectangles produces a quadrilateral sufficiently close to the set of rectangles for Theorem 2 to become applicable. Is it possible to transfer some of the arguments to the range $p \\in (1,1+\\varepsilon_0)$? What happens\nas $p \\rightarrow \\infty$? \n\\item What happens in higher dimensions, or even to domains with curvature, like manifolds? 
One would still expect the cut to have a smoothing effect but the types of geometric obstruction that\none could encounter in the process may be more complicated. Is the generic limit given by a rectangular box? If not, is there a finite set of shapes\nthat can arise in the limit?\n\\item It seems natural to expect that similar results should hold true under Dirichlet conditions. This would require the study of the variational problem \n$$ \\inf_{E \\subset \\Omega}{ \\frac{ \\mathcal{H}^{n-1}(\\partial E)}{\\mathcal{H}^{n}(E)}}.$$\nCan the results be extended to this case?\n\\item Experimentally, we observe that the nodal line is (generically) fairly stable under small perturbations of the value of $p$. Can any result in this\ndirection be made precise? Does it help to assume the domain $\\Omega$ to be convex?\n\\end{enumerate}\n\n\n\n\n\\section{Proof of Theorem 1}\n\\label{sec:pf1}\n\n\\subsection{Preliminaries} The crucial ingredient in our approach is that the $p$-Laplacian degenerates as $p \\rightarrow 1^+$. The minimum of the associated energy functional characterizing the ground state is not assumed by any continuous\nfunction: reinterpreting the functional in terms of total variation, the extremal function is constant on two sets in the domain that are separated by\nan $(n-1)$-dimensional hypersurface. More precisely, we have the following consequence of the coarea formula (see \\cite{stein} for further implications\nof this result).\n\n\n\\begin{thm}[Cianchi, \\cite{Cianchi}] Let $\\Omega \\subset \\mathbb{R}^{n}$ be open, bounded and connected.
Then we have the sharp Poincar\\'e-type inequality for any sufficiently smooth functions $u$:\n$$\\left\\|u-\\frac{1}{|\\Omega|}\\int_{\\Omega}{u(z)dz}\\right\\|_{L^{1}(\\Omega)} \\leq\n \\left(\\sup_{\\Gamma \\subset \\Omega}{\\frac{2}{\\mathcal{H}^{n-1}(\\Gamma)}\\frac{|S||\\Omega \\setminus S|}{|\\Omega|}}\\right)\\left\\|\\nabla u\\right\\|_{L^{1}(\\Omega)},$$\nwhere $\\Gamma \\subset \\Omega$ ranges over all surfaces which divide $\\Omega$ into two connected open subsets $S$ and $\\Omega \\setminus \\overline S$. \n\\end{thm}\n\nThis means that the nodal line $\\Gamma$ defining the $1$-spectral cut is a hypersurface partitioning $\\Omega$ into two sets $S, \\Omega \\setminus \\overline S$\n$$ \\mbox{so as to minimize the quantity} \\qquad \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|}.$$\nIt is relatively easy to check that this value is assumed if we relax the $L^1$-norm of the gradient and re-interpret it as total variation. Let us assume that $f(x) = a \\chi_{S} + b \\chi_{\\Omega \\setminus S}$.\nWe want $f$ to have mean value 0, which leads to\n$$ a |S| + b |\\Omega \\setminus S| = 0 \\quad \\mbox{and thus} \\quad b = -a\\frac{|S|}{|\\Omega \\setminus S|} \\quad \\mbox{implying} \\quad \\|f\\|_{L^1} = 2a |S|,$$\nwhere we take $a>0$ without loss of generality.\nMoreover, the ``formal'' contribution to the total variation interpretation of the gradient is given by\n$$ \\|\\nabla f \\|_{L^1} = |\\Gamma| (a - b) =|\\Gamma| \\left(a + a\\frac{|S|}{|\\Omega \\setminus S|} \\right).$$\nThis implies\n$$ \\frac{ \\|\\nabla f \\|_{L^1} }{ \\|f\\|_{L^1}} = \\frac12 |\\Gamma| \\left( \\frac{1}{|S|} + \\frac{1}{|\\Omega\\setminus S|}\\right) = \\frac12 |\\Gamma| \\frac{|\\Omega|}{|S| |\\Omega \\setminus S|}.$$\nTo be rigorous, convolution with a smooth mollifier shows that one can get arbitrarily close to the optimal constant with smooth functions. 
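The two-valued computation above can be sanity-checked numerically. The following sketch (our own illustration, not part of the argument) verifies, for a rectangle split by a straight vertical cut, that the total-variation quotient of the mean-zero function $f = a\chi_S + b\chi_{\Omega \setminus S}$ equals $\frac{1}{2}\,|\Gamma|\,|\Omega| / (|S|\,|\Omega \setminus S|)$:

```python
# Sanity check (ours, not from the paper): for the mean-zero two-valued
# function f = a*chi_S + b*chi_{Omega \ S}, the total-variation quotient
# ||grad f||_{L1} / ||f||_{L1} equals (1/2) |Gamma| |Omega| / (|S| |Omega\S|).
# Test domain: rectangle [0,A] x [0,B], cut by the vertical line x = c.

A, B, c = 2.0, 1.0, 0.7      # rectangle sides and cut position (arbitrary)
omega = A * B                # |Omega|
S = c * B                    # |S|, the part left of the cut
comp = omega - S             # |Omega \ S|
gamma_len = B                # |Gamma|, length of the vertical cut

a_val = 1.0                  # value of f on S (arbitrary positive)
b_val = -a_val * S / comp    # mean-zero condition a|S| + b|Omega \ S| = 0

l1_norm = a_val * S + abs(b_val) * comp   # ||f||_{L1} = 2 a |S|
tv_norm = gamma_len * (a_val - b_val)     # ||grad f||_{L1} as total variation

lhs = tv_norm / l1_norm
rhs = 0.5 * gamma_len * omega / (S * comp)
print(lhs, rhs)              # the two quotients agree
```

The same check passes for any cut position $0 < c < A$, reflecting that the identity is exact rather than asymptotic.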
We will now show that, for a rectangle, the optimal spectral cut splits the domain via a straight line cut intersecting the longest sides at right angles. The crucial ingredient here is that the argument does not appeal to symmetry,\nis stable under perturbations and easily implies the same results for domains that are merely close to rectangles in the Gromov-Hausdorff distance.\n\n\\begin{lemma} \\label{Lemma: rectangle result}\nLet $R = [0,a] \\times [0,b] \\subset \\mathbb{R}^2$ with $a > b$. The $1$-spectral cut is exactly at $\\left\\{a\/2\\right\\} \\times [0,b]$.\n\\end{lemma}\n\\begin{proof}\nThe cut in the middle of the longer side yields\n$$ \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|} = \\frac{b (ab)}{(ab)^2\/4} = \\frac{4}{a}.$$\nIt is clear that other spectral cuts touching the longer sides have $ \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|} \\geq \\frac{4}{a},$ since $\\mathcal{H}^1(\\Gamma)\\geq b$ and $|S| |\\Omega \\setminus S|\\leq (ab)^2\/4$. \n\\begin{center}\n\\begin{figure}[ht!] \n\\begin{tikzpicture}[scale = 2]\n\\draw [ultra thick] (0,0) -- (1.61,0) -- (1.61, 1) -- (0, 1) -- (0,0);\n\\draw [dashed] (0.8,0) -- (0.8, 1);\n\\draw [thick] (0.7,0) to[out=19,in=140] (1.61, 0.6);\n\\node at (1.5,0.2) {$S$};\n\\node at (0.3,0.6) {$\\Omega \\setminus S$};\n\\end{tikzpicture}\n\\caption{The geometric construction in the proof of Lemma 1. }\n\\label{fig4}\n\\end{figure}\n\\end{center}\n\nIf $\\Gamma$ touches two opposite sides, then\n$$ \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|} \\geq b \\frac{ |\\Omega|}{|S| |\\Omega \\setminus S|} \\geq \\frac{b (ab)}{(ab)^2} \\min_{0 \\leq x \\leq 1}{\\frac{1}{x(1-x)}} \\geq \\frac{4}{a}.$$\nEquality can only arise if the cut has length $b$ (forcing it to be a line) and $x=1\/2$ (forcing a split into two domains of the same area). 
This characterizes\nexactly the cut in the middle.\n\n\nIt remains to deal with the case where the spectral cut touches two adjacent sides; this case is illustrated in Figure \\ref{fig4}. Let us assume that the enclosed domain is denoted by $S$, $|S| = x |\\Omega| = x ab$ for some $0 < x < 1$, and $\\Gamma = \\partial S \\cap \\mbox{int}~R$ is the part of the boundary curve strictly inside the rectangle.\nThe isoperimetric inequality implies that\n$$ \\mathcal{H}^1(\\Gamma) \\geq \\sqrt{\\pi}\\sqrt{x a b},$$\nand thus\n$$ \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|} \\geq \\frac{ \\sqrt{\\pi x a b} ab }{a^2 b^2 x (1-x)} = \\frac{\\sqrt{\\pi}}{\\sqrt{x}(1-x)} \\frac{1}{\\sqrt{a}\\sqrt{b}}.$$\nAn explicit computation shows that for all $0 < x < 1$\n$$ \\frac{1}{\\sqrt{x}(1-x)} \\geq \\frac{3\\sqrt{3}}{2}$$\nand therefore, using $a \\geq b,$\n$$ \\frac{\\mathcal{H}^1(\\Gamma) |\\Omega|}{|S| |\\Omega \\setminus S|} \\geq \\frac{3\\sqrt{3 \\pi}}{2} \\frac{1}{a} \\geq \\frac{4.6}{a}.$$\nThis is always a constant fraction worse than the straight cut through the middle of the longer sides, concluding the argument.\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk1}\nIt is easily seen that all aspects of the argument are stable under (even moderately large) perturbations of the domain in the sense of Gromov-Hausdorff distance. To be specific, take a domain $Q$ satisfying Assumption \\ref{rectangle_assumptions}, and denote the length of the shorter side of the associated rectangle $R$ by $h$. By Assumption \\ref{rectangle_assumptions}, the Gromov-Hausdorff distance between $Q$\nand $R$ is $d_{GH}(Q, R) \\leq \\varepsilon h$. There is clearly a straight line that is parallel to the shorter side of $R$ and bisects the area: the\nbound on the Gromov-Hausdorff distance implies that this straight line has length at most $h + 2h\\varepsilon$.
Therefore,\n$$ \\inf_{\\Gamma}{\\frac{\\mathcal{H}^1(\\Gamma)}{|S| |\\Omega \\setminus S|}} \\leq 4h + 8h \\varepsilon.$$\nArguments as in Lemma 1 show that it has to cut the longer sides roughly in the middle. \n\\end{remark}\n\n\n\nThe second ingredient is a straightforward consequence of the isoperimetric inequality that we need to make quantitative.\n\\begin{center}\n\\begin{figure}[ht!]\n\\begin{tikzpicture}[scale = 6]\n\\draw [ultra thick] (0,0) -- (1,0);\n\\node at (-0.02, -0.04) {$a$};\n\\node at (1.02, -0.04) {$b$};\n\\node at (0.5, 0.05) {$\\Omega$};\n\\draw [ultra thick] (0,0) to[out=20,in=180] (0.4, 0.08);\n\\draw [ultra thick] (0.4,0.08) to[out=20,in=160] (1, 0);\n\\end{tikzpicture}\n\\caption{A curve slightly longer than the straight line.}\n\\label{fig:areagain}\n\\end{figure}\n\\end{center}\n\n\n\n\n\\begin{lemma} Consider a simply connected domain $\\Omega$ as shown in Figure \\ref{fig:areagain}, bounded by the straight line segment between the points $a$ and $b$ and an arbitrary curve joining them. For every $\\varepsilon_0 >0$\nthere exists an $\\varepsilon_1 > 0$ such that if\n\\begin{equation}\n\\label{assump1}\n|\\partial \\Omega| \\leq \\left(1+\\varepsilon_1\\right) 2\\|a - b\\|,\n\\end{equation}\nthen\n$$ |\\Omega| \\leq \\frac{1 + \\varepsilon_0}{\\sqrt{6}} \\| a - b\\|^{3\/2} \\sqrt{|\\partial \\Omega| - 2 \\|a-b\\|}.$$\n\\end{lemma}\n\\begin{proof} The isoperimetric inequality immediately implies that the optimal curve defining the Ratio Cut for $Q$ satisfying Assumption \\ref{rectangle_assumptions} must take the form of a circular arc; otherwise, we could fix $a,b$ on the boundary of a circle\nand use the new shape to create a set in the plane that encloses more area than the disk whose boundary has the same length. \n\n It remains to compute the constants for the circular arc.\nChoose a coordinate system with $a,b$ at $(0,0),(\\|a-b\\|,0)$ respectively.
The opening angle $\\alpha$ of the sector of the disk with radius $r>0$ that produces such an arc is given by\n$$ \\alpha = 2 \\arcsin{\\left( \\frac{\\|a-b\\|}{2r} \\right)},$$\nwhere $0 < \\alpha < \\frac{\\pi}{2}$ is sufficiently small by assumption \\eqref{assump1}.\nAs a consequence, the length of the circular arc is given by\n$$ r \\alpha = 2r \\arcsin{\\left( \\frac{\\|a-b\\|}{2r}\\right)} \\geq \\|a-b\\| + \\frac{\\|a-b\\|^3}{24} \\frac{1}{r^2},$$\nand so\n\\begin{equation}\n\\label{pomegabd}\n |\\partial \\Omega| \\geq 2\\|a-b\\| + \\frac{\\|a-b\\|^3}{24} \\frac{1}{r^2}.\n \\end{equation}\nIn particular, by assumption \\eqref{assump1}, we have that \n$$ \\frac{\\|a-b\\|^3}{24} \\frac{1}{r^2} \\leq 2\\varepsilon_1\\|a-b\\| \\qquad \\mbox{and thus} \\qquad \\frac{\\|a-b\\|^2}{r^2} \\leq 48 \\varepsilon_1.$$\n\nThe enclosed area captured by the line segment and the circular arc is \n$$\\frac{ r^2 \\pi \\alpha}{2\\pi} - r^2 \\cos{\\left(\\frac{\\alpha}{2}\\right)} \\sin{\\left(\\frac{\\alpha}{2}\\right)}.$$\nWe have\n$$ \\frac{ r^2 \\pi \\alpha}{2\\pi} = r^2 \\arcsin{\\left( \\frac{\\|a-b\\|}{2r} \\right)} = r \\frac{\\|a-b\\|}{2} + \\frac{\\|a-b\\|^3}{48 r} + \\mbox{higher order terms}$$\nas well as\n$$ \\cos{\\left(\\frac{\\alpha}{2}\\right)} = \\sqrt{1 - \\frac{\\|a-b\\|^2}{4r^2}} \\quad \\mbox{and} \\quad \\sin{\\left(\\frac{\\alpha}{2}\\right)} = \\frac{\\|a-b\\|}{2r}.$$\nThe higher order terms in the expansion are an infinite series in $\\|a-b\\|\/r$ with exponents decaying fast enough. 
For $\\varepsilon_1$ sufficiently\nsmall depending on $\\varepsilon_0$, we can bound\n$$ \\mbox{higher order terms} \\leq \\varepsilon_0 \\frac{\\|a-b\\|^3}{48 r} .$$\nAltogether this implies that\n$$\\mbox{area} \\leq (1 + \\varepsilon_0) \\frac{\\|a-b\\|^3}{12r}$$\nand plugging in the bound on $1\/r$ from \\eqref{pomegabd} gives\n$$\\mbox{area} \\leq \\frac{1 + \\varepsilon_0}{\\sqrt{6}} \\| a - b\\|^{3\/2} \\sqrt{|\\partial \\Omega| - 2 \\|a-b\\|}.$$\n\\end{proof}\n\n\n\n\n\n\\subsection{Proof of Theorem 1}\n\n\\begin{proof} \nWe assume without loss of generality $|Q| = 1$. For this proof we minimize, over all possible set partitions $Q = S \\cup (Q \\setminus S)$, the quotient\n$$ \\inf_{\\Gamma} \\frac{\\mathcal{H}^1(\\Gamma) }{|S| | Q \\setminus S|},$$\nwhere $\\Gamma=\\overline S\\cap \\overline{Q \\setminus S}$.\n\nLemma \\ref{Lemma: rectangle result} and Remark \\ref{rmk1} show that the cut must intersect opposite sides; if the aspect ratio is large enough, then these opposite sides will be the longest sides. \n\n\n\n\n\nWe now show that the distance to the axis of symmetry of $R$ is of order $\\lesssim \\sqrt{\\varepsilon}$.\nSince the Gromov-Hausdorff distance between the domain and the rectangle is $\\varepsilon$, by the same calculation in Remark \\ref{rmk1} we see that any cut has to have length at least\n$ h (1- 2\\varepsilon)$, where $h$ is the length of the shorter side of the associated rectangle $R$. Note that the shortest line between two points may not make the optimal Ratio Cut, though, as that may result in a less favorable area splitting. Hence, one possibility is that we get a gain from having a slightly longer curve to encompass a more favorable area, but then by the isoperimetric inequality, it can be seen that circular arcs are favorable to any other configuration in terms of length to area trade-offs. A key example of this is the trapezoidal domain with angled top and bottom boundary curves. 
Therefore the optimal spectral cut $Q = A \\cup B$ satisfies\n$$ \\frac{h(1-2\\varepsilon)}{|A| |B|} = \\frac{h(1-2\\varepsilon)}{|A| (1-|A|)} \\leq \\frac{\\mathcal{H}^1(\\Gamma)}{|S| |Q \\setminus S|} \\leq 4h(1 + 2 \\varepsilon),$$\nwhere the middle quotient is evaluated at the optimal cut: the lower bound uses that every admissible cut has length at least $h(1-2\\varepsilon)$, while the upper bound comes from the bisecting test line.\nThus, for $\\varepsilon \\leq 0.1$\n\\begin{equation}\n\\frac{1}{|A| (1-|A|)} \\leq 4\\frac{1 + 2\\varepsilon}{1 - 2 \\varepsilon} \\leq 4 + 20\\varepsilon.\\label{bound of A(1-A)}\n\\end{equation}\nWhen combined with the elementary inequality\n$$ \\frac{1}{x(1-x)} \\geq 4 + 16\\left(x-\\frac{1}{2}\\right)^2,$$\nwe have that the optimal spectral cut yields two sets satisfying\n$$ \\frac{1}{2} - \\sqrt{\\frac{5\\varepsilon}{4}} \\leq |A| \\quad \\mbox{and}\\quad |B| \\leq \\frac{1}{2} + \\sqrt{\\frac{5\\varepsilon}{4}}.$$\nThe Lipschitz bound then ensures that the spectral cut's intersection with $\\partial Q$ occurs in a $\\sqrt{\\varepsilon}$-small neighborhood of $Q$'s axis of symmetry. \n\nSince the functional itself only contains the length $\\mathcal{H}^1(\\Gamma),$ as well as a term depending on the partition of the areas $|S||\\Omega\\setminus S|$, we can conclude that the optimal $\\Gamma$ is a circular arc; otherwise, we can minimize the length of the curve while keeping the enclosed areas fixed.
Indeed, this boils down to the question of minimizing the arc-length of a curve enclosing a fixed area,\n\\[\n\\text{arg min}_{y = \\gamma (x)} \\left\\{ \\int_a^b \\sqrt{1 + (\\gamma'(x) )^2} dx \\ \\bigg| \\ \\int_a^b |\\gamma| dx \\,\\,\\mbox{ fixed}\\right\\},\n\\] \nwhich is minimized by a circular arc via the isoperimetric inequality.\nWe already know from above that $\\mathcal{H}^1(\\Gamma) \\leq (1+2\\varepsilon) h$, so Lemma 2 implies that the area captured by the curved\narc satisfies\n$$ \\mbox{captured area} \\leq \\frac{1 + c_{\\varepsilon}}{\\sqrt{6}} h^{3\/2} \\sqrt{\\mathcal{H}^1(\\Gamma) - h} \\,\\leq \\frac{1 + c_{\\varepsilon}}{\\sqrt{3}} h^2 \\sqrt{\\varepsilon} ,$$\nwhere $c_{\\varepsilon} > 0$ depends on $\\varepsilon$, but does tend to 0 as $\\varepsilon$ tends to $0$. \n\nIn summary, the above conclusions show that the Ratio Cut $\\Gamma$ must be a circular arc in a $\\sqrt{\\varepsilon}$-neighborhood of the axis of symmetry of the reference rectangle from Assumption \\ref{rectangle_assumptions}. \n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm2} for Curvilinear Quadrilaterals with Parabolic Top and Bottom Curves}\n\\label{sec:pf3}\n\nTo prove Theorem \\ref{thm2}, we will first analyze how the Ratio Cut depends on parameters determining the domain $Q$ and circular arc cut $\\Gamma$ for a true quadrilateral. Then, we will demonstrate for a simple example of a curvilinear trapezoid with one parabolic edge and three flat edges that the curvature has a smaller order impact on the Ratio Cut than the trapezoidal feature. Lastly, we will prove the theorem for even more general domains with parabolic top and bottom bounding curves. These domains will be shown to be generic in Section 5, at least to leading order in how the Ratio Cut depends upon the structure of the curves that make up the longest sides of our approximately parabolic curvilinear quadrilaterals.
\n\nConsider the quadrilateral $Q$ determined by the vertices\n$$ (0,0), (x_1, a), (1,0) \\quad \\mbox{and} \\quad (1+x_2, a+x_3),$$\nwhere we assume that $ 0 < a < 1$ (implying that we have normalized the quadrilateral and put the longer side on the $x-$axis). The quantities $x_1, x_2, x_3$\nare perturbation parameters and assumed to be small, in the sense that $|x_i| \\ll \\min\\left\\{a,1- a\\right\\}$. The condition $|x_i| \\ll a$ is clear; we want the perturbation to be small with respect to length and height of the rectangle. The other condition is slightly more subtle: if it is not satisfied, then the quadrilateral might be close to a square and the location of the spectral cut will depend nonlinearly on the perturbation parameters. The cut is still going to be a circular arc nearly bisecting the domain, but a slight perturbation of $x_1, x_2, x_3$ may send $\\Gamma$ from being nearly vertical to nearly horizontal (see Fig. \\ref{fig:quad}).\nThe same arguments used to prove Theorem 1 show that the spectral cut will be a circular arc connecting two points ${\\bf q} = (q,0)$ and ${\\bf p} = (p,y(p))$.\n\n\n\\begin{center}\n\\begin{figure}[ht!]\n\\begin{tikzpicture}[scale=4]\n\\draw [ultra thick] (0,0) -- (1,0) -- (1.1, 0.8) -- (-0.1, 0.7) -- (0,0);\n\\filldraw (0,0) circle (0.015cm);\n\\node at (-0.08, -0.08) {(0,0)};\n\\node at (0.08, 0.08) {$\\alpha$};\n\\filldraw (1,0) circle (0.015cm);\n\\node at (0.95, 0.05) {$\\beta$};\n\\node at (1.08, -0.08) {(1,0)};\n\\filldraw (1.1,0.8) circle (0.015cm);\n\\node at (1.2, 0.8+0.08) {($1+x_2$,$a+x_3$)};\n\\filldraw (-0.1,0.7) circle (0.015cm);\n\\node at (-0.24, 0.8-0.08) {($x_1$,$a$)};\n\\filldraw (0.5,0) circle (0.015cm);\n\\node at (0.5, -0.08) {${\\bf q}:=(q,0)$}; \n\\filldraw (0.45,0.75) circle (0.015cm);\n\\node at (0.48,0.82) {${\\bf p}:=(p,y (p)) $}; \n\\draw [thick,dashed] (0.45, 0.75) -- (0.5,0);\n\\draw [thin,dashed] (0.5,0) to[out=45,in=330] (0.45, 0.75);\n\\node at (0.42, 0.35) {$\\Gamma$};\n\\node at 
(0.69, 0.35) {$\\tilde{\\Gamma}$};\n\\node at (1, 0.7) {$\\gamma$};\n\\node at (-0.03, 0.63) {$\\delta$};\n\\node at (0.43, 0.05) {$\\eta$};\n\\node at (0.55, 0.1) {$\\nu$};\n\\node at (0.4, 0.67) {$\\phi$};\n\\node at (0.51, 0.65) {$\\mu$};\n\\end{tikzpicture}\n\\caption{A general quadrilateral $Q$ after rescaling.}\n\\label{fig:quad}\n\\end{figure}\n\\end{center}\n\n\n\\subsection{Circular Arc to Triangle in terms of the sector angle}\n\n\nLet us assume the cut, $\\tilde \\Gamma$, is a circular arc from $\\bf p$ to $\\bf q$, where $\\bf p$ and $\\bf q$ appear nearly opposite each other on the longest sides. We will take $|\\tilde \\Gamma| = r \\theta$ for some radius $r>0$ and sector angle $\\theta\\geq0$, to be determined, and observe that the area contained between the line $\\Gamma$ connecting $\\bf p$ and $\\bf q$ and the circular arc $\\tilde \\Gamma$, which we will call $\\tilde \\Omega$, satisfies\n\\[\n| \\tilde \\Omega | = \\frac{r^2 \\theta}{2} - \\frac{\\| {\\bf p}- {\\bf q}\\|^2}{4 \\tan (\\theta\/2)}.\n\\]\nWe also have that\n\\[\nr = \\frac{\\| {\\bf p}- {\\bf q} \\|}{2 \\sin (\\theta\/2)},\n\\]\nfrom which we conclude, after Taylor expanding in $\\theta$, that\n\\[\n| \\tilde \\Omega | = \\|{\\bf p}- {\\bf q}\\|^2 \\left( \\frac{\\theta}{12} + O( \\theta^3) \\right).\n\\]\nAlso, Taylor expanding once more, we have\n\\[\n|\\tilde \\Gamma| = r \\theta = \\| {\\bf p}- {\\bf q}\\| \\left( 1 + \\frac{\\theta^2}{24} + O( \\theta^4) \\right).\n\\]\n\nWith these identities in hand, we can proceed to explore how the ratio cut depends upon the curves defining the longer sides of our curvilinear quadrilateral. To illustrate how to compute the dependence of the cut locally on the parametrization of a curve, we will first work with a toy model that is flat on $3$ sides and has a smooth quadratic curve on one of the long sides.
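The small-$\theta$ behavior of the cap area and the arc length can be checked numerically from the exact chord--radius relations; a small sketch of our own, in which `c` plays the role of $\|{\bf p} - {\bf q}\|$:

```python
import math

# Numerical check (ours): for a circular arc with chord length c and sector
# angle theta, with r = c / (2 sin(theta/2)), the cap area
# (r^2/2)(theta - sin(theta)) behaves like c^2 * theta / 12, and the arc
# length r * theta behaves like c * (1 + theta^2 / 24), as theta -> 0.

def cap_area(c, theta):
    r = c / (2 * math.sin(theta / 2))
    return (r ** 2 / 2) * (theta - math.sin(theta))

def arc_length(c, theta):
    r = c / (2 * math.sin(theta / 2))
    return r * theta

for theta in (0.1, 0.01, 0.001):
    print(cap_area(1.0, theta) / (theta / 12))             # -> 1 as theta -> 0
    print(arc_length(1.0, theta) / (1 + theta ** 2 / 24))  # -> 1 as theta -> 0
```

Both ratios approach $1$ at rate $O(\theta^2)$ and $O(\theta^4)$ respectively, matching the orders of the error terms.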
\n\n\\subsection{The parabolic trapezoid}\n\\label{partrap}\n\nWe take a domain that is a parabolic trapezoid bounded by a straight line from $(0,0)$ to $(1,0)$, a straight line from $(0,0)$ to $(0,1\/2)$, the straight line from $(1,0)$ to $(1, 1\/2+a)$ and the curve $y(x) = \\epsilon x^2 + (a - \\epsilon) x + 1\/2$; note that, for convenience, we have chosen our aspect ratio to be approximately $2$. Our goal is to track how the cut changes as a function of $\\epsilon,a$ close to $0$. The Ratio Cut will intersect the bottom and top boundary curves at the points $(q,0)$ and $(p,y(p))$ respectively, with the cut itself a circular arc. Splitting the domain by a circular arc, we can decompose one of the cut domains into two curvilinear quadrilaterals; see Figure \\ref{fig:quad2} for an illustration.\n\n\\begin{center}\n\\begin{figure}[ht!]\n\\begin{tikzpicture}[scale=4]\n\\draw [ultra thick] (0,1\/2)--(0,0) -- (1,0) -- (1, 1\/2+0.1);\n\\draw [ultra thick] (0,0.5) to[out=25,in=160] (1, 0.6);\n\\filldraw (0,0) circle (0.015cm);\n\\node at (-0.08, -0.08) {(0,0)};\n\\filldraw (1,0) circle (0.015cm);\n\\node at (1.08, -0.08) {(1,0)};\n\\filldraw (1,1\/2+0.1) circle (0.015cm);\n\\node at (1.08, 1\/2+0.1+0.08) {($1$,$0.6$)};\n\\filldraw (0,0.5) circle (0.015cm);\n\\node at (-0.1, 0.5+0.08) {($0$,$0.5$)};\n\\filldraw (0.5,0) circle (0.015cm);\n\\node at (0.5, -0.08) {${\\bf q}:=(q,0)$}; \n\\filldraw (0.52,0.67) circle (0.015cm);\n\\node at (0.48,0.82) {${\\bf p}:=(p,y (p)) $}; \n\\draw [thin,dashed] (0.5,0) to[out=75,in=285] (0.52, 0.67);\n\\node at (0.45, 0.35) {$\\Gamma$};\n\\end{tikzpicture}\n\\caption{A parabolic trapezoid $Q$ as a simplified model.}\n\\label{fig:quad2}\n\\end{figure}\n\\end{center}\n\nIt can be easily seen that to compute the Ratio Cut, we can split the domain into components given by the region under $y(x)$ over $[0,p]$, a triangle with base $[p,q]$, and the cap of a circular arc that is either added or subtracted depending upon orientation ($\\theta > 0$ or $\\theta < 0$
respectively). Decomposing the domain into these components, we compute\n\\begin{align*}\n&A_{total} = \\int_0^1 y(x) dx = \\frac{\\epsilon}{3} + \\frac{a-\\epsilon}{2} + \\frac12, \\ \\ \\text{(Total Area)} \\\\\n&A_1 (p,q,\\theta) = \\int_0^p y(x) dx + \\frac12 y(p) (q-p)+\\frac12 R^2 (p,q,\\theta) \\theta \\\\\n& \\hspace{4cm} - \\frac14 \\left[ (q-p)^2 + y(p)^2 \\right] \\cot \\left( \\frac{\\theta}{2} \\right), \\ \\ \\text{(Left Area)} \\\\\n& A_2 (p,q,\\theta) = A_{total} - A_1 (p,q,\\theta), \\ \\ \\text{(Right Area)} \n\\end{align*}\nwhere $R(p,q,\\theta)$ is the radius of the circle, given by\n\\begin{align}\nR(p,q,\\theta) = \\frac{\\sqrt{ (q-p)^2 + y(p)^2 }}{ 2 \\sin \\left( \\frac{\\theta}{2} \\right)}\\,.\n\\end{align}\nThe above quantities give the ratio cut, denoted by $\\text{RC} (p,q,\\theta)$:\n\\begin{align*}\n \\text{RC} (p,q,\\theta) = \\frac{ R(p,q,\\theta) \\theta}{ A_1 (p,q,\\theta) A_2 (p,q,\\theta)}.\n\\end{align*}\nIt follows via direct calculation that $ \\text{RC} (p,q,\\theta)$ is a smooth function of $(p,q,\\theta)$ in a neighborhood of $(1\/2,1\/2,0)$. Indeed, since $\\frac12 R^2 (p,q,\\theta) \\theta - \\frac14 \\left[ (q-p)^2 + y(p)^2 \\right] \\cot \\left( \\frac{\\theta}{2} \\right)\\to 0$ and $R(p,q,\\theta) \\theta\\to \\sqrt{ (q-p)^2 + y(p)^2 }$ when $\\theta\\to 0$, $\\text{RC} (p,q,\\theta)$ is smooth in a neighborhood of $(1\/2,1\/2,0)$. Hence we can explore behaviors nearby using the Implicit Function Theorem. \n\nAs an illustrative calculation, let us simply take the full quadratic approximation in $\\theta$, $p-\\frac12$, $q-\\frac12$, $a$ and $\\epsilon$ to the Ratio Cut. 
A direct calculation gives:\n\\begin{small}\n\\begin{align*}\n& \\text{RC} (p,q,\\theta) = \\left( 8 + 24 \\left(q - \\frac12 \\right)^2 - 16 (q-\\frac12) (p-\\frac12) + 24 (p - \\frac12)^2 \\right) \\\\\n& - a \\left(8 + 8\\left(q - \\frac12 \\right) - 8 \\left(p - \\frac12 \\right) - 56\\left(q - \\frac12 \\right) ^2 - 56 \\left(p - \\frac12 \\right)^2 + 80 \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) \\\\\n& + \\epsilon \\left( \\frac43 + \\frac{52}{3} \\left(q - \\frac12 \\right)^2 + \\frac{100}{3} \\left(p - \\frac12 \\right)^2 - 40 \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) + \\frac43 \\theta \\left( \\left(q - \\frac12 \\right) + \\left(p - \\frac12 \\right) \\right) \\\\\n& + a^2 \\left( 10 + 16 \\left(q - \\frac12 \\right) - 16 \\left(p - \\frac12 \\right) + 120 \\left(q - \\frac12 \\right)^2 + 104 \\left(p - \\frac12 \\right)^2 -192 \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) \\\\\n& + \\epsilon^2 \\left( \\frac{122}{9} \\left(q - \\frac12 \\right)^2 + \\frac{218}{9} \\left(p - \\frac12 \\right)^2 - \\frac{284}{9} \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) + \\frac{7}{18} \\theta^2 \\\\\n& -a \\epsilon \\left( \\frac83 + \\frac{8}{3} \\left(q - \\frac12 \\right) - 8 \\left(p - \\frac12 \\right) + 72 \\left(q - \\frac12 \\right)^2 + 104 \\left(p - \\frac12 \\right)^2 - \\frac{464}{3} \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) \\\\\n& - a \\theta \\left( \\frac23 + 8 \\left(q - \\frac12 \\right)^2 - \\frac{32}{3} \\left(q - \\frac12 \\right) \\left(p - \\frac12 \\right) \\right) - \\frac89 \\epsilon \\theta \\left( \\left(q - \\frac12 \\right) + \\left(p - \\frac12 \\right) \\right) \n\\end{align*}\\end{small} \nup to a higher order error, when $a$ and $\\epsilon$ are sufficiently small.\nWhile this may not look so useful, we get a great deal of information by looking at the system when $a = \\epsilon = 0$.
In such a case, the equations for a critical point in $p,q,\\theta$ become\n\\begin{align*}\n48 \\left(q - \\frac12 \\right) - 16 \\left(p - \\frac12 \\right) + \\frac43 \\theta & = 0 \\\\\n-16 \\left(q - \\frac12 \\right) + 48 \\left(p - \\frac12 \\right) + \\frac43 \\theta & = 0 \\\\\n\\frac43 \\left(q - \\frac12 \\right) + \\frac43 \\left(p - \\frac12 \\right) + \\frac79 \\theta & = 0.\n\\end{align*}\nThus, the Jacobian matrix is\n\\begin{equation*}\nJ_{a=0,\\epsilon=0} = \\left[ \\begin{array}{rrr}\n48 & -16 & \\frac43 \\\\\n-16 & 48 & \\frac43 \\\\\n\\frac43 & \\frac43 & \\frac79\n\\end{array}\n\\right],\n\\end{equation*}\nwhich is non-singular. As a result, we observe that if $a = \\epsilon = 0$, the optimal solution is $p = q = \\frac12$, $\\theta = 0$. We know this from symmetry arguments, but now we also have set ourselves up for an application of the Implicit Function Theorem in order to approximate the RC for near-rectangular domains. \n\nWe next observe what happens if $a \\neq 0$, $\\epsilon = 0$.
This gives the modified system\n\\begin{equation*}\nJ_{a,\\epsilon=0} \\left[ \\begin{array}{c}\n\\left(q - \\frac12 \\right) \\\\\n\\left(p - \\frac12 \\right) \\\\\n\\theta \n\\end{array} \\right] = a \\left[ \\begin{array}{r}\n8 \\\\\n-8 \\\\\n\\frac23\n\\end{array} \\right] + \\text{Quadratic Error in $a, q-\\frac12, p-\\frac12$},\n\\end{equation*}\nwhere\n\\begin{equation*}\nJ_{a,\\epsilon=0} = \\left[ \\begin{array}{ccc}\n48 -112 a & -16 + 80 a& \\frac43 \\\\\n-16 +80 a & 48 -112 a & \\frac43 \\\\\n\\frac43 & \\frac43 & \\frac79\n\\end{array}\n\\right].\n\\end{equation*}\nHence, the critical point $\\vec{v} = \\left( q - \\frac12,\\, p - \\frac12,\\, \\theta \\right)$ of this system is given by \n\\begin{equation*}\n\\vec{v} = a J_{a=0,\\epsilon=0}^{-1} \\left[ \\begin{array}{c}\n8 \\\\ -8 \\\\ \\frac23\n\\end{array} \\right] + O(a^2) = a \\left[ \\begin{array}{c}\n\\frac{1}{12} \\\\ -\\frac16 \\\\ 1 \\end{array} \\right] + O(a^2).\n\\end{equation*}\nNote that the distance between $p$ and $q$ is then $\\frac{a}{4} < a$, with the maximal amplitude of the bulge from the circular arc of size \n\\[ \\frac{a}{8}\\sqrt{y(p)^2+(p-q)^2} < \\frac{a}{8}. \\] \nSince the terms that are linear in $\\epsilon$ are quadratic or higher in $q-1\/2,p-1\/2$, a similar analysis including $\\epsilon$ shows that the curvature of the parabolic curve is actually a lower order deformation for the Ratio Cut than the trapezoidal deflection. Importantly, this argument demonstrates that the trapezoidal deflection is decreasing in the new cut domain. Though the new cut domain will be closer to aspect ratio $1$, the overall deflections are still decreasing on subsequent domains.\n\n\nUsing the Implicit Function Theorem, we are able to compute comparable results for the full Ratio Cut. Indeed, to turn this into a rigorous argument, we first need to observe that the minimum Ratio Cut at $a=\\epsilon = 0$ is uniquely $p=q=\\frac12$, $\\theta=0$, which is easily seen by looking at the Hessian.
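The leading-order solve above reduces to a single linear system and can be double-checked with exact rational arithmetic; a sketch of our own:

```python
from fractions import Fraction as F

# Check (ours): multiplying the claimed leading-order solution
# v = (1/12, -1/6, 1) by the Jacobian at a = eps = 0 reproduces the
# right-hand side (8, -8, 2/3), confirming q - 1/2 = a/12, p - 1/2 = -a/6
# and theta = a to leading order.
J0 = [[F(48), F(-16), F(4, 3)],
      [F(-16), F(48), F(4, 3)],
      [F(4, 3), F(4, 3), F(7, 9)]]
v = (F(1, 12), F(-1, 6), F(1))
rhs = [sum(J0[i][j] * v[j] for j in range(3)) for i in range(3)]
print(rhs)  # equals (8, -8, 2/3)
```

Using exact fractions avoids any floating-point ambiguity in checking the coefficients.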
Then, we look at the gradient $\\nabla_{p,q,\\theta} \\text{RC} (p,q,\\theta; a, \\epsilon)$ as a map from $\\mathbb{R}^5$ to $\\mathbb{R}^3$, and use the Implicit Function Theorem to construct the desired local map $(a,\\epsilon) \\mapsto (p,q,\\theta)$ in an open set around $(a,\\epsilon) = (0,0)$. \n\n\\subsection{The Ratio Cut with sides given by parabolic approximations}\n\\label{arcs}\n\n\nNow, we proceed to handle a more general family of domains. Given our assumption on the smoothness of the curves, we can assume that the top and bottom curves are approximated by quadratic curves to high accuracy near points of intersection with the ratio cut. Specifically, let us consider an arbitrary domain $Q$ that can be approximated (in the Gromov-Hausdorff sense) by a parabolic trapezoid $Q_0$ with vertices $(0,0),(0,\\frac{1}{2}+a_1),(1,\\frac{1}{2}+a_2),$ and $(1,0)$. Note that the inclusion of $a_1$ and $a_2$ here will allow us to vary the aspect ratio. We fix the width of $Q_0$ to be $1$, as this can always be arranged by rescaling the domain. The top parabola of $Q_0$ is parametrized as \n$$\ny_{T}(x) = \\epsilon_{t}x^2 + (a_2 - a_1 - \\epsilon_{t})x + a_1 + \\frac{1}{2},\n$$ \nand the bottom parabola is parametrized as \n$$\ny_{B}(x) = \\epsilon_{b}x^2 -\\epsilon_b x.\n$$ \nWe assume that $Q$ and $Q_0$ differ by two sufficiently small and bounded ``black-box'' regions on the left and right.\nThe areas of these two small regions will be denoted $A_{\\texttt{WL}}$ and $A_{\\texttt{WR}}$.
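As a minimal consistency check (ours; the parameter values are arbitrary samples, not taken from the text), the two parabolas interpolate the stated vertices of $Q_0$:

```python
# Consistency check (ours): the parabolas y_T and y_B match the vertices
# (0, 1/2 + a_1), (1, 1/2 + a_2), (0, 0) and (1, 0) of Q_0. The parameter
# values below are arbitrary small samples, not values from the text.
a1, a2, eps_t, eps_b = 0.03, 0.05, 0.02, 0.01

def y_T(x):  # top parabola
    return eps_t * x**2 + (a2 - a1 - eps_t) * x + a1 + 0.5

def y_B(x):  # bottom parabola
    return eps_b * x**2 - eps_b * x

print(y_T(0.0), y_T(1.0))  # -> a1 + 1/2 and a2 + 1/2
print(y_B(0.0), y_B(1.0))  # -> 0 and 0
```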
\n\n\n\\begin{center}\n\\begin{figure}[ht!]\n\\begin{tikzpicture}[scale=4]\n\\draw [ultra thick] (0,1\/2)--(0,0) ;\n\n\\draw [red,ultra thick] (0,0) to[out=5,in=195] (0, 0.5);\n\\node[red] at (-0.15, 0.25) {$A_{\\texttt{WL}}$};\n\n\\draw [ultra thick] (1,0) -- (1, 1\/2+0.1) ;\n\n\\draw [red,ultra thick] (1,0) to[out=105,in=335] (1, 0.6);\n\\node[red] at (1.15, 0.25) {$A_{\\texttt{WR}}$};\n\n\\draw [ultra thick] (0,0.5) to[out=25,in=160] (1, 0.6);\n\\draw [ultra thick] (0,0) to[out=-10,in=190] (1, 0);\n\\filldraw (0,0) circle (0.015cm);\n\\node at (-0.08, -0.08) {(0,0)};\n\\filldraw (1,0) circle (0.015cm);\n\\node at (1.08, -0.08) {(1,0)};\n\\filldraw (1,1\/2+0.1) circle (0.015cm);\n\\node at (1.08, 1\/2+0.1+0.08) {$(1,0.6)$};\n\\filldraw (0,0.5) circle (0.015cm);\n\\node at (-0.1, 0.5+0.08) {($0$,$0.5$)};\n\\filldraw (0.5,-0.05) circle (0.015cm);\n\\node at (0.5, -0.15) {${\\bf q}:=(q,y_B(q))$}; \n\\filldraw (0.52,0.67) circle (0.015cm);\n\\node at (0.48,0.82) {${\\bf p}:=(p,y_T(p))$}; \n\\draw [thin,dashed] (0.5,-0.05) to[out=65,in=295] (0.52, 0.67);\n\\draw [dotted] (0.5,-0.05) -- (0.52, 0.67) ;\n\\node at (0.65, 0.48) {$\\Gamma$};\n\\node at (0.55, 0.35) {$\\Omega$};\n\\end{tikzpicture}\n\\caption{A more general domain $Q$ approximated by a parabolic trapezoid.}\n\\label{fig:quad3}\n\\end{figure}\n\\end{center}\n\nWe start by preparing quantities associated to $Q_0$. A circular arc $\\Gamma$ passing through the points $(p,y_{T}(p))$ and $(q, y_B(q))$, with angle $\\theta$, cuts the parabolic trapezoid into a left and right domain, denoted by $S$ and $Q_0\\backslash S$.\n\tTo compute an equivalent analytic expression for the ratio cut, we use Stokes' theorem\n\tto compute the left area $A_L = |S|$ and total $A_T = |Q_0|$, which we use to compute the right area $A_R = |Q_0\\backslash S| = |Q_0|-|S|$.
In particular, \n\t$$\n\tA_T = |Q_0| = \\int_{Q_0} dA = \\frac{1}{2} \\int_{\\partial Q_0} xdy - ydx,\n\t$$\n\twhere we integrate along the left vertical boundary $\\{(0,t): 0\\leq t\\leq \\frac{1}{2}+a_1\\}$, along the top parabolic curve $\\{(x,y_T(x)): 0\\leq x\\leq 1 \\},$ along the right vertical boundary $\\{(1,t): \\frac{1}{2}+a_2\\leq t\\leq 0 \\}$ (with the indicated orientation), and finally along the bottom parabolic curve $\\{(x,y_B(x)): 1\\leq x\\leq 0 \\}$ (with the indicated orientation); note that this traverses $\\partial Q_0$ clockwise, so the signed areas below come out negative, and we take absolute values whenever actual areas are needed. \n\tWe can compute the (indefinite) integrals for the two parabolic pieces fairly easily:\n\t\\begin{align*}\n\t\tdy_T \t&= (2\\epsilon_t x + a_2-a_1-\\epsilon_t)dx,\\\\\n\t\tdy_B \t&= (2\\epsilon_b x -\\epsilon_b)dx,\n\t\\end{align*}\n\tand so\n\t\\begin{align*}\n\t\t\\frac{1}{2} \\int xdy_T - y_T dx \t&= \\frac{1}{2} \\int x(2\\epsilon_t x + a_2-a_1-\\epsilon_t)dx - (\\epsilon_{t}x^2 + (a_2 - a_1 - \\epsilon_{t})x + a_1 + \\frac{1}{2})dx\\\\\n\t\t\t&= \\frac{1}{2}\\int \\epsilon_t x^2-(a_1+\\frac{1}{2}) dx\n\t\t\t= \\frac{\\epsilon_t}{6}x^3 - \\left(\\frac{a_1}{2}+ \\frac{1}{4} \\right)x;\\\\\n\t\t\\frac{1}{2} \\int xdy_B - y_B dx \t&= \\frac{1}{2}\\int x(2\\epsilon_b x -\\epsilon_b)dx - (\\epsilon_{b}x^2 -\\epsilon_b x)dx\\\\\n\t\t\t&= \\frac{1}{2} \\int \\epsilon_b x^2 dx\n\t\t\t= \\frac{\\epsilon_b}{6}x^3.\n\t\\end{align*}\nThe $1$-forms for the vertical components are \n\t\\begin{align*}\n\t\t\\frac{1}{2} \\int 0dt - t d(0) &= \\frac{1}{2} \\int 0 = 0,\\\\\n\t\t\\frac{1}{2} \\int 1 dt - t d(1) \t&= \\frac{1}{2} \\int dt - 0 = \\frac{t}{2}.\n\t\\end{align*}\n\tPutting it all together, we have\n\t\\begin{align*}\n\t\tA_T &= 0 + \\left[ \\frac{\\epsilon_t}{6}x^3 - \\left(\\frac{a_1}{2}+ \\frac{1}{4} \\right)x \\right]_{x=0}^{x=1} + \\left[\\frac{t}{2} \\right]_{t=\\frac{1}{2}+a_2}^{0} + \\left[ \\frac{\\epsilon_b}{6}x^3 \\right]_{x=1}^{x=0}\\\\\n\t\t\t&= \\frac{\\epsilon_t-\\epsilon_b}{6} - \\frac{a_1+a_2}{2} - \\frac{1}{2}.\n\t\\end{align*}\n\t\n\tFor $A_L = |S|$, we use Stokes' theorem again but split $S$ into two parts divided by the straight line connecting $(p,y_T(p))$ and $(q,y_B(q))$; these parts are a curvilinear quadrilateral and a circular cap. The area of the curvilinear quadrilateral is computed by Stokes' theorem, while the area of the circular cap is determined by an elementary formula. Adding these two quantities gives us $A_L$.\n\t%\n\t\n\t\n\tThe line connecting $(p,y_T(p))$ to $(q,y_B(q))$ is parametrized as $\\{ (1-t)(p,y_T(p)) + t(q,y_B(q)): 0\\leq t \\leq 1 \\},$ which componentwise becomes\n\t$$ \n\t( (1-t)p + t q, (1-t)y_T(p) + ty_B(q)), \n\t$$\n\tand so the integral along this straight line becomes\n\t\\begin{align*}\n\t\t\\frac{1}{2} \\int xdy - ydx \t=\\,& \\frac{1}{2} \\int ((1-t)p + t q)d((1-t)y_T(p) + ty_B(q))\\\\ &\\quad - ((1-t)y_T(p) + ty_B(q))d((1-t)p + t q)\\\\\n\t\t\t=\\,& \\frac{1}{2} \\int ((1-t)p + t q)(-y_T(p) dt + y_B(q) dt)\\\\\n\t\t\t\t&\\quad - ((1-t)y_T(p) + ty_B(q)) ( -p dt + q dt)\\\\\n\t\t\t=\\,& \\frac{t}{2}\\big( p\\, y_B(q) - q\\, y_T(p) \\big)\\,,\n\t\\end{align*}\n\twhere we denote the last quantity by $SL(t)$; the $t$-dependent parts of the integrand cancel, leaving the constant integrand $\\frac{1}{2}(p\\, y_B(q) - q\\, y_T(p))$.\n\t\n\tGiven the two points ${\\bf p}=(p,y_T(p))$ and ${\\bf q}=(q,y_B(q))$ and an angle $\\theta$, the radius of the circle through ${\\bf p}$ and ${\\bf q}$ on which the two points subtend the angle $\\theta$ is \n\t$$\n\tR = \\frac{\\|{\\bf p}-{\\bf q}\\|_2}{2\\sin(\\frac{\\theta}{2})}.\n\t$$ \n\tThus, the area of the circular segment is\t{\\allowdisplaybreaks\n\t$$\n\t|\\Omega|= \\frac{R^2}{2}(\\theta - \\sin(\\theta)) = \\frac{(p - q)^2 + (y_T(p) - y_B(q))^2}{8\\sin^2(\\frac{\\theta}{2})}(\\theta - \\sin(\\theta)),\n\t$$ \n\tand so (the cap $\\Omega$ lies to the right of the chord, so with our clockwise orientation it enters with a minus sign)\n\t\\begin{align*}\n\t\tA_L =\\,& 0 + \\left[ \\frac{\\epsilon_t}{6}x^3 - \\left(\\frac{a_1}{2}+ \\frac{1}{4} \\right)x \\right]_{x=0}^{x=p} + [SL(t)]_{t=0}^{t=1} + \\left[ \\frac{\\epsilon_b}{6}x^3 \\right]_{x=q}^{x=0} - |\\Omega|\\\\\n\t\t=\\,& \\frac{\\epsilon_t}{6}p^3 - \\frac{(1+2a_1)p}{4} + \\frac{p\\, y_B(q) - q\\, y_T(p)}{2} - \\frac{\\epsilon_b}{6}q^3 - |\\Omega|.\n\t\\end{align*}\n\tWe can use this to get the area of the right portion:\n\t\\begin{align*}\n\t\tA_R =& A_T - A_L \\\\\n\t\t\t=& \\frac{\\epsilon_t-\\epsilon_b}{6} - \\frac{a_1+a_2}{2} - \\frac{1}{2} - \\frac{\\epsilon_t}{6}p^3 + \\frac{(1+2a_1)p}{4} - \\frac{p\\, y_B(q) - q\\, y_T(p)}{2} + \\frac{\\epsilon_b}{6}q^3 + |\\Omega|.\n\t\\end{align*}\n\t}\n\tFinally, the length of the ratio cut takes the form $$|\\Gamma| = R\\theta = \\frac{\\|(p-q, y_T(p)-y_B(q))\\|_2 \\frac{\\theta}{2}}{\\sin(\\frac{\\theta}{2})} = \\frac{\\|(p-q, y_T(p)-y_B(q))\\|_2}{\\sinc(\\frac{\\theta}{2})}.$$\n\n\tWith the above pieces from $Q_0$, the ratio cut of $Q$ can be expressed in terms of the variables $p, q,$ and $\\theta$, together with the parameters $a_1,a_2, \\epsT, \\epsB, A_{\\texttt{WL}}, A_{\\texttt{WR}}$ that determine the domain. 
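As a quick numerical sanity check of the pieces just assembled (a sketch, not part of the derivation), consider the unperturbed $1\times\frac{1}{2}$ rectangle cut at $p=q=\frac{1}{2}$: the chord has length $\frac{1}{2}$ and splits the rectangle into halves of area $\frac{1}{4}$, while the cap $\Omega$ moves area from the right piece to the left. Using $R=\|{\bf p}-{\bf q}\|_2/(2\sin(\theta/2))$, $|\Omega|=\frac{R^2}{2}(\theta-\sin\theta)$, and $|\Gamma|=R\theta$, the ratio cut should reproduce the leading terms $8+\frac{7}{18}\theta^2$ of the series expansion computed in the next display:

```python
import math

def ratio_cut_symmetric(theta, d=0.5, half_area=0.25):
    """Ratio cut of the 1 x 1/2 rectangle cut at p = q = 1/2 by a circular
    arc of opening angle theta through (1/2, 0) and (1/2, 1/2)."""
    R = d / (2.0 * math.sin(theta / 2.0))        # circle radius
    cut_len = R * theta                          # |Gamma| = R * theta
    cap = 0.5 * R**2 * (theta - math.sin(theta)) # circular-segment area |Omega|
    A_L = half_area + cap                        # cap bulges into the right half
    A_R = half_area - cap
    return cut_len / (A_L * A_R)

theta = 0.1
rc = ratio_cut_symmetric(theta)
series = 8.0 + (7.0 / 18.0) * theta**2           # quadratic series prediction
assert abs(rc - series) < 1e-3                   # agreement up to O(theta^4)
```

For $\theta=0.1$ the two sides agree to about $3\times 10^{-4}$, consistent with an $O(\theta^4)$ remainder.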
Specifically, by letting $\\sigma = (a_1,a_2, a_3, \\epsT, \\epsB, A_{\\texttt{WL}}, A_{\\texttt{WR}})$ be the parameter vector, we can express the ratio cut as \n\t\\begin{align*}\n\t\tRC(q, p, \\theta; \\sigma) \t&= \\frac{\\|(p-q, y_T(p)-y_B(q))\\|_2}{\\sinc(\\frac{\\theta}{2})(A_L + A_{\\texttt{WL}})(A_R+A_{\\texttt{WR}})}.\n\t\\end{align*}\n\t%\n\tThe series expansion of $RC(q, p, \\theta;\\sigma)$ near $\\left( \\frac{1}{2}, \\frac{1}{2}, 0; \\vec{0} \\right)$ is\n\t\\begin{align*}\n\t\tRC \t=& 8 + 24 \\tBCent^2 - 16 \\tBCent \\tTCent + \\frac{4}{3} \\tBCent \\Th\\\\\n\t\t\t&+ 24 \\tTCent^2 + \\frac{4}{3} \\tTCent \\Th + \\frac{7}{18}\\Th^2\\\\\n\t\t\t&+ a_1 p_{a_1}(q, p, \\theta) + a_2 p_{a_2}(q, p, \\theta) + a_3 p_{a_3}(q, p, \\theta)\\\\\n\t\t\t&+ \\epsilon_t p_{\\epsilon_t}(q, p, \\theta) + \\epsilon_b p_{\\epsilon_b}(q, p, \\theta)\\\\\n\t\t\t&+ A_{\\texttt{WL}} p_{A_{\\texttt{WL}}}(q, p, \\theta) + A_{\\texttt{WR}} p_{A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ a_1^{\\,2} p_{a_1 a_1}(q, p, \\theta) + a_1a_2 p_{a_1 a_2}(q, p, \\theta) + a_1a_3 p_{a_1 a_3}(q, p, \\theta)\\\\\n\t\t\t&+ a_2^{\\,2} p_{a_2a_2}(q, p, \\theta) + a_2a_3 p_{a_2a_3}(q, p, \\theta) + a_3^{\\, 2} p_{a_3a_3}(q, p, \\theta)\\\\\n\t\t\t&+ a_1A_{\\texttt{WL}} p_{a_1A_{\\texttt{WL}}}(q, p, \\theta) + a_1A_{\\texttt{WR}} p_{a_1A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ a_1 \\epsT p_{a_1 \\epsT}(q, p, \\theta) + a_1 \\epsB p_{a_1 \\epsB}(q, p, \\theta)\\\\\n\t\t\t&+ a_2 A_{\\texttt{WL}} p_{a_2A_{\\texttt{WL}}}(q, p, \\theta) + a_2A_{\\texttt{WR}} p_{a_2 A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ a_2 \\epsT p_{a_2 \\epsT}(q, p, \\theta) + a_2 \\epsB p_{a_2 \\epsB}(q, p, \\theta)\\\\\n\t\t\t&+ a_3 A_{\\texttt{WL}} p_{a_3A_{\\texttt{WL}}}(q, p, \\theta) + a_3A_{\\texttt{WR}} p_{a_3 A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ a_3 \\epsT p_{a_3 \\epsT}(q, p, \\theta) + a_3 \\epsB p_{a_3 \\epsB}(q, p, \\theta)\\\\\n\t\t\t&+ A_{\\texttt{WL}}A_{\\texttt{WL}} p_{A_{\\texttt{WL}} 
A_{\\texttt{WL}}}(q, p, \\theta) + A_{\\texttt{WL}} A_{\\texttt{WR}} p_{A_{\\texttt{WL}} A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ A_{\\texttt{WR}} A_{\\texttt{WR}} p_{A_{\\texttt{WR}} A_{\\texttt{WR}}}(q, p, \\theta)\\\\\n\t\t\t&+ A_{\\texttt{WL}} \\epsT p_{A_{\\texttt{WL}} \\epsT}(q, p, \\theta) + A_{\\texttt{WL}} \\epsB p_{A_{\\texttt{WL}} \\epsB}(q, p, \\theta)\\\\\n\t\t\t&+ A_{\\texttt{WR}} \\epsT p_{A_{\\texttt{WR}} \\epsT}(q, p, \\theta) + A_{\\texttt{WR}} \\epsB p_{A_{\\texttt{WR}} \\epsB}(q, p, \\theta)\\\\\n\t\t\t&+ \\epsT \\epsT p_{\\epsT \\epsT}(q, p, \\theta) + \\epsT \\epsB p_{\\epsT \\epsB}(q, p, \\theta) + \\epsB \\epsB p_{\\epsB \\epsB}(q, p, \\theta)\\,,\n\t\\end{align*}\nup to a higher order error, where the polynomial $p_{\\sigma^{\\alpha}}$ is the partial derivative $\\left. \\frac{\\partial^{\\alpha} RC}{\\partial \\sigma^{\\alpha}} \\right|_{\\sigma = 0}$ for a multi-index $\\alpha$. Detailed formulae for these terms are given in the Appendix.\t\n\t\n\t\n\tNext, we compute the linearization of $RC$ near the point $(q,p,\\theta) = (\\frac{1}{2}, \\frac{1}{2}, 0)$:\n\t\\begin{align*}\n\t\t\\left.\\frac{\\partial RC}{\\partial q}\\right|_{\\sigma = 0} &= 48\\big(q - \\frac{1}{2}\\big) - 16 \\big(p - \\frac{1}{2}\\big) + \\frac{4}{3}\\theta,\\\\\n\t\t\\left.\\frac{\\partial RC}{\\partial p}\\right|_{\\sigma = 0} &= -16 \\big(q - \\frac{1}{2}\\big) +48 \\big(p - \\frac{1}{2}\\big) + \\frac{4}{3}\\theta,\\\\\n\t\t\\left.\\frac{\\partial RC}{\\partial \\theta}\\right|_{\\sigma = 0} &= \\frac{4}{3} \\big(q - \\frac{1}{2}\\big) + \\frac{4}{3} \\big(p - \\frac{1}{2}\\big) + \\frac{7}{9}\\theta.\n\t\\end{align*}\n\t%\n\tThe Jacobian of $\\nabla RC$ (that is, the Hessian of $RC$) at $\\sigma = 0$ is thus\n\t$$\n\tJ = \\begin{pmatrix} 48 & -16 & \\frac{4}{3}\\\\ -16 & 48 & \\frac{4}{3} \\\\ \\frac{4}{3} & \\frac{4}{3} & \\frac{7}{9} \\end{pmatrix}.\n\t$$\n\t%\nHere, and below, we have kept the $0$-coefficient terms solely as placeholders to demonstrate that we have actually computed the
coefficients of all the terms in our expansion, as well as to make it easier to verify the formulae for the interested reader.\n\tNext, we explore the other pieces of the linearization of $RC$, namely all first-order terms in the variables, together with the parameters.\n\t{\\allowdisplaybreaks\n\tExplicitly, we have\n\t\\begin{align*}\n\n\t\t\\frac{\\partial RC}{\\partial q} \t&= \\left( 48 - 112 a_1 - 112 a_2 + 112 a_3 - 256 A_{\\texttt{WL}} - 256 A_{\\texttt{WR}} - \\frac{200}{3} \\epsilon_b + \\frac{104}{3} \\epsilon_t \\right)\\left( q - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(-16 + 80 a_1 + 80a_2 - 80a_3 + 40 \\epsilon_b - 40\\epsilon_t \\right)\\left( p - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(\\frac{4}{3} - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t \\right)\\theta\\\\\n\t\t&\\quad + \\Big(8a_1 -8a_2 - 8a_3 + 32 A_{\\texttt{WL}} - 32 A_{\\texttt{WR}} - 32 a_1 a_1 + 0 a_1a_2 + 32 a_1a_3\\\\\n\t\t&\\quad\\quad +32 a_2a_2 + 0 a_2a_3 - 32 a_3a_3 - 128a_1 A_{\\texttt{WL}} + 0 a_1 A_{\\texttt{WR}} + \\frac{8}{3} a_1 \\epsilon_t - \\frac{8}{3} a_1\\epsilon_b\\\\\n\t\t&\\quad \\quad + 0a_2A_{\\texttt{WL}} + 128 a_2 A_{\\texttt{WR}} - \\frac{8}{3} a_2 \\epsilon_t + \\frac{8}{3} a_2 \\epsilon_b\\\\\n\t\t&\\quad \\quad + 64 a_3 A_{\\texttt{WL}} - 64 a_3 A_{\\texttt{WR}} - 8 a_3 \\epsilon_t + 8 a_3 \\epsilon_b\\\\\n\t\t&\\quad \\quad - 512 A_{\\texttt{WL}} A_{\\texttt{WL}} + 0 A_{\\texttt{WL}}A_{\\texttt{WR}} + 512 A_{\\texttt{WR}}A_{\\texttt{WR}} \\\\\n\t\t&\\quad \\quad + \\frac{32}{3} A_{\\texttt{WL}} \\epsilon_t - \\frac{32}{3} A_{\\texttt{WL}} \\epsilon_b - \\frac{32}{3} A_{\\texttt{WR}} \\epsilon_t + \\frac{32}{3} A_{\\texttt{WR}}\\epsilon_b + 0 \\epsilon_t \\epsilon_t + 0 \\epsilon_t \\epsilon_b + 0 \\epsilon_b \\epsilon_b \\Big)\\\\\n\t\t&\\quad + \\text{ cubic terms},\n\t\\end{align*}\n\t\\begin{align*}\n\n\t\t\\frac{\\partial RC}{\\partial p} \t&= \\left(-16 + 80 a_1 + 
80a_2 - 80a_3 + 40 \\epsilon_b - 40\\epsilon_t \\right)\\left( q - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(48 - 112 a_1 - 112 a_2 +112 a_3 - 256 A_{\\texttt{WL}} - 256 A_{\\texttt{WR}} - \\frac{104}{3} \\epsilon_b + \\frac{200}{3} \\epsilon_t \\right)\\left( p - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(\\frac{4}{3} - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t \\right)\\theta\\\\\n\t\t&\\quad + \\left(-8a_1 + 8a_2 + 8a_3 + 32A_{\\texttt{WL}} - 32 A_{\\texttt{WR}} +32 a_1 a_1 + 0 a_1a_2 - 32 a_1a_3\\right.\\\\\n\t\t&\\quad\\quad \\left.-32 a_2a_2 + 0 a_2a_3 + 32 a_3a_3 - 64 a_1 A_{\\texttt{WL}} + 64 a_1 A_{\\texttt{WR}} - 8 a_1 \\epsilon_t + 8 a_1\\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left. - 64 a_2A_{\\texttt{WL}} + 64 a_2 A_{\\texttt{WR}} + 8 a_2 \\epsilon_t - 8 a_2 \\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left. + 0 a_3 A_{\\texttt{WL}} - 128 a_3 A_{\\texttt{WR}} + \\frac{8}{3} a_3 \\epsilon_t - \\frac{8}{3} a_3 \\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left. - 512 A_{\\texttt{WL}} A_{\\texttt{WL}} + 0 A_{\\texttt{WL}}A_{\\texttt{WR}} + 512 A_{\\texttt{WR}}A_{\\texttt{WR}} \\right.\\\\\n\t\t&\\quad \\quad \\left. 
+ \\frac{32}{3} A_{\\texttt{WL}} \\epsilon_t - \\frac{32}{3} A_{\\texttt{WL}} \\epsilon_b + \\frac{32}{3} A_{\\texttt{WR}} \\epsilon_t + \\frac{32}{3} A_{\\texttt{WR}}\\epsilon_b + 0 \\epsilon_t \\epsilon_t + 0 \\epsilon_t \\epsilon_b + 0 \\epsilon_b \\epsilon_b \\right)\\\\\n\t\t&\\quad + \\text{ cubic terms},\n\t\\end{align*}\n\tand\n\t\\begin{align*}\n\n\t\t\\frac{\\partial RC}{\\partial \\theta} \t&= \\left(\\frac{4}{3} - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t \\right)\\left( q - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(\\frac{4}{3} - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t \\right)\\left( p - \\frac{1}{2} \\right)\\\\\n\t\t&\\quad + \\left(\\frac{7}{9} - \\frac{5}{9} a_1 - \\frac{5}{9}a_2 + \\frac{5}{9} a_3 - \\frac{32}{9}A_{\\texttt{WL}} - \\frac{32}{9}A_{\\texttt{WR}} + \\frac{1}{54}\\epsilon_b - \\frac{1}{54} \\epsilon_t \\right)\\theta\\\\\n\t\t&\\quad + \\left(\\frac{2}{3}a_1 - \\frac{2}{3}a_2 + \\frac{2}{3} a_3 + \\frac{8}{3}A_{\\texttt{WL}} - \\frac{8}{3}A_{\\texttt{WR}} - \\frac{4}{3} a_1 a_1 + 0 a_1a_2 + 0 a_1a_3\\right.\\\\\n\t\t&\\quad\\quad \\left.+ \\frac{4}{3} a_2a_2 - \\frac{4}{3} a_2a_3 + \\frac{4}{3} a_3a_3 - 8 a_1 A_{\\texttt{WL}} - \\frac{8}{3} a_1 A_{\\texttt{WR}} - \\frac{1}{9} a_1 \\epsilon_t + \\frac{1}{9} a_1\\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left.+ \\frac{8}{3} a_2A_{\\texttt{WL}} + 8 a_2 A_{\\texttt{WR}} + \\frac{1}{9} a_2 \\epsilon_t - \\frac{1}{9} a_2 \\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left. - \\frac{8}{3} a_3 A_{\\texttt{WL}} - 8 a_3 A_{\\texttt{WR}} - \\frac{1}{9} a_3 \\epsilon_t + \\frac{1}{9} a_3 \\epsilon_b\\right.\\\\\n\t\t&\\quad \\quad \\left. - \\frac{128}{3} A_{\\texttt{WL}} A_{\\texttt{WL}} + 0 A_{\\texttt{WL}}A_{\\texttt{WR}} + \\frac{128}{3} A_{\\texttt{WR}}A_{\\texttt{WR}} \\right.\\\\\n\t\t&\\quad \\quad \\left. 
- \\frac{4}{9} A_{\\texttt{WL}} \\epsilon_t + \\frac{4}{9} A_{\\texttt{WL}} \\epsilon_b + \\frac{4}{9} A_{\\texttt{WR}} \\epsilon_t - \\frac{4}{9} A_{\\texttt{WR}}\\epsilon_b + 0 \\epsilon_t \\epsilon_t + 0 \\epsilon_t \\epsilon_b + 0 \\epsilon_b \\epsilon_b \\right)\\\\\n\t\t&\\quad + \\text{ cubic terms}.\n\t\\end{align*}\n\t%\n\tWith these expansions, we can express the linearization as $J+ J_{\\sigma}$, where the terms $J_{\\sigma,ii}$ come from the partials computed above. Explicitly,\n\t\\begin{align*}\n\t\tJ_{\\sigma,11} \t&= - 112 a_1 - 112 a_2 - 256 A_{\\texttt{WL}} - 256 A_{\\texttt{WR}} - \\frac{200}{3} \\epsilon_b + \\frac{104}{3} \\epsilon_t,\\\\\n\t\tJ_{\\sigma,12}\t&= 80 a_1 + 80a_2 + 40 \\epsilon_b - 40\\epsilon_t,\\\\\n\t\tJ_{\\sigma,13} \t&= - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t,\\\\\n\t\tJ_{\\sigma,21} \t&= 80 a_1 + 80a_2 + 40 \\epsilon_b - 40\\epsilon_t,\\\\\n\t\tJ_{\\sigma,22} \t&= - 112 a_1 - 112 a_2 - 256 A_{\\texttt{WL}} - 256 A_{\\texttt{WR}} - \\frac{104}{3} \\epsilon_b + \\frac{200}{3} \\epsilon_t,\\\\\n\t\tJ_{\\sigma,23} \t&= - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t,\\\\\n\t\tJ_{\\sigma,31} \t&= - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t,\\\\\n\t\tJ_{\\sigma,32} \t&= - \\frac{32}{3}A_{\\texttt{WL}} - \\frac{32}{3}A_{\\texttt{WR}} + \\frac{8}{9} \\epsilon_b - \\frac{8}{9}\\epsilon_t,\\\\\n\t\tJ_{\\sigma,33} \t&= - \\frac{5}{9} a_1 - \\frac{5}{9}a_2 - \\frac{32}{9}A_{\\texttt{WL}} - \\frac{32}{9}A_{\\texttt{WR}} + \\frac{1}{54}\\epsilon_b - \\frac{1}{54} \\epsilon_t.\n\t\\end{align*}\n\t\n\t\n\tWith these pieces we can express the criterion for being a critical point of the Ratio Cut, incorporating linear terms in the parameters, as\n\t\\begin{align*}\n\t\t(J+J_{\\sigma}) \\begin{pmatrix} q - \\frac{1}{2}\\\\ p - 
\\frac{1}{2}\\\\ \\theta \\end{pmatrix} &= L(\\sigma)\\\\\n\t\t&:= a_1 \\begin{pmatrix} 8 \\\\ -8 \\\\ \\frac{2}{3}\\end{pmatrix} + a_2 \\begin{pmatrix}-8\\\\ 8\\\\ -\\frac{2}{3}\\end{pmatrix} + a_3 \\pmat{-8 \\\\ 8 \\\\ \\frac{2}{3}} + A_{\\texttt{WL}} \\begin{pmatrix}32 \\\\ 32 \\\\ \\frac{8}{3} \\end{pmatrix} + A_{\\texttt{WR}} \\begin{pmatrix}-32\\\\ -32 \\\\ -\\frac{8}{3}\\end{pmatrix}\\\\\n\t\t&\\quad + a_1a_1 \\pmat{-32 \\\\ 32 \\\\ - \\frac{4}{3}} + a_1a_2 \\pmat{0 \\\\ 0 \\\\ 0} + a_1a_3 \\pmat{32 \\\\ -32 \\\\ 0} \\\\\n\t\t&\\quad + a_2a_2 \\pmat{32 \\\\ -32 \\\\ \\frac{4}{3}} + a_2a_3 \\pmat{0 \\\\ 0 \\\\ -\\frac{4}{3}} + a_3a_3 \\pmat{ - 32 \\\\ 32 \\\\ \\frac{4}{3} } \\\\\n\t\t&\\quad \t+ a_1A_{\\texttt{WL}} \\pmat{ -128 \\\\ -64 \\\\ -8 } + a_1 A_{\\texttt{WR}} \\pmat{ 0 \\\\ 64 \\\\ - \\frac{8}{3} } + a_1 \\epsilon_t \\pmat{ \\frac{8}{3} \\\\ -8 \\\\ - \\frac{1}{9} } + a_1 \\epsilon_b \\pmat{ - \\frac{8}{3} \\\\ 8 \\\\ \\frac{1}{9} } \\\\\n\t\t&\\quad \t+ a_2 A_{\\texttt{WL}} \\pmat{ 0 \\\\ -64 \\\\ \\frac{8}{3} } + a_2 A_{\\texttt{WR}} \\pmat{ 128 \\\\ 64 \\\\ 8 } + a_2 \\epsilon_t \\pmat{ - \\frac{8}{3} \\\\ 8 \\\\ \\frac{1}{9} } + a_2 \\epsilon_b \\pmat{ \\frac{8}{3} \\\\ - 8 \\\\ -\\frac{1}{9} } \\\\\n\t\t&\\quad + a_3 A_{\\texttt{WL}} \\pmat{ 64 \\\\ 0 \\\\ -\\frac{8}{3} } + a_3 A_{\\texttt{WR}} \\pmat{ -64 \\\\ - 128 \\\\ -8 } + a_3 \\epsilon_t \\pmat{ -8 \\\\ \\frac{8}{3} \\\\ -\\frac{1}{9} } + a_3 \\epsilon_b \\pmat{ 8 \\\\ -\\frac{8}{3} \\\\ \\frac{1}{9} } \\\\\n\t\t&\\quad + A_{\\texttt{WL}} A_{\\texttt{WL}} \\pmat{ -512 \\\\ -512 \\\\ -\\frac{128}{3} } + A_{\\texttt{WL}} A_{\\texttt{WR}} \\pmat{ 0 \\\\ 0 \\\\ 0 } + A_{\\texttt{WR}} A_{\\texttt{WR}} \\pmat{ 512 \\\\ 512 \\\\ \\frac{128}{3} } \\\\\n\t\t&\\quad + A_{\\texttt{WL}} \\epsilon_t \\pmat{\\frac{32}{3} \\\\ \\frac{32}{3} \\\\ - \\frac{4}{9} } + A_{\\texttt{WL}} \\epsilon_b \\pmat{ - \\frac{32}{3} \\\\ - \\frac{32}{3} \\\\ \\frac{4}{9} } \\\\\n\t\t&\\quad + A_{\\texttt{WR}} 
\\epsilon_t \\pmat{ - \\frac{32}{3} \\\\ \\frac{32}{3} \\\\ \\frac{4}{9} } + A_{\\texttt{WR}} \\epsilon_b \\pmat{ \\frac{32}{3} \\\\ \\frac{32}{3} \\\\ -\\frac{4}{9} } \\\\\n\t\t&\\quad + \\epsilon_t \\epsilon_t \\pmat{0\\\\0\\\\0 } + \\epsilon_t \\epsilon_b \\pmat{0\\\\ 0\\\\ 0 } + \\epsilon_b \\epsilon_b \\pmat{0\\\\ 0 \\\\ 0 }\\\\\n\t\t&\\quad + \\text{ cubic terms in the parameters}.\n\t\\end{align*}\n\t}\n\t\\subsection*{Using the Implicit Function Theorem}\n\t\n\tSuppose $v = \\big(q - \\frac{1}{2},\\, p - \\frac{1}{2},\\, \\theta\\big)$ is a critical triple of values for the Ratio Cut, i.e. $(J+J_{\\sigma})v = L(\\sigma)$. Then near $\\sigma = 0$, we can solve for $v$ in terms of the parameters in $\\sigma$:\n\t\\begin{align*}\n\t\tv \t&= (J+J_{\\sigma})^{-1} L(\\sigma)\\\\\n\t\t\t&= \\pmat{\\frac{1}{12}(a_1 - a_2 - 2a_3) + (A_{\\texttt{WL}} - A_{\\texttt{WR}}) \\\\ -\\frac{1}{12} (2a_1 - 2a_2 - a_3) + (A_{\\texttt{WL}} - A_{\\texttt{WR}}) \\\\ a_1 - a_2 + a_3} + \\text{quadratic terms in the parameters}.\n\t\\end{align*}\n\t%\n\tWe will not write out the full $J^{-1}L(\\sigma)$ here, though it is used for the approximations later and its form becomes clear in the representations below.\n\t%\n\t\n\t\n\tTo summarize, we have shown in this section that, near the critical point $(p,q,\\theta) = \\left( \\frac{1}{2}, \\frac{1}{2}, 0 \\right)$ of the ratio cut, and for $\\sigma=(a_1, a_2, a_3, \\epsilon_t, \\epsilon_b, A_{\\texttt{WL}}, A_{\\texttt{WR}})$ near $0$, we have \n\t\\begin{align*}\n\t\t\\begin{pmatrix} q - \\frac{1}{2} \\\\ p - \\frac{1}{2} \\\\ \\theta \\end{pmatrix} = \\pmat{\\frac{1}{12}(a_1 - a_2 - 2a_3) + (A_{\\texttt{WL}} - A_{\\texttt{WR}}) \\\\ -\\frac{1}{12} (2a_1 - 2a_2 - a_3) + (A_{\\texttt{WL}} - A_{\\texttt{WR}}) \\\\ a_1 - a_2 + a_3}+ \\text{quadratic terms in the parameters}.\n\t\\end{align*}\n\t\n\t\t\n\t\\section{Are our domains generic?}\n\t\n\tSince Ratio Cuts are circular, it seems more natural to prove Theorem 2 for circular, instead of parabolic, boundary curves. 
Using circular arcs, however, proved to be intractable for our computational methods, whereas parabolic arcs could be used. This section shows that such approximations incur only third-order errors, and thus Theorem 2 still applies to circular boundary curves. The first lemma shows that this approximation is valid near the intersection of the cut with the original domain's boundary. \n\t\n\t\\begin{lemma}\n\tProvided $I(Q)$ defined in \\eqref{Definition I(Q)} is sufficiently small, we may approximate the top and bottom curves by parabolas in a neighborhood of the Ratio Cut.\n\t\\end{lemma}\n\t\n\t\\begin{proof}\n\tSmallness of $I(Q)$ ensures that the domain is approximately trapezoidal with (for example) a $1:2$ aspect ratio, and that the top and bottom curves are $C^3$ with small oscillations on a uniform spatial scale. Hence, Taylor expanding the top and bottom curves out to second order on this scale near $p=1\/2, q = 1\/2$, we can create a parabolic approximation that is accurate up to $O( |p-\\frac12|^3, |q-\\frac12|^3)$ in this region. Potentially modifying the left and right wings in order to ensure that the wings of the domain have the appropriate error outside this uniform neighborhood, we see that these curvilinear trapezoids have ratio cuts that are indeed approximated up to lower-order terms in the same fashion as our exact parabolic trapezoid calculations. \n\t\\end{proof}\n\t\t\n\tIn practice, we are given a domain $Q$ whose top and bottom boundary curves are more general to start with, and upon iteration of our domains we are most interested in the case when the sides are given by circular arcs. To handle sufficiently smooth domains of this form in full generality, we can construct a parabolic trapezoid by: rescaling the domain to fit our aspect ratio; locally using a parabolic approximation for the top and bottom curves; and finally cutting off the left and right portions of $Q$, which are treated as black-box regions. 
The Ratio Cut function sees these regions as general wing areas, denoted $A_{\\texttt{WL}}$ and $A_{\\texttt{WR}}$ for the left- and right-wing areas respectively. The distance measure from the rectangle, $I(Q)$, which quantifies the deflection of our domains, allows us to approximate the boundary of our quadrilateral by parabolic curves with errors that are higher order in the parameter space. \n\t\t\n\t\\subsection{Can a circular arc be parabolically approximated?}\n\t\n\tOptimal cuts for generic parabolic trapezoids are circular. Because of this, we would expect the top and bottom boundary curves of the next iteration of the domains we have been considering to be circular, not parabolic. Here we show that parabolic arcs approximate circular arcs well, and that the ratio cut for circular boundary curves differs from the ratio cut for parabolic boundary curves at third order and higher. Thus, working with parabolic curves does not incur any significant error in the ratio cut series expansion.\n\t\n\t\\begin{lemma}\n\t Up to higher order error terms in the curve parameters, we may approximate the top and bottom circular arc curves by parabolas enclosing the correct area.\n\t \\end{lemma}\n\t \n\t The remainder of this section will be devoted to the proof of this Lemma.\n\t\n\t\\subsubsection{Approximating Circular Arcs} \n\tFirst let us consider the most important example for our iterated domain conjecture, in which the top and bottom boundaries are circular arcs. 
In what follows, the top and bottom parabolic curves will be, respectively,\n\t\\begin{align*}\n\t\ty_T^P(x) \t&= \\epsilon_t x^2 + (a_2 - a_1 - \\epsilon_t)x+ \\left( a_1 + \\frac{1}{2} \\right),\\\\\n\t\ty_B^P(x) \t&= \\epsilon_b x^2 + (a_3 - \\epsilon_b)x.\n\t\\end{align*}\n\t\n\tThe parameters $a_1, a_2, a_3$ denote vertical perturbations of the top-left, top-right, and bottom-right vertices of a rectangle, so our parabolic trapezoid has vertices $(0,0)$, $\\left(0, \\frac{1}{2}+a_1 \\right)$, $\\left(1, \\frac{1}{2}+a_2 \\right),$ and $\\left(1, a_3 \\right)$. The terms $\\epsilon_t$ and $\\epsilon_b$ are curvature parameters that specify the shape of the two boundary curves.\n\t\n\tThe circular arcs are given by the formulas\n\t\\begin{align*}\n\t\ty_T^C(x) &= c_{y,t} + \\sqrt{r_t^2 - (x-c_{x,t})^2}, \\\\\n\t\ty_B^C(x) \t&= c_{y,b} - \\sqrt{r_b^2 - (x-c_{x,b})^2},\n\t\\end{align*}\n\twhere $r_i$ and $(c_{x,i},c_{y,i})$ are the radius and center of the corresponding top ($i=t$) or bottom ($i=b$) circle, and $\\theta_t$ and $\\theta_b$ denote the angles subtended by the top and bottom arcs:\n\t\n\t\\begin{align*}\n\t\tr_t &= \\frac{\\sqrt{1 + (a_1-a_2)^2}}{2 \\sin(\\frac{\\theta_t}{2})},\\\\\n\t\tr_b &= \\frac{\\sqrt{1+a_3^2}}{2 \\sin(\\frac{\\theta_b}{2})},\n\t\\end{align*}\n\tand the center point coordinates are found from the systems of equations (for $i=t$ and $i=b$)\n\t\\begin{align*}\n\t\t&\\begin{cases}\n\t\t(c_{x,t})^2 + (\\frac{1}{2} + a_1 - c_{y,t})^2 \t= r_t^2,\\\\\n\t\t(1-c_{x,t})^2 + (\\frac{1}{2}+a_2 - c_{y,t})^2 \t= r_t^2, \n\t\t\\end{cases}\\\\\n\t\t\\text{and } \n\t\t&\\begin{cases}\n\t\t(c_{x,b})^2 + (c_{y,b})^2 \t= r_b^2,\\\\\n\t\t(1-c_{x,b})^2 + (a_3 - c_{y,b})^2 \t= r_b^2.\n\t\t\\end{cases}\n\t\\end{align*}\n\t\n\tAs an explicit example, expressing $y_B^C$ in terms of the parameters gives\n\t\\begin{align*}\n\n\t\ty_B^C(x) \t&= \\frac{a_3}{2} + \\cot\\left( \\frac{\\theta_b}{2} \\right) - \\left( \\frac{1}{1+a_3^2} \\left( a_3^4 - (1-2x)^2 - 4a_3^2 (x-1) x \\right. \\right. \\\\\n\t\t&\\quad \\left.\\left. 
+ 2a_3 (1+a_3^2) (1-2x) \\cot\\left( \\frac{\\theta_b}{2} \\right) + (1+a_3^2) \\csc^2\\left( \\frac{\\theta_b}{2} \\right) \\right) \\right)^{\\frac{1}{2}}.\n\t\\end{align*}\n\t\n\tThe next lemma shows how to choose $\\epsilon_t$ and $\\epsilon_b$ so that the parabolic trapezoid's area approximates the circular trapezoid's area up to third order. These curvature terms are chosen so that the quadrilateral's parabolic caps have, up to third order terms, the same area as the corresponding circular caps.\n\t\n\t\\begin{lemma}\n\tLet $l_T(x) = (1-x)\\left(\\frac{1}{2}+a_1 \\right) + x \\left(\\frac{1}{2}+a_2 \\right)$, and write \n\t\\begin{equation}\n\tS_T^C = \\{ (x,y) \\colon l_T(x) \\leq y \\leq y_T^C(x) \\}, \\,\\, S_T^P = \\{ (x,y) \\colon l_T(x) \\leq y \\leq y_T^P(x) \\}\n\t\\end{equation} \n\tfor the regions bounded below by the straight line $\\{ (x, l_T(x)) \\colon x \\in [0,1] \\}$ and above by the curves $\\{ (x, y_T^i(x)) \\colon x\\in[0,1] \\}$, $i = C$ or $P$, respectively. Similarly, let $l_B(x) = x a_3,$ and write\n\t \\begin{equation}\n\tS_B^C = \\{ (x,y) \\colon y_B^C(x) \\leq y \\leq l_B(x) \\},\\,\\, S_B^P = \\{ (x,y) \\colon y_B^P(x) \\leq y \\leq l_B(x) \\}\n\t\\end{equation} \n\tfor the regions bounded above by the straight line $\\{ (x, l_B(x)) \\colon x \\in [0,1] \\}$ and below by the curves $\\{ (x, y_B^i(x)) \\colon x\\in[0,1] \\}$, $i = C$ or $P$, respectively.\n\t\n\t\tFor $\\epsilon_t = -\\frac{1+(a_1-a_2)^2}{2}\\theta_t$ and $\\epsilon_b = \\frac{1+a_3^2}{2}\\theta_b$, we have\n\t\t\\begin{align*}\n\t\t\t|S_T^C| -|S_T^P| \t&= O(\\theta_t^3)\n\t\t\t\\end{align*}\n\t\t\tand\n\t\t\t\\begin{align*}\n\t\t\t\t\t\t|S_B^C| - |S_B^P| \t&= O(\\theta_b^3) .\n\t\t\\end{align*}\n\t\\end{lemma}\n\t\n\t\\begin{proof}\n\t\tBoth $|S_T^C|$ and $|S_B^C|$ can be found using basic trigonometry: the area of a circular segment with radius $r$ and angle $\\theta$ is $\\frac{1}{2} r^2 (\\theta - \\sin(\\theta))$. 
In our case,\n\t\t\\begin{align*}\n\t\t\t|S_T^C| \t&= \\frac{1}{2} \\left(\\frac{\\sqrt{1+(a_1-a_2)^2}}{2\\sin(\\frac{\\theta_t}{2})} \\right)^2 (\\theta_t - \\sin(\\theta_t) )\\\\\n\t\t\t\t&= \\frac{1+(a_1-a_2)^2}{12}\\theta_t + \\frac{1+(a_1-a_2)^2}{360}\\theta_t^3 + O(\\theta_t^4),\\\\\n\t\t\t|S_B^C| \t&= \\frac{1}{2} \\left(\\frac{\\sqrt{1+a_3^2}}{2\\sin(\\frac{\\theta_b}{2})} \\right)^2(\\theta_b - \\sin(\\theta_b))\\\\\n\t\t\t\t&= \\frac{1+a_3^2}{12} \\theta_b + \\frac{1+a_3^2}{360} \\theta_b^3 + O(\\theta_b^4).\n\t\t\\end{align*}\n\t\t\n\t\tFor $|S_T^P|$ and $|S_B^P|$, we integrate:\n\t\t\\begin{align*}\n\t\t\t|S_T^P| \t&= \\int_{0}^1 y_T^P(x) - l_T(x)dx\\\\\n\t\t\t\t&= \\int_0^1 \\epsilon_t x^2 + (a_2 - a_1 - \\epsilon_t)x + \\left( \\frac{1}{2} + a_1 \\right) - (1-x)\\left( \\frac{1}{2}+a_1 \\right) - x\\left(\\frac{1}{2}+a_2 \\right) dx\\\\\n\t\t\t\t&= -\\frac{\\epsilon_t}{6}\n\t\t\t\t\\end{align*}\n\t\tand \n\t\t\\begin{align*}\n\t\t |S_B^P| \t&= \\int_0^1 l_B(x) - y_B^P(x) dx\\\\\n\t\t\t&= \\int_0^1 a_3 x -\\epsilon_b x^2 - (a_3 - \\epsilon_b)x dx\n\t\t\t= \\frac{\\epsilon_b}{6}.\n\t\t\\end{align*}\n\t\tWe want to choose constants $C_t$ and $C_b$ so that setting $\\epsilon_t = C_t \\theta_t$ and $\\epsilon_b = C_b \\theta_b$ gives us $|S_T^C|-|S_T^P| = O(\\theta_t^3)$ and $|S_B^C| - |S_B^P| = O(\\theta_b^3)$. Indeed, letting $\\epsilon_t = -\\frac{1+(a_1-a_2)^2}{2}\\theta_t$ and $\\epsilon_b = \\frac{1+a_3^2}{2}\\theta_b$ gives us this result.\n\t\\end{proof}\n\t\n\tSince the parabolic and circular trapezoids only differ in the type of caps on each, making these choices for $\\epsilon_t$ and $\\epsilon_b$ ensures that the areas of the circular and parabolic trapezoids agree up to third-order errors. 
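The area matching in the lemma can also be checked numerically. The following sketch (with an illustrative value standing in for $a_1-a_2$) confirms that, with $\epsilon_t=-\frac{1+(a_1-a_2)^2}{2}\theta_t$, the discrepancy between the circular and parabolic cap areas is $\frac{1+(a_1-a_2)^2}{360}\theta_t^3 + O(\theta_t^5)$, as predicted by the expansions above:

```python
import math

def circular_cap_area(chord_sq, theta):
    """|S_T^C|: circular-segment area over a chord of squared length
    chord_sq, subtending angle theta."""
    r = math.sqrt(chord_sq) / (2.0 * math.sin(theta / 2.0))
    return 0.5 * r**2 * (theta - math.sin(theta))

def parabolic_cap_area(eps_t):
    """|S_T^P| = -eps_t / 6 from the explicit integral above."""
    return -eps_t / 6.0

da = 0.05                  # illustrative stand-in for a_1 - a_2
k = 1.0 + da**2            # squared chord length of the top edge
for theta_t in (0.4, 0.2, 0.1):
    eps_t = -0.5 * k * theta_t   # curvature choice from the lemma
    diff = circular_cap_area(k, theta_t) - parabolic_cap_area(eps_t)
    # discrepancy should be k * theta_t^3 / 360 up to O(theta_t^5)
    assert abs(diff - k * theta_t**3 / 360.0) < theta_t**5
```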
In addition, the boundary curves agree up to third order, and so the left areas and ratio cut lengths also agree to at least third order (in the parameters).\n\t\\begin{cor} \n\t\tSetting $\\epsilon_t = -\\frac{1+(a_1-a_2)^2}{2}\\theta_t$ and $\\epsilon_b = \\frac{1+a_3^2}{2}\\theta_b$, we get the following approximations:\n\t\t\\begin{align*}\n\t\ty_T^C(x) - y_T^P(x) &= \\frac{1}{48}(1+(a_1-a_2)^2) \\theta_t^2 \\left( O(a_1) + O(a_2) + O(\\theta_t) \\right),\\\\\n\t\ty_B^C(x) - y_B^P(x) &= \\frac{1}{48} (1+a_3^2) \\theta_b^2 (O(a_3) + O(\\theta_b)).\n\t\t\\end{align*}\n\t\t\n\t\tMoreover, we have\n\t\t\\begin{align*}\n\t\tA_T^C \t&= A_T^P + O(\\theta_t^3) + O(\\theta_b^3),\\\\\n\t\t\\text{and } A_L^C \t&= A_L^P + O(\\theta_t^2)(O(a_1) + O(a_2)) + O( \\theta_b^2)O(a_3),\n\t\t\\end{align*}\n\t\twhere $A_T^i$ and $A_L^i$ are, respectively, the total area and left area of the parabolic ($i=P$) or circular ($i=C$) trapezoid, and\n\t\t\\begin{align*}\n\t\t\t\\|(p,y_T^C(p)) - (q,y_B^C(q)) \\| \t&= \\|(p,y_T^P(p)) - (q,y_B^P(q))\\| + O(\\theta_t^3)+O(\\theta_b^3).\n\t\t\\end{align*}\n\t\\end{cor}\n\t\n\tThe corollary justifies our use of only quadratic terms in the Ratio Cut Taylor expansion, since the parabolic and circular Ratio Cuts (i.e. Ratio Cut functions of parabolic or circular trapezoids) agree up to third order in the parameters of the domains, given the proper curvature term substitutions.\n\t\n\t\\begin{proof}\n\t\tThe parabolic and circular boundary curve approximations follow from Taylor expansions.\n\t\t%\n\t\tFor the areas, recall that the parabolic and circular trapezoids differ only in their ``caps''. 
Thus, assuming $a_1 B_\\star^{\\rm drip}\\equiv \\frac{1}{2}\\left(\\frac{ \\mu_e^{\\rm drip}(A,Z)}{m_e c^2}\\right)^2 \n\\biggl[1-\\frac{8 C \\alpha Z^{2\/3}}{3(2 \\pi^2)^{1\/3}} \\biggr] \\, .\n\\end{equation}\n\n\n\\subsection{Accreting neutron stars}\n\nIn accreting neutron stars, the magnetic field is typically negligibly small ($B\\ll B_\\star$) and will thus be ignored. \nFor an accretion rate $\\dot{M}=10^{-9}~{\\rm M_\\odot}$~yr$^{-1}$ the original outer crust is replaced by accreted matter in $10^4$~yr.\nFor low-mass binary systems, the accretion stage can last for $10^9$~yr.\nAt densities above $\\sim 10^8$~g~cm$^{-3}$, matter is highly degenerate and relatively cold ($T\\lesssim 5\\times 10^8$~K) so that thermonuclear processes are strongly suppressed, since their rates are many orders of magnitude lower than the compression rate due to accretion \\cite{haensel2007}. The only relevant \nprocesses are electron captures and neutron-emission processes, whereby the nucleus $^A_ZX$ is transformed into a nucleus $^{A-\\Delta N}_{Z-1}Y$ with \nproton number $Z-1$ and mass number $A-\\Delta N$ by capturing an electron with the emission of $\\Delta N$ neutrons $n$ and an electron neutrino $\\nu_e$:\n\\begin{equation}\n\\label{eq:e-capture+n-emission}\n^A_ZX+ e^- \\rightarrow ^{A-\\Delta N}_{Z-1}Y+\\Delta N n+ \\nu_e\\, .\n\\end{equation}\nIn this case, the condition for the onset of neutron drip becomes~\\cite{chamel2015a}\n\\begin{eqnarray}\n\\label{eq:e-capture+n-emission-gibbs-approx}\n\\mu_e + C e^2 n_e^{1\/3}\\biggl[Z^{5\/3}-(Z-1)^{5\/3} + \\frac{1}{3} Z^{2\/3}\\biggr] = \\mu_e^{\\rm drip-acc} \\, ,\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\label{eq:muebetan}\n\\mu_e^{\\rm drip-acc}(A,Z)\\equiv M'(A-\\Delta N,Z-1)c^2-M'(A,Z)c^2 +m_n c^2 \\Delta N + m_e c^2 \\, .\n\\end{equation}\nThe neutron-drip density and pressure are approximately given by~\\cite{chamel2015a}\n\\begin{equation}\n\\label{eq:ndrip-acc}\nn_{\\rm drip-acc}(A,Z) \\approx \\frac{A}{Z} 
\\frac{\\mu_e^{\\rm drip-acc}(A,Z)^3}{3\\pi^2 (\\hbar c)^3} \n \\biggl[1+\\frac{C \\alpha}{(3\\pi^2)^{1\/3}}\\left(Z^{5\/3}-(Z-1)^{5\/3}+\\frac{Z^{2\/3}}{3}\\right)\\biggr]^{-3}\\, ,\n\\end{equation}\n\\begin{equation}\n\\label{eq:Pdrip-acc}\nP_{\\rm drip-acc}(A,Z) \\approx \\frac{\\mu_e^{\\rm drip-acc}(A,Z)^4}{12 \\pi^2 (\\hbar c)^3}\\biggl[1+\\frac{4C \\alpha Z^{2\/3}}{(81\\pi^2)^{1\/3}} \\biggr]\n \\biggl[1+\\frac{C \\alpha}{(3\\pi^2)^{1\/3}}\\left(Z^{5\/3}-(Z-1)^{5\/3}+\\frac{Z^{2\/3}}{3}\\right)\\biggr]^{-4}\\, .\n\\end{equation}\nAs discussed in Ref.~\\cite{chamel2015a}, the dripping nucleus can be determined as follows. Given the mass number $A$ and the initial atomic number \n$Z_0$ of the ashes of x-ray bursts, the atomic number $Z$ at the neutron-drip point is the highest number of protons lying below $Z_0$ for which the $\\Delta N$-neutron \nseparation energy defined as\n\\begin{equation}\n\\label{eq:sn-dn}\nS_{\\Delta N n}(A,Z-1) \\equiv M(A-\\Delta N,Z-1)c^2 - M(A,Z-1)c^2 + \\Delta N m_n c^2\n\\end{equation}\nis negative. \n\n\n\n\\section{Numerical results}\n\\label{sec:results}\n\n\\subsection{Nonaccreting neutron stars}\n\\label{sec:results-nonaccreting}\n\nWe have calculated the properties of neutron-star crusts at the neutron-drip point by minimizing the Gibbs free energy per nucleon~(\\ref{eq:gibbs}), \nboth in the absence and in the presence of a strong magnetic field. In the latter case, we have set $B_\\star=500$, $1000$, $1500$ and $2000$ corresponding \nto magnetic fields in the range $2.2 \\times 10^{16}$~G to $8.8 \\times 10^{16}$~G. Since nuclear masses at this depth of the outer crust are not experimentally \nknown, the predictions for the dripping nucleus are model dependent. The neutron-drip properties are summarized in Table~\\ref{tab:drip-cat} in \nunmagnetized neutron stars, and in Tables~\\ref{tab:drip-cat-mag-500}-\\ref{tab:drip-cat-mag-2000} in strongly magnetized neutron stars (magnetars). 
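As an order-of-magnitude check on the drip-density formulas above, the leading free-Fermi-gas factor alone, $n_{\rm drip}\approx (A/Z)\,\mu_e^3/\big(3\pi^2(\hbar c)^3\big)$, already reproduces the familiar scale of the neutron-drip point. The sketch below is illustrative only: it neglects the small electrostatic correction brackets and adopts round values $\mu_e^{\rm drip}\simeq 26$~MeV and $(A,Z)=(118,36)$, of the order of typical mass-model predictions, rather than entries taken from the tables:

```python
import math

HBARC = 197.3269718      # MeV fm
M_U   = 1.66053907e-24   # g, atomic mass unit

def drip_density(mu_e_mev, A, Z):
    """Leading-order neutron-drip baryon density and mass density,
    neglecting the O(alpha) electrostatic correction bracket."""
    n_e = mu_e_mev**3 / (3.0 * math.pi**2 * HBARC**3)  # electrons per fm^3
    n_b = (A / Z) * n_e                                 # baryons per fm^3
    rho = n_b * M_U * 1e39                              # g cm^-3 (1 fm^-3 = 1e39 cm^-3)
    return n_b, rho

# Illustrative inputs: mu_e ~ 26 MeV and (A, Z) = (118, 36) are round
# numbers of the order predicted by the mass models discussed in the text.
n_b, rho = drip_density(26.0, 118, 36)
assert 2e-4 < n_b < 3e-4      # fm^-3
assert 3e11 < rho < 5e11      # g cm^-3, canonical neutron-drip scale
```

This yields $n_{\rm drip}\approx 2.5\times 10^{-4}$~fm$^{-3}$ and $\rho_{\rm drip}\approx 4\times 10^{11}$~g~cm$^{-3}$, close to the canonical neutron-drip density.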
\n\nFigure~\\ref{fig:cat-ndrip-B} shows that for any given value of the magnetic field strength, the neutron-drip density increases almost linearly with the slope \nof the symmetry energy $L$ (or equivalently with $J$ since the two coefficients are strongly correlated, as previously discussed in Sec.~\\ref{sec:hfb-models}). \nOn the other hand, the behavior of the neutron-drip density with respect to the magnetic field strength exhibits typical quantum oscillations, whereas the \nneutron-drip pressure increases monotonically, as recently discussed in Ref.~\\cite{chamel2015b}. The errors of the analytical formulas~(\\ref{eq:ndrip-cat-approx})-(\\ref{eq:Pdrip-cat-approx}) amount to $0.1\\%$ at most, as compared to the numerical solution of Eq.~(\\ref{eq:n-drip-mue}). The proton fraction $Z\/A$ \nat the neutron-drip point is also found to be strongly correlated with the symmetry energy. As shown in the right panel of Fig.~\\ref{fig:cat-AZ-B}, $Z\/A$ decreases \nalmost linearly with increasing $L$ (or $J$). Similar behaviors of $Z\/A$ and $n_{\\rm drip}$ with $L$ have been recently obtained in Ref.~\\cite{bao2014}, and \ncan be inferred from the discussions in Refs.~\\cite{roca2008,provid2014,grill2014}. Nevertheless, all these studies considered the limiting case $B_\\star=0$. \nIn Ref.~\\cite{bao2014}, the authors studied the role of the symmetry energy on the properties of neutron-star crusts around \nthe neutron-drip threshold using two sets of relativistic mean field (RMF) models based on the TM1 and IUFSU parametrizations, respectively. They generated \na series of models so as to achieve different values of $L$ while keeping the symmetry energy at $n=0.11$~fm$^{-3}$ fixed. In our case, the fixed value of the symmetry \nenergy at $n\\approx 0.11$~fm$^{-3}$ results from the mass fit without any further constraint. 
Although the variations of $Z\/A$ and $n_{\\rm drip}$ they found \nare nonlinear over their range of values of $L$, the variations become almost linear over the narrower range we consider (from about 37~MeV to about 69~MeV). \nEven though it has been found that a soft symmetry energy favors neutron drip in isolated nuclei~\\cite{todd2003}, this result does not necessarily imply the observed \ncorrelation between $n_{\\rm drip}$ and $L$. Indeed, as recently discussed in Ref.~\\cite{chamel2015a}, the dripping nucleus in the crust is actually stable \nagainst neutron emission, but unstable against electron captures followed by neutron emission. In fact, as will be discussed in Sec.~\\ref{sec:results-accreting}, \naccreting neutron-star crusts exhibit different correlations between $n_{\\rm drip}$ and $L$. \n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_cat_22-25_B_ndrip.eps}\n\\end{center}\n\\caption{(Color online) Neutron-drip density in nonaccreting neutron-star crusts as a function of the slope $L$ of the symmetry energy of infinite homogeneous nuclear matter at saturation\nand for different magnetic field\nstrengths, as obtained using the HFB-22 to HFB-25 Brussels-Montreal nuclear mass models~\\cite{goriely2013}.}\n\\label{fig:cat-ndrip-B}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_cat_22-25_AZ_B.eps}\n\\end{center}\n\\caption{(Color online) (a) Mass number $A$ and (b) proton fraction $Z\/A$ at the neutron-drip transition for nonaccreting neutron-star crusts as a function of the slope $L$ of the \nsymmetry energy of infinite homogeneous nuclear matter at saturation, \nas obtained using the HFB-22 to HFB-25 Brussels-Montreal nuclear mass models~\\cite{goriely2013}. 
Squares (circles) correspond to $B_\\star=0, 500$ and $1500$ \n($B_\\star =1000$ and $2000$).}\n\\label{fig:cat-AZ-B}\n\\end{figure*}\n\n\nThe role of the symmetry energy on the properties of the crust at the neutron-drip transition can be understood as follows. Neglecting electron-ion interactions, \nthe neutron-drip condition~(\\ref{eq:n-drip-mue}) reduces to\n\\begin{equation}\n\\label{eq:mue-approx}\n\\mu_e\\approx \\mu_e^{\\rm drip} = m_e c^2 + \\frac{A}{Z}\\left(m_n c^2- \\frac{M^\\prime(A,Z)c^2}{A}\\right)\\, .\n\\end{equation}\nFor the sake of simplicity, let us consider a two-parameter mass formula: \n\\begin{equation}\n\\label{eq:2par-ldm}\nM^\\prime(A,Z)c^2=A \\left[a_{\\rm eff}+J_{\\rm eff} \\left(1-2 \\frac{Z}{A} \\right)^2+m_u c^2 \\right] + Z m_e c^2\\, ,\n\\end{equation}\nwhere $m_u$ is the unified mass unit, $a_{\\rm eff}<0$ is the contribution from charge-symmetric matter, while the deviations introduced by \nthe charge asymmetry are embedded in the coefficient $J_{\\rm eff}>0$. Note that due to nuclear surface effects the values of these coefficients \ndo not need to be the same as their corresponding values in infinite homogeneous nuclear matter at saturation. In particular, as already discussed \nin Sec.~\\ref{sec:hfb-models}, the ``effective'' symmetry energy coefficient $J_{\\rm eff}$ is expected to be smaller than $J$, and to decrease \nwith increasing $J$ or $L$. Minimizing $g$ using Eq.~(\\ref{eq:mue-approx}) and the mass formula~(\\ref{eq:2par-ldm}), the equilibrium proton fraction \nat the neutron-drip transition is approximately given by~\\cite{chamel2012} \n\\begin{equation}\n\\label{eq:zovera-cat}\n\\frac{Z}{A}\\approx \\frac{1}{2}\\sqrt{1+\\frac{a_{\\rm eff}}{J_{\\rm eff}}}\\, .\n\\end{equation}\nThis shows that $Z\/A$ decreases with increasing $L$ (decreasing $J_{\\rm eff}$). Note that in this analysis we have not made any assumption \nregarding the magnetic field. 
In other words, the correlation between $Z\/A$ and $L$ (or $J$) is expected to be independent of the magnetic \nfield strength (at least at the level of accuracy of the simple mass formula considered here), in agreement with the results plotted in the right panel of \nFig.~\\ref{fig:cat-AZ-B}. It follows from Eq.~(\\ref{eq:2par-ldm}) that decreasing $J_{\\rm eff}$ decreases $M^\\prime(A,Z)$ (the energy cost associated \nwith charge asymmetry is reduced, therefore nuclei are more bound). Using Eq.~(\\ref{eq:mue-approx}), we find that $\\mu_e^{\\rm drip}$ increases with \n$L$. Since $n_{\\rm drip}$ and $P_{\\rm drip}$ increase with $\\mu_e^{\\rm drip}$, as shown in Eqs.~(\\ref{eq:ndrip-cat-approx}) and (\\ref{eq:Pdrip-cat-approx})\nin the absence of magnetic field, and in Eqs.~(\\ref{eq:ndrip-cat-approx-b}) and (\\ref{eq:Pdrip-cat-approx-b}) in the presence of a strongly quantizing \nmagnetic field, we can conclude that the neutron-drip transition is shifted to higher density and pressure with increasing symmetry energy, as \nshown in Fig.~\\ref{fig:cat-ndrip-B}. \n\n\nThe equilibrium nucleus at the neutron-drip transition is less sensitive to the symmetry energy, as previously noticed in Ref.~\\cite{bao2014} in the absence \nof magnetic fields (see their Fig.~9). This can be understood as follows. The equilibrium with respect to weak interaction processes requires \n\\begin{equation}\n\\label{eq:betaeq}\n\\mu_p+\\mu_e=\\mu_n\\, ,\n\\end{equation}\nwhere $\\mu_p$ ($\\mu_n$) is the proton (neutron) chemical potential. Substituting the neutron-drip value of the neutron chemical potential $\\mu_n=m_n c^2$ in\nEq.~(\\ref{eq:betaeq}) and using Eq.~(\\ref{eq:mue-approx}), we obtain\n\\begin{equation}\n\\label{eq:mup}\n\\mu_p-m_p c^2 = Q_{n,\\beta}+\\frac{A}{Z}\\left(\\frac{M^\\prime(A,Z)c^2}{A}-m_n c^2 \\right)\\, ,\n\\end{equation}\nwhere $m_p$ is the proton mass and $Q_{n,\\beta} = 0.782$~MeV is the $\\beta$-decay energy of the neutron. 
The quantity on the left-hand side of Eq.~(\\ref{eq:mup}) \nis approximately equal to minus the one-proton separation energy. This shows that the equilibrium nucleus is determined by nuclear masses alone, \nand is thus sensitive to the details of the nuclear structure. As a consequence, the predicted nucleus depends on the nuclear mass model employed (see, e.g., \nRefs.~\\cite{roca2008,pearson2011,wolf2013,kreim2013,chamel2015c}). \nTo better illustrate this point, we have plotted in Fig.~\\ref{fig:mass_diff} the differences in \nthe mass predictions of the HFB-22 and HFB-25 models relative to HFB-24 for the two isotopic chains corresponding to the proton numbers at the neutron-drip \npoint (see also Table~\\ref{tab:drip-cat}). As shown in Fig.~\\ref{fig:mass_diff}, the HFB-22 model deviates more significantly from the ``reference'' \nmodel HFB-24 than HFB-25 does, thus explaining the quantitative differences in the dripping nucleus. The variations of $Z$ and $A$ with $L$ we find \nappear to be more irregular than those shown in Fig.~9 of Ref.~\\cite{bao2014}. This stems from the fact that in Ref.~\\cite{bao2014} nuclear masses were \ncalculated using the semi-classical Thomas-Fermi approximation, which does not take into account pairing and shell effects, unlike the fully quantum-mechanical \nmass models~\\cite{goriely2013} employed here. \n\n\nThe presence of a strong magnetic field can change the composition at the neutron-drip point, as shown in the left panel of Fig.~\\ref{fig:cat-AZ-B} \n(see also Tables~\\ref{tab:drip-cat-mag-500}-\\ref{tab:drip-cat-mag-2000}). \nHowever, this behavior is observed only for the nuclear mass model HFB-22. In particular, the equilibrium nucleus is $^{122}$Kr for $B_\\star=0, 500$ and $1500$, \nwhile for $B_\\star=1000$ and $2000$ it is $^{128}$Sr. These results can be understood as follows. 
As discussed in Sec.~\\ref{sec:neutron-drip}, the equilibrium \nnucleus at the neutron-drip pressure $P_{\\rm drip}$ must minimize the Gibbs free energy per nucleon; therefore we must have \n\\begin{equation}\\label{eq:stability}\ng(A,Z,P_{\\rm drip}) < g(A^\\prime,Z^\\prime,P_{\\rm drip})\\, ,\n\\end{equation}\nfor any $(A^\\prime,Z^\\prime)\\neq (A,Z)$. This condition can be approximately expressed as~\\cite{chamel2015b}\n\\begin{equation}\\label{eq:dripping-nucleus}\n\\frac{M^\\prime(A^\\prime,Z^\\prime)c^2}{Z^\\prime} - \\frac{M^\\prime(A,Z)c^2}{Z} > \\left(\\frac{A^\\prime}{Z^\\prime}-\\frac{A}{Z}\\right) m_n c^2 + C e^2 n_e^{1\/3}\\left( Z^{2\/3}-Z^{\\prime\\, 2\/3}\\right)\\, ,\n\\end{equation}\nwhere the electron density $n_e$ has to be determined from Eq.~(\\ref{eq:n-drip-mue}). Equation~(\\ref{eq:dripping-nucleus}) can be equivalently written \nas $n_e < n_e^0$, where \n\\begin{equation}\\label{eq:ne0}\nn_e^0\\equiv \\biggl[\\frac{M^\\prime(A^\\prime,Z^\\prime)c^2}{Z^\\prime} - \\frac{M^\\prime(A,Z)c^2}{Z} - \\left(\\frac{A^\\prime}{Z^\\prime}-\\frac{A}{Z}\\right) m_n c^2\\biggr]^3\n\\biggl[C e^2 \\left( Z^{2\/3}-Z^{\\prime\\, 2\/3}\\right)\\biggr]^{-3}\\, .\n\\end{equation}\nThe HFB-22 nuclear mass model predicts very similar values for the \nthreshold electron Fermi energy $\\mu_e^{\\rm drip}$ for the nuclei $^{128}$Sr and $^{122}$Kr: $24.970$~MeV and $25.006$~MeV, respectively. Substituting the theoretical \nvalues of the masses of $^{128}$Sr and $^{122}$Kr in Eq.~(\\ref{eq:ne0}) with $Z=36$, $A=122$, $Z^\\prime=38$, $A^\\prime=128$, we obtain $n_e^0\\approx 8.54\\times 10^{-5}$~fm$^{-3}$. \nDue to Landau quantization of electron motion, $n_e$ varies non-monotonically with $B_\\star$. As a consequence, the lattice term in Eq.~(\\ref{eq:dripping-nucleus}) \ncan become comparable to the other terms, depending on $B_\\star$, to the effect that the condition (\\ref{eq:stability}) may be violated (i.e. 
$n_e\\geq n_e^0$), as \nshown in Fig.~\\ref{fig:drip-nuc-hfb22}. \nTransitions between $^{128}$Sr and $^{122}$Kr are found to occur at magnetic field strengths $B_\\star\\approx 861, 1239$ and $1883$. As shown in the right panel of \nFig.~\\ref{fig:cat-AZ-B}, the proton fraction $Z\/A$ is barely affected by these changes of composition. In other words, the correlation between $Z\/A$ and the symmetry \nenergy is almost independent of the magnetic field strength, as previously discussed. \n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{mass_differ_cat.eps}\n\\end{center}\n\\caption{(Color online) Difference in mass predictions for two pairs of Brussels-Montreal nuclear mass models along the two isotopic chains $Z=36$ and $Z=38$ relevant at neutron drip.}\n\\label{fig:mass_diff}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_nuc_hfb22.eps}\n\\end{center}\n\\caption{Electron density $n_e$ (solid line) at the neutron-drip transition in nonaccreting neutron-star crusts as a \nfunction of the magnetic field strength, using the Brussels-Montreal nuclear mass model HFB-22 and assuming that the neutron-drip nucleus \nis $^{122}$Kr, as in the absence of magnetic field. The horizontal dotted line represents $n_e^0$, as given by Eq.~(\\ref{eq:ne0}). \nSee text for details. }\n\\label{fig:drip-nuc-hfb22}\n\\end{figure*}\n\n\n\\begin{table}\n\\centering\n\\caption{Neutron-drip transition in the crust of nonaccreting and unmagnetized neutron stars, as predicted by the HFB-22 to HFB-25 Brussels-Montreal nuclear mass models: mass and atomic numbers of the dripping nucleus, baryon density, and corresponding pressure. 
}\\smallskip\n\\label{tab:drip-cat}\n\\begin{tabular}{ccccc}\n\\hline\n & $A$ & $Z$ & $n_{\\rm drip}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip}$ ($10^{-4}$ MeV~fm$^{-3}$)\\\\\n\\hline \\noalign {\\smallskip}\nHFB-22 & 122 & 36 & 2.71 & 4.99 \\\\\nHFB-23 & 126 & 38 & 2.63 & 4.93 \\\\\nHFB-24 & 124 & 38 & 2.56 & 4.87 \\\\\nHFB-25 & 122 & 38 & 2.51 & 4.83 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\\begin{table}\n\\centering\n\\caption{Neutron-drip transition in the crust of nonaccreting magnetized neutron stars with $B_\\star=500$, as predicted by the HFB-22 to HFB-25 Brussels-Montreal nuclear mass models: mass and atomic numbers of the dripping nucleus, baryon density, and corresponding pressure.}\\smallskip\n\\label{tab:drip-cat-mag-500}\n\\begin{tabular}{ccccc}\n\\hline\n & $A$ & $Z$ & $n_{\\rm drip}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip}$ ($10^{-4}$ MeV~fm$^{-3}$)\\\\\n\\hline \\noalign {\\smallskip}\nHFB-22 & 122 & 36 & 2.74 & 5.52 \\\\\nHFB-23 & 126 & 38 & 2.66 & 5.45 \\\\\nHFB-24 & 124 & 38 & 2.61 & 5.39 \\\\\nHFB-25 & 122 & 38 & 2.55 & 5.35 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-cat-mag-500} but for $B_\\star=1000$.}\\smallskip\n\\label{tab:drip-cat-mag-1000}\n\\begin{tabular}{ccccc}\n\\hline\n & $A$ & $Z$ & $n_{\\rm drip}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip}$ ($10^{-4}$ MeV~fm$^{-3}$)\\\\\n\\hline \\noalign {\\smallskip}\nHFB-22 & 128 & 38 & 3.06 & 6.70 \\\\\nHFB-23 & 126 & 38 & 2.98 & 6.63 \\\\\nHFB-24 & 124 & 38 & 2.91 & 6.56 \\\\\nHFB-25 & 122 & 38 & 2.85 & 6.52 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-cat-mag-500} but for $B_\\star=1500$.}\\smallskip\n\\label{tab:drip-cat-mag-1500}\n\\begin{tabular}{ccccc}\n\\hline\n & $A$ & $Z$ & $n_{\\rm drip}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip}$ ($10^{-4}$ MeV~fm$^{-3}$)\\\\\n\\hline \\noalign {\\smallskip}\nHFB-22 & 122 & 36 & 2.30 & 8.66 \\\\\nHFB-23 
& 126 & 38 & 2.24 & 8.60 \\\\\nHFB-24 & 124 & 38 & 2.20 & 8.56 \\\\\nHFB-25 & 122 & 38 & 2.16 & 8.52 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-cat-mag-500} but for $B_\\star=2000$.}\\smallskip\n\\label{tab:drip-cat-mag-2000}\n\\begin{tabular}{ccccc}\n\\hline\n & $A$ & $Z$ & $n_{\\rm drip}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip}$ ($10^{-4}$ MeV~fm$^{-3}$)\\\\\n\\hline \\noalign {\\smallskip}\nHFB-22 & 128 & 38 & 3.06 & 11.6 \\\\\nHFB-23 & 126 & 38 & 3.00 & 11.6 \\\\\nHFB-24 & 124 & 38 & 2.95 & 11.5 \\\\\nHFB-25 & 122 & 38 & 2.89 & 11.4 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Accreting neutron stars}\n\\label{sec:results-accreting}\n\nFor accreting neutron-star crusts, we have studied the neutron-drip transition as explained in Sec.~\\ref{sec:neutron-drip} (see also Ref.~\\cite{chamel2015a}). \nWe have considered different initial compositions: the ashes produced by an $rp$-process during an x-ray burst \\cite{schatz2001}, and the ashes produced by \nsteady-state hydrogen and helium burning \\cite{schatz2003}, as expected to occur during superbursts \\cite{gupta2007}. After determining the dripping nucleus, we \nhave calculated the neutron-drip density and pressure by solving numerically Eq.~(\\ref{eq:e-capture+n-emission-gibbs-approx}), considering all possible \nneutron-emission processes. Results are summarized in Tables~\\ref{tab:drip-acc-22}-\\ref{tab:drip-acc-25} for different nuclear mass models. \nFor comparison with previous works~\\cite{hz1990, hz2003}, we have also considered ashes of x-ray bursts consisting of pure $^{56}$Fe. Results are given in \nTable~\\ref{tab:drip-acc-a56}. \n\nAs already pointed out in Ref.~\\cite{chamel2015a}, for a given nuclear mass model the neutron-drip transition in accreting neutron stars can occur at \neither lower or higher densities and pressures than in nonaccreting neutron stars. 
Depending on the mass model adopted, the neutron-drip density thus \nranges from $1.60 \\times 10^{-4}$~fm$^{-3}$ to $3.90 \\times 10^{-4}$~fm$^{-3}$, \nand the corresponding pressure from $2.77 \\times 10^{-4}$~MeV~fm$^{-3}$ to $7.77 \\times 10^{-4}$~MeV~fm$^{-3}$. The numerical results \nobtained by solving Eq.~(\\ref{eq:e-capture+n-emission-gibbs-approx}) are reproduced by the analytical formulas~(\\ref{eq:ndrip-acc}) and \n(\\ref{eq:Pdrip-acc}) with an error of $0.1\\%$ at most. \n\nThe change of the neutron-drip density with the slope $L$ of the symmetry energy is found to be very different from that obtained in \nnonaccreting neutron-star crusts. As shown in Fig.~\\ref{fig:acc-ndrip}, for some ashes like $^{104}$Cd, no obvious correlation is observed, while for \nother ashes like $^{66}$Ni, $n_{\\rm drip-acc}$ appears to be \\emph{anticorrelated} with $L$: $n_{\\rm drip-acc}$ decreases with increasing $L$. \nThis behavior can be understood as follows. Ignoring electron-ion interactions, the neutron-drip condition~(\\ref{eq:e-capture+n-emission-gibbs-approx}) \nreduces to\n\\begin{equation}\n\\mu_e\\approx \\mu_e^{\\rm drip-acc} = M^\\prime(A-\\Delta N,Z-1)c^2-M^\\prime(A,Z)c^2 +m_n c^2 \\Delta N + m_e c^2\\, , \n\\end{equation}\nwhich can be more conveniently written as \n\\begin{equation}\\label{eq:muedrip-acc-s}\n\\mu_e^{\\rm drip-acc} = S_{\\Delta N n}(A,Z-1) + \\mu_e^\\beta(A,Z)\\, ,\n\\end{equation}\nusing Eq.~(\\ref{eq:sn-dn}) and introducing the threshold electron Fermi energy for the onset of electron captures (see, e.g., Ref.~\\cite{chamel2015d} for a recent \ndiscussion)\n\\begin{equation}\n\\mu_e^\\beta(A,Z)=M^\\prime(A,Z-1)c^2 -M^\\prime(A,Z)c^2 + m_e c^2\\, .\n\\end{equation}\nThe mass difference $\\Delta M^\\prime=M'(A,Z-1) -M'(A,Z)$, which represents the change of mass associated with the substitution of a proton by a neutron, \nis expected to be mainly determined by symmetry energy effects. 
On the other hand, the $\\Delta N$-neutron separation energy $S_{\\Delta N n}(A,Z-1)$ \nis likely to be more dependent on the details of the nuclear structure than on the symmetry energy. As shown in Tables~\\ref{tab:drip-acc-22}-\\ref{tab:drip-acc-a56}, \n$|S_{\\Delta N n}(A,Z-1)| \\ll \\Delta M^\\prime c^2$; therefore $\\mu_e^{\\rm drip-acc}\\approx \\mu_e^\\beta$. Moreover, as discussed in Sec.~\\ref{sec:neutron-drip}, \nthe composition of accreting neutron-star crusts at the neutron-drip transition is directly determined by the condition $S_{\\Delta N n}(A,Z-1)<0$. Provided \nthe dependence on the symmetry energy of $S_{\\Delta N n}(A,Z-1)$ is weak enough, the dripping nucleus will be independent of $L$. In this case, the \nvariations of $S_{\\Delta N n}(A,Z-1)$ with $L$ are typically much smaller than the variations of $\\Delta M^\\prime c^2$, as shown in Figs.~\\ref{fig:deltamprime} and \n\\ref{fig:Sdeltan}. Using the simple mass formula~(\\ref{eq:2par-ldm}), we find\n\\begin{equation}\n\\mu_e^\\beta(A,Z)=\\Delta M^\\prime c^2 +m_e c^2 \\approx 4J_{\\rm eff}\\left(1+\\frac{1-2Z}{A}\\right)\\, .\n\\end{equation}\nThis means that with increasing $J$ or $L$ (decreasing $J_{\\rm eff}$), $\\Delta M^\\prime$ and $\\mu_e^\\beta(A,Z)$ both \\emph{decrease}. It thus follows from \nEqs.~(\\ref{eq:ndrip-acc}), (\\ref{eq:Pdrip-acc}), and (\\ref{eq:muedrip-acc-s}) that $\\mu_e^{\\rm drip-acc}$, $n_{\\rm drip-acc}$ and $P_{\\rm drip-acc}$ \nalso decrease with $L$, as shown in the upper panel of Fig.~\\ref{fig:acc-ndrip}. \nIn the peculiar case of $A=105$, the rather low value of $\\Delta M^\\prime c^2$ predicted by HFB-25 is compensated by a comparatively high value of $S_{\\Delta N n}(A,Z-1)$, \nas can be seen in Figs.~\\ref{fig:deltamprime} and \\ref{fig:Sdeltan}. As a result, $n_{\\rm drip-acc}$ is still found to decrease with increasing $L$ despite the nonmonotonic \nvariation of $\\Delta M^\\prime$, as shown in Fig.~\\ref{fig:acc-ndrip}. 
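As a consistency check, the analytical formulas~(\ref{eq:ndrip-acc}) and (\ref{eq:Pdrip-acc}) can be evaluated directly. A minimal numerical sketch, assuming the body-centered-cubic lattice constant $C\simeq -1.4442$ for the electron-ion coupling, and taking as illustrative input the HFB-22 values $A=104$, $Z=32$, $\mu_e^{\rm drip-acc}=24.86$~MeV from Table~\ref{tab:drip-acc-22}:

```python
import math

HBARC = 197.327          # hbar*c in MeV fm
ALPHA = 1.0 / 137.036    # fine-structure constant
C_BCC = -1.4442          # bcc lattice (Madelung) constant; sign convention assumed

def drip_density_pressure_acc(A, Z, mu_drip):
    """Neutron-drip baryon density (fm^-3) and pressure (MeV fm^-3) in
    accreting crusts, from Eqs. (ndrip-acc) and (Pdrip-acc) above."""
    # electron-ion (lattice) correction entering both formulas
    f = C_BCC * ALPHA / (3 * math.pi**2) ** (1 / 3) * (
        Z ** (5 / 3) - (Z - 1) ** (5 / 3) + Z ** (2 / 3) / 3)
    n_drip = (A / Z) * mu_drip**3 / (3 * math.pi**2 * HBARC**3) / (1 + f) ** 3
    p_drip = (mu_drip**4 / (12 * math.pi**2 * HBARC**3)
              * (1 + 4 * C_BCC * ALPHA * Z ** (2 / 3) / (81 * math.pi**2) ** (1 / 3))
              / (1 + f) ** 4)
    return n_drip, p_drip

n, p = drip_density_pressure_acc(A=104, Z=32, mu_drip=24.86)
print(f"n_drip = {n:.2e} fm^-3, P_drip = {p:.2e} MeV fm^-3")
# reproduces the tabulated 2.71e-4 fm^-3 and 5.31e-4 MeV fm^-3
```

Applied to the other rows of Tables~\ref{tab:drip-acc-22}-\ref{tab:drip-acc-25}, the same routine should reproduce the tabulated densities and pressures within the $0.1\%$ accuracy quoted above.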
\nFor some ashes, the variations of $S_{\\Delta N n}(A,Z-1)$ are comparable to those of $\\Delta M^\\prime c^2$, and large enough to even change the composition. This \nleads to nonmonotonic variations of the neutron-drip density and pressure, as illustrated in the lower panel of Fig.~\\ref{fig:acc-ndrip}. In these cases, \neffects other than the symmetry energy play a role. The anticorrelation between $n_{\\rm drip-acc}$ (or $P_{\\rm drip-acc}$) and $L$ thus relies to \na large extent on the importance of nuclear structure effects far from the stability valley. \n\n\\begin{table}\n\\centering\n\\caption{Neutron-drip transition in the crust of accreting neutron stars, as predicted by the HFB-22 Brussels-Montreal nuclear mass model: mass and atomic numbers of the dripping nucleus, number of emitted neutrons, baryon density $n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$), and corresponding pressure $P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$), $S_{\\Delta N n}(A,Z-1)$ (MeV), $\\Delta M^\\prime$ (MeV$\/c^2$), and $\\mu_e^{\\rm drip-acc}$ (MeV).\nThe mass numbers $A$ are listed from top to bottom considering that the ashes are produced by ordinary\nx-ray bursts (upper panel) or superbursts (lower panel). 
See text for details.}\\smallskip\n\\label{tab:drip-acc-22}\n\\begin{tabular}{cccccccc}\n\\hline\n $A$ & $ Z$ & $\\Delta N$& $n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$) & $S_{\\Delta N n}$ (MeV) & $\\Delta M^\\prime$ (MeV$\/c^2$) & $\\mu_e^{\\rm drip-acc}$ (MeV) \\\\\n \\hline\n 104 & 32 & 1 & 2.71 & 5.31 & -0.79 & 25.14 & 24.86 \\\\\n 105 & 33 & 1 & 1.90 & 3.40 & -1.00 & 22.70 & 22.21 \\\\\n 68 & 22 & 1 & 2.31 & 4.64 & -0.28 & 24.14 & 24.37 \\\\\n 64 & 18 & 5 & 3.90 & 7.77 & -1.85 & 29.23 & 27.89 \\\\\n 72 & 22 & 1 & 2.89 & 5.78 & -0.31 & 25.55 & 25.75 \\\\\n 76 & 24 & 1 & 2.86 & 5.95 & -0.17 & 25.52 & 25.86 \\\\\n 98 & 32 & 1 & 1.93 & 3.66 & -0.20 & 22.34 & 22.65 \\\\\n 103 & 33 & 1 & 1.60 & 2.77 & -0.02 & 20.62 & 21.11 \\\\\n 106 & 32 & 1 & 2.92 & 5.71 & -0.69 & 25.50 & 25.32 \\\\\n \\hline\n 66 & 22 & 1 & 1.99 & 3.95 & -0.19 & 23.09 & 23.41 \\\\\n 64 & 18 & 5 & 3.90 & 7.77 & -1.85 & 29.23 & 27.89 \\\\\n 60 & 20 & 1 & 1.83 & 3.55 & -1.67 & 24.02 & 22.86 \\\\\n \\end{tabular}\n \\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-acc-22} but for the HFB-23 Brussels-Montreal nuclear mass model.}\\smallskip\n\\label{tab:drip-acc-23}\n\\begin{tabular}{cccccccc}\n\\hline\n $A$ & $Z$ & $\\Delta N$& $n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$) & $S_{\\Delta N n}$ (MeV) & $\\Delta M^\\prime$ (MeV$\/c^2$) & $\\mu_e^{\\rm drip-acc}$ (MeV) \\\\\n \\hline\n 104 & 32 & 1 & 2.83 & 5.62 & -1.02 & 25.73 & 25.22 \\\\\n 105 & 33 & 1 & 1.97 & 3.57 & -1.33 & 23.30 & 22.48 \\\\\n 68 & 22 & 1 & 2.35 & 4.73 & -0.56 & 24.54 & 24.49 \\\\\n 64 & 20 & 1 & 3.27 & 7.04 & -0.13 & 26.75 & 27.13 \\\\\n 72 & 22 & 1 & 3.06 & 6.24 & -0.11 & 25.84 & 26.24\\\\\n 76 & 24 & 1 & 2.94 & 6.18 & -0.38 & 25.98 & 26.11 \\\\\n 98 & 32 & 1 & 1.98 & 3.77 & -0.50 & 22.82 & 22.83 \\\\\n 103 & 31 & 1 & 2.45 & 4.50 & -0.91 & 24.29 & 23.89 \\\\\n 106 & 34 & 1 & 2.11 & 4.02 & -0.009 & 22.63 & 
23.13 \\\\\n \\hline\n 66 & 22 & 1 & 2.07 & 4.16 & -0.23 & 23.43 & 23.71 \\\\\n 64 & 20 & 1 & 3.27 & 7.04 & -0.13 & 26.75 & 27.13 \\\\\n 60 & 20 & 1 & 1.93 & 3.79 & -1.77 & 24.50 & 23.24 \\\\\n \\end{tabular}\n \\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-acc-22} but for the HFB-24 Brussels-Montreal nuclear mass model.}\\smallskip\n\\label{tab:drip-acc-24}\n\\begin{tabular}{cccccccc}\n\\hline\n $A$ & $Z$ & $\\Delta N$& $n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$) & $S_{\\Delta N n}$ (MeV) & $\\Delta M^\\prime$ (MeV$\/c^2$) & $\\mu_e^{\\rm drip-acc}$ (MeV) \\\\\n \\hline\n 104 & 32 & 1 & 2.87 & 5.73 & -1.49 & 26.32 & 25.34 \\\\\n 105 & 33 & 1 & 2.10 & 3.89 & -1.11 & 23.57 & 22.97 \\\\\n 68 & 22 & 1 & 2.45 & 5.00 & -0.75 & 25.07 & 24.83 \\\\\n 64 & 22 & 1 & 1.66 & 3.22 & -0.07 & 21.81 & 22.25 \\\\\n 72 & 22 & 1 & 3.08 & 6.28 & -0.29 & 26.07 & 26.29 \\\\\n 76 & 24 & 1 & 3.10 & 6.61 & -0.46 & 26.50 & 26.55 \\\\\n 98 & 32 & 1 & 2.04 & 3.94 & -0.38 & 22.95 & 23.08 \\\\\n 103 & 31 & 3 & 2.49 & 4.59 & -0.89 & 24.39 & 24.01 \\\\\n 106 & 34 & 1 & 2.17 & 4.16 & -1.10 & 23.92 & 23.33 \\\\\n \\hline\n 66 & 22 & 1 & 2.09 & 4.22 & -0.29 & 23.58 & 23.80 \\\\\n 64 & 22 & 1 & 1.66 & 3.22 & -0.07 & 21.81 & 22.25 \\\\\n 60 & 20 & 1 & 2.03 & 4.05 & -1.66 & 24.78 & 23.63 \\\\\n \\end{tabular}\n \\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Same as in Table~\\ref{tab:drip-acc-22} but for the HFB-25 Brussels-Montreal nuclear mass model.}\\smallskip\n\\label{tab:drip-acc-25}\n\\begin{tabular}{cccccccc}\n\\hline\n $A$ & $Z$ & $\\Delta N$& $n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$) & $P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$) & $S_{\\Delta N n}$ (MeV) & $\\Delta M^\\prime$ (MeV$\/c^2$) & $\\mu_e^{\\rm drip-acc}$ (MeV) \\\\\n \\hline\n 104 & 34 & 1 & 2.02 & 3.87 & -0.009 & 22.42 & 22.92 \\\\\n 105 & 33 & 1 & 2.18 & 4.07 & -0.57 & 23.30 & 23.24 \\\\\n 68 & 22 & 1 & 2.59 & 5.40 & -0.75 & 25.55 & 
25.31 \\\\\n 64 & 22 & 1 & 1.79 & 3.59 & -0.18 & 22.52 & 22.85 \\\\\n 72 & 22 & 1 & 3.27 & 6.82 & -0.55 & 26.87 & 26.83 \\\\\n 76 & 24 & 1 & 3.14 & 6.73 & -0.58 & 26.74 & 26.67 \\\\\n 98 & 32 & 1 & 2.11 & 4.12 & -0.64 & 23.47 & 23.34 \\\\\n 103 & 33 & 1 & 1.78 & 3.18 & -0.76 & 22.10 & 21.85 \\\\\n 106 & 34 & 1 & 2.25 & 4.36 & -0.05 & 23.15 & 23.61 \\\\\n \\hline\n 66 & 22 & 1 & 2.20 & 4.52 & -0.22 & 23.92 & 24.21 \\\\\n 64 & 22 & 1 & 1.79 & 3.59 & -0.18 & 22.52 & 22.85 \\\\\n 60 & 20 & 1 & 2.08 & 4.21 & -1.94 & 25.28 & 23.85 \\\\\n \\end{tabular}\n \\end{table}\n\n\n\\begin{table}\n\\centering\n\\caption{Neutron-drip transition in the crust of accreting neutron stars, as predicted by different Brussels-Montreal nuclear mass models for $^{56}$Fe ashes: atomic number $Z$ of the dripping nucleus, number of emitted neutrons, density and corresponding pressure, $S_{\\Delta N n}(A,Z-1)$, $\\Delta M^\\prime$ (MeV$\/c^2$), and $\\mu_e^{\\rm drip-acc}$ (MeV). See text for details.}\\smallskip\n\\label{tab:drip-acc-a56}\n\\begin{tabular}{ccccc}\n\\hline\n & HFB-22 & HFB-23 & HFB-24 & HFB-25 \\\\\n\\hline\n$Z$ & 18 & 18 & 18 & 18 \\\\\n$\\Delta N$ & 1 & 1 & 3 & 1 \\\\\n$n_{\\rm drip-acc}$ ($10^{-4}$ fm$^{-3}$) & 2.49 & 2.58 & 2.73 & 2.84 \\\\\n$P_{\\rm drip-acc}$ ($10^{-4}$ MeV~fm$^{-3}$) & 5.10 & 5.34 & 5.77 & 6.07 \\\\\n$S_{\\Delta N n}$ (MeV) & -1.17 & -1.32 & -3.27 & -1.61 \\\\\n$\\Delta M^\\prime$ (MeV$\/c^2$) & 25.76 & 26.20 & 28.64 & 27.32 \\\\\n$\\mu_e^{\\rm drip-acc}$ (MeV) & 25.10 & 25.39 & 25.88 & 26.22 \\\\\n\\end{tabular}\n\\end{table}\n\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_acc_22-25_ndrip_new.eps}\n\\end{center}\n\\caption{(Color online) Neutron-drip density as a function of the slope $L$ of the symmetry energy of infinite homogeneous nuclear matter at saturation, as obtained using the HFB-22 to HFB-25 Brussels-Montreal nuclear mass \nmodels, for accreting neutron-star crusts with different initial composition of ashes (see 
text for details).}\n\\label{fig:acc-ndrip}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_acc_22-25_deltam.eps}\n\\end{center}\n\\caption{(Color online) Mass difference $\\Delta M^\\prime$ (in units of MeV$\/c^2$) as a function of the slope $L$ of the symmetry energy of infinite homogeneous nuclear matter at saturation, as obtained using the \nHFB-22 to HFB-25 Brussels-Montreal nuclear mass models, for accreting \nneutron-star crusts with different initial composition of ashes (see text for details).}\n\\label{fig:deltamprime}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[scale=0.45]{drip_acc_22-25_Sdeltan.eps}\n\\end{center}\n\\caption{(Color online) $\\Delta N$-neutron separation energy $S_{\\Delta N n}$ (in units of MeV) as a function of the slope $L$ of the symmetry energy of infinite homogeneous nuclear matter at saturation, as obtained using the \nHFB-22 to HFB-25 Brussels-Montreal nuclear mass models, for accreting \nneutron-star crusts with different initial composition of ashes (see text for details).}\n\\label{fig:Sdeltan}\n\\end{figure*}\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nWe have studied the role of the symmetry energy on the neutron-drip transition in both accreting and nonaccreting neutron-star crusts. We have also allowed for \nthe presence of a strong magnetic field, as in magnetars. The masses of nuclei encountered in this region of the neutron-star crust are experimentally unknown. \nFor this reason, we have employed a recent family of microscopic nuclear mass models, from HFB-22 to HFB-25, developed by the Brussels-Montreal collaboration~\\cite{goriely2013}. These models provide equally good fits to the 2353 measured masses of nuclei with $N$ and $Z \\geq 8$ from the 2012 Atomic Mass \nEvaluation~\\cite{audi2012}, with a root-mean-square deviation of about $0.6$~MeV. 
\nOn the other hand, these models lead to different predictions for the behavior of the symmetry energy in infinite homogeneous\nnuclear matter.\nIn particular, these functionals \nwere constrained so as to yield different \nvalues of the symmetry energy at saturation, from $J=29$~MeV to $J=32$~MeV, with the slope of the symmetry energy ranging from $L=37$~MeV to $L=69$~MeV. \n\nFor nonaccreting weakly magnetized neutron stars, the neutron-drip density $n_{\\rm drip}$ is found to increase almost linearly with $L$ (or equivalently with $J$) \nwhile the proton fraction $Z\/A$ decreases, in agreement with previous studies~\\cite{bao2014} (see also Refs.~\\cite{roca2008,provid2014,grill2014}). \nIn the presence of a strong magnetic field, the dripping nucleus, hence also $Z\/A$, is unchanged, as found in Refs.~\\cite{chamel2012,chamel2015b}, for all models but HFB-22. In this case, the dripping nucleus alternates between \n$^{122}$Kr and $^{128}$Sr depending on the magnetic field strength. This peculiar behavior arises from Landau quantization of electron motion and from the fact \nthat the threshold electron Fermi energies $\\mu_e^{\\rm drip}$ of these two nuclei are almost equal. Their proton fractions are also very similar, so that the linear \ncorrelation between $Z\/A$ and $L$ is hardly affected by the magnetic field. The neutron-drip density $n_{\\rm drip}$ exhibits typical quantum oscillations as \na function of the magnetic field strength, as recently discussed in Ref.~\\cite{chamel2015b}. Still, $n_{\\rm drip}$ remains linearly correlated with $L$. \nAlthough a soft symmetry energy favors neutron drip in isolated nuclei~\\cite{todd2003}, this result does not necessarily imply the observed correlation \nbetween $n_{\\rm drip}$ and $L$. Indeed, as recently discussed in Ref.~\\cite{chamel2015a}, the dripping nucleus in the crust is actually stable \nagainst neutron emission, but unstable against electron captures followed by neutron emission. 
In fact, such a correlation is not found in accreting neutron-star crusts. \nDepending on the initial composition of the ashes from x-ray bursts and superbursts, $n_{\\rm drip}$ \n\\emph{decreases} almost linearly with increasing $L$ while the dripping nucleus remains the same. In other cases, the symmetry energy does not seem \nto play any role. \n\nWe have qualitatively explained these different behaviors using a simple mass formula, and making use of the analytical expressions for the neutron-drip density and \npressure obtained in Refs.~\\cite{chamel2015a,chamel2015b}. In particular, we have shown that the anticorrelation between $n_{\\rm drip}$ and $L$ in accreting \nneutron stars depends to a large extent on the relative importance of nuclear structure effects (like shell effects and pairing) and symmetry energy effects on the \nneutron separation energy. More precisely, the anticorrelation is broken whenever the differences between the neutron separation energies predicted by the different \nmass models are large enough to change the dripping nucleus. \n\nIn any case, the composition of the deepest layers of the outer crust of a neutron star is very sensitive to the details of the nuclear structure far from the stability valley. In nonaccreting neutron-star crusts, the neutron-drip transition is mainly governed by the values of the masses of very neutron-rich strontium and krypton isotopes. \nAlthough the masses of these nuclei have not yet been measured, the composition of nonaccreting neutron-star crusts has recently been constrained by experiment \ndown to deeper layers~\\cite{wolf2013}. The nuclei thought to be present in accreting neutron-star crusts span a much larger region of the nuclear chart, depending on \nthe ashes from x-ray bursts and superbursts. 
In these neutron stars, the neutron-drip transition is not directly determined by nuclear masses but rather by some \ncombinations of masses, namely the (multiple) neutron separation energies and the isobaric two-point mass differences. \n\nThe onset of neutron emission by nuclei marks the transition to the inner region of the neutron-star crust, where neutron-proton clusters coexist with a neutron liquid. \nIn turn, this neutron liquid, which becomes superfluid at low enough temperatures, is expected to play a role in various observed astrophysical phenomena (see, e.g., \nRef.~\\cite{chamelhaensel2008} for a review) like sudden spin-ups and spin-downs (so-called ``glitches'' and ``antiglitches'', respectively)~\\cite{dib2008, gug2014, \narchibald2013, sasmaz2014, duncan2013, kantor2014}, quasiperiodic oscillations detected in the giant flares from soft $\\gamma$-ray repeaters~\\cite{passamonti2014}, \ncooling of strongly magnetized neutron stars~\\cite{aguilera2009}, deep crustal heating (most of the heat being released near the neutron-drip transition~\\cite{haensel2008}), \nand the thermal relaxation of quasipersistent soft x-ray transients~\\cite{shternin2007,brown2009,page2013}. By shifting the neutron-drip transition to higher or lower densities, the symmetry energy may thus leave its imprint on these astrophysical phenomena. \n\n\n\n\\begin{acknowledgments}\nThis work has been mainly supported by Fonds de la Recherche Scientifique - FNRS (Belgium), and by the bilateral project between Fonds de la Recherche Scientifique - FNRS (Belgium), Wallonie-Bruxelles-International (Belgium) and the Bulgarian Academy of Sciences. \nThis work has been also partially supported by the Bulgarian National Science Fund under Contract No. DFNI-T02\/19t and the Cooperation in Science and Technology (COST Action) MP1304 ``NewCompStar''. The authors would like to thank J.~M. 
Pearson for very fruitful discussions.\n\\end{acknowledgments}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzacyg b/data_all_eng_slimpj/shuffled/split2/finalzzacyg new file mode 100644 index 0000000000000000000000000000000000000000..d3a737e37f7f77f64ac71ceea92158104d371559 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzacyg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe mechanical response of materials to deformations is described by\nthe elasticity theory \\cite{lanlif}. The simplest homogeneous (affine) \ndeformation of a continuum can be expressed by a linear dependence \nof the distorted position ${\\bf r}$ on the original position of \nthat point ${\\bf R}$ via the relation \n\\begin{equation}\\label{def:M}\nr_i=M_{ij}R_j\\ ,\n\\end{equation}\nwhere $M_{ij}$ is a {\\em constant} tensor. We will consider both \ntwo-dimensional (2D) and three-dimensional (3D) systems, in which\nLatin subscripts will indicate Cartesian coordinates. (Summation over\nrepeated Latin subscripts is implied.) The tensor $M_{ij}$ can be separated \ninto an identity tensor $I$ and a non-trivial part\n\\begin{equation}\\label{def:epsilon}\nM_{ij}=\\delta_{ij}+\\epsilon_{ij}\\ .\n\\end{equation}\nIn general, $\\epsilon_{ij}$ can be separated into a {\\em symmetric tensor}\nrepresenting deformation and an anti-symmetric tensor representing\nrotation. We neglect the rotation and assume that \n$\\epsilon_{ij}=\\epsilon_{ji}$. While the tensor $\\epsilon_{ij}$\nhas a convenient meaning in actual experiments, usually the\nelastic deformations are formulated in terms of the\n{\\em Lagrangian strain tensor} $\\eta_{ij}$, which defines the\nchange in the distances between the points \\cite{etausage}:\n\\begin{equation}\\label{def:eta}\nr_ir_i=M_{ik}M_{il}R_kR_l\\equiv(\\delta_{kl}+2\\eta_{kl})R_kR_l\\ .\n\\end{equation}\n\nIn the case of affine deformations the definitions in \nEqs. 
\\ref{def:M}--\\ref{def:eta} are valid for arbitrary values of\n$\\epsilon_{ij}$ and $\\eta_{ij}$. However, we will further assume that the\ndeformations are small. For a homogeneous continuum, it suffices\nto apply the deformation described by Eq.~\\ref{def:M} at the\n{\\em boundaries} of the system to assure that the same equation\ndescribes every internal point, while in the case of an inhomogeneous\nsystem, application of such a deformation to the boundaries assures\nthat the {\\em mean} deformation is equal to $\\epsilon_{ij}$ \\cite{hashin}. \n\nElastic properties of a condensed matter system\ndescribe the energetic cost of a deformation. However, a real system\nconsisting of many moving atoms\/molecules cannot be simply represented\nby a strain tensor assigned to every point in space. Rather, we can\nassume that the {\\em boundaries} of such a system undergo an affine\ndeformation described by Eq.~\\ref{def:M}. In such a case, the mean \nfree energy density, $f$, which is the free energy $F$ divided by the\noriginal (unstrained) volume $V_0$ of the system, can be expanded \nin a power series in the strain variables\n\\begin{equation}\\label{def:C}\nf(\\{\\eta\\})=f(\\{0\\})+\\sigma_{ij}\\eta_{ij} +\\frac{1}{2}\nC_{ijkl}\\eta_{ij} \\eta_{kl}+\\ldots\n\\end{equation}\nThe coefficients in this expansion are the stress tensor\n$\\sigma_{ij}$ and the tensor of the (second order) elastic \nconstants $C_{ijkl}$, characterizing a given material. \nIn the case of isotropic pressure $p$, the stress can be\nwritten as $\\sigma_{ij}=-p\\delta_{ij}$. Elastic\nconstants may serve as an indicator of instabilities\nassociated with phase transitions \\cite{birch,zhou}. 
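As a standard textbook illustration of this expansion (a general result for an isotropic medium, see, e.g., Ref.~\\cite{lanlif}, rather than anything specific to the systems studied below), the tensor $C_{ijkl}$ of an isotropic material is fully determined by the two Lam\\'e coefficients $\\lambda$ and $\\mu$,\n\\begin{equation}\nC_{ijkl}=\\lambda\\delta_{ij}\\delta_{kl}+\\mu\\left(\\delta_{ik}\\delta_{jl}+\\delta_{il}\\delta_{jk}\\right)\\ ,\n\\end{equation}\nso that at vanishing stress Eq.~\\ref{def:C} reduces to $f(\\{\\eta\\})=f(\\{0\\})+\\frac{1}{2}\\lambda\\eta_{ii}\\eta_{jj}+\\mu\\eta_{ij}\\eta_{ij}$. 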
\n\nElastic response of a system to a deformation can be determined\nwithout actually distorting the system, since equilibrium\ncorrelation functions contain all the necessary information.\nIndeed, almost four decades ago, Squire, Holt, and Hoover (SHH) \\cite{shh}\ndeveloped a formalism that extended the theory of elasticity of\nBorn and Huang \\cite{born} to a finite temperature situation\nand expressed the elastic properties of a system as thermal\naverages of various derivatives of interparticle potentials.\nIn a certain sense the formalism is an extension of the virial\ntheorem \\cite{virtheorem}, which relates the thermal averages\nof the products of interparticle forces and the interparticle \nseparations to the stress tensor. (A similar formalism enables\nthe evaluation of the elastic properties of 2D membranes in 3D space\n\\cite{farago_pincus}.) The SHH method is very well \nadapted for use in numerical simulations in constant volume (and \nshape) ensembles. Other methods, extracting the elastic properties\nfrom shape and volume changes of systems, have also been developed\nand extensively used \\cite{other}.\n\nUsually, molecules are not spherically symmetric and we may expect\ninteractions that depend on the orientation of the molecules. The\nintroduction of rotational degrees of freedom into the theoretical\ndescription of a system has an interesting effect on the stress and \nelastic constants. At very low densities (almost ideal gas) the\nrotational degrees of freedom add a large contribution to the\ntotal kinetic energy of a molecule, but do not contribute to the\npressure. In a thermodynamical treatment of stress we are interested\nin the translational degrees of freedom. However, for \nnon-spherically-symmetric potentials, the particle rotation may \nhave a significant indirect contribution even at moderate densities. 
\nAt larger densities the phase diagram may be strongly influenced by\nthose degrees of freedom.\nThe general approach of SHH \\cite{shh} (see also Refs.\n\\cite{zhou,bavaud}) was applied in a detailed form to the \ncase of particles interacting via central two-body forces.\nHowever, the formalism can be easily extended to systems of\nparticles interacting via non-central potentials, as will be shown\nin Section \\ref{sec:softelast}. \n\nModelling of various systems frequently involves particles that \ninteract via {\\em hard} potentials which are either 0 or $\\infty$: \nin fact, simulation of the 2D hard disk system dates back to the origins \nof the Metropolis Monte Carlo (MC) method \\cite{origMetro}. An obvious \nreason for the use of such potentials in simulations is their numerical \nsimplicity. However, there are important physical reasons for such \nmodels: in many situations entropy plays a dominant role \nin physical processes, and the absence of an energy scale in hard\npotentials ``brings out'' the entropic features of the behavior.\nHard sphere systems have been the subject of intensive research\nfor several decades now (see \\cite{gast} and references therein).\nThey serve as the simplest models for real fluids, glasses, and\ncolloids. The phase diagram of hard spheres is well known. In\n3D this system undergoes an entropically driven first-order phase \ntransition from the liquid to the solid phase \\cite{hsfreezing}. Elastic \nconstants of such solids have been explored in the past \n\\cite{numres,runge}. Entropy also plays a crucial role in \nsystems containing long polymers, such as gels and rubbers \n\\cite{polentropA,degennes_polymer,polentropB}. 
\nNot surprisingly, hard sphere potentials have been extensively\nused to represent excluded volume interactions between the\nmonomers (see \\cite{baum} and references therein).\nKantor {\\it et al.} \\cite{kkn} introduced the {\\it tethering \npotential}, which has no\nenergy but limits the distance between bonded monomers, to\nrepresent covalent bonds in polymeric systems. Such a hard potential\ncombined with hard sphere repulsion can be used to simulate a\nvariety of polymeric systems. Recently, Farago and Kantor \n\\cite{fk_formalism} adapted the formalism of SHH \\cite{shh} to\nhard potentials. This new formalism enabled a study of a sequence \nof entropy-dominated systems, such as 2D \\cite{fk_2D} and 3D \n\\cite{fk_3D} gels near the sol-gel transition and other systems \\cite{fk_net}.\n\nOrientation of non-spherically-symmetric molecules plays a\ncrucial role in the properties of liquid crystals\n\\cite{degennes_liquid}. For instance, the nematic phase is \ntranslationally\ndisordered but it has orientational order of the molecules. From\nthe early stages of liquid crystal research it has been\nrealized that the {\\it entropic} part of the free energy related\nto non-spherical shapes of the molecules, by itself, can explain\nmany of the properties of the systems \\cite{ons}. Not\nsurprisingly, hard potentials were frequently used to investigate\nthe properties of liquid crystals. Even such simplifications as\ninfinitely thin disks \\cite{frenkel_thindisk} or infinitely thin\nrods \\cite{frenkel_thinrod} provide valuable insights into the\nproblem. A slightly more realistic picture is provided by hard\nspheroids \\cite{fmm}. Such simulations were primarily motivated\nby the desire to understand the liquid phases. However,\ntwo interesting {\\it solid phases} have been detected: \nboth phases are translationally ordered, but only one\nof them has orientational order of the spheroids. The orientational \norder is absent only when the spheroid resembles a sphere. 
\nSufficiently oblate or prolate spheroids are orientationally \nordered in the solid phase. During the last twenty\nyears hard spheroids have been studied in great detail\n\\cite{spheroids}. A similar hard potential system that is suitable\nfor the study of liquid crystals is a collection of hard\nspherocylinders (cylinders capped at their ends by hemispheres).\nThese molecules have a slightly more complex phase diagram (which\nincludes a smectic-A liquid-crystalline phase), and also have been\nstudied in great detail \\cite{spherocyl}. Like spheroids, they\nhave two solid phases. (Spherocylinders do not have a shape\nresembling an oblate spheroid.) Taken together, spheroids and\nspherocylinders provide a rather coherent picture of the influence of\nmolecular shape on the phase diagram (see, e.g., \\cite{sp}). Hard\npotentials also have been used in other ways to represent\nnon-spherically symmetric molecules by combining several spheres\nor disks into more complicated shapes, such as heptagons\n\\cite{woj}, or long rods \\cite{hardlong}. To make the models more\nrealistic, sometimes an attractive interaction has been added to the\nusual hard repulsive potential \\cite{attr}. 
In Section \\ref{sec:results} we demonstrate the \nimplementation by calculating stress and elastic constants in \ndifferent phases of a system of hard ellipses.\n\n\n\n\n\n\\section{Elastic properties for soft pair potentials}\\label{sec:softelast}\n\nIn this section we derive explicit expressions for the stress and elastic\nconstant tensors following the method of SHH \\cite{shh} for a more\ngeneral case. We will consider a potential energy $\\cal V$ that \ncan be expressed as\n\\begin{equation}\\label{def:V}\n{\\cal V}=\\sum_{\\langle\\alpha\\beta\\rangle}\\Phi({\\bf r}^{\\alpha\\beta},\\Omega^\\alpha,\n\\Omega^\\beta) ,\n\\end{equation}\nwhere $\\Phi$ is the interaction potential of a pair of particles.\nGreek indices $\\alpha$ and $\\beta$ denote particles (atoms\/molecules), \nand $\\langle\\alpha\\beta\\rangle$ denotes a pair of particles. The above \nequation contains a summation over all possible particle pairs. (In this \npaper we do {\\em not} assume summation over repeated Greek indices \nindicating particles.) Here \n${\\bf r}^{\\alpha\\beta}={\\bf r}^{\\beta}-{\\bf r}^{\\alpha}$ is the vector\nconnecting two particles, while $\\Omega^\\alpha$ is the orientation of particle \n$\\alpha$. The two-body potential is not necessarily spherically \nsymmetric. In fact, we will apply our results to particles that do not \npossess such a symmetry.\nWe denote all two-body potentials by the same letter $\\Phi$, although\nnowhere in this formal derivation is it required that they be \nidentical for different pairs of particles. (It should be\ndenoted $\\Phi^{\\alpha\\beta}({\\bf r}^{\\alpha\\beta},\\Omega^\\alpha,\n\\Omega^\\beta)$; however, we omit the superscript of $\\Phi$ for brevity.)\nFrom the physical point of view we expect that the potential should be\nrotationally invariant, i.e. 
when the vector ${\\bf r}^{\\alpha\\beta}$ and the orientations of two molecules (described by $\\Omega^\\alpha$ and $\\Omega^\\beta$)\nperform a ``rigid body'' rotation, the interaction energy should not change.\nIn fact, the symmetry of the stress tensor assumes the presence of\nrotational invariance. However, we do not explicitly use this property\nin the derivation of the following expressions.\n\nUnlike in the central force case \\cite{shh}, we will need to use both \n$\\eta_{ij}$ and $\\epsilon_{ij}$ in the process of derivation. Note that \nin the definition of $\\eta_{ij}$ in Eq.~\\ref{def:eta} only the symmetric\nsum $\\eta_{ij}+\\eta_{ji}$ appears for $i\\ne j$. Therefore, without loss \nof generality it is assumed that the Lagrangian strain is a \n{\\em symmetric} tensor ($\\eta_{ij}=\\eta_{ji}$). From Eqs.~\\ref{def:epsilon} and \\ref{def:eta} we find that\n$\\eta_{kl}=\\frac{1}{2}(\\epsilon_{kl}+\\epsilon_{lk}+\\epsilon_{ik}\\epsilon_{il})$.\nFor small deformations this relation can be inverted to the second order as\n\\begin{equation}\\label{epsiloneta}\n\\epsilon_{kl}=\\eta_{kl}-\\frac{1}{2}\\eta_{mk}\\eta_{ml}+\\dots\n\\end{equation}\n\nIn the statistical-mechanical description of a solid in a canonical ensemble\nwe may ask how the free energy $F$ of the solid changes when the\n{\\em boundaries} of the solid undergo a deformation described by Eq.~\\ref{def:M}.\nIn the calculation of such $F(\\{\\eta\\})$ we do not impose any restrictions\non the positions or orientations of the particles except the\nchange in the boundary conditions. The free energy can be expressed via the\npartition function as \n\\begin{equation}\\label{def:F}\nF(\\{\\eta\\})=-kT\\ln [Z_0Z(\\{\\eta\\})] ,\n\\end{equation}\nwhere only the configurational part $Z(\\{\\eta\\})$ of the partition function\ndepends on the deformation, while the remaining (``kinetic'') part\n$Z_0$ is independent of deformations. 
We note that in classical\nphysics the details of the inertia tensors of the molecules can\nmodify the details of their actual motion, but play no role in the\nstatistical-mechanical properties of the system. Only the\nasphericity of the potential matters. The configurational part\n\\begin{equation}\nZ=\n\\int\\limits_{V(\\{\\eta\\})}\\prod_{\\lambda=1}^Nd{\\bf r}^\\lambda\n\\int\\prod_{\\lambda=1}^Nd\\Omega^\\lambda\\ \n{\\rm e}^{-{\\cal V}({\\bf r}^1,\\Omega^1,\\dots,\n{\\bf r}^N,\\Omega^N)\/kT},\n\\end{equation}\nwhere ${\\bf r}^\\alpha$ and $\\Omega^\\alpha$ represent the position and \norientation of particle $\\alpha$ and $\\cal V$ is the interaction potential, \ndepends on the deformation only through the distortion of the integration\nvolume $V(\\{\\eta\\})$ of the possible positions of each of the particles. The\nintegration over all possible spatial directions of each particle\nremains unchanged. If we formally change the integration variable\n${\\bf r}^\\alpha$ for each particle $\\alpha$ to the variable ${\\bf R}^\\alpha$,\nrelated to it by Eq.~\\ref{def:M}, then the limits of integration\nof the new variables will correspond to the undistorted volume $V(\\{0\\})\\equiv V_0$,\nand consequently\n\\begin{equation}\\label{def:Z}\nZ=\n\\int\\limits_{V_0}\\prod_{\\lambda=1}^Nd{\\bf R}^\\lambda\n\\int\\prod_{\\lambda=1}^Nd\\Omega^\\lambda\nJ{\\rm e}^{\n-{\\cal V}(M_{ij}R^1_j,\\Omega^1,\\dots,\nM_{ij}R^N_j,\\Omega^N)\/kT},\n\\end{equation}\nwhere $J$ is the Jacobian corresponding to the change of coordinates\n\\begin{equation}\nJ=|\\det(M)|^N=|\\det(I+2\\eta)|^{N\/2}\\ .\n\\end{equation}\nThe deformation now appears as a distortion of the coordinates in the\npotential $\\cal V$.\n\nThus, the stress and the elastic constants can be viewed as the first\nand the second derivatives of the free energy density with respect\nto various $\\eta_{ij}$. 
Since pairs of terms, such as $C_{1123}\\eta_{11}\\eta_{23}+\nC_{1132}\\eta_{11}\\eta_{32}$,\nwhich contain identical $\\eta$ factors, always appear in the expansion in Eq.~\\ref{def:C}, we can choose to define\nthe tensor in a symmetric form $C_{ijkl}=C_{jikl}=C_{ijlk}$.\n(An additional symmetry $C_{ijkl}=C_{klij}$ is also evident from\nthe definition of the tensor.) Strictly speaking, since \n$\\eta_{12}=\\eta_{21}$, these should be treated as a single variable \nwhen taking the derivatives\nof the free energy density. However, terms containing those two\nvariables also appear twice in Eq.~\\ref{def:C}. Thus, one can simply\ntreat $\\eta_{12}$ and $\\eta_{21}$ as independent variables, and symmetrize\nthe results with the interchange of indices at the end. Alternatively,\none may view each derivative $\\partial\/\\partial \\eta_{12}$\nas $\\frac{1}{2}(\\partial\/\\partial \\eta_{12}+\\partial\/\\partial \\eta_{21})$.\nBelow we always present fully symmetrized expressions.\n\nFrom Eqs. \\ref{def:C} and \\ref{def:F} we can express the stress tensor\n\\begin{equation}\nV_0\\sigma_{ij}=\\frac{\\partial{F}}{\\partial{\\eta_{ij}}}\n\\biggm|_{\\{\\eta\\}=\\{0\\}} \n=-\\frac{kT}{Z}\\frac{\\partial{Z}}{\\partial{\\eta_{ij}}}\\biggm|_{\\{\\eta\\}=\\{0\\}}\n\\end{equation}\nand the second order elastic constants\n\\begin{eqnarray}\n&&V_0C_{ijmn}=\\frac{\\partial^2 F}{\\partial\\eta_{mn}\\partial\\eta_{ij}}\n\\biggm|_{\\{\\eta\\}=\\{0\\}} \\nonumber\\\\\n&&=\\left[\\frac{kT}{Z^2}\\frac{\\partial{Z}}{\\partial{\\eta_{mn}}}\n\\frac{\\partial{Z}}{\\partial{\\eta_{ij}}} -\n \\frac{kT}{Z}\\frac{\\partial^2{Z}}{\\partial{\\eta_{mn}}\\partial{\\eta_{ij}}}\\right]\n\\biggm|_{\\{\\eta\\}=\\{0\\}} \n\\end{eqnarray}\nin terms of the derivatives of $Z$. As can be seen from Eq.~\\ref{def:Z}\nthe dependence of $Z$ on the deformation is contained in the Jacobian $J$\nand in the arguments of the potential $\\cal V$. 
The Jacobian depends directly\non $\\eta_{ij}$ and its derivatives can be easily calculated. In particular,\nwe find (see, e.g., Ref. \\cite{fk_formalism}) that \n\\begin{equation}\n\\frac{\\partial J}{\\partial\\eta_{ij}}\\biggm|_{\\{\\eta\\}=\\{0\\}}=N\\delta_{ij}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial^2 J}{\\partial\\eta_{mn}\\partial\\eta_{ij}}\\biggm|_{\\{\\eta\\}=\\{0\\}}=\nN^2\\delta_{ij}\\delta_{mn}-N\\delta_{im}\\delta_{jn}-N\\delta_{in}\\delta_{jm}.\n\\end{equation}\nTaking the derivatives of $\\cal V$ involves the differentiation of \nthe potential with respect to $M_{ij}$, followed by the differentiation \nof $M_{ij}$ with respect to $\\epsilon_{kl}$ using Eq.~\\ref{def:epsilon}, \nfollowed by the differentiation of $\\epsilon_{kl}$ with respect to \n$\\eta_{mn}$ using Eq.~\\ref{epsiloneta}. This leads to the following \nexpressions for the stress and elastic constants:\n\\begin{widetext}\n\\begin{equation}\\label{softstress}\nV_0\\sigma_{ij}= -NkT\\delta_{ij}+\\frac{1}{2}\\sum_{\\langle\\alpha\\beta\\rangle}\n\\left\\langle \\frac{\\partial\\Phi\\ }{\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_j+\n \\frac{\\partial\\Phi\\ }{\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_i\\right\\rangle\\ ,\n\\end{equation}\n\\begin{eqnarray}\\label{softelast}\n&&V_0C_{ijmn}=NkT(\\delta_{im}\\delta_{jn}+\\delta_{in}\\delta_{jm})\\nonumber\\\\\n&+&\\frac{1}{4kT}\n\\sum_{\\langle\\alpha\\beta\\rangle}\\left\\langle \n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_j+\n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_i\n\\right\\rangle\n\\sum_{\\langle\\gamma\\delta\\rangle}\\left\\langle \n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_m}R^{\\gamma\\delta}_n+\n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_n}R^{\\gamma\\delta}_m\n\\right\\rangle \\nonumber\\\\\n&-&\\frac{1}{4kT}\\sum_{\\langle\\alpha\\beta\\rangle,\\langle\\gamma\\delta\\rangle}\n\\Bigg\\langle \\frac{\\partial\\Phi}{\\partial 
R^{\\alpha\\beta}_m}\n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_i} R^{\\alpha\\beta}_n R^{\\gamma\\delta}_j\n+\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_n}\n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_i} R^{\\alpha\\beta}_m R^{\\gamma\\delta}_j\n+\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_m}\n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_j} R^{\\alpha\\beta}_n R^{\\gamma\\delta}_i\n+\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_n}\n\\frac{\\partial\\Phi}{\\partial R^{\\gamma\\delta}_j} R^{\\alpha\\beta}_m R^{\\gamma\\delta}_i\n\\Bigg\\rangle\n \\nonumber\\\\\n&+&\\frac{1}{4}\\sum_{\\langle\\alpha\\beta\\rangle}\\Bigg\\langle\n\\frac{\\partial^2\\Phi}{\\partial R^{\\alpha\\beta}_m\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_nR^{\\alpha\\beta}_j\n+\\frac{\\partial^2\\Phi}{\\partial R^{\\alpha\\beta}_n\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_mR^{\\alpha\\beta}_j\n+\\frac{\\partial^2\\Phi}{\\partial R^{\\alpha\\beta}_m\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_nR^{\\alpha\\beta}_i\n+\\frac{\\partial^2\\Phi}{\\partial R^{\\alpha\\beta}_n\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_mR^{\\alpha\\beta}_i \\Bigg\\rangle\n \\nonumber\\\\\n&-&\\frac{1}{8}\\sum_{\\langle\\alpha\\beta\\rangle}\\Bigg\\{\n\\left\\langle \\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_n+\n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_n}R^{\\alpha\\beta}_j \\right\\rangle\\delta_{im}\n+\\left\\langle \\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_m+\n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_m}R^{\\alpha\\beta}_i \\right\\rangle\\delta_{jn}\n\\nonumber\\\\\n&+&\\left\\langle \\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_n+\n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_n}R^{\\alpha\\beta}_i \\right\\rangle\\delta_{jm}\n+\\left\\langle \\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_j}R^{\\alpha\\beta}_m+\n\\frac{\\partial\\Phi}{\\partial 
R^{\\alpha\\beta}_m}R^{\\alpha\\beta}_j \\right\\rangle\\delta_{in} \\Bigg\\} \\ .\n\\end{eqnarray}\n\\end{widetext}\nIn the above equations we already use the coordinates $\\bf R^{\\alpha\\beta}$\nof the undistorted system to emphasize the fact that all the averages are now\ncalculated in the absence of the deformation.\n\nSince $(\\partial\\Phi\/\\partial R^{\\alpha\\beta}_i)R^{\\alpha\\beta}_j=\n-f^{\\alpha\\beta}_iR^{\\alpha\\beta}_j$, where ${\\bf f}^{\\alpha\\beta}$ is\nthe force between the particles $\\alpha$ and $\\beta$, we can recognize in \nEq.~\\ref{softstress} the standard virial theorem, although in the textbooks \\cite{virtheorem} the derivation is quickly reduced to calculation \nof the (isotropic) pressure $p$.\n\nThe accuracy of the expressions \\ref{softstress} and \\ref{softelast} can \nbe verified by reducing the above formulae to the case of isotropic \ncentral force potential in which case\n\\begin{equation}\n\\frac{\\partial\\Phi}{\\partial R_i} = \\Phi'\\frac{R_i}{R}\\ ,\n\\end{equation}\nwhere prime denotes a derivative of $\\Phi$ with respect to \ninter-particle separation $R$. Similarly,\n\\begin{equation}\n\\frac{\\partial^2\\Phi}{\\partial R_i\\partial R_j}=\n\\Phi''\\frac{R_iR_j}{R^2}\n+\\Phi'\\frac{\\delta_{ij}}{R}\n-\\Phi'\\frac{R_iR_j}{R^3}\n\\end{equation}\nAfter performing these substitutions, we recover the standard expressions \nfor central force potentials \\cite{shh}.\n\n\n\\section{Hard potentials}\\label{sec:hardelast}\n\n\\begin{figure}\n\\includegraphics[height=5cm]{MKfig1.eps}\n\\caption{\\label{fig:eldefs} Two ``hard'' particles at a close approach.\n${\\bf R}^{\\alpha\\beta}$ is a vector connecting their centers, and\n$s^{\\alpha\\beta}$ is the minimal separation of the surfaces.\n }\n\\end{figure}\n\n\nThe expressions for stress and elastic constants that have been obtained \nin the previous section, presumed smooth potentials with well defined\nfirst and second derivatives. 
There is a certain difficulty in \ntranslating the expressions obtained for soft potentials to a hard \npotential situation. For instance, the term\n$(\\partial\\Phi\/\\partial R^{\\alpha\\beta}_i)R^{\\alpha\\beta}_j$\nin Eq.~\\ref{softstress} looks poorly defined for a hard potential that\nchanges on contact between 0 and $\\infty$. However, we note that the derivative\nof the potential really originates from the derivative\n$\\partial[{\\rm e}^{-\\Phi({\\bf R}^{\\alpha\\beta},\\Omega^\\alpha,\\Omega^\\beta)\/kT}]\/\n\\partial R^{\\alpha\\beta}_i$. The latter is a derivative of a step function\nthat changes between 0 and 1 when the potential changes between $\\infty$\nand 0. This observation has been used in Refs. \\cite{runge,barker} to derive\nsimple expressions for the pressure of the hard sphere system. A detailed \nand rigorous description of the various aspects (including calculation of \npressure) of interaction of hard convex solids can be found in Ref. \\cite{hardpotreview}. A typical interaction between two hard particles \n(e.g., ellipses in 2D or ellipsoids in 3D) is represented by\nFig. \\ref{fig:eldefs}, which depicts a close approach of two such particles, \nwhere ${\\bf R}^{\\alpha\\beta}$ is the vector connecting their centers, \nwhile $s^{\\alpha\\beta}$ denotes the minimal distance between them. 
\nThe gradient of \n${\\rm e}^{-\\Phi({\\bf R}^{\\alpha\\beta},\\Omega^\\alpha,\\Omega^\\beta)\/kT}$\nwith respect to ${\\bf R}^{\\alpha\\beta}$, when $\\Omega^\\alpha$ and \n$\\Omega^\\beta$ are kept fixed, is simply \n${\\bf n}^{\\alpha\\beta}\\delta(s^{\\alpha\\beta})$,\nwhere ${\\bf n}^{\\alpha\\beta}$ is the unit vector perpendicular to the \nsurfaces at the point of contact, pointing from $\\alpha$ to $\\beta$.\nThus we can make the substitution\n\\begin{eqnarray}\\label{substitution}\n&&\\left\\langle \\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_i}R^{\\alpha\\beta}_j\\right\\rangle\n\\rightarrow -kT\\int d{\\bf R}^\\alpha d{\\bf R}^\\beta d\\Omega^\\alpha d\\Omega^\\beta\\times\n\\nonumber\\\\\n&&\nn^{\\alpha\\beta}_i\\delta(s^{\\alpha\\beta})\nR^{\\alpha\\beta}_jP({\\bf R}^\\alpha,\\Omega^\\alpha,{\\bf R}^\\beta, \\Omega^\\beta)\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\\label{eq:Pdef}\n&&P({\\bf R}^\\alpha,\\Omega^\\alpha,{\\bf R}^\\beta, \\Omega^\\beta)=\n\\nonumber\\\\\n&&\\frac{1}{Z}\\int\\prod_{\\lambda=1\\atop\\lambda\\ne\\alpha,\\beta}^N(d{\\bf R}^\\lambda d\\Omega^\\lambda){\\rm e}^{-\\sum_{\\langle\\mu\\nu\\rangle\\atop\\ne\\langle\\alpha\\beta\\rangle}\\Phi({\\bf R}^{\\mu\\nu},\\Omega^\\mu,\\Omega^\\nu)\/kT}\n\\end{eqnarray}\nis the probability density of particles $\\alpha$ and $\\beta$ to be at particular\npositions and orientations. The integral in Eq.~\\ref{substitution} simply\nrepresents the average of $n^{\\alpha\\beta}_iR^{\\alpha\\beta}_j$ over all possible\ncontacts between two particles weighted with the proper probability densities of\nthose contacts. (Since the probability density changes from a finite value,\nwhen the particles are almost in contact, to zero, when the particles overlap,\nwe need to consider the case when $s^{\\alpha\\beta}$ approaches 0 from the positive\nside.) 
Thus, if we denote \n$\\Delta_i^{\\alpha\\beta}=n^{\\alpha\\beta}_i\\delta(s^{\\alpha\\beta}-0^+)$, we can\nfind the hard potential limit by replacement\n\\begin{equation}\\label{hardlim}\n\\frac{\\partial\\Phi}{\\partial R^{\\alpha\\beta}_i}\n\\rightarrow -kT\\Delta_i^{\\alpha\\beta}\n\\end{equation}\nBy substituting the result into Eq.~\\ref{softstress} we obtain the following\nexpression for the stress in the case of hard potentials:\n\\begin{equation}\\label{eq:hardstress}\n\\frac{V_0\\sigma_{ij}}{kT}= -N\\delta_{ij}-\\frac{1}{2}\\sum_{\\langle\\alpha\\beta\\rangle}\n\\left\\langle \\Delta_i^{\\alpha\\beta}R^{\\alpha\\beta}_j+\n \\Delta_j^{\\alpha\\beta}R^{\\alpha\\beta}_i\\right\\rangle .\n\\end{equation}\n\n\nThe substitution appearing in Eq.~\\ref{hardlim} cannot be used to calculate\nthe elastic constants in Eq.~\\ref{softelast}, since the latter expression\ncontains a double summation (over pairs $\\langle\\alpha\\beta\\rangle$ and\n$\\langle\\gamma\\delta\\rangle$) of the derivatives of the potentials. Such summation\nincludes a term where $\\langle\\alpha\\beta\\rangle=\\langle\\gamma\\delta\\rangle$.\nA direct substitution of Eq.~\\ref{hardlim} would lead to the appearance\nof the term $[\\delta(s^{\\alpha\\beta})]^2$ which causes the expression to diverge.\n(This is a {\\em true divergence}, rather than some mathematical subtlety of\napproaching the limit of hard potentials.) However, Eq.~\\ref{softelast} also\ncontains the second derivative of $\\Phi$, which becomes poorly defined in the\nhard potential limit. We shall show in the Appendix \\ref{sec:regularization}\nthat the {\\em sum} of the apparently divergent and poorly defined terms has a well defined hard \npotential limit. In fact this sum can be transformed into an expression \nin Eq.~\\ref{eq:Chard} which includes only first derivatives of potentials \nand products of the first derivatives of potentials of {\\em different} \nparticle pairs. 
While the resulting expression in Eq.~\\ref{eq:Chard} \nlooks more complicated, it can be simply transformed (using \nEq.~\\ref{hardlim}) into an expression for the elastic constants of hard particles:\n\n\n\n\\begin{widetext}\n\\begin{eqnarray}\\label{eq:hardelast}\n\\frac{V_0C_{ijmn}}{kT}&=&N(\\delta_{im}\\delta_{jn}+\\delta_{in}\\delta_{jm})\n\\nonumber\\\\\n&+&\\frac{1}{4}\n\\sum_{\\langle\\alpha\\beta\\rangle}\\left\\langle \n\\Delta_i^{\\alpha\\beta}R^{\\alpha\\beta}_j+\n\\Delta_j^{\\alpha\\beta}R^{\\alpha\\beta}_i\n\\right\\rangle\n\\sum_{\\langle\\gamma\\delta\\rangle}\\Big\\langle \n\\Delta_m^{\\gamma\\delta}R^{\\gamma\\delta}_n+\n\\Delta_n^{\\gamma\\delta}R^{\\gamma\\delta}_m\n\\Big\\rangle \\nonumber\\\\\n&-&\\frac{1}{4}\\sum_{\\langle\\alpha\\beta\\rangle}\\sum_{\\langle\\gamma\\delta\\rangle\n\\atop \\ne \\langle\\alpha\\beta\\rangle}\n\\left\\langle \\Delta_m^{\\alpha\\beta}\n\\Delta_i^{\\gamma\\delta} R^{\\alpha\\beta}_n R^{\\gamma\\delta}_j\n+\\Delta_n^{\\alpha\\beta}\n\\Delta_i^{\\gamma\\delta} R^{\\alpha\\beta}_m R^{\\gamma\\delta}_j\n+\\Delta_m^{\\alpha\\beta}\n\\Delta_j^{\\gamma\\delta} R^{\\alpha\\beta}_n R^{\\gamma\\delta}_i\n+\\Delta_n^{\\alpha\\beta}\n\\Delta_j^{\\gamma\\delta} R^{\\alpha\\beta}_m R^{\\gamma\\delta}_i\n\\right\\rangle\n 
\\nonumber\\\\\n&+&\\frac{1}{8}\\sum_{\\langle\\alpha\\beta\\rangle}\n\\sum_{\\gamma\\atop\\ne\\alpha,\\beta}\\Big\\langle\n\\Delta_m^{\\alpha\\beta}\n\\left(\\Delta_i^{\\gamma\\beta}+\\Delta_i^{\\alpha\\gamma}\\right)\nR^{\\alpha\\beta}_nR^{\\alpha\\beta}_j\n+\\Delta_n^{\\alpha\\beta}\n\\left(\\Delta_i^{\\gamma\\beta}+\\Delta_i^{\\alpha\\gamma}\\right)\nR^{\\alpha\\beta}_mR^{\\alpha\\beta}_j\n\\nonumber\\\\\n&+&\\Delta_m^{\\alpha\\beta}\n\\left(\\Delta_j^{\\gamma\\beta}\n+\\Delta_j^{\\alpha\\gamma}\\right)\nR^{\\alpha\\beta}_nR^{\\alpha\\beta}_i\n+\\Delta_n^{\\alpha\\beta}\n\\left(\\Delta_j^{\\gamma\\beta}\n+\\Delta_j^{\\alpha\\gamma}\\right)\nR^{\\alpha\\beta}_mR^{\\alpha\\beta}_i\\Big\\rangle\n\\nonumber\\\\\n&+&\\frac{1}{4}\\sum_{\\langle\\alpha\\beta\\rangle}\\Big\\{\n\\left\\langle \\Delta_j^{\\alpha\\beta}R^{\\alpha\\beta}_n+\n\\Delta_n^{\\alpha\\beta}R^{\\alpha\\beta}_j \\right\\rangle\\delta_{im}\n+\\left\\langle \\Delta_i^{\\alpha\\beta}R^{\\alpha\\beta}_m+\n\\Delta_m^{\\alpha\\beta}R^{\\alpha\\beta}_i \\right\\rangle\\delta_{jn}\n+\\left\\langle \\Delta_i^{\\alpha\\beta}R^{\\alpha\\beta}_n+\n\\Delta_n^{\\alpha\\beta}R^{\\alpha\\beta}_i \\right\\rangle\\delta_{jm}\n\\nonumber\\\\\n&+&\\left\\langle \\Delta_j^{\\alpha\\beta}R^{\\alpha\\beta}_m+\n\\Delta_m^{\\alpha\\beta}R^{\\alpha\\beta}_j \\right\\rangle\\delta_{in} \n+\\left\\langle \\Delta_m^{\\alpha\\beta}R^{\\alpha\\beta}_n+\n\\Delta_n^{\\alpha\\beta}R^{\\alpha\\beta}_m \\right\\rangle\\delta_{ij}\n+\\left\\langle \\Delta_i^{\\alpha\\beta}R^{\\alpha\\beta}_j+\n\\Delta_j^{\\alpha\\beta}R^{\\alpha\\beta}_i \\right\\rangle\\delta_{mn}\n\\Big\\} .\n\\end{eqnarray}\n\n\\end{widetext}\nEach of the terms in the above equation corresponds to some particular\ncase of contact between pairs of particles. 
E.g., a term of the type\n$\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle$ in the\nlast sum corresponds to a contact between a pair of\nparticles $\alpha$ and $\beta$, and consequently the contribution of\nthat sum is proportional to $N$. The sum with the prefactor $\frac{1}{8}$\ncontains terms corresponding to three touching particles: E.g., the\nterm $\langle\Delta_m^{\alpha\beta}\Delta_i^{\gamma\beta}R^{\alpha\beta}_n\nR^{\alpha\beta}_j\rangle$\ncorresponds to the situation when particle $\beta$ touches particles $\alpha$ and\n$\gamma$ simultaneously. The number of such contacts at any given moment \nis also\nproportional to $N$. The second and the third lines of the equation\nhave $N^2$ different averages corresponding to the number\nof pairs of contacts that might appear in the system. However, we note \nthat in both sums taken together we always have differences of the\ntype \n$\langle\Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle \langle\Delta_m^{\gamma\delta}R^{\gamma\delta}_n\rangle-\n\langle\Delta_i^{\alpha\beta}R^{\alpha\beta}_j \Delta_m^{\gamma\delta}R^{\gamma\delta}_n\rangle$. This term\nvanishes if the contact between the pair $\langle\alpha\beta\rangle$\nis {\em uncorrelated} with the contact between the pair\n$\langle\gamma\delta\rangle$. This happens when the two\npairs are outside the correlation distance. Consequently, only \npairs of contacts close to each other will contribute, and therefore\nthe total contribution of those terms is proportional to $N$.\n\n\section{Application to hard ellipse system}\label{sec:application}\n\nEquations \ref{eq:hardstress} and \ref{eq:hardelast} enable the calculation\nof the stress and elastic constants of a system consisting of hard particles,\nprovided we are able to calculate thermal averages of the type\n$\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle$. 
This expression\ndepends on the probability density of particles being in contact.\nFor the calculation of stress we need only the probability {\em density}\nof a single contact between a pair of particles, while the calculation\nof elastic constants involves the probability density of two such contacts\nhappening simultaneously. It is natural to use the MC method\n\cite{hoover_bk,frenkel_bk,binder_bk} to evaluate averages of this type.\nMC simulation of hard potentials is particularly simple since no energy\nscale is present, and every elementary move of a particle is either accepted\nwithout a need to calculate the Boltzmann factor, or results in a forbidden\nconfiguration, and is, consequently, rejected. Application of these\nprocedures requires a method to identify the intersection of two hard particles.\nSuch methods have been found both for 2D ellipses \cite{vieillard}\nand for 3D ellipsoids \cite{hard_spheroids_funct,perram}.\nIn Appendix \ref{sec:overlap} we explain in detail the case of ellipses which is\nused in the current work. Here, we will consider a slightly more general case\nwhen a simple function can be defined that identifies the contact between\ntwo particles.\n\nConsider a function $\Psi$ that depends on the positions and orientations\nof two particles, that vanishes when the particles touch each other and \nis positive when the particles are separated. (The \nformalism can be trivially generalized to the case when the function\nhas some other non-vanishing constant value at the contact.) Consider a case\nwhen the orientations of both particles are fixed, and we explore positions\nwhere the function vanishes. Figs.~\ref{fig:elcontact}a and \n\ref{fig:elcontact}b\ndepict 2D cases of one fixed ellipse, while another (identical) ellipse \nis rotated by a fixed angle and is shown in a variety of positions where \nthey touch. 
The ratio between the major and minor semi-axes of \nthe ellipses, $a$ and $b$, respectively, is different in the two pictures.\nThe thick line traces the possible positions of the center of the \nmoving ellipse. Note that the shape of such a contact line depends on\nthe degree of elongation of the ellipses and on their relative orientation.\nThose are the positions that are relevant for the calculation of the average \n$\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle$. In 2D\nthis is a line, while in 3D this is a surface. \n\n\n\begin{figure}\n\includegraphics[height=10cm]{MKfig2ab.eps}\n\vskip 1cm\n\caption{\label{fig:elcontact} (a) The center of a slightly elongated\nellipse ($a\/b=3$) tilted by 45$^o$ with respect to an identical (vertical)\nellipse circumscribes an ``oval'' trajectory, depicted by the thick line,\nwhen the contact point moves along the ellipse.\n(b) The center of a strongly elongated ellipse ($a\/b=14$)\nrotated by 90$^o$ with respect to an identical (vertical) ellipse\ncircumscribes a ``rounded square'' trajectory, depicted by the thick line,\nwhen the contact point moves along the ellipse.\n }\n\end{figure}\n\n\nThe direction of the force, normal to the contact plane, is also normal \nto this surface. Thus, assuming that $\Psi$ is a sufficiently smooth \nfunction, we can calculate $n^{\alpha\beta}_i=\n(\partial\Psi^{\alpha\beta}\/\partial R^{\alpha\beta}_i)\/\n|\nabla\Psi^{\alpha\beta}|$, where the gradient (and the partial derivative)\nwith respect to ${\bf R}^{\alpha\beta}$ is taken when the orientations of\nthe particles are fixed. 
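As a concrete (and hedged) numerical illustration of the preceding formula, the unit normal $n^{\alpha\beta}_i=(\partial\Psi^{\alpha\beta}\/\partial R^{\alpha\beta}_i)\/|\nabla\Psi^{\alpha\beta}|$ can be evaluated by central finite differences for any sufficiently smooth contact function. The sketch below does NOT use the ellipse function of Appendix \ref{sec:overlap}; instead it assumes, purely for illustration, the trivial contact function of two disks of radius $1$ (vanishing at contact, positive when separated), for which the normal must point along the center-to-center vector.

```python
import numpy as np

def psi_disks(R, radius=1.0):
    # Stand-in contact function for two equal disks: vanishes when the
    # center separation equals 2*radius, positive when separated.
    return np.dot(R, R) - (2.0 * radius) ** 2

def contact_normal(psi, R, h=1e-6):
    # Central finite-difference gradient of psi with respect to the
    # center-to-center vector R (orientations held fixed), normalized:
    # n_i = (dpsi/dR_i) / |grad psi|.
    grad = np.zeros_like(R)
    for i in range(len(R)):
        dR = np.zeros_like(R)
        dR[i] = h
        grad[i] = (psi(R + dR) - psi(R - dR)) / (2.0 * h)
    return grad / np.linalg.norm(grad)

# For disks the normal must be parallel to R/|R|.
R = np.array([1.6, 1.2])        # |R| = 2.0, i.e. exactly at contact
n = contact_normal(psi_disks, R)
print(n)                        # parallel to R/|R| = (0.8, 0.6)
```

For the actual ellipse function the same finite-difference routine applies unchanged; only `psi_disks` would be replaced.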
In the thermal average we need to calculate\n\begin{eqnarray}\label{eq:example_av}\n&&\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle\n=\int d\Omega^{\alpha}d\Omega^{\beta}\times\n\nonumber\\\n&&\n\int dS^{\alpha\beta} n^{\alpha\beta}_iR^{\alpha\beta}_j\nP({\bf R}^{\alpha},\Omega^{\alpha},{\bf R}^{\beta},\Omega^{\beta}),\n\end{eqnarray}\nwhich involves the integration along the contact surface (or line) \n$S^{\alpha\beta}$ of the probability density $P({\bf R}^{\alpha},\n\Omega^{\alpha},{\bf R}^{\beta},\Omega^{\beta})$, defined in \nEq.~\ref{eq:Pdef}, of two particles being in those positions and \norientations. During a MC simulation such an event strictly never \noccurs. We can replace the integration along the surface by\nan integration inside a thin shell of thickness $t$ along the\ncontact surface. In such a case $dS^{\alpha\beta}P\approx(dV\/t)P=dp\/t$,\nwhere $dV$ is the volume element and $dp$ is the {\em probability} \nof the center of a particle\nbeing within the shell, in some particular area. Note that the thickness\nof the shell does not have to be constant, but can vary from place to place\non the contact surface. In fact, we can define the shell as corresponding\nto all positions for which $0\le\Psi^{\alpha\beta}<\Psi_0$, where\n$\Psi_0$ is some fixed number. Fig. \ref{fig:psi0} depicts the boundaries\nof such a shell, corresponding to two values of $\Psi^{\alpha\beta}$, for\na function defined in Appendix \ref{sec:overlap}. If $\Psi_0$ is small enough,\nwe can determine the local thickness of the shell from the relation\n$\Psi_0\approx|\nabla\Psi^{\alpha\beta}|t$. 
Substituting the values\nof $t$ and $n^{\alpha\beta}_i$ expressed via the function $\Psi^{\alpha\beta}$\ninto Eq.~\ref{eq:example_av}, and noting that the approximate expressions\nmentioned in this paragraph become exact for vanishing $\Psi_0$, we\narrive at the expression\n\begin{equation}\n\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle\n=\lim_{\Psi_0\to 0}\frac{1}{\Psi_0}\int d\Omega^{\alpha}d\Omega^{\beta}\n\int\limits_{S(\Psi_0)} \frac{\partial\Psi^{\alpha\beta}}{\partial R^{\alpha\beta}_i}R^{\alpha\beta}_j dp\n\end{equation}\n\begin{equation}\label{eq:lim}\n\langle \Delta_i^{\alpha\beta}R^{\alpha\beta}_j\rangle\n=\lim_{\Psi_0\to 0} \frac{1}{\Psi_0}\left\langle\frac{\partial\Psi^{\alpha\beta}}{\partial R^{\alpha\beta}_i}R^{\alpha\beta}_j\right\rangle_{S(\Psi_0)}\n\end{equation}\nIn these equations, $S(\Psi_0)$ in the integral and in the thermal \naverage denotes the shell defined by the limit $\Psi_0$ on the function $\Psi^{\alpha\beta}$.\n\n\begin{figure}\n\includegraphics[height=5cm]{MKfig3.eps}\n\caption{\label{fig:psi0} Lines of $\Psi=0$ and $\Psi=1000$ for a pair\nof ellipses with aspect ratio $E=a\/b=2$ with their axes rotated\nby 45$^o$ for a function defined in Appendix \ref{sec:overlap}.\nNote that the thickness of the area between two lines\nof fixed $\Psi$ varies slightly.\n }\n\end{figure}\n\nIn the numerical calculation of the stress, the limit in \nEq.~\ref{eq:lim} is not easy to implement: Using a large $\Psi_0$\nleads to an inaccurate answer, while using a small $\Psi_0$ leads\nto a small number of ``almost contact\" events, and, consequently, to\nlarge statistical errors. One may try considering a numerical\nextrapolation to $\Psi_0=0$ by measuring the stress for a \nsequence of decreasing $\Psi_0$s. However, the events for\nsmaller values of $\Psi_0$ are also contained in the set of\nevents for larger $\Psi_0$s. 
It is difficult to extrapolate\nsuch correlated sets of data points. The independence of the\ndata points can be achieved by calculating a sequence of values\nof the stress for ``contact shells'' defined by $\Psi^{\alpha\beta}$\nlocated in a sequence of segments $[0,\Psi_0),[\Psi_0,2\Psi_0),\dots\n[K\Psi_0,(K+1)\Psi_0),\dots$ ($K$ is an integer). Values of $\sigma_{ij}$ \ncan now be conveniently extrapolated to their ``real\" values. \nA similar method has been used by Farago and Kantor \cite{fk_formalism} \nto calculate the stress and elastic constants of hard sphere solids.\n\nThe terms in the expression for the elastic constants involving two\npairs of particles can be handled similarly. One simply uses $\Psi_{01}$\nand $\Psi_{02}$ to define {\em two} shells, $S(\Psi_{01})$ and \n$S(\Psi_{02})$, respectively, each corresponding to a particular contact, \nand considers the events which occur when both pairs of particles are \nwithin their respective shells simultaneously. E.g.,\n\begin{eqnarray}\n&&\langle \Delta_m^{\alpha\beta}R^{\alpha\beta}_n \n\Delta_i^{\gamma\delta} R^{\gamma\delta}_j\rangle=\n\nonumber\\\n\lim_{{\Psi_{01}\to 0\atop\Psi_{02}\to 0}} && \frac{1}{\Psi_{01}\Psi_{02}}\left\langle\frac{\partial\Psi^{\alpha\beta}}{\partial R^{\alpha\beta}_m}R^{\alpha\beta}_n\n\frac{\partial\Psi^{\gamma\delta}}{\partial R^{\gamma\delta}_i}R^{\gamma\delta}_j\n\right\rangle_{S(\Psi_{01}),S(\Psi_{02})} .\n\end{eqnarray}\nCompared with the case of the stress, the numerical evaluation of \nthe limit where the thickness of the shells vanishes presents an\neven bigger numerical problem, since\nthe probability of two contact events is very small. 
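The essential numerical step for the stress, namely collecting independent per-shell estimates over the disjoint segments $[K\Psi_0,(K+1)\Psi_0)$ and extrapolating them to vanishing shell thickness, can be sketched as follows. The event data here are synthetic (a linear dependence of the shell average on $\Psi$ is assumed purely for the example, in place of an actual MC run):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "near-contact events": each event carries the value of the
# contact function psi and an observable w standing in for the quantity
# (dPsi/dR_i) R_j attributed to that event.  The true contact (psi -> 0)
# value is taken to be 5.0, with an assumed linear drift in psi.
psi = rng.uniform(0.0, 10.0, size=200_000)
w = 5.0 + 0.3 * psi + rng.normal(0.0, 0.1, size=psi.size)

# Disjoint segments [K*psi0, (K+1)*psi0): statistically independent bins.
psi0, nbins = 1.0, 10
centers, means = [], []
for K in range(nbins):
    mask = (psi >= K * psi0) & (psi < (K + 1) * psi0)
    centers.append((K + 0.5) * psi0)
    means.append(w[mask].mean())

# Linear extrapolation of the independent per-shell averages to psi -> 0.
slope, intercept = np.polyfit(centers, means, 1)
print(round(intercept, 2))  # recovers the assumed contact value, 5.0
```

For the elastic constants the same idea is applied to a 2D array of shell pairs, with a surface rather than a line being extrapolated to the origin.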
Nevertheless, this can\nbe handled in a similar way, by considering a 2D array of segments\nof the type $\{[K\Psi_0,(K+1)\Psi_0),[L\Psi_0,(L+1)\Psi_0)\}$ \n($K$ and $L$ are integers)\nand obtaining the values of the various parts of the elastic constants by\nextrapolating the 2D surface to its ``real\" value of vanishing contact layer\nthickness.\n\n\n\section{Results of simulations}\label{sec:results}\n\nWe used the method developed in this work to calculate the elastic \nproperties of a 2D hard ellipse system as a part of a study\nof its phase diagram \cite{mk}. Here, we briefly demonstrate \nthe usefulness of the method. As in any hard particle system,\ntemperature plays no role, since the interactions have no \n``energy scale.\" The temperature appears only as a multiplicative\nprefactor in the free energy $F$ and in Eq.~\ref{eq:hardstress}\nfor the stress and Eq.~\ref{eq:hardelast} for the elastic constants. The\nresults depend on the density of the particles and their size and shape:\nwe characterized the system by the number of particles per unit\narea $\rho$ and by the sizes of the major and minor semi-axes, $a$ \nand $b$, of the ellipses. Frequently, the reduced density \n$\rho^*\equiv 4\rho a b$ is used. The maximal possible (close packed) \n$\rho^*$ is independent of the aspect ratio $E=a\/b$ of the ellipses and is\nequal to $2\/\sqrt{3}\approx 1.155$. It should be noted \cite{vieillard}\nthat for every fixed $a$ and $b$ there is an infinity of possible \n(equally dense) close \npacked states which are obtained by orienting all ellipses in the same \ndirection and packing them into a (distorted triangular) periodic \nstructure.\n\nThe system of hard disks ($E=1$) has been extensively studied. 
For\n$\rho^*\agt 0.91$ it forms a periodic 2D solid --- a triangular lattice.\nThe correlation function of atom positions of such a solid decays \nto zero as a power law of the separation between the atoms \cite{mermin}.\nSuch behavior is usually denoted as quasi-long-range order. At the\nsame time the orientations of the ``bonds\" (imaginary lines connecting \nneighboring atoms) have a long range correlation \cite{mermin_bond}.\nThe system is liquid for $\rho^*\alt 0.89$, i.e., it has no long range \norder of any kind. At intermediate\ndensities the system is probably hexatic \cite{hny} --- a phase\nwith algebraically decaying bond-orientational order, but without\npositional order. (However, even very large scale simulations \cite{jm}\nhave difficulties in distinguishing the hexatic phase from coexisting\nsolid and liquid phases.) From the point of view of elasticity \ntheory, all three phases are isotropic, i.e., their second order elastic\nconstants are determined by two independent constants. An aspect ratio \n$E\ne1$ of the ellipses adds an additional order parameter ---\ntheir orientation. E.g., for $E=4$ a system of ellipses forms an\nisotropic liquid for densities $\rho^*\alt 0.8$. For larger densities \nthe ellipses in the liquid become oriented (``nematic phase''). Finally, at\n$\rho^*\approx 1.0$ the system becomes a solid of orientationally\nordered ellipses \cite{cuesta}. For weakly elongated ellipses, we\nexpect the particles to remain orientationally disordered in the entire\nliquid phase, and with increasing density to go (possibly via a hexatic\nphase) to a crystalline state in which the ellipses remain disordered.\nThe 3D analog of such a state is called a {\em plastic\ncrystal} \cite{plastic}. (The presence of such a phase in almost circular \nellipses ($E=1.01$) was observed by Vieillard-Baron \cite{vieillard}.)\nWith increasing density an additional phase transition will bring the \nellipses into an orientationally ordered state. 
The phase diagram which \nincludes such a transition between two solid phases for 3D hard \nellipsoids has been studied by Frenkel {\em et al.} \cite{fmm}.\n\n\nAs a test of our formalism we studied a case of moderately elongated\nhard ellipses with $E=1.5$. We considered a system consisting of \n$N \approx 1000$ ellipses contained in a 2D rectangular box whose \ndimensions were chosen as close as possible to a square. Periodic \nboundary conditions were used. The ellipses were initially placed \non a distorted triangular lattice, commensurate with the dimensions\nof a close packed configuration corresponding to this particular\naspect ratio $E$ and the particular orientation of the ellipses. \nIn this Section we describe only the cases when the initial orientation\nof the major axes of the ellipses was taken to be perpendicular to \none of the axes of \nsymmetry of the ordered crystal drawn through neighboring particles.\nWe first performed an equilibration run at constant pressure, in \norder to allow the system to reach an equilibrated state with \nrespect to the orientational as well as the translational ordering. \nThe orientational order parameter, the box dimensions and the density \nwere monitored during this equilibration run. A MC time unit in the \nequilibration run consisted of $N+1$ elementary moves, one of which, \non the average, was a volume change attempt, and the rest were particle \nmove attempts. A particle move attempt involved choosing a particle \nrandomly and attempting to displace and rotate it simultaneously \nby an amount chosen from a uniform distribution. The move was \naccepted if the displaced particle did not overlap in\nits new position and orientation with other particles. The volume \nchange attempt was identical to that described in \cite{frenkel_bk}. 
\nThe width of the distributions corresponding to the particle moves \nand the parameter of the volume change were chosen so that the average \nsuccess rates of both types of MC moves were about 50\%.\n\nThe use of the constant pressure simulations at the equilibration stage \nenabled us to determine the equilibrium box shape for isotropic \nstress tensor (pressure) conditions. At most densities the final\nconfiguration had orientationally disordered ellipses, and we could\neasily verify that the final configurations were independent of the \nspecific choice of the starting configuration.\nHowever, at extremely high densities, approaching the close packed\ndensity, even after long equilibration the state resembled the starting\nstate of orientationally ordered ellipses. Typically, several million \nMC time units were required to reach equilibrium for a given \npressure. Upon completion of equilibration, we switched to constant \nvolume simulations, during which the stresses and the elastic \nconstants were evaluated. Very long simulation times (about \n$10^7$ MC time units) were required for accurate determination of \nthese constants, because their calculation depends on extremely \nrare events of two pairs of particles simultaneously touching each \nother. The range of ``contact \nshells'' was chosen in such a way that even in the most remote \nshell the separation between the particles was significantly smaller \nthan their mean separation. The statistical accuracy of the elastic \nconstants was evaluated by comparing the results of independent runs. 
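A stripped-down version of the constant-volume stage described above can be sketched as follows. For brevity the sketch assumes hard disks, so a simple distance test replaces the ellipse overlap criterion of Appendix \ref{sec:overlap}, and it omits rotations, the tuning of the step width to a 50\% success rate, and the volume moves of the equilibration stage:

```python
import numpy as np

rng = np.random.default_rng(1)

def overlaps(pos, i, L, diameter=1.0):
    # Hard-disk overlap test for particle i against all others, with
    # periodic boundary conditions via the minimum-image convention.
    d = pos - pos[i]
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf  # ignore self
    return np.any(r2 < diameter ** 2)

def mc_sweep(pos, L, step=0.1):
    # One MC time unit: N single-particle trial moves.  A move is accepted
    # iff the displaced particle overlaps no other particle; for a hard
    # potential no Boltzmann factor ever needs to be computed.
    N = len(pos)
    accepted = 0
    for _ in range(N):
        i = rng.integers(N)
        old = pos[i].copy()
        pos[i] = (pos[i] + rng.uniform(-step, step, 2)) % L
        if overlaps(pos, i, L):
            pos[i] = old  # reject: forbidden configuration
        else:
            accepted += 1
    return accepted / N

# Dilute start: 16 unit-diameter disks on a square grid in a box of side 10.
L = 10.0
g = np.arange(4) * 2.5 + 1.0
pos = np.array([(x, y) for x in g for y in g])
rate = np.mean([mc_sweep(pos, L) for _ in range(50)])
print(rate > 0.5)  # dilute system: most moves are accepted
```

In the actual simulation the overlap test is the ellipse criterion, each trial move also rotates the particle, and the near-contact events needed for Eqs.~\ref{eq:hardstress} and \ref{eq:hardelast} are accumulated during such sweeps.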
\n\n\begin{figure}\n\includegraphics[height=6.5cm]{MKfig4.eps}\n\caption{\label{fig:elast1} \nPressure $p$ (open circles connected by solid line) and the elastic constant\n$C_{1212}$ (inverted open triangles connected by dashed line) of a hard \nellipse system with aspect ratio $E=1.5$\nin the units of $kT\/4ab$ as functions of the reduced density $\rho^*$.\nThe error bars of the pressure are significantly smaller than the\nsymbols denoting the data points.\n }\n\end{figure}\n\begin{figure}\n\includegraphics[height=6.5cm]{MKfig5.eps}\n\caption{\label{fig:elast2} \nElastic constants $C_{1111}$ (open circles connected by solid line),\n$C_{2222}$ (asterisks connected by dotted line) and $C_{1122}$ (full \ntriangles connected by dot-dashed line) of a hard ellipse system with \naspect ratio $E=1.5$\nin the units of $kT\/4ab$ as functions of the reduced density $\rho^*$.\nThe error bars of all the data points are slightly smaller than the\nsymbols denoting them.\n }\n\end{figure}\n\nIn a 2D system which has a reflection symmetry with respect to\neither the $x$ or the $y$ axis, the elastic constants with an index \nappearing an odd number \nof times (such as $C_{1112}$) must vanish. Indeed, our simulations\nshowed that these quantities vanish within the error bars of the\nmeasurement. The system may still have four unrelated elastic constants \n$C_{1111}$, $C_{2222}$, $C_{1122}$ and $C_{1212}$. For a system with\nquadratic symmetry the number of independent elastic constants reduces\nto three. Such systems are frequently characterized by their bulk\nmodulus and two shear moduli, $\mu_1=C_{1212}-p$ and \n$\mu_2=\frac{1}{2}(C_{1111}-C_{1122})-p$. For an isotropic system \n$\mu_1=\mu_2$ and, therefore, there are only two independent\nconstants. (A system with six-fold symmetry is isotropic as far as\nthe elastic constants are concerned.)\nFigs. 
\ref{fig:elast1} and \ref{fig:elast2} depict the pressure and \nfour elastic constants for several values of the reduced density \n$\rho^*$. One can see that for densities $\rho^* \le 1.086$, \n$C_{1111}$ and $C_{2222}$ practically coincide, indicating that these \nsystems are at least quadratic. Furthermore, the identity \n$C_{1111}-C_{1122}=2C_{1212}$, i.e. $\mu_1=\mu_2$, is found \nto hold within a few percent for these values of $\rho^*$. Consequently,\nfor these densities the system is isotropic from the point of view of \nthe elastic properties. Figs.~\ref{fig:configs}a, b and c depict the\nsystem in that range of densities: Figs.~\ref{fig:configs}a and b \nrepresent states with vanishing shear moduli, and neither of them\nexhibits translational order of the ellipse centers. However, while\nFig.~\ref{fig:configs}a represents a state with all the\ncharacteristics of a liquid, the state in Fig.~\ref{fig:configs}b is \ncharacterized by slowly decaying bond orientational order, possibly \nindicating a hexatic phase. At a slightly higher density, \nFig.~\ref{fig:configs}c represents a plastic solid: while the ellipses\nare randomly oriented, the system exhibits long range bond orientational\norder, and algebraically decaying positional order of the particles.\nThe particles occupy, on the average, positions of an undistorted \ntriangular lattice although some undulations are apparent. This state is \ncharacterized by two coinciding positive shear moduli. \n\nWhen we approach to within a few percent of the close packed configuration,\ncorresponding to the two largest densities in Figs.~\ref{fig:elast1} and\n\ref{fig:elast2}, the prolonged relaxation process does not change the\npreferred orientation of the ellipses and the distortion of the\nlattice. It would be reasonable to conclude that at such high\ndensities we have finally arrived at the orientationally ordered elastic\nsolid. The isotropic elastic symmetry no longer holds. 
We checked\nand found that almost all stability criteria \cite{birch,zhou}, \nwhich consider the sign of the energy change upon a small deformation,\nare satisfied. However, one of the shear moduli, namely $\mu_1$,\nis slightly negative, although within one standard statistical\ndeviation from zero. This may indicate either that we are in\nan unstable state, or that there is a continuum of equilibrium states\nwith different mean orientations of the ellipses and, correspondingly,\ndifferent dimensions of the elementary cell.\n\n\begin{figure*}\n\includegraphics[height=6cm]{MKfig6a.eps}\hskip 3cm\n\includegraphics[height=6cm]{MKfig6b.eps}\n\par (a){\hskip 8cm}(b)\n\par\vskip 1cm\n\includegraphics[height=6cm]{MKfig6c.eps}\hskip 3cm\n\includegraphics[height=6cm]{MKfig6d.eps}\n\par (c){\hskip 8cm}(d)\n\caption{\label{fig:configs}\nTypical equilibrium configurations of a slightly eccentric ($E=1.5$) \nhard ellipse system at several densities. Only part of the \nsystem is shown. All the pictures show the same (partial) volume of\nthe system; they differ only in the reduced density $\rho^*$:\n(a) orientationally and translationally disordered liquid at \n$\rho^*=0.881$;\n(b) liquid with a high degree of bond-orientational order at \n$\rho^*=1.029$;\n(c) plastic solid with long-range bond-orientational order, \nand quasi-long-range translational order consisting of rotationally \ndisordered ellipses at $\rho^*=1.086$;\n(d) solid of orientationally ordered ellipses at $\rho^*=1.117$.\n }\n\end{figure*}\n\n\n\section{Discussion}\label{sec:discussion}\n\nWe extended the formalism of SHH \cite{shh} to the case of systems \ninteracting\nvia non-central two-particle potentials. In its form represented by \nEqs.~\ref{softstress} and \ref{softelast}, the formalism can be used\nto study properties of molecular systems. This is particularly\ntrue for highly non-spherical organic molecules and various soft\ncondensed matter systems. 
The adaptation of the expressions\nto hard potentials (Eqs.~\ref{eq:hardstress} and \ref{eq:hardelast})\nproduced slightly more complicated expressions. However, the \nsimplicity of hard potential systems provides excellent insights \ninto entropy-dominated systems. We demonstrated the usefulness \nof the formalism by presenting some results of our study of the hard ellipse\nsystem \cite{mk}. Measurement of several order parameters and \ncorrelation functions is not always sufficient to determine the\nnature of the phases. For systems of moderate size, it may even be difficult\nto distinguish a liquid from a solid. Measurement of elastic constants\nprovides an additional, very important tool for assessing the nature\nof the state of the system. In particular, the elastic constants may\nindicate the presence of instability, even when prolonged \nequilibration does not change an existing state. Following the \nindications of instability at high densities, we are currently\nperforming an extensive study of equilibrium states at these densities.\n\nWhile we worked on a 2D example, the method can be applied equally well \nin 3D to such systems as hard ellipsoids or spherocylinders. \n\n\acknowledgements \nWe would like to thank O. Farago and M. Kardar for useful discussions. \nThis research was supported by Israel Science Foundation Grant\nNo. 
193\/05.\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this work we are concerned with nonlinear Fokker-Planck-Kolmogorov equations (FPK-equations) on $\\mathbb{R}^d$, both deterministic\n\\begin{equation}\\label{NLFPKE}\\tag{NLFPK}\n\\partial_t\\mu_t = \\mathcal{L}^*_{t,\\mu_t}\\mu_t, \\,\\, t \\in [0,T],\n\\end{equation}and perturbed by a first-order stochastic term driven by a finite-dimensional Wiener process $W$\n\\begin{equation}\\label{SNLFPKE}\\tag{SNLFPK}\n\\partial_t\\mu_t = \\mathcal{L}^*_{t,\\mu_t}\\mu_t-\\text{div}(\\sigma(t,\\mu_t))dW_t,\\,\\, t\\in [0,T],\n\\end{equation}with solutions being continuous curves of subprobability measures $\\mu_t \\in \\mathcal{SP}$. Here, $\\mathcal{L}^*$ denotes the formal dual of a second-order differential operator acting on sufficiently smooth functions $\\varphi: \\mathbb{R}^d \\to \\mathbb{R}$ via\n\\begin{equation}\\label{1}\n\\mathcal{L}_{t,\\mu}\\varphi(x) = \\sum_{i,j=1}^{d}a_{ij}(t,\\mu,x)\\partial^2_{ij}\\varphi(x)+\\sum_{i=1}^{d}b_i(t,\\mu,x)\\partial_i\\varphi(x)\n\\end{equation}with coefficients $a$ and $b$ depending on $(t,x) \\in [0,T]\\times \\mathbb{R}^d$ and (in general non-locally) on the solution $\\mu_t$. These equations are to be understood in distributional sense, see Definition \\ref{Sol NLFPKE} and \\ref{Def sol SNLFPKE}. The nonlinearity arises from the dependence of $\\mathcal{L}$ and $\\sigma$ on the solution itself, which renders the theory of existence and uniqueness of such equations significantly more difficult compared to the linear case. For a thorough introduction to the field, we refer to \\cite{bogachev2015fokker} and the references therein. 
As shown in \cite{Rckner-lin.-paper}, the deterministic nonlinear equation (\ref{NLFPKE}) is naturally associated to a first-order linear continuity equation on $\mathcal{P}(\mathcal{SP})$, the space of Borel probability measures on $\mathcal{SP}$, of type\n\begin{equation}\label{P-CE}\tag{$\mathcal{SP}$-CE}\n\partial_t \Gamma_t = \mathbf{L}^*_t\Gamma_t, \,\, t\in [0,T],\n\end{equation}in the sense of distributions, with the linear operator $\mathbf{L}$ acting on sufficiently smooth real functions on $\mathcal{SP}$ via the gradient operator $\nabla^{\mathcal{SP}}$ on $\mathcal{SP}$ as\n$$\mathbf{L}_tF = \big \langle \nabla^{\mathcal{SP}}F, b_t+a_t\nabla \big \rangle_{L^2}.$$ Precise information on this operator and on equation (\ref{P-CE}) is given in Section 3, in particular in Definition \ref{Def sol P-CE} and the paragraph preceding it.\\\n Our first main result, Theorem \ref{main thm det case}, states that each weakly continuous solution $(\Gamma_t)_{t \leq T}$ to (\ref{P-CE}) is a \textit{superposition} of solutions to (\ref{NLFPKE}), i.e. (denoting by $e_t$ the canonical projection $e_t: (\mu_t)_{t \leq T} \mapsto \mu_t$)\n\begin{equation}\label{SuperPos eq}\n\Gamma_t = \eta \circ e_t^{-1}\n\end{equation}\nfor some probability measure $\eta$ concentrated on solution curves to (\ref{NLFPKE}) in a suitable sense.\\\n We also treat the stochastic case in a similar fashion. More precisely, in Section 4 we establish a new correspondence between the stochastic equation for measures (\ref{SNLFPKE}) and a corresponding second-order equation for curves $(\Gamma_t)_{t \leq T}$ in $\mathcal{P}(\mathcal{SP})$ of type\n \begin{equation}\label{P-FPKE}\tag{$\mathcal{SP}$-FPK}\n \partial_t \Gamma_t = (\mathbf{L}_t^{(2)})^*\Gamma_t,\,\, t \in [0,T],\n \end{equation}where, roughly, $$\mathbf{L}^{(2)}_t = \mathbf{L}_t+ \textit{ second-order perturbation}. 
$$ The second-order term stems from the stochastic perturbation of (\ref{SNLFPKE}) and will be geometrically interpreted in terms of a (formal) notion of the Levi-Civita connection on $\mathcal{SP}$. The second main result of this work, Theorem \ref{main thm stoch case}, is then the stochastic generalization of the deterministic case: For any solution $(\Gamma_t)_{t \leq T}$ to (\ref{P-FPKE}), there exists a solution process $(\mu_t)_{t \leq T}$ to (\ref{SNLFPKE}) on some probability space such that $\mu_t$ has distribution $\Gamma_t$. We stress that in both cases, we do not require any regularity of the coefficients.\\\n \\\n Let us place these results in the context of the general research in this direction. Let $b_t(\cdot): \mathbb{R}^d \to \mathbb{R}^d$ be an inhomogeneous vector field and consider the (nonlinear) ODE\n \begin{equation}\label{ODE intro}\tag{ODE}\n \frac{d}{dt}\gamma_t = b_t(\gamma_t), \,\, t \leq T\n \end{equation}and the linear continuity equation for curves of Borel (probability) measures on $\mathbb{R}^d$\n \begin{equation}\label{CE intro}\tag{CE}\n \partial_t \mu_t = -\divv (b_t\mu_t), \,\, t \leq T,\n \end{equation}understood in the distributional sense. In the seminal paper \cite{Ambrosio2008}, L. Ambrosio showed the following: Any (probability) solution $(\mu_t)_{t \leq T}$ to (\ref{CE intro}) with an appropriate global integrability condition is a \textit{superposition} of solution curves to (\ref{ODE intro}), i.e. there exists a (probability) measure $\eta$ on the space of continuous paths with values in the state space of (\ref{ODE intro}), $C([0,T],\mathbb{R}^d)$, which is concentrated on solutions to (\ref{ODE intro}) such that\n $$\eta \circ e_t^{-1} = \mu_t, \,\, t\leq T.$$This allows one to transfer existence and uniqueness results between the linear equation (\ref{CE intro}) and the nonlinear (\ref{ODE intro}). 
However, the linear equation must be studied on an infinite-dimensional space of (probability) measures. The analogy to our deterministic result from Section 3 is as follows: (\ref{ODE intro}) is replaced by (\ref{NLFPKE}), which, in the spirit of this analogy, we interpret as a differential equation on the manifold-like state space $\mathcal{SP}$. Likewise, (\ref{CE intro}) is replaced by (\ref{P-CE}) and our first main result, Theorem \ref{main thm det case}, may be understood as the analogue of Ambrosio's result in the present setting. By passing from (\ref{NLFPKE}) to (\ref{P-CE}), we \textit{linearize} the equation.\\
 Concerning the stochastic case, consider a stochastic differential equation on $\mathbb{R}^d$
 \begin{equation}\label{SDE}\tag{SDE}
 dX_t = b(t,X_t)dt+\tilde{a}(t,X_t)dB_t, \,\, t\in [0,T].
 \end{equation} By Itô's formula, the one-dimensional marginals $\mu_t$ of any (weak) martingale solution $X$ solve the corresponding linear FPK-equation
 \begin{equation}\label{FPKE}\tag{FPK}
 \partial_t \mu_t = \mathcal{L}_{lin, t}^*\mu_t, \,\, t \in [0,T],
 \end{equation} where $\mathcal{L}_{lin}$ is a linear second-order diffusion operator with coefficients $b$ and $\frac{1}{2}\tilde{a}\tilde{a}^T$. Conversely, a superposition principle has successively been developed in increasingly general frameworks (cf. \cite{FIGALLI2008109, Kurtz2011, trevisan2016, Rckner-superpos_pr}): Under mild global integrability assumptions, for every weakly continuous solution curve of probability measures $(\mu_t)_{t \leq T}$ to (\ref{FPKE}), there exists a (weak) martingale solution $X$ to (\ref{SDE}) with one-dimensional marginals $(\mu_t)_{t \leq T}$, thereby providing an equivalence between solutions to (\ref{SDE}) and (\ref{FPKE}), which offers a bridge between probabilistic and analytic approaches to diffusion processes. 
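For later reference, we recall the standard computation behind this passage. For $\varphi \in C^2_c(\mathbb{R}^d)$, Itô's formula gives (with summation over $i,j \leq d$)
$$d\varphi(X_t) = \Big(b(t,X_t)\cdot \nabla \varphi(X_t) + \tfrac{1}{2}\big(\tilde{a}\tilde{a}^T\big)_{ij}(t,X_t)\partial^2_{ij}\varphi(X_t)\Big)dt + \nabla \varphi(X_t)^T\tilde{a}(t,X_t)dB_t,$$
and taking expectations removes the local martingale part (after a standard localization argument), so that $\mu_t := \mathcal{L}_{X_t}$ satisfies
$$\mu_t(\varphi)-\mu_0(\varphi) = \int_0^t \mu_s\big(\mathcal{L}_{lin,s}\varphi\big)ds, \quad t \in [0,T],$$
which is (\ref{FPKE}) in the distributional sense.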
As in the deterministic case, the transition from (\ref{SDE}) to (\ref{FPKE}) provides a \textit{linearization}, while at the same time it transfers the equation to a much higher-dimensional state space. Concerning our stochastic result, Theorem \ref{main thm stoch case}, we replace the stochastic equation on $\mathbb{R}^d$ by the stochastic equation for measures (\ref{SNLFPKE}) and the corresponding second-order equation for measures (\ref{FPKE}) by (\ref{P-FPKE}) and prove an analogous superposition result for solutions to the latter equation.\\
 The proofs of both the deterministic and the stochastic result rely on superposition principles for differential equations on $\mathbb{R}^{\infty}$ and the corresponding continuity equation (for the deterministic case) and for martingale solutions and FPK-equations on $\mathbb{R}^{\infty}$ (for the stochastic case) by Ambrosio and Trevisan (\cite{ambrosio2014}, \cite{TrevisanPhD}). The key technique is to transfer (\ref{P-CE}) and (\ref{NLFPKE}) (and, similarly, (\ref{P-FPKE}) and (\ref{SNLFPKE}) for the stochastic case) to suitable equations on $\mathbb{R}^{\infty}$ via a homeomorphism between $\mathcal{SP}$ and its range in $\mathbb{R}^{\infty}$ (replaced by $\ell^2$ for the stochastic case, in order to handle the stochastic integral).\\
 Moreover, our results also blend into the theory of \textit{distribution dependent} stochastic differential equations, also called \textit{McKean-Vlasov equations}, i.e. stochastic equations on Euclidean space of type
 \begin{equation}\label{DDSDE}\tag{DDSDE}
 dX_t = b(t,\mathcal{L}_{X_t},X_t)dt+\tilde{a}(t,\mathcal{L}_{X_t},X_t)dB_t,\,\, t\in [0,T],
 \end{equation}
 see the classical papers \cite{McKean1907, Funaki, scheutzow_1987} as well as the more recent works \cite{HUANG20194747, 1078-0947_2019_6_3017, Coghi2019StochasticNF}. 
Here, $\\mathcal{L}_{X_t}$ denotes the distribution of $X_t$ and is not to be confused with the operators $\\mathcal{L}_{t,\\mu}$ and $\\mathcal{L}_t$ from above. As in the non-distribution dependent case, where the curve of marginals of any solution to (\\ref{SDE}) solves an equation of type (\\ref{FPKE}), a similar observation holds here: Each solution $X$ to (\\ref{DDSDE}) provides a solution to a nonlinear FPK-equation of type (\\ref{NLFPKE}) via $\\mu_t = \\mathcal{L}_{X_t}$ and a corresponding superposition principle holds analogously to the linear case as well (\\cite{barbu2020, doi:10.1137\/17M1162780}).\\\\\nHowever, while for (\\ref{SDE}) the passage to (\\ref{FPKE}) provides a complete linearization, the situation is different for equations of type (\\ref{NLFPKE}). This stems from the observation that (\\ref{DDSDE}) is an equation with two sources of nonlinearity. Hence, it seems natural to linearize (\\ref{NLFPKE}) once more in order to obtain a linear equation, which is related to (\\ref{DDSDE}) and (\\ref{NLFPKE}) in a natural way. By the results of \\cite{Rckner-lin.-paper}, this linear equation is of type (\\ref{P-CE}). Similar considerations prevail in the stochastic case, where one considers equations of type (\\ref{DDSDE}) with an additional source of randomness (we shall not pursue this direction in this work). \\\\\n \\\\\nOn the one hand, the superposition principles of Theorem \\ref{main thm det case} and Theorem \\ref{main thm stoch case} provide new structural results for nonlinear FPK-equations and its corresponding linearized equations on the space of probability measures over $\\mathcal{SP}$, involving a geometric interpretation of the latter. 
On the other hand, we plan to further study the geometry of $\mathcal{SP}$, as initiated in \cite{Rckner-lin.-paper} and in this work, in order to develop an analysis on such infinite-dimensional manifold-like spaces, which allows one to solve linear equations of type (\ref{P-CE}) and (\ref{P-FPKE}) on such spaces. By means of the results of this work, one can then lift such solutions to solutions to the nonlinear equations for measures (\ref{NLFPKE}) and (\ref{SNLFPKE}), thereby obtaining new existence results for these nonlinear equations for measures.\\
 \\
 We point out that although our main aim is to lift weakly continuous solutions to (\ref{P-CE}) and (\ref{P-FPKE}) concentrated on probability measures to a measure on the space of continuous probability measure-valued paths $(\mu_t)_{t \leq T}$, for technical reasons we more generally develop our results for vaguely continuous subprobability solutions (i.e. $\mu_t \in \mathcal{SP}$). We comment on the advantages of this approach in Remark \ref{Rem explain why SP} for the deterministic case and note that similar arguments prevail in the stochastic case as well. However, due to the global integrability assumptions we consider, we are able to obtain results for probability solutions as desired.\\
 \\
 The organization of this paper is as follows. After introducing general notation and recalling basic properties of the spaces $\mathcal{P}$ and $\mathcal{SP}$ in Section 2, Section 3 contains the deterministic case, i.e. the superposition principle between solutions to (\ref{P-CE}) and (\ref{NLFPKE}). Here, the main result is Theorem \ref{main thm det case}. We use this result to prove a conjecture posed in \cite{Rckner-lin.-paper} (cf. Proposition \ref{Prop conj Rckner}) and present several consequences. 
In Section 4, we treat the stochastic case for equations of type (\ref{SNLFPKE}), the main result being Theorem \ref{main thm stoch case}.
 \paragraph{Acknowledgements}
 Financial support by the German Science Foundation DFG (IRTG 2235) is gratefully
acknowledged.
 \section{Notation and Preliminaries}
 We introduce notation and recall basic facts on spaces and topologies of measures.
 \subsection*{Notation}
 For a measure space $(\mathcal{X},\mathcal{A},\mu)$ and a measurable function $\varphi: \mathcal{X}\to \mathbb{R}$, we set $\mu(\varphi) := \int_{\mathcal{X}}\varphi(x)d\mu(x)$ whenever the integral is well-defined. For $x \in \mathcal{X}$, we denote by $\delta_x$ the \textit{Dirac measure in $x$}, i.e. $\delta_x(A) = 1$ if $x \in A$ and $\delta_x(A) = 0$ otherwise. For a topological space $X$ with Borel $\sigma$-algebra $\mathcal{B}(X)$ we denote the set of continuous bounded functions by $C_b(X)$, the set of Borel probability measures on $X$ by $\mathcal{P}(X)$ and write $\mathcal{P} = \mathcal{P}(\mathbb{R}^d)$. If $Y \in \mathcal{B}(X)$, we let $\mathcal{B}(X)_{\upharpoonright Y}$ denote the \textit{trace of $\mathcal{B}(X)$ on $Y$}. For $T>0$, a family $(\mu_t)_{t \leq T} = (\mu_t)_{t \in [0,T]}$ of finite Borel measures on $\mathbb{R}^d$ is a \textit{Borel curve}, if $t \mapsto \mu_t(A)$ is Borel measurable for each $A \in \mathcal{B}(\mathbb{R}^d)$. A set of functions $\mathcal{G} \subseteq C_b(\mathbb{R}^d)$ is called \textit{measure-determining}, if $\mu(g) = \nu(g)$ for each $g \in \mathcal{G}$ implies $\mu = \nu$ for any two finite Borel measures $\mu, \nu$ on $\mathbb{R}^d$.\\
 For $x,y \in \mathbb{R}^d$, the usual inner product is denoted by $x \cdot y$ and, with slight abuse of notation, we also denote by $x \cdot y = \sum_{k \geq 1 }x_ky_k$ the inner product in $\ell^2$ (the Hilbert space of square-summable real-valued sequences $x=(x_k)_{k \geq1}$). 
For $\\varphi \\in C_b(\\mathbb{R}^d)$, we set $||\\varphi||_{\\infty} := \\underset{x \\in \\mathbb{R}^d}{\\text{sup}}|\\varphi(x)|$. If $\\varphi$ has first- and second-order partial derivatives, we denote them by $\\partial_i\\varphi$ and $\\partial^2_{ij}\\varphi$ for $i,j \\leq d$.\\\\\n \\\\\n We use notation for function spaces as follows. For $k \\in \\mathbb{N}_0$, $C^k_b(\\mathbb{R}^d)$ denotes the subset of functions $\\varphi$ in $C_b(\\mathbb{R}^d)$ with continuous, bounded partial derivatives up to order $k$, with the usual norm $||\\varphi||_{C^2_b} = \\text{max}(||\\varphi||_{\\infty},||\\partial_i\\varphi||_{\\infty}, ||\\partial^2_{ij}\\varphi||_{\\infty})$ for $k = 2$. Likewise, $C^k_c(\\mathbb{R}^d)$ denotes the subset of all such $\\varphi$ with compact support; for $k = 0$, we write $C_c(\\mathbb{R}^d)$ instead. For $n \\geq 1$, $p \\geq 1$ and a measure $\\mu$ on $\\mathcal{B}(\\mathbb{R}^d)$, we denote by $L^p(\\mathbb{R}^d,\\mathbb{R}^n;\\mu)$ the space of Borel functions $\\varphi: \\mathbb{R}^d \\to \\mathbb{R}^n$ such that\n $$\\int_{\\mathbb{R}^d}||\\varphi(x)||^pd\\mu(x) < +\\infty,$$\n where $||\\cdot||$ denotes the standard Euclidean norm on $\\mathbb{R}^n$. For $p =2$, $\\langle \\cdot, \\cdot \\rangle_{L^2(\\mathbb{R}^d,\\mathbb{R}^n;\\mu)}$ denotes the usual inner product on the Hilbert space $L^2(\\mathbb{R}^d,\\mathbb{R}^n;\\mu)$. For $T >0$ and a topological space $Y$, we write $C_TY$ for the set of continuous functions $\\varphi:[0,T]\\to Y$. By $\\mathbb{S}^+_d$ we denote the space of symmetric, positive-semidefinite $d\\times d$-matrices with real entries.\n \\subsection*{Basic properties of spaces of measures}\n \\subsubsection*{Probability measures}\n For a topological space $X$, we endow $\\mathcal{P}(X)$ with the topology of weak convergence of measures, i.e. the initial topology of the maps $\\mu \\mapsto \\mu(\\varphi)$, $\\varphi \\in C_b(X)$. 
If $X$ is Polish, then so is $\mathcal{P}(X)$.
\subsubsection*{Subprobability measures}
 By $\mathcal{SP}$ we denote the set of all Borel subprobability measures on $\mathbb{R}^d$, i.e. $\mu \in \mathcal{SP}$ if and only if $\mu$ is a non-negative measure on $\mathcal{B}(\mathbb{R}^d)$ with $\mu(\mathbb{R}^d) \leq 1$. Throughout, we endow $\mathcal{SP}$ with the \textit{vague topology}, i.e. the initial topology of the maps $\mu \mapsto \mu(g)$, $g \in C_c(\mathbb{R}^d)$. Hence, a sequence $(\mu_n)_{n \geq 1}$ converges to $\mu$ in $\mathcal{SP}$ if and only if $\mu_n(g) \underset{n \to \infty}{\longrightarrow} \mu(g)$ for each $g \in C_c(\mathbb{R}^d)$. Its Borel $\sigma$-algebra is denoted by $\mathcal{B}(\mathcal{SP})$. In particular, $\mathcal{P}(\mathcal{SP})$, the set of Borel probability measures on $\mathcal{SP}$, is a topological space with the weak topology of probability measures on $(\mathcal{SP},\mathcal{B}(\mathcal{SP}))$. The Riesz-Markov representation theorem yields that $\mathcal{SP}$ with the vague topology coincides with the positive half of the closed unit ball of the dual space of $C_c(\mathbb{R}^d)$ with the weak*-topology. Hence $\mathcal{SP}$ with the vague topology is compact. It is also Polish and $\mu \mapsto \mu(\mathbb{R}^d)$ is vaguely lower semicontinuous (mass may be lost in the vague limit; e.g. $\delta_{x_n} \longrightarrow 0$ vaguely as $|x_n| \to \infty$), see \cite[Ch.4.1]{OK}. In particular,
 $\mathcal{P} \in \mathcal{B}(\mathcal{SP}).$
Recall that $\mathcal{B}(\mathcal{P}) = \mathcal{B}(\mathcal{SP})_{\upharpoonright_{\mathcal{P}}}$. Hence, in the sequel we may consider measures $\Gamma \in \mathcal{P}(\mathcal{P})$ as elements in $\mathcal{P}(\mathcal{SP})$ with full mass on $\mathcal{P}$.\\
 In contrast to weak convergence in $\mathcal{P}$, vague convergence in $\mathcal{SP}$ can be characterized by countably many functions in a sense made precise by Lemma \ref{Prop G early}. 
The fact that this is not true for weak convergence in $\mathcal{P}$ is the main reason why we formulate all equations for subprobability measures, although we are mainly interested in the case of probability solutions. More details in this direction are stated in Remark \ref{Rem explain why SP}.
 
 \section{Superposition Principle for deterministic nonlinear Fokker-Planck-Kolmogorov Equations}

 Fix $T>0$ throughout. Let each component of the coefficients
 $$a = (a_{ij})_{i,j \leq d}: [0,T]\times \mathcal{SP}\times \mathbb{R}^d \to \mathbb{S}^+_d, \, b = (b_i)_{i \leq d}: [0,T]\times \mathcal{SP}\times \mathbb{R}^d \to \mathbb{R}^d$$
 be $\mathcal{B}([0,T])\otimes \mathcal{B}(\mathcal{SP})\otimes \mathcal{B}(\mathbb{R}^d) / \mathcal{B}(\mathbb{R})$-measurable and consider the operator $\mathcal{L}_{t,\mu}$ as in (\ref{1}). 
 \begin{dfn}\label{Sol NLFPKE}
 \begin{enumerate}
 	\item [(i)] A vaguely continuous curve $(\mu_t)_{t \leq T} \subseteq \mathcal{SP}$ is a \textit{subprobability solution to }(\ref{NLFPKE}), if for each $i, j \leq d$ the global integrability condition
 		\begin{equation}\label{2}
 		\int_0^T \int_{\mathbb{R}^d}|a_{ij}(t,\mu_t,x)|+|b_i(t,\mu_t,x)|d\mu_t(x)dt < +\infty
 		\end{equation}holds and for each $\varphi \in C^2_c(\mathbb{R}^d)$ and $t \in [0,T]$
 		\begin{equation}\label{3}
 		\int_{\mathbb{R}^d}\varphi(x)d\mu_t(x)-\int_{\mathbb{R}^d}\varphi(x)d\mu_0(x) = \int_0^t\int_{\mathbb{R}^d}\mathcal{L}_{s,\mu_s}\varphi(x)d\mu_s(x)ds.
 		\end{equation}
 
 	\item[(ii)] A \textit{probability solution} to (\ref{NLFPKE}) is a curve $(\mu_t)_{t \leq T}\subseteq \mathcal{P}$ fulfilling (\ref{2}) and (\ref{3}) such that $t \mapsto \mu_t$ is weakly continuous.
 \end{enumerate}
 \end{dfn}
 Since vaguely continuous curves of measures are in particular Borel curves, all integrals in the above definition are defined. 
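A classical illustration of such measure-dependent coefficients, included for orientation only and not assumed anywhere in this work, is the McKean-Vlasov interaction type: for a bounded Borel kernel $K: \mathbb{R}^d \to \mathbb{R}^d$,
$$b_i(t,\mu,x) := \int_{\mathbb{R}^d}K_i(x-y)d\mu(y), \quad a_{ij}(t,\mu,x) := \delta_{ij}, \quad i,j \leq d,$$
for which (\ref{NLFPKE}) formally reads $\partial_t \mu_t = \Delta \mu_t - \divv\big((K*\mu_t)\mu_t\big)$. Note that in this example the global integrability condition (\ref{2}) holds automatically for any curve $(\mu_t)_{t \leq T} \subseteq \mathcal{SP}$, since the coefficients are bounded and each $\mu_t$ has total mass at most $1$.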
In the following, we briefly speak of \textit{subprobability} and \textit{probability solutions} and keep in mind the respective continuity conditions. In the literature, more general notions of solutions to (\ref{NLFPKE}) are considered, such as (possibly discontinuous) curves of signed, bounded measures \cite{bogachev2015fokker}. However, in this work, we restrict attention to continuous (sub-)probability solutions. In the presence of the global integrability condition (\ref{2}), we make the following observation.
 \begin{rem}\label{Rem mass conserv}
 	\begin{enumerate}
 		\item [(i)] Any subprobability solution $(\mu_t)_{t \leq T}$ with $\mu_0 \in \mathcal{P}$ is a probability solution. Indeed, to prove this it suffices to show $\mu_t(\mathbb{R}^d) =1$ for each $t \leq T$. Since $(\mu_t)_{t \leq T}$ fulfills (\ref{3}), it suffices to choose a sequence $\varphi_l$, $l \geq 1$, from $C^2_c(\mathbb{R}^d)$ with the following properties: $0 \leq \varphi_l \nearrow 1$ pointwise such that $\partial_i \varphi_l \underset{l \to \infty}{\longrightarrow} 0$, $\partial^2_{ij}\varphi_l \underset{l \to \infty}{\longrightarrow}0$ pointwise with all first and second order derivatives bounded by some $M < +\infty$ uniformly in $l \geq 1$ and $x \in \mathbb{R}^d$ (for instance, $\varphi_l(x) := \chi(x/l)$ for a radially nonincreasing $\chi \in C^2_c(\mathbb{R}^d)$ with $0 \leq \chi \leq 1$ and $\chi = 1$ on the closed unit ball). Considering (\ref{3}) for the limit $l \to \infty$, we obtain, by (\ref{2}) and dominated convergence, for each $t \in [0,T]$
 		$$\int_{\mathbb{R}^d}1 d\mu_t - \int_{\mathbb{R}^d}1 d\mu_0 = 0$$
 	and hence the claim.
 		\item[(ii)] By the above argument, one shows that for any subprobability solution, (\ref{3}) holds for each $\varphi \in C^2_b(\mathbb{R}^d)$. 
 	\end{enumerate} 
 \end{rem}
 \subsubsection*{Geometric approach to $\mathbf{\mathcal{SP}}$}For our goals, it is preferable to consider $\mathcal{SP}$ as a manifold-like space. 
We refer the reader to the appendix in \\cite{Rckner-lin.-paper}, where for the space of probability measures $\\mathcal{P}$ the tangent spaces $T_{\\mu}\\mathcal{P} = L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$ and a suitable test function class $\\mathcal{F}C^2_b(\\mathcal{P})$, \n \\begin{equation}\\label{Test fct. Rckner}\n F \\in \\mathcal{F}C^2_b(\\mathcal{P}) \\iff F: \\mu \\mapsto f\\big(\\mu(\\varphi_1),\\dots,\\mu(\\varphi_n)\\big) \\text{ for }n \\geq 1, f\\in C^1_b(\\mathbb{R}^n), \\, \\varphi_i \\in C^{\\infty}_c(\\mathbb{R}^d),\n \\end{equation}\n have been introduced. Further, based on these choices, a natural pointwise definition of the gradient $\\nabla^{\\mathcal{P}}F$ as a section in the tangent bundle $$T\\mathcal{P} = \\bigsqcup_{\\mu \\in \\mathcal{P}}T_{\\mu}\\mathcal{P}$$for $F$ as above is given by\n $$\\nabla^{\\mathcal{P}}F(\\mu) := \\sum_{k=1}^{n}\\partial_kf\\big(\\mu(\\varphi_1),\\dots,\\mu(\\varphi_n)\\big)\\nabla \\varphi_k \\in T_{\\mu}\\mathcal{P},$$which is shown to be independent of the representation of $F$ in terms of $f$ and $\\varphi_i$. The setting in the present paper is nearly identical, but we consider the manifold-like space $\\mathcal{SP}$ with the vague topology instead of $\\mathcal{P}$ with the weak topology as in \\cite{Rckner-lin.-paper}, because $\\mathcal{SP}$ is embedded in $\\mathbb{R}^{\\infty}$ in the following sense. Let\n \\begin{equation}\\label{Fct.class G}\n \\mathcal{G} = \\{g_i, i \\geq 1\\}\n \\end{equation} be dense in $C^2_c(\\mathbb{R}^d)$ with respect to $||\\cdot||_{C^2_b}$ such that no $g_i$ is constantly $0$. Clearly, any such set of functions is dense in $C_c(\\mathbb{R}^d)$ with respect to uniform convergence and measure-determining. 
Such sets of functions are sufficiently extensive to characterize the topology of $\\mathcal{SP}$ as well as solutions to (\\ref{NLFPKE}):\n \t\n \t\\begin{lem}\\label{Prop G early}\n \t\tLet $\\mathcal{G}$ be any set of functions with the properties mentioned above and let $(\\mu_n)_{n \\geq 1} \\subseteq \\mathcal{SP}$. Then,\n \t\t\\begin{enumerate}\n \t\t\t\\item [(i)] $(\\mu_n)_{n \\geq 1}$ converges vaguely to $\\mu \\in \\mathcal{SP}$ if and only if\n \t\t\t\\begin{equation*}\n \t\t\t\\mu_n(g_i) \\underset{n \\to \\infty}{\\longrightarrow}\\mu(g_i)\n \t\t\t\\end{equation*}\n \t\t\tfor each $g_i \\in \\mathcal{G}$.\n \t\t\t\\item[(ii)] A vaguely continuous curve $(\\mu_t)_{t \\leq T} \\subseteq \\mathcal{SP}$, which fulfills (\\ref{2}), is a subprobability solution to (\\ref{NLFPKE}) if and only if (\\ref{3}) holds for each $g_i \\in \\mathcal{G}$ in place of $\\varphi$.\n \t\t\\end{enumerate}\n \t\\end{lem}\n \t\\begin{proof}\n \t\t\\begin{enumerate}\n \t\t\t\\item [(i)] From $\\mu_n(g_i) \\underset{n \\to \\infty}{\\longrightarrow}\\mu(g_i)$ for each $g_i \\in \\mathcal{G}$, one obtains for each $f \\in C_c(\\mathbb{R}^d)$ and $\\epsilon>0$ by choosing $g_i \\in \\mathcal{G}$ with $||f-g_i||_{\\infty} < \\frac{\\epsilon}{3}$\n \t\t\t\\begin{equation}\\label{4}\n \t\t\t|\\mu_n(f)-\\mu(f)| \\leq |\\mu_n(f)-\\mu_n(g_i)|+|\\mu_n(g_i)-\\mu(g_i)|+|\\mu(g_i)-\\mu(f)| \\leq \\epsilon\n \t\t\t\\end{equation}for all sufficiently large $n \\geq 1$.\n \t\t\t\\item[(ii)] Let $\\varphi \\in C^2_c(\\mathbb{R}^d)$ be approximated uniformly up to second-order derivatives by a sequence $\\{g_{i_k}\\}_{k \\geq 1}$ from $\\mathcal{G}$. 
Considering (\\ref{3}) for such $g_{i_k}$ and letting $k \\to \\infty$, the result follows by dominated convergence, which applies due to (\\ref{2}).\n \t\t\\end{enumerate}\n \t\\end{proof}\n Considering $\\mathcal{SP}$ as a (infinite-dimensional) manifold-like topological space, any set of functions $\\mathcal{G}$ as above provides a global chart (i.e., an atlas consisting of a single chart) for $\\mathcal{SP}$, as it yields an embedding $\\mathcal{SP} \\subseteq \\mathbb{R}^{\\infty}$ (cf. Lemma \\ref{Lem aux G, J}).\\\\\n Consider $\\mathbb{R}^{\\infty}$ as a Polish space with the topology of pointwise convergence and the range $G(\\mathcal{SP}) \\subseteq \\mathbb{R}^{\\infty}$ of $G$ as introduced below with its subspace topology. We write $C_TG(\\mathcal{SP})$ for the set of all elements in $C_T\\mathbb{R}^{\\infty}$ with values in $G(\\mathcal{SP})$. For $u \\in [0,T]$, we denote by $e_u$ the canonical projection on $C_T\\mathcal{SP}$\n $$e_u: (\\mu_t)_{t\\leq T} \\mapsto \\mu_u$$\n and, likewise, by $e^{\\infty}_u$ the projection on $C_T\\mathbb{R}^{\\infty}$. Subsequently, without further mentioning, we consider the spaces $C_T\\mathcal{SP}$ and $C_T\\mathbb{R}^{\\infty}$ with $\\sigma$-algebras\n $$\\mathcal{B}(C_T\\mathcal{SP}) = \\sigma(e_t, t\\in [0,T]) \\text{ and }\\mathcal{B}(C_T\\mathbb{R}^{\\infty}) = \\sigma(e_t^{\\infty}, t \\in [0,T]),$$\n respectively. These algebras coincide with the Borel $\\sigma$-algebras with respect to the topology of uniform convergence (because both $\\mathcal{SP}$ and $\\mathbb{R}^{\\infty}$ are Polish). Also, consider $C_TG(\\mathcal{SP})$ with the natural subspace $\\sigma$-algebra of $\\mathcal{B}(C_T\\mathbb{R}^{\\infty})$. 
We refer to these $\sigma$-algebras as the \textit{canonical $\sigma$-algebras} on the respective spaces and denote the set of probability measures on the respective $\sigma$-algebras by $\mathcal{P}(C_T\mathcal{SP})$ and $\mathcal{P}(C_T\mathbb{R}^{\infty})$.
 \begin{lem}\label{Lem aux G, J}
 	Let $\mathcal{G} = \{g_i\}_{i \geq 1} $ be a set of functions as in (\ref{Fct.class G}).
 	\begin{enumerate}
 		\item [(i)] The map $G$, depending on $\mathcal{G}$, 
 		\begin{equation}\label{Def map G}
 		G : \mathcal{SP} \to \mathbb{R}^{\infty}, G(\mu) := (\mu(g_i))_{i \geq 1}
 		\end{equation}
 		is a homeomorphism between $\mathcal{SP}$ and its range $G(\mathcal{SP})$ (hence, formally, a \textup{global chart} for $\mathcal{SP}$). In particular, $G(\mathcal{SP}) \subseteq \mathbb{R}^{\infty}$ is compact. Moreover, if $\mathcal{G}' = \{g_i', i \geq 1\}$ is another set as in (\ref{Fct.class G}) with corresponding chart $G'$, then $G' = G \circ \mathcal{V}$ for a unique homeomorphism $\mathcal{V}$ on $\mathcal{SP}$.
 		\item[(ii)] The map
 		$$J : C_T\mathcal{SP} \to C_T\mathbb{R}^{\infty},\, J((\mu_t)_{t \leq T}) := (G(\mu_t))_{t \leq T}$$
 		is measurable and one-to-one with measurable inverse $J^{-1}: C_TG(\mathcal{SP}) \to C_T\mathcal{SP}$. Further, $C_TG(\mathcal{SP}) \subseteq C_T\mathbb{R}^{\infty}$ is a measurable set, i.e. $C_TG(\mathcal{SP}) \in \mathcal{B}(C_T\mathbb{R}^{\infty})$.
 	\end{enumerate}
 \end{lem}
 \begin{proof}
 	\begin{enumerate}
 		\item [(i)] The continuity of $G$ is obvious by definition of the vague topology on $\mathcal{SP}$ and since $\mathcal{G} \subseteq C_c(\mathbb{R}^d)$. Since $\mathcal{SP}$ is compact with respect to the vague topology, compactness of $G(\mathcal{SP}) \subseteq \mathbb{R}^{\infty}$ follows. $\mathcal{G}$ is measure-determining on $\mathbb{R}^d$, which implies that $G$ is one-to-one. 
Since by definition 
 		$$G(\mu_n) \underset{n \to \infty}{\longrightarrow} G(\mu) \iff \mu_n(g_i) \underset{n \to \infty}{\longrightarrow} \mu(g_i) \text{ for each }g_i \in \mathcal{G},$$ 
 		continuity of $G^{-1}$ follows from Lemma \ref{Prop G early} (i). The final assertion follows, since for $G'$ as in the assertion, $\mathcal{V}:= G^{-1}\circ \mathcal{W}\circ G'$ with $\mathcal{W}:G'(\mathcal{SP}) \to G(\mathcal{SP})$, $\mathcal{W}: (\mu(g'_i))_{i \geq 1} \mapsto (\mu(g_i))_{i \geq 1}$ is a homeomorphism.
 		\item[(ii)] Since $G$ is one-to-one and measurable, so is $J$. Clearly, $C_TG(\mathcal{SP})$ is the range of $J$ and hence $J: C_T\mathcal{SP} \to C_TG(\mathcal{SP})$ is a bijection between standard Borel spaces (the latter, because $\mathcal{SP}$ and $G(\mathcal{SP})$ with the respective topologies are Polish). This yields the measurability of $J^{-1}$. Finally, closedness of $G(\mathcal{SP})$ in $\mathbb{R}^{\infty}$ implies that $C_TG(\mathcal{SP})\subseteq C_T\mathbb{R}^{\infty}$ is a measurable set, because $G(\mathcal{SP})$ carries the subspace topology inherited from $\mathbb{R}^{\infty}$. 
 	\end{enumerate}
 \end{proof}
 By part (i) of the previous lemma it is justified to fix a set $\mathcal{G} = \{g_i, i \geq 1\}$ for the remainder of the section. In order to switch between test functions on $\mathcal{SP}$ and $\mathbb{R}^{\infty}$ in an equivalent way, we slightly deviate from the test function class presented in \cite{Rckner-lin.-paper} (see (\ref{Test fct. Rckner})) and, instead, consider
 $$\mathcal{F}C^2_b(\mathcal{G}) := \{F : \mathcal{SP}\to \mathbb{R} | F(\mu) = f\big(\mu(g_1),\dots,\mu(g_n)\big), f \in C^2_b(\mathbb{R}^n), n \geq 1 \},$$where the restriction $f \in C^2_b(\mathbb{R}^n)$ is made only for consistency with the stochastic case later on. 
We summarize our geometric interpretation of $\mathcal{SP}$, which is of course still a close adaptation of the ideas presented in \cite{Rckner-lin.-paper}:\\
 For the manifold-like space $\mathcal{SP}$, we consider smooth test functions $F \in \mathcal{F}C^2_b(\mathcal{G})$, with $\mathcal{G}$ being fixed as in (\ref{Fct.class G}). For each $\mu \in \mathcal{SP}$, we have the tangent space $T_{\mu}\mathcal{SP} = L^2(\mathbb{R}^d,\mathbb{R}^d;\mu)$ and the gradient
 $$\nabla ^{\mathcal{SP}}F(\mu) = \sum_{k=1}^{n}\partial_kf\big(\mu(g_1),\dots,\mu(g_n)\big)\nabla g_k \in T_{\mu}\mathcal{SP}$$ for $\mathcal{F}C^2_b(\mathcal{G}) \ni F: \mu \mapsto f\big(\mu(g_1),\dots,\mu(g_n)\big)$ as a section in the tangent bundle $T\mathcal{SP}$, which is independent of the representation of $F$. Complementing this view of $\mathcal{SP}$ as a manifold-like space, the global chart $G$ as in (\ref{Def map G}) embeds $\mathcal{SP}$ into $\mathbb{R}^{\infty}$. However, we do not rigorously treat $\mathcal{SP}$ as a (Fréchet-)manifold and consider the embedding $\mathcal{SP} \subseteq \mathbb{R}^{\infty}$ merely as a tool to transfer (\ref{NLFPKE}) and its corresponding continuity equation to equivalent equations over $\mathbb{R}^{\infty}$, as outlined below. 
\subsubsection*{The continuity equation (\ref{P-CE})}
 As mentioned in the introduction, we study the linear continuity equation associated to (\ref{NLFPKE}) as derived in \cite{Rckner-lin.-paper}, which is a first-order equation for curves of measures on $\mathcal{SP}$. 
More precisely, in analogy to the derivation in \cite{Rckner-lin.-paper}, it is readily seen that any subprobability solution $(\mu_t)_{t \leq T}$ to (\ref{NLFPKE}) induces a curve of elements in $\mathcal{P}(\mathcal{SP})$, $\Gamma_t := \delta_{\mu_t}$, $t \leq T$, with
 \begin{equation}\label{Solved eq1}
 \int_{\mathcal{SP}}F(\mu)d\Gamma_t(\mu)- \int_{\mathcal{SP}}F(\mu)d\Gamma_0(\mu) = \int_0^t \int_{\mathcal{SP}}\big \langle \nabla^{\mathcal{SP}}F(\mu), b(s,\mu)+a(s,\mu)\nabla\big \rangle_{L^2(\mu)}d\Gamma_s(\mu)ds
 \end{equation}for each $t \leq T$ and $F \in \mathcal{F}C^2_b(\mathcal{G})$. Here, we set $b(s,\mu) = b(s,\mu,\cdot): \mathbb{R}^d \to \mathbb{R}^d$ (similarly for $a(s,\mu)$), $L^2(\mu) = L^2(\mathbb{R}^d,\mathbb{R}^d;\mu)$ and abbreviate
 $$\big \langle \nabla^{\mathcal{SP}}F(\mu), b(s,\mu)+a(s,\mu)\nabla\big \rangle_{L^2(\mu)} = \int_{\mathbb{R}^d}\sum_{k=1}^{n}(\partial_kf)\big(\mu(g_1),\dots,\mu(g_n)\big) \big(a_{ij}(s,\mu,x)\partial^2_{ij}g_k(x)+b_i(s,\mu,x)\partial_i g_k(x)\big)d\mu(x).$$
 We rewrite (\ref{Solved eq1}) in distributional form in duality with $\mathcal{F}C^2_b(\mathcal{G})$ as
 \begin{equation*}
 \partial_t \Gamma_t = -\nabla^{\mathcal{SP}}\cdot([b_t+a_t\nabla]\Gamma_t), \,\, t \leq T.
 \end{equation*}Setting
 \begin{equation}\label{5}
 \mathbf{L}_tF(\mu) := \big\langle a(t,\mu)\nabla+b(t,\mu), \nabla^{\mathcal{SP}}F(\mu)\big\rangle_{L^2(\mu)},
 \end{equation}this is just the linear continuity equation (\ref{P-CE}). 
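For completeness, we sketch the computation behind this claim. For $F: \mu \mapsto f\big(\mu(g_1),\dots,\mu(g_n)\big)$ in $\mathcal{F}C^2_b(\mathcal{G})$, each map $s \mapsto \mu_s(g_k)$ is absolutely continuous with derivative $\mu_s\big(\mathcal{L}_{s,\mu_s}g_k\big)$ by (\ref{3}) and (\ref{2}), whence the chain rule gives
$$F(\mu_t)-F(\mu_0) = \int_0^t \sum_{k=1}^{n}(\partial_kf)\big(\mu_s(g_1),\dots,\mu_s(g_n)\big)\,\mu_s\big(\mathcal{L}_{s,\mu_s}g_k\big)ds = \int_0^t \mathbf{L}_sF(\mu_s)ds,$$
which is precisely (\ref{Solved eq1}) for $\Gamma_t = \delta_{\mu_t}$.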
The term $a\\nabla$ has rigorous meaning only, if $a$ has sufficiently regular components in order to put the derivative $\\nabla$ on $a$ via integration by parts, which we do not assume at any point.\n Considering $\\mathcal{SP}$ as a manifold-like space, one may formally regard to $a\\nabla + b$ as a time-dependent section in the tangent bundle $T\\mathcal{SP}$.\\\\\n More generally, we introduce the following notion of solution to (\\ref{P-CE}) (see \\cite{Rckner-lin.-paper}):\n \\begin{dfn}\\label{Def sol P-CE}\n \tA weakly continuous curve $(\\Gamma_t)_{t \\leq T} \\subseteq \\mathcal{P}(\\mathcal{SP})$ is a \\textit{solution to }(\\ref{P-CE}), if the integrability condition\n \t\t\\begin{equation}\\label{aux_revised1}\n \t\t\\int_0^T\\int_{\\mathcal{SP}}||b(t,\\mu,\\cdot)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)}+||a(t,\\mu,\\cdot)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^{d^2};\\mu)}d \\Gamma_t(\\mu)dt < +\\infty\n \t\t\\end{equation} is fulfilled and for each $F \\in \\mathcal{F}C^2_b(\\mathcal{G})$ and $t \\in [0,T]$\n \t\t\\begin{equation}\\label{6}\n \t\t\\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_t(\\mu)- \\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_0(\\mu) = \\int_0^t \\int_{\\mathcal{SP}}\\mathbf{L}_sF(\\mu)d\\Gamma_s(\\mu)ds\n \t\t\\end{equation}holds (which is just (\\ref{Solved eq1})).\n\n \\end{dfn}\n The choice of $\\mathcal{G}$ as in (\\ref{Fct.class G}) implies that any solution in the above sense fulfills (\\ref{6}) even for each $F \\in \\mathcal{F}C^2_b(\\mathcal{SP})$, i.e. for the larger class of test functions considered in \\cite{Rckner-lin.-paper} (upon extending their domain from $\\mathcal{P}$ to $\\mathcal{SP}$). In particular, this notion of solution is independent of $\\mathcal{G}$. The main result of this chapter, Theorem \\ref{main thm det case}, states that any solution to (\\ref{P-CE}) as in Definition \\ref{Def sol P-CE} arises as a superposition of solutions to (\\ref{NLFPKE}). 
Note that for $\\nu \\in \\mathcal{SP}$, uniqueness of solutions $(\\Gamma_t)_{t \\leq T}$ to (\\ref{P-CE}) with $\\Gamma_0 = \\delta_{\\nu}$ implies uniqueness of subprobability solutions $(\\mu_t)_{t \\leq T}$ to (\\ref{NLFPKE}) with $\\mu_0 = \\nu$.\n \n \\subsubsection*{Transferring (\\ref{NLFPKE}) and (\\ref{P-CE}) to $\\mathbb{R}^{\\infty}$}\n We use the global chart $G: \\mathcal{SP} \\to \\mathbb{R}^{\\infty}$ and the map $J$ of Lemma \\ref{Lem aux G, J} to reformulate both (\\ref{NLFPKE}) and (\\ref{P-CE}) on $\\mathbb{R}^{\\infty}$. Define a Borel vector field $\\bar{B} = (\\bar{B}_k)_{k \\in \\mathbb{N}}$ component-wise as follows. For $t \\in [0,T]$, consider the Borel set $A_t \\in \\mathcal{B}(\\mathcal{SP})$,\n $$A_t := \\bigg\\{\\mu \\in \\mathcal{SP}: \\int_{\\mathbb{R}^d}|a_{ij}(t,\\mu,x)+|b_i(t,\\mu,x)|d\\mu(x)< \\infty \\,\\, \\forall i,j \\leq d\\bigg\\}$$\n and define $B := (B_k)_{k \\in \\mathbb{N}}$ via\n $$B_k(t,\\mu):= \\int_{\\mathbb{R}^d}\\mathcal{L}_{t,\\mu}g_k(x)d\\mu(x),\\quad (t,\\mu) \\in [0,T]\\times A_t.$$\n Now define $\\bar{B}: [0,T]\\times \\mathbb{R}^\\infty \\to \\mathbb{R}^\\infty$ via\n $$\\bar{B}(t,z):= \\begin{cases}\n \tB(t,G^{-1}(z)),&\\quad \\text{ if }z \\in G(A_t)\\\\\n \t0,&\\quad \\text{ else,}\n \\end{cases}$$\nwhich is Borel measurable by Lemma \\ref{Lem aux G, J}. Next, consider the differential equation on $\\mathbb{R}^{\\infty}$\n \\begin{equation}\\label{Rinfty-ODE}\\tag{$\\mathbb{R}^{\\infty}$-ODE}\n \\frac{d}{dt} z_t = \\bar{B}(t,z_t), \\,\\,t \\in [0,T],\n \\end{equation}which turns out to be the suitable analogue to (\\ref{NLFPKE}) on $\\mathbb{R}^{\\infty}$. 
Analogously, the corresponding continuity equation for curves of Borel probability measures $\\bar{\\Gamma}_t$ on $\\mathbb{R}^{\\infty}$, i.e.\n \\begin{equation}\\label{Rinfty-CE}\\tag{$\\mathbb{R}^{\\infty}$-CE}\n \\partial_t \\bar{\\Gamma}_t = -\\bar{\\nabla}\\cdot(\\bar{B}\\bar{\\Gamma}_t), \\,\\, t\\in [0,T],\n \\end{equation}with $\\bar{\\nabla}$ as introduced below, is the natural analogue of the linear continuity equation (\\ref{P-CE}). Roughly, these analogies are to be understood in the sense that solutions to (\\ref{NLFPKE}) and (\\ref{P-CE}) can be transferred to solutions to (\\ref{Rinfty-ODE}) and (\\ref{Rinfty-CE}), respectively, via the chart $G$. We refer to the proof of the main result below for more details. Let $$p_i: z \\mapsto z_i,\\,\\, z \\in \\mathbb{R}^{\\infty}$$ denote the canonical projection to the $i$-th component, set $\\pi_n = (p_1,\\dots,p_n)$ and \n $$\\mathcal{F}C^2_b(\\mathbb{R}^{\\infty}) := \\{\\bar{F}: \\mathbb{R}^{\\infty} \\to \\mathbb{R} | \\bar{F}= f \\circ \\pi_n, f \\in C^2_b(\\mathbb{R}^n), n \\geq 1\\}.$$\n By $\\bar{\\nabla}$ we denote the gradient-type operator on $\\mathbb{R}^{\\infty}$, acting on $\\bar{F} = f \\circ \\pi_n \\in \\mathcal{F}C^2_b(\\mathbb{R}^{\\infty})$ via\n \\begin{equation}\\label{Def nabla gradient}\n \\bar{\\nabla}\\bar{F}(z):= \\big((\\partial_1f)(\\pi_nz),\\dots,(\\partial_nf) (\\pi_nz),0,0,\\dots\\big).\n \\end{equation}Again, the restriction to test functions possessing second-order derivatives is made in order to be consistent with the stochastic (second-order) case later on.\n \\begin{dfn}\\label{Def sol Rinfty-eq}\n \t\\begin{enumerate}\n \t\t\\item [(i)] A curve $(z_t)_{t \\leq T} = ((p_i\\circ z_t)_{i \\geq 1})_{t \\leq T} \\in C_T\\mathbb{R}^{\\infty}$ is a \\textit{solution to} (\\ref{Rinfty-ODE}), if for each $i \\geq 1$ the $\\mathbb{R}$-valued curve $t \\mapsto p_i \\circ z_t$ is absolutely continuous with weak derivative $t \\mapsto p_i \\circ \\bar{B}(t,z_t)$ $dt$-a.s.\n 
\t\t\\item[(ii)] A curve $(\\bar{\\Gamma}_t)_{t \\leq T} \\subseteq \\mathcal{P}(\\mathbb{R}^{\\infty})$ is a \\textit{solution to }(\\ref{Rinfty-CE}), if it is weakly continuous, fulfills the integrability condition\n \t\t\\begin{equation}\\label{8}\n \t\t\\int_0^T \\int_{\\mathbb{R}^{\\infty}} |\\bar{B}_k(t,z)|d\\bar{\\Gamma}_t(z)dt < +\\infty \\text{ for each }k \\geq 1\n \t\t\\end{equation} and for each $\\bar{F}\\in \\mathcal{F}C^2_b(\\mathbb{R}^{\\infty})$ the identity\n \t\t$$\\int_{\\mathbb{R}^{\\infty}}\\bar{F}(z)d\\bar{\\Gamma}_t(z)- \\int_{\\mathbb{R}^{\\infty}}\\bar{F}(z)d\\bar{\\Gamma}_0(z) = \\int_0^t \\int_{\\mathbb{R}^{\\infty}}\\bar{\\nabla} \\bar{F}(z) \\cdot \\bar{B}(s,z)d\\bar{\\Gamma}_s(z)ds$$\n \t\tholds for all $t \\in [0,T]$.\n \t\\end{enumerate}\n \\end{dfn}\n \\subsection{Main Result: Deterministic case}\nThe following theorem is the main result for the deterministic case.\n \\begin{theorem}\\label{main thm det case}\n \tLet $a,b$ be Borel coefficients on $[0,T]\\times \\mathcal{SP}\\times \\mathbb{R}^d$. For any weakly continuous solution $(\\Gamma_t)_{t \\leq T}$ to (\\ref{P-CE}) in the sense of Definition \\ref{Def sol P-CE}, there exists a probability measure $\\eta \\in \\mathcal{P}(C_T\\mathcal{SP})$ concentrated on vaguely continuous subprobability solutions to (\\ref{NLFPKE}) such that\n \t$$\\eta \\circ e_t^{-1} = \\Gamma_t,\\,\\, t \\in [0,T].$$\n \tMoreover, if $\\Gamma_0 \\in \\mathcal{P}(\\mathcal{P})$, then $\\eta$ is concentrated on weakly continuous probability solutions to (\\ref{NLFPKE}).\n \\end{theorem}\n \n The proof relies on a superposition principle for measure-valued solution curves of continuity equations on $\\mathbb{R}^{\\infty}$ and its corresponding differential equation, which we recall in Proposition \\ref{Sp-pr. Rinfty prop} below. More precisely, we proceed in three steps. First, we transfer $(\\Gamma_t)_{t \\leq T}$ to a solution $(\\bar{\\Gamma}_t)_{t \\leq T}$ to (\\ref{Rinfty-CE}).
Then, by Proposition \\ref{Sp-pr. Rinfty prop} below we obtain a measure $\\bar{\\eta} \\in \\mathcal{P}(C_T\\mathbb{R}^{\\infty})$ with $\\bar{\\eta} \\circ (e^{\\infty}_t)^{-1} = \\bar{\\Gamma}_t$, which is concentrated on solution curves to (\\ref{Rinfty-ODE}). Finally, we transfer $\\bar{\\eta}$ back to a measure $\\eta \\in \\mathcal{P}(C_T\\mathcal{SP})$ with the desired properties. Below, we denote by $\\mathcal{F}C^1_b(\\mathbb{R}^{\\infty})$ the set of test functions of the same type as in $\\mathcal{F}C^2_b(\\mathbb{R}^{\\infty})$, but with $f \\in C^1_b(\\mathbb{R}^n)$ in place of $f \\in C^2_b(\\mathbb{R}^n)$.\n \\begin{prop}\\label{Sp-pr. Rinfty prop} [Superposition principle on $\\mathbb{R}^{\\infty}$, Thm. 7.1 \\cite{ambrosio2014}]\n \tLet $(\\bar{\\Gamma}_t)_{t \\leq T}$ be a solution to (\\ref{Rinfty-CE}) in the sense of Definition \\ref{Def sol Rinfty-eq} (ii) with test functions $\\mathcal{F}C^1_b(\\mathbb{R}^{\\infty})$ instead of $\\mathcal{F}C^2_b(\\mathbb{R}^{\\infty})$. Then, there exists a Borel measure $\\bar{\\eta} \\in \\mathcal{P}(C_T\\mathbb{R}^{\\infty})$ concentrated on solutions to (\\ref{Rinfty-ODE}) in the sense of Definition \\ref{Def sol Rinfty-eq} (i) such that $$\\bar{\\eta} \\circ (e^{\\infty}_t)^{-1} = \\bar{\\Gamma}_t, \\,\\,t \\leq T.$$\n \\end{prop}\n We proceed to the proof of the main result.\\\\\n \\\\\n \\textbf{Proof of Theorem \\ref{main thm det case}:} Let $\\Gamma = (\\Gamma_t)_{t \\leq T}$ be a weakly continuous solution to (\\ref{P-CE}) as in Definition \\ref{Def sol P-CE}. \\\\\n \\textbf{Step 1: From (\\ref{P-CE}) to (\\ref{Rinfty-CE}):} Set \n $$\\bar{\\Gamma}_t := \\Gamma_t \\circ G^{-1},$$\n with $G$ as in Lemma \\ref{Lem aux G, J}, which corresponds to the fixed set of functions $\\mathcal{G}$. Since $G$ is continuous, $(\\bar{\\Gamma}_t)_{t \\leq T}$ is a weakly continuous curve of Borel probability measures on $\\mathbb{R}^{\\infty}$.
We show that $(\\bar{\\Gamma}_t)_{t \\leq T}$ solves (\\ref{Rinfty-CE}). Indeed, the integrability condition (\\ref{8}) is fulfilled, since $(\\Gamma_t)_{t \\leq T}$ fulfills Definition \\ref{Def sol P-CE}. Further, since $\\Gamma$ solves (\\ref{P-CE}), we have for any $\\mathcal{F}C^2_b(\\mathcal{G}) \\ni F: \\mu \\mapsto f\\big(\\mu(g_1),\\dots,\\mu(g_n)\\big)$ and $t \\in [0,T]$\n \\begin{equation}\\label{extra1}\n \\int_0^t \\int_{\\mathcal{SP}}\\mathbf{L}_sF(\\mu)d\\Gamma_s(\\mu)ds = \\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_t(\\mu) - \\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_0(\\mu)\n \\end{equation}\tand hence, abbreviating $p_k \\circ B(t,\\cdot)$ by $B^k_t$ and setting $\\bar{F} = f \\circ \\pi_n$ for $f$ as above, we have\n \\begin{align*}\n \\int_0^t \\int_{\\mathcal{SP}}\\mathbf{L}_sF(\\mu)d\\Gamma_s(\\mu)ds &= \\int_0^t \\int_{\\mathcal{SP}}\\sum_{k =1}^{n} (\\partial_kf)\\big(\\mu(g_1),\\dots,\\mu(g_n)\\big) \\bigg(\\int_{\\mathbb{R}^d}\\mathcal{L}_{s,\\mu}g_k(x)d\\mu(x)\\bigg) d\\Gamma_s(\\mu)ds \\\\& =\n \\int_0^t \\int_{\\mathcal{SP}} \\sum_{k=1}^n (\\partial_k f)\\big(\\mu(g_1),\\dots,\\mu(g_n)\\big)B^k_s(\\mu)d\\Gamma_s(\\mu)ds \\\\&\n = \\int_0^t \\int_{\\mathcal{SP}} \\sum_{k=1}^n(\\partial_k f)\\big(p_1 \\circ G(\\mu),\\dots,p_n\\circ G(\\mu)\\big)\\bar{B}^k_s \\circ G(\\mu)d\\Gamma_s(\\mu)ds\\\\&\n =\\int_0^t \\int_{\\mathbb{R}^{\\infty}} \\bar{\\nabla} \\bar{F}(z) \\cdot \\bar{B}(s,z)d\\bar{\\Gamma}_s(z)ds\n \\end{align*}and, furthermore, for each $s \\in [0,T]$\n $$\\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_s(\\mu) = \\int_{\\mathcal{SP}}f\\big(p_1 \\circ G(\\mu),\\dots,p_n \\circ G(\\mu)\\big)d\\Gamma_s(\\mu) = \\int_{\\mathbb{R}^{\\infty}}\\bar{F}(z)d\\bar{\\Gamma}_s(z).$$\n Comparing with (\\ref{extra1}), it follows that $(\\bar{\\Gamma}_t)_{t \\leq T}$ is a solution to (\\ref{Rinfty-CE}) as claimed, because $F \\in \\mathcal{F}C^2_b(\\mathcal{G})$ was arbitrary and hence $\\bar{F}$ as above is arbitrary in $\\mathcal{F}C^2_b(\\mathbb{R}^{\\infty})$.
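Let us also record the elementary estimate behind the integrability claim: for $z = G(\\mu)$ with $\\mu \\in A_t$, one has (up to the choice of finite-dimensional norms)\n $$|\\bar{B}_k(t,z)| = \\bigg|\\int_{\\mathbb{R}^d}\\mathcal{L}_{t,\\mu}g_k\\, d\\mu\\bigg| \\leq ||g_k||_{C^2_b}\\Big(||b(t,\\mu,\\cdot)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)}+||a(t,\\mu,\\cdot)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^{d^2};\\mu)}\\Big),$$\n so that (\\ref{8}) follows from (\\ref{aux_revised1}) by integrating against $d\\bar{\\Gamma}_t = d(\\Gamma_t \\circ G^{-1})$ and $dt$.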
By standard approximation, one extends the above equation to test functions $\\bar{F}$ from $\\mathcal{F}C^1_b(\\mathbb{R}^\\infty)$.\\\\ \n \\\\\n \\textbf{Step 2: From (\\ref{Rinfty-CE}) to (\\ref{Rinfty-ODE}):} Proposition \\ref{Sp-pr. Rinfty prop} implies the existence of a measure $\\bar{\\eta} \\in \\mathcal{P}(C_T\\mathbb{R}^{\\infty})$ such that\n \\begin{enumerate}\n \t\\item [(i)] $\\bar{\\eta} \\circ (e^{\\infty}_t)^{-1} = \\bar{\\Gamma}_t$ for each $t \\in [0,T]$\n \t\\item[(ii)] $\\bar{\\eta}$ is concentrated on solution paths of (\\ref{Rinfty-ODE}).\n \\end{enumerate}\n\\textbf{Step 3: From (\\ref{Rinfty-ODE}) to (\\ref{NLFPKE}):} We show that the measure $\\eta := \\bar{\\eta} \\circ (J^{-1})^{-1}$, with $J$ as in Lemma \\ref{Lem aux G, J} fulfills all desired properties. Indeed, since\n$$\\bar{\\eta} \\circ (e^{\\infty}_t)^{-1} = \\bar{\\Gamma}_t = \\Gamma_t \\circ G^{-1},$$\nfor each $t \\in [0,T]$ we deduce that $\\bar{\\eta} \\circ (e^{\\infty}_t)^{-1}$ is concentrated on $G(\\mathcal{SP})$. By Lemma \\ref{Lem aux G, J}, $G(\\mathcal{SP}) \\subseteq \\mathbb{R}^{\\infty}$ is closed. Since by construction $\\bar{\\eta}$ is concentrated on continuous curves in $\\mathbb{R}^{\\infty}$, $\\bar{\\eta}$ is concentrated on $C_TG(\\mathcal{SP})$. Further, $C_TG(\\mathcal{SP}) \\subseteq C_T\\mathbb{R}^{\\infty}$ is a measurable set and $J^{-1}: C_TG(\\mathcal{SP}) \\to C_T\\mathcal{SP}$ is measurable by Lemma \\ref{Lem aux G, J}. Therefore, we may define $\\eta \\in \\mathcal{P}(C_T\\mathcal{SP})$ via\n$$ \\eta := \\bar{\\eta} \\circ (J^{-1})^{-1}.$$\nIt remains to verify $\\eta \\circ e_t^{-1} = \\Gamma_t$ for all $t \\in [0,T]$ and that $\\eta$ is concentrated on subprobability solutions to (\\ref{NLFPKE}). 
Concerning the first claim, we have\n$$ \\eta \\circ e_t^{-1} = \\bar{\\eta} \\circ (J^{-1})^{-1} \\circ e_t^{-1} = \\bar{\\eta} \\circ (e_t \\circ J^{-1})^{-1}$$\nand\n$$\\Gamma_t = \\Gamma_t \\circ (G^{-1} \\circ G)^{-1} = \\bar{\\Gamma}_t \\circ (G^{-1})^{-1} = \\bar{\\eta} \\circ (G^{-1}\\circ e_t^{\\infty})^{-1}.$$\nSince $e_t\\circ J^{-1}$ and $G^{-1} \\circ e_t^{\\infty}$ coincide as measurable maps on $C_TG(\\mathcal{SP})$ and it was shown above that $\\bar{\\eta}$ is concentrated on $C_TG(\\mathcal{SP})$, we obtain\n$$\\eta \\circ e_t^{-1} = \\Gamma_t, \\,\\, t \\leq T.$$\n\nConcerning the second claim, note that by definition of $\\eta$ and $\\bar{\\Gamma}_t$ and by the equality $e_t \\circ J^{-1} = G^{-1}\\circ e_t^\\infty$, \\eqref{aux_revised1} for $\\Gamma$ implies that $\\eta$ is concentrated on vaguely continuous curves $t \\mapsto \\mu_t$ in $\\mathcal{SP}$ with the global integrability property \\eqref{2} such that $t \\mapsto G(\\mu_t)$ is a solution to \\eqref{Rinfty-ODE}. Each such curve $t \\mapsto \\mu_t$ is a subprobability solution to \\eqref{NLFPKE}. Indeed, due to $\\mu_t \\in A_t$ $dt$-a.s., we have \n\\begin{align*}\n\\frac{d}{dt}p_k \\circ G(\\mu_t) =&\\, p_k \\circ \\bar{B}(t,G(\\mu_t))\\quad dt\\text{-a.s.} \\iff \\frac{d}{dt}p_k \\circ G(\\mu_t) = p_k \\circ B(t,\\mu_t)\\quad dt\\text{-a.s.} \\\\&\n\\iff \\frac{d}{dt}\\int_{\\mathbb{R}^d}g_k(x)d\\mu_t(x) = \\int_{\\mathbb{R}^d}\\mathcal{L}_{t,\\mu_t}g_k(x)d\\mu_t(x) \\quad dt\\text{-a.s.}\\\\&\n\\iff \\int_{\\mathbb{R}^d}g_k d\\mu_t- \\int_{\\mathbb{R}^d}g_k d\\mu_0 = \\int_0^t \\int_{\\mathbb{R}^d}\\mathcal{L}_{s,\\mu_s}g_k (x)d\\mu_s(x)ds, \\,\\, t \\in [0,T],\n\\end{align*}\nand Lemma \\ref{Prop G early} (ii) applies.\nIt remains to prove the additional assertion about probability solutions. To this end, assume $\\Gamma_0$ is concentrated on $\\mathcal{P}$. Then, $\\eta(e_0 \\in \\mathcal{P}) = 1$ and hence the claim follows by Remark \\ref{Rem mass conserv}.
\\qed \\\\\n\\\\\nThe final assertion of the theorem in particular implies: If $\\Gamma_0 \\in \\mathcal{P}(\\mathcal{P})$ for a weakly continuous solution $(\\Gamma_t)_{t \\leq T} \\subseteq \\mathcal{P}(\\mathcal{SP})$ to (\\ref{P-CE}), then $\\Gamma_t \\in \\mathcal{P}(\\mathcal{P})$ for each $t \\leq T$. Of course, this is to be expected due to the global integrability condition in Definition \\ref{Def sol P-CE}.\n\\begin{rem}\\label{Rem explain why SP}\nFinally, let us explain why we developed the above result for subprobability solutions to (\\ref{NLFPKE}) although our principal interest is restricted to probability solutions. If we directly consider solution curves $(\\Gamma_t)_{t \\leq T}$ to (\\ref{P-CE}) with $\\Gamma_t \\in \\mathcal{P}(\\mathcal{P})$, we cannot prove that $\\eta$ in Theorem \\ref{main thm det case} is concentrated on $C_T\\mathcal{P}$ (in fact, not even $\\eta(C_T\\mathcal{P}) >0$ could be shown). Indeed, inspecting the proof above, one may only prove that $\\eta \\circ e_t^{-1}$ is concentrated on $\\mathcal{P}$ for each $t\\leq T$. But since $\\mathcal{P} \\subseteq \\mathcal{SP}$ is not closed, curves in the support of $\\eta$ may be proper subprobability-valued at single times. The deeper reason for this is that the range $G(\\mathcal{P})$ of $G$ as in Lemma \\ref{Prop G early} as a map on $\\mathcal{P}$ with the weak topology is not closed in $\\mathbb{R}^{\\infty}$.
It seems that one cannot resolve this issue by simply changing the function set $\\mathcal{G}$, since there exists no countable set of functions which allows for a characterization of weak instead of vague convergence as in Lemma \\ref{Prop G early}.\nSince $\\mathcal{SP}$ with the vague topology is compact and the vague test function class $C_c(\\mathbb{R}^d)$ is separable, it is feasible to carry out the entire development for subprobability measures as above.\n\nWe also mention that to our understanding there is no inherent reason why the superposition principle could not be extended to larger spaces of measures (e.g. spaces of signed measures), as long as its topology allows for a suitable identification with $\\mathbb{R}^\\infty$ as in our present case. Our principal motivation from a probabilistic viewpoint was to study curves of probability measures, and we were only forced to extend to $\\mathcal{SP}$, the vague closure of $\\mathcal{P}$, for the reasons outlined above. In order to replace $\\mathcal{SP}$ by some larger space of measures $\\mathcal{M}$, it seems indispensable that Lemma \\ref{Lem aux G, J} remains true, i.e. that the range of $\\mathcal{M}$ under a suitable homeomorphism is closed in $\\mathbb{R}^\\infty$.\n\n\\end{rem}\n\n\\subsection{Consequences and applications}\nThe following existence and uniqueness results immediately follow from the superposition principle Theorem \\ref{main thm det case} and provide an equivalence between the nonlinear FPK-equation (\\ref{NLFPKE}) and its linearized continuity equation (\\ref{P-CE}). \n\\begin{kor}\n\tLet $\\mu_0 \\in \\mathcal{SP}$ and assume there exists a solution to (\\ref{P-CE}) with initial condition $\\delta_{\\mu_0}$. Then, there exists a subprobability solution to (\\ref{NLFPKE}) with initial condition $\\mu_0$.
Moreover, if $\\mu_0\\in \\mathcal{P}$, then there exists a probability solution to (\\ref{NLFPKE}) with initial condition $\\mu_0$.\n\\end{kor}\n\\begin{proof}\n\tBy Theorem \\ref{main thm det case} there exists a probability measure $\\eta$ concentrated on subprobability solutions to (\\ref{NLFPKE}) with $\\eta \\circ e_0^{-1} = \\delta_{\\mu_0}$. Hence, at least one such solution to (\\ref{NLFPKE}) with initial condition $\\mu_0$ exists. The second assertion is treated similarly.\n\\end{proof}\n\\begin{kor}\\label{Cor uniqueness det}\n\tLet $\\mu_0 \\in \\mathcal{SP}$ and assume there exists at most one vaguely continuous subprobability solution to (\\ref{NLFPKE}) with initial condition $\\mu_0$. Then, there exists also at most one weakly continuous solution $(\\Gamma_t)_{t \\leq T}$ to (\\ref{P-CE}) with initial condition $\\delta_{\\mu_0}$. If $\\mu_0 \\in \\mathcal{P}$, then, in the case of existence, $\\Gamma_t(\\mathcal{P}) = 1$ for each $t \\in [0,T]$.\n\\end{kor}\n\\begin{proof}\n\tLet $\\Gamma^{(1)}$ and $\\Gamma^{(2)}$ be weakly continuous solutions to (\\ref{P-CE}) with $\\Gamma^{(i)}_0 = \\delta_{\\mu_0}$ for $i \\in \\{1,2\\}$. By Theorem \\ref{main thm det case}, there exist probability measures $\\eta^{(i)}$, $i \\in \\{1,2\\}$, concentrated on subprobability solutions to (\\ref{NLFPKE}) with initial condition $\\mu_0$ such that $\\eta^{(i)} \\circ e_t^{-1} = \\Gamma_t^{(i)}$ for each $t \\in [0,T]$ and $i \\in \\{1,2\\}$. By assumption, we obtain $\\eta^{(1)}= \\delta_{\\mu}= \\eta^{(2)}$ for a unique element $\\mu \\in C_T\\mathcal{SP}$ and thus also $\\Gamma^{(1)} = \\Gamma^{(2)}$. If $\\mu_0 \\in \\mathcal{P}$, then $\\mu \\in C_T\\mathcal{P}$ by Remark \\ref{Rem mass conserv}, which gives the second assertion. \n\\end{proof}\n\\subsubsection{Application to coupled nonlinear-linear Fokker-Planck-Kolmogorov equations}\nUsing the superposition principle, we prove an open conjecture posed in \\cite{Rckner-lin.-paper}. 
Let us briefly recapitulate the necessary framework. In \\cite{Rckner-lin.-paper}, the authors consider a coupled nonlinear-linear FPK-equation of type \n\\begin{equation}\\label{9}\n\\begin{cases}\n\\partial_t \\mu_t = \\mathcal{L}^*_{t,\\mu_t}\\mu_t \\\\\n\\partial_t \\nu_t = \\mathcal{L}^*_{t,\\mu_t}\\nu_t,\n\\end{cases}\n\\end{equation}\ni.e. compared to our situation the first (nonlinear) equation is of type (\\ref{NLFPKE}) and the second (linear) equation is obtained by ``freezing'' a solution $(\\mu_t)_{t \\leq T}$ to the first equation in the nonlinearity spot of $\\mathcal{L}$. For an initial condition $(\\bar{\\mu},\\bar{\\nu}) \\in \\mathcal{P}\\times \\mathcal{P}$, (\\ref{9}) is said to have a \\textit{unique solution}, if there exists a unique probability solution $(\\mu_t)_{t \\leq T}$ to the first equation in the sense of Definition \\ref{Sol NLFPKE} with $\\mu_0 = \\bar{\\mu}$ and a unique weakly continuous curve $(\\nu_t)_{t \\leq T} \\subseteq \\mathcal{P}$, which solves the second equation with fixed coefficient $\\mu_t$ with $\\nu_0 = \\bar{\\nu}$ (we refer to \\cite{Rckner-lin.-paper} for more details). The authors associate a linear continuity equation on $\\mathbb{R}^d \\times \\mathcal{P}$ to (\\ref{9}) in the following sense: Let $\\mathbb{L}$ be the operator acting on functions \n$$\\mathcal{C} := \\big\\{\\Phi: (x,\\mu) \\mapsto \\varphi(x)F(\\mu) | \\varphi \\in C^2_c(\\mathbb{R}^d), F \\in \\mathcal{F}C^2_b(\\mathcal{P}) \\big\\},$$\nvia \n$$\\mathbb{L}_t\\Phi(x,\\mu) := \\mathcal{L}_{t,\\mu}\\Phi(\\cdot, \\mu)(x)+\\mathbf{L}_t\\Phi(x, \\cdot)(\\mu),$$with $\\mathcal{L}$ as in (\\ref{1}) and $\\mathbf{L}$ as in (\\ref{5}).\nConsider the continuity equation \n\\begin{equation}\\label{10}\n\\partial_t \\Lambda_t = \\mathbb{L}_t^*\\Lambda_t, \\,\\, t \\in [0,T]\n\\end{equation}for weakly continuous curves of Borel probability measures on $\\mathbb{R}^d \\times \\mathcal{P}$.
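Spelled out on a product test function, $\\mathbb{L}$ decomposes additively: for $\\Phi(x,\\mu) = \\varphi(x)F(\\mu) \\in \\mathcal{C}$, linearity of $\\mathcal{L}_{t,\\mu}$ and $\\mathbf{L}_t$ yields\n$$\\mathbb{L}_t\\Phi(x,\\mu) = F(\\mu)\\,\\mathcal{L}_{t,\\mu}\\varphi(x)+\\varphi(x)\\,\\mathbf{L}_tF(\\mu),$$\nso the first summand acts in the spatial variable $x$ only, while the second acts in the measure variable $\\mu$ only.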
The exact notion of solution can be found in \\cite{Rckner-lin.-paper}, where also the following observation is made: A pair $(\\mu_t,\\nu_t)_{t \\leq T}$ solves (\\ref{9}) if and only if $\\Lambda_t := \\nu_t \\times \\delta_{\\mu_t}$ solves (\\ref{10}). Using our main result, we prove the following conjecture posed in Remark 4.4 of \\cite{Rckner-lin.-paper}.\n\\begin{prop}\\label{Prop conj Rckner}\n\tIf $(\\mu_t,\\nu_t)_{t \\leq T}$ is the unique solution to (\\ref{9}) with initial condition $(\\bar{\\mu},\\bar{\\nu}) \\in \\mathcal{P}\\times \\mathcal{P}$, then $(\\nu_t \\times \\delta_{\\mu_t})_{t \\leq T}$ is the unique solution to (\\ref{10}) with initial condition $\\bar{\\nu}\\times \\delta_{\\bar{\\mu}}$.\n\\end{prop}\n\\begin{proof}\n\tBy Corollary \\ref{Cor uniqueness det}, the unique solution to (\\ref{P-CE}) with initial condition $\\delta_{\\bar{\\mu}}$ is $(\\delta_{\\mu_t})_{t \\leq T}$. Let $(\\Lambda^{(1)}_t)_{t \\leq T}$ and $(\\Lambda^{(2)}_t)_{t \\leq T}$ be two solutions to (\\ref{10}) with initial condition $\\bar{\\nu}\\times \\delta_{\\bar{\\mu}}$. It is straightforward to check that the curves of second marginals $(\\Lambda_t^{(1)}\\circ \\varPi_2^{-1})_{t \\leq T}$ and $(\\Lambda_t^{(2)}\\circ \\varPi_2^{-1})_{t \\leq T}$ are probability solutions to (\\ref{P-CE}) with initial condition $\\delta_{\\bar{\\mu}}$ (where we denote by $\\varPi_2$ the projection from $\\mathbb{R}^d \\times \\mathcal{P}$ onto the second coordinate). Hence, for each $t \\in [0,T]$\n\t$$\\Lambda_t^{(1)} \\circ \\varPi_2^{-1} = \\delta_{\\mu_t} = \\Lambda^{(2)}_t \\circ \\varPi_2^{-1}.$$\n\tConsequently, $\\Lambda_t^{(i)}$ is of product type, i.e. $\\Lambda^{(i)}_t = \\gamma_t^{(i)}\\times \\delta_{\\mu_t}$ for weakly continuous curves $(\\gamma^{(i)}_t)_{t \\leq T} \\subseteq \\mathcal{P}$, $i \\in \\{1,2\\}$. It is immediate to show that each curve $\\gamma^{(i)}$ solves the second equation of (\\ref{9}) with fixed $\\mu_t$ and initial condition $\\bar{\\nu}$.
Hence, $\\gamma^{(i)}_t = \\nu_t$ for each $t \\in [0,T]$ and $i \\in \\{1,2\\}$, which implies $\\Lambda^{(1)}_t = \\Lambda^{(2)}_t$. Therefore, the unique solution to (\\ref{10}) with initial condition $\\bar{\\nu}\\times \\delta_{\\bar{\\mu}}$ is given by $(\\nu_t \\times \\delta_{\\mu_t})_{t \\leq T}$. \n\\end{proof}\n\\section{Superposition Principle for stochastic nonlinear Fokker-Planck-Kolmogorov Equations}\nWe make use of the following notation specific to the stochastic case.\\\\\n\\\\\nFor two real-valued $n\\times n$ matrices $A,B$ we write $A:B = \\sum_{k,l = 1}^n A_{kl}B_{kl}$. We use the same notation for $A = (A_{kl})_{k,l \\geq 1}$ and $B = (B_{kl})_{k,l \\geq 1}$, if either $A$ or $B$ contains only finitely many non-trivial entries.\\\\\nFor the Hilbert space $\\ell^2$ with topology induced by the usual inner product $\\langle\\cdot,\\cdot \\rangle _{\\ell^2}$ and norm $||\\cdot||_{\\ell^2}$, we denote the space of continuous $\\ell^2$-valued functions on $[0,T]$ by $C_T\\ell^2$. On $\\ell^2$ and $C_T\\ell^2$, we unambiguously use the same notation $e_t, p_i$ and $\\pi_n$ as on $\\mathbb{R}^{\\infty}$ and $C_T\\mathbb{R}^{\\infty}$ in the previous section. As in the previous section, we set $\\mathcal{B}(C_T\\ell^2) = \\sigma(e_t, t \\in [0,T])$ and denote the set of probability measures on this space by $\\mathcal{P}(C_T\\ell^2)$. For $\\sigma$-algebras $\\mathcal{A}_1$, $\\mathcal{A}_2$, we denote by $\\mathcal{A}_1 \\bigvee \\mathcal{A}_2$ the \\textit{$\\sigma$-algebra generated by $\\mathcal{A}_1$ and $\\mathcal{A}_2$}.\\\\\n\\\\\nWe call a filtered probability space $(\\Omega, \\mathcal{F}, (\\mathcal{F}_t)_{t \\leq T},\\mathbb{P})$ \\textit{complete}, provided both $\\mathcal{F}$ and $\\mathcal{F}_0$ contain all subsets of $\\mathbb{P}$-negligible sets $N \\in \\mathcal{F}$ (i.e. $\\mathbb{P}(N) = 0$). This notion does not require $(\\mathcal{F}_t)_{t \\leq T}$ to be right-continuous.
A real-valued Wiener process $W = (W_t)_{t \\leq T}$ on such a probability space is called an $\\mathcal{F}_t$-\\textit{Wiener process}, if $W_t$ is $\\mathcal{F}_t$-adapted and $W_u-W_t$ is independent of $\\mathcal{F}_t$ for each $0 \\leq t \\leq u \\leq T$. Pathwise properties of stochastic processes such as continuity are to be understood up to a negligible set with respect to the underlying measure.\\\\\n\\\\\nAs in the previous section, we consider $\\mathcal{SP}$ as a compact Polish space with the vague topology. Let $d_1 \\geq1$ and consider product-measurable coefficients on $[0,T]\\times \\mathcal{SP}\\times \\mathbb{R}^d$\n$$a(t,\\mu,x)=(a_{ij}(t,\\mu,x)) \\in \\mathbb{S}^+_d,\\,\\,b(t,\\mu,x) = (b_i(t,\\mu,x))_{i \\leq d} \\in \\mathbb{R}^d,\\, \\sigma(t,\\mu,x)= (\\sigma_{ij}(t,\\mu,x))_{i,j \\leq d} \\in \\mathbb{R}^{d\\times d_1}$$\nsuch that $\\sigma$ is bounded, \nand let $\\mathcal{L}$ be as before, i.e.\n$$\\mathcal{L}_{t, \\mu} \\varphi(x) = b_i(t,\\mu,x)\\partial_i \\varphi(x)+ a_{ij}(t,\\mu,x)\\partial^2_{ij}\\varphi(x)$$\nfor $\\varphi \\in C^2(\\mathbb{R}^d)$ and $(t,\\mu,x) \\in [0,T]\\times \\mathcal{SP}\\times \\mathbb{R}^d$.\\\\ \n\\\\\nIn contrast to the deterministic framework of the previous section, here we consider nonlinear \\textit{stochastic} FPK-equations of type (\\ref{SNLFPKE}) on $[0,T]$, to be understood in distributional sense as follows. With slight abuse of notation, for $\\sigma \\in \\mathbb{R}^{d\\times d_1}$ and $x \\in \\mathbb{R}^d$, we write $\\sigma \\cdot x = (\\sum_{i=1}^d\\sigma^{ik}x_i)_{k \\leq d_1}$, which is consistent with the standard inner product notation $\\sigma \\cdot x$ in the case $d_1 =1$. 
\n\\begin{dfn}\\label{Def sol SNLFPKE}\n\t\\begin{enumerate}\n\t\t\\item [(i)] A pair $(\\mu, W)$ consisting of an $\\mathcal{F}_t$-adapted vaguely continuous $\\mathcal{SP}$-valued stochastic process $\\mu = (\\mu_t)_{t \\leq T}$ and an $\\mathcal{F}_t$-adapted, $d_1$-dimensional Wiener process $W = (W_t)_{t \\leq T}$ on a complete probability space $(\\Omega, \\mathcal{F}, (\\mathcal{F}_t)_{t \\leq T},\\mathbb{P})$ is a \\textit{subprobability solution to }(\\ref{SNLFPKE}), provided \n\t\t\\begin{equation}\\label{2_int_SNLFPKE}\n\t\t\t\\int_0^T\\int_{\\mathbb{R}^d}|b_i(t,\\mu_t,x)|+|a_{ij}(t,\\mu_t,x)| + |\\sigma_{ik}(t,\\mu_t,x)|^2d\\mu_t(x)dt < \\infty \\quad \\mathbb{P}\\text{-a.s.}\n\t\t\\end{equation}\n\t\tfor each $i,j \\leq d, k \\leq d_1$, and \n\t\t\\begin{equation}\\label{11}\n\t\t\\int_{\\mathbb{R}^d} \\varphi(x) d\\mu_t(x) - \t\\int_{\\mathbb{R}^d} \\varphi(x) d\\mu_0(x) = \\int_0^t \\int_{\\mathbb{R}^d}\\mathcal{L}_{s,\\mu_s}\\varphi(x) d\\mu_s(x)ds + \\int_0^t \\int_{\\mathbb{R}^d} \\sigma(s,\\mu_s,x)\\cdot \\nabla \\varphi(x)d \\mu_s(x)dW_s\n\t\t\\end{equation}holds $\\mathbb{P}$-a.s. for each $t \\in [0,T]$ and $\\varphi \\in C^2_c(\\mathbb{R}^d)$.\n\t\t\\item[(ii)] A \\textit{probability solution to } (\\ref{SNLFPKE}) is a pair as above such that $\\mu$ is a $\\mathcal{P}$-valued process $(\\mu_t)_{t \\leq T}$ with weakly continuous paths.\n\t\\end{enumerate}\n\\end{dfn}\n\\begin{rem}\n\t\\begin{enumerate}\n\t\t\\item [(i)] Since $C^2_c(\\mathbb{R}^d)$ is separable with respect to uniform convergence and since the paths $t \\mapsto \\mu_t(\\omega)$ are vaguely continuous, the exceptional sets in the above definition can be chosen independently of $\\varphi $ and $t$.\n\t\t\\item[(ii)] The first integral on the right-hand side of (\\ref{11}) is a pathwise (that is, for individual fixed $\\omega \\in \\Omega$) integral with respect to the finite measure $\\mu_s(\\omega)ds$ on $[0,T]\\times \\mathbb{R}^d$. 
The second integral is a stochastic integral, which is defined, since the integrand\n\t\t$$(t,\\omega) \\mapsto \\int_{\\mathbb{R}^d}\\sigma(t,\\mu_t(\\omega),x)\\cdot \\nabla \\varphi(x)d\\mu_t(\\omega)(x)$$\n\t\tis $\\mathbb{R}^{d_1}$-valued, bounded, product-measurable and $\\mathcal{F}_t$-adapted (Thm. 3.8 \\cite{ChungWilliamsStochInt}). More precisely,\n\t\t$$\\int_0^t \\int_{\\mathbb{R}^d} \\sigma(s,\\mu_s,x)\\cdot \\nabla \\varphi(x)d \\mu_s(x)dW_s = \\sum_{\\alpha=1}^{d_1} \\int_0^t\\int_{\\mathbb{R}^d}\\sigma^{\\alpha} \\cdot \\nabla \\varphi d\\mu_sdW^{\\alpha}_s,$$where $\\sigma^{\\alpha}= (\\sigma^{i\\alpha})_{i \\leq d}$ denotes the $\\alpha$-th column of $\\sigma$ and the components $W^{\\alpha}$, $\\alpha \\leq d_1$, of $W$ are real, independent Wiener processes.\n\t\\end{enumerate}\n\\end{rem}\nBy the global integrability assumption \\eqref{2_int_SNLFPKE} and since $\\sigma$ is bounded, we obtain (in analogy to Remark \\ref{Rem mass conserv}) the following conservation of mass, which we use to prove the final assertion of the main result Theorem \\ref{main thm stoch case}.\n\\begin{lem}\\label{Lem consv of mass stochastic}\n\tLet $(\\mu_t)_{t \\leq T}$ be a subprobability solution to (\\ref{SNLFPKE}). If $\\mu_0 \\in \\mathcal{P}$ $\\mathbb{P}$-a.s., then the paths of $t \\mapsto \\mu_t$ are $\\mathcal{P}$-valued $\\mathbb{P}$-a.s. and, hence, in particular weakly continuous.\n\\end{lem}\n\\begin{proof}\n\tLet $(\\varphi_k)_{k \\geq 1} \\subseteq C^2_c(\\mathbb{R}^d)$ approximate the constant function $1$ as in Remark \\ref{Rem mass conserv}. 
Then, by It\u00f4-isometry, for each $t \\in [0,T]$, there exists a subsequence $(k^t_l)_{l \\geq 1} = (k_l)_{l \\geq 1}$ such that\n\t\\begin{equation}\\label{12}\n\t\\int_0^t\\int_{\\mathbb{R}^d}\\sigma(s,\\mu_s, x)\\cdot \\nabla \\varphi_{k_l}(x)d\\mu_s(x)dW_s \\underset{l \\to \\infty}{\\longrightarrow}0 \\,\\, \\mathbb{P}\\text{-a.s.}\n\t\\end{equation}\n\tSince the stochastic integral is continuous in $t$, a classical diagonal argument yields that there exists a subsequence $(k_l)_{l \\geq 1}$ along which (\\ref{12}) holds for all $t \\in [0,T]$ on a set of full $\\mathbb{P}$-measure, independent of $t$. Let $\\omega' \\in \\Omega$ be from this set such that also $\\mu_0(\\omega') \\in \\mathcal{P}$ and (\\ref{11}) holds for each $t$ and $\\varphi$. Note that the set of all such $\\omega'$ has full $\\mathbb{P}$-measure. Then, similar to the reasoning in Remark \\ref{Rem mass conserv} and by using (\\ref{12}), considering (\\ref{11}) for such $\\omega'$ with $\\varphi_{k_l}$ in place of $\\varphi$ for the limit $l \\longrightarrow +\\infty$, we obtain\n\t$$\\mu_t(\\omega')(\\mathbb{R}^d) = \\mu_0(\\omega')(\\mathbb{R}^d), \\,\\, t\\in [0,T]$$\n\tand hence the result.\n\\end{proof}\nNote that the above proof can be adjusted to extend (\\ref{11}) to each $\\varphi \\in C^2_b(\\mathbb{R}^d)$. \n\\subsubsection*{Embedding $\\mathcal{SP}$ into $\\ell^2$}\nIn comparison with the deterministic case, we still consider $\\mathcal{SP}$ as a manifold-like space with tangent spaces $T_{\\mu}\\mathcal{SP} = L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$ as before. However, instead of embedding into $\\mathbb{R}^{\\infty}$ by $G$ as in the previous section, now we need a global chart \n$$H: \\mathcal{SP} \\to \\ell^2$$\nin order to handle the stochastic integral term later on. 
To this end, we replace the set of functions $\\mathcal{G} = \\{g_i, i \\geq 1\\}$ of the deterministic case by\n\\begin{equation}\\label{13}\n\\mathcal{H} := \\{h_i\\}_{i \\geq 1}, \\, h_i := 2^{-i}\\frac{g_i}{||g_i||_{C^2_b}}\n\\end{equation}and consider the map\n$$H : \\mathcal{SP} \\to \\ell^2,\\,\\, H: \\mu \\mapsto (\\mu(h_i))_{i \\geq 1}.$$\nThe following lemma collects useful properties of $\\mathcal{H}$ and $H$, which are in the spirit of Lemma \\ref{Prop G early} and \\ref{Lem aux G, J}. We point out that we could have used the function class $\\mathcal{H}$ instead of $\\mathcal{G}$ already in Section 3, but we decided to pass from $\\mathcal{G}$ to $\\mathcal{H}$ at this point in order to stress the technical adjustments necessary due to the stochastic case.\n\\begin{lem}\\label{Lem aux H}\n\t\\begin{enumerate}\n\t\t\\item [(i)] The set $\\mathcal{H}$ is measure-determining. Further, a process $(\\mu_t)_{t \\leq T}$ as in Definition \\ref{Def sol SNLFPKE} is a solution to (\\ref{SNLFPKE}) if and only if (\\ref{11}) holds for each $h_i \\in \\mathcal{H}$ in place of $\\varphi$.\n\t\t\\item[(ii)] $H$ is a homeomorphism between $\\mathcal{SP}$ and its range $H(\\mathcal{SP}) \\subseteq \\ell^2$, endowed with the $\\ell^2$-subspace topology. In particular, $H(\\mathcal{SP}) \\subseteq \\ell^2$ is compact.\n\t\n\t\\end{enumerate}\n\\end{lem}\n\\begin{proof}\n\t\\begin{enumerate}\n\t\t\\item [(i)] The first claim is obvious, since $\\mathcal{G}$ is measure-determining. Concerning the second claim, note that it is clearly sufficient to have (\\ref{11}) for each $\\varphi \\in C_c^2(\\mathbb{R}^d)$ with $||\\varphi||_{C^2_b} \\leq 1$. Since the functions $||g_i||^{-1}_{C^2_b}g_i$ are dense in the unit ball of $C^2_c(\\mathbb{R}^d)$ with respect to $||\\cdot||_{C^2_b}$, it is sufficient to have (\\ref{11}) for each such normalized function. 
Indeed, if $\\varphi_k \\underset{k \\to \\infty}{\\longrightarrow}\\varphi$ uniformly up to second-order partial derivatives, then by It\u00f4-isometry\n\t\t\\begin{equation*}\n\t\t\\mathbb{E}\\bigg[\\bigg(\\int_0^t\\int_{\\mathbb{R}^d}\\sigma(s,\\mu_s,\\cdot)\\cdot \\nabla (\\varphi_k-\\varphi)d\\mu_sdW_s\\bigg)^2\\bigg] = \\mathbb{E}\\bigg[\\int_0^t\\bigg(\\int_{\\mathbb{R}^d}\\sigma(s,\\mu_s,\\cdot)\\cdot\\nabla(\\varphi_k-\\varphi)d\\mu_s \\bigg)^2ds\\bigg],\n\t\t\\end{equation*}which converges to $0$ as $k \\longrightarrow \\infty$ due to the boundedness of $\\sigma$. Hence, along a subsequence $(k_l)_{l \\geq 1}$, we have a.s.\n\t\t\\begin{equation*}\n\t\t\\int_0^t \\int_{\\mathbb{R}^d}\\sigma(s,\\mu_s,x) \\cdot \\nabla \\varphi_{k_l}(x)d\\mu_s(x)dW_s \\underset{l \\to \\infty}{\\longrightarrow} \\int_0^t \\int_{\\mathbb{R}^d}\\sigma(s,\\mu_s,x)\\cdot \\nabla\\varphi(x)d\\mu_s(x)dW_s.\n\t\t\\end{equation*}The a.s.-convergence of all other terms in (\\ref{11}) is clear. Therefore, it is sufficient to require (\\ref{11}) for a dense subset of the unit ball of $C^2_c(\\mathbb{R}^d)$. Clearly, this yields at once that it is sufficient to have (\\ref{11}) for each $h_i \\in \\mathcal{H}$. \n\t\t\\item[(ii)] By definition, $H$ maps into $\\ell^2$. Since $\\mathcal{H}$ is measure-determining, $H$ is one-to-one, hence bijective onto its range. If $\\mu_n \\underset{n \\to \\infty}{\\longrightarrow} \\mu$ vaguely in $\\mathcal{SP}$, clearly $H(\\mu_n)$ converges to $H(\\mu)$ in the product topology. Since for any $i \\geq 1$\n\t\t$$\\underset{n \\geq 1}{\\text{sup}}|H(\\mu_n)_i| \\leq 2^{-i},$$\n\t\tthe convergence holds in $\\ell^2$ as well, which implies continuity of $H$. In particular, $H(\\mathcal{SP}) \\subseteq \\ell^2$ is compact. 
Conversely, if $H(\\mu_n)$ converges in $\\ell^2$ to some $z = (z_i)_{i \\geq 1}$, then, by closedness of $H(\\mathcal{SP}) \\subseteq \\ell^2$, we have $z=H(\\mu)$ for a unique element $\\mu \\in \\mathcal{SP}$ and $\\mu_n \\underset{n \\to \\infty}{\\longrightarrow}\\mu$ vaguely. Indeed, the latter follows as in Lemma \\ref{Lem aux G, J} (i). \t\n\t\\end{enumerate} \n\\end{proof}\nFor consistency of notation, below we denote the test function class of the manifold-like space $\\mathcal{SP}$ by $\\mathcal{F}C^2_b(\\mathcal{H})$ to stress that the base functions $g_i$ are now replaced by $h_i \\in \\mathcal{H}$. However, the class of test functions remains unchanged, because the transition from $g_i$ to $h_i$ can be incorporated in the choice of $f$.\n\\subsubsection*{Linearization of (\\ref{SNLFPKE})}\nAs in the deterministic case, also for the stochastic nonlinear equation (\\ref{SNLFPKE}) one can consider an associated linear equation for curves in $\\mathcal{P}(\\mathcal{SP})$. To the best of our knowledge, such a linearization for stochastic FPK-equations has not yet been considered in the literature. Of course, the basic idea stems from the deterministic case \\cite{Rckner-lin.-paper} discussed in the previous section. From Itô's formula one expects this linearized equation to be of second order.\\\\\n\\\\\nLet $\\big((\\mu_t)_{t \\leq T},W\\big)$ be a subprobability solution to (\\ref{SNLFPKE}) (with underlying measure $\\mathbb{P}$) and choose any $F: \\mu \\mapsto f\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)$ from $\\mathcal{F}C^2_b(\\mathcal{H})$. Again, we abbreviate $b(t,\\mu) := b(t,\\mu,\\cdot)$ and similarly for $a$ and $\\sigma$. 
By Itô's formula, we have $\\mathbb{P}$-a.s.\n\\begin{align*}\nF(\\mu_t)-F(\\mu_0) &= \\int_0^t \\big\\langle\\nabla^{\\mathcal{SP}}F(\\mu_s), b(s,\\mu_s)+a(s,\\mu_s)\\nabla \\big \\rangle_{L^2(\\mu_s)}ds \\\\&\n+\\frac{1}{2}\\sum_{\\alpha=1}^{d_1}\\int_0^t \\sum_{k,l=1}^n (\\partial_{kl}f)(\\mu_s(h_1),\\dots,\\mu_s(h_n))\\bigg(\\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(s,\\mu_s)\\cdot \\nabla h_k d\\mu_s\\bigg)\\bigg(\\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(s,\\mu_s)\\cdot \\nabla h_l d\\mu_s\\bigg)ds \\\\&+ M_t^F,\n\\end{align*}with the martingale $M^F$ given as\n$$M_t^F:= \\sum_{\\alpha=1}^{d_1}\\int_0^t\\bigg[ \\sum_{l=1}^{n}(\\partial_lf)\\big(\\mu_s(h_1),\\dots,\\mu_s(h_n)\\big) \\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(s,\\mu_s) \\cdot \\nabla h_l d\\mu_s \\bigg]dW_s^{\\alpha}. $$Since $M^F_0 = 0$ $\\mathbb{P}$-a.s., integrating with respect to $\\mathbb{P}$ and defining the curve of measures in $\\mathcal{P}(\\mathcal{SP})$\n$$\\Gamma_t := \\mathbb{P}\\circ \\mu_t^{-1}, \\,\\, t\\leq T$$yields\n\\begin{align}\\label{prelim eq}\n\\notag &\\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_t(\\mu) - \\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_0(\\mu)= \\int_0^t\\int_{\\mathcal{SP}} \\big\\langle\\nabla^{\\mathcal{SP}}F(\\mu), b(s,\\mu)+a(s,\\mu)\\nabla \\big \\rangle_{L^2(\\mu)}d\\Gamma_s(\\mu)ds \\\\& \n\t+\\frac{1}{2}\\sum_{\\alpha=1}^{d_1}\\int_0^t \\int_{\\mathcal{SP}}\\sum_{k,l=1}^n (\\partial_{kl}f)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\bigg(\\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(s,\\mu)\\cdot \\nabla h_k d\\mu\\bigg)\\bigg(\\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(s,\\mu)\\cdot \\nabla h_l d\\mu\\bigg)d\\Gamma_s(\\mu)ds.\n\\end{align}As for the first-order term, which is interpreted as the pairing of the gradient $\\nabla^{\\mathcal{SP}}F$ with the inhomogeneous vector field $b+a\\nabla$ in the tangent bundle $T\\mathcal{SP}$, also the second-order term allows for a geometric interpretation: Recall that for a smooth, real function $F$ on a Riemannian manifold $M$ with tangent bundle $TM$, the Hessian 
$Hess(F)_p$ at $p \\in M$ is a bilinear form on $T_pM$ with\n\\begin{equation}\\label{Hess gen mf}\nHess(F)_p(\\eta_p, \\xi_p) = \\big\\langle \\nabla^L_{\\eta_p}\\nabla F(p), \\xi_p \\big \\rangle_{T_pM},\\,\\, \\eta_p, \\xi_p \\in T_pM, \n\\end{equation}where $\\nabla^L: TM \\times TM \\to TM$ denotes the Levi-Civita connection on $M$, the unique affine connection compatible with the metric tensor on $M$, and $\\nabla$ denotes the usual gradient on $M$. Intuitively, $\\nabla^L_{\\eta_p}\\nabla F(p)$ measures the change of the vector field $\\nabla F$ in direction $\\eta_p$ at $p$. Recall that we consider $\\mathcal{SP}$ as a manifold-like space with gradient $\\nabla^{\\mathcal{SP}}$, and hence the natural notion of the Levi-Civita connection $\\nabla^{L,\\mathcal{SP}}$ on $\\mathcal{SP}$ for $\\sigma \\in T_{\\mu}\\mathcal{SP} = L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$ and $Y \\in T\\mathcal{SP}$ at $\\mu$ is given by\n$$\\nabla ^{L,\\mathcal{SP}}_{\\sigma}Y(\\mu) = \\big \\langle \\nabla^{\\mathcal{SP}}Y(\\mu), \\sigma\\big \\rangle_{T_{\\mu}\\mathcal{SP}},$$whenever $\\nabla^{\\mathcal{SP}}Y$ is defined in $T\\mathcal{SP}$. For the representation of $Hess(F)$ for a test function $F \\in \\mathcal{F}C^2_b(\\mathcal{H})$, we need to set $Y = \\nabla^{\\mathcal{SP}}F$. In this case, we can indeed make sense of\n$$ (\\nabla^{\\mathcal{SP}})^2F:= \\nabla^{\\mathcal{SP}}\\nabla^{\\mathcal{SP}}F,$$ because the gradient\n$$\\mu \\mapsto \\nabla^{\\mathcal{SP}}F(\\mu) = \\sum_{k=1}^{n}(\\partial_kf)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\nabla h_k$$ is a linear combination of the ``$\\mathcal{F}C^2_b(\\mathcal{H})$-like'' functions $\\mu \\mapsto \\partial_kf\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)$. The linear combination has to be understood in an $x$-wise sense with coefficient functions $\\nabla h_k$, which are independent of the variable of interest $\\mu$. 
Denoting $F_k(\\mu):= (\\partial_kf)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)$, we then define\n\\begin{equation}\\label{Def nabla squared}\n(\\nabla^{\\mathcal{SP}})^2F(\\mu) (x,y) := \\sum_{k=1}^n\\big(\\nabla^{\\mathcal{SP}}F_k(\\mu)\\big)(y)\\nabla h_k(x),\\,\\, (x,y) \\in \\mathbb{R}^d\\times \\mathbb{R}^d. \n\\end{equation}\nConsequently, we have a natural notion of the Levi-Civita connection on $\\mathcal{SP}$ at $\\mu$ for $\\sigma \\in T_{\\mu}\\mathcal{SP}$ and $\\nabla^{\\mathcal{SP}}F$ for $F \\in \\mathcal{F}C^2_b(\\mathcal{H})$ as\n\\begin{equation}\n\\nabla^{L,\\mathcal{SP}}_{\\sigma}\\nabla^{\\mathcal{SP}}F(\\mu) := \\big \\langle (\\nabla^{\\mathcal{SP}})^2F(\\mu), \\sigma \\big \\rangle_{T_{\\mu}\\mathcal{SP}} = \\sum_{k,l=1}^{n}(\\partial_{kl}f)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\nabla h_k\\bigg(\\int_{\\mathbb{R}^d}\\sigma \\cdot \\nabla h_l d\\mu\\bigg).\n\\end{equation}The section $(\\nabla^{\\mathcal{SP}})^2F$ in $T\\mathcal{SP}^*\\otimes T\\mathcal{SP}^*$ (and hence $\\nabla^{L,\\mathcal{SP}}_{\\sigma}\\nabla^{\\mathcal{SP}}F$ and $Hess(F)$ below) is independent of the particular representation of $F$ in (\\ref{Def nabla squared}). Indeed, we have (cf. 
Appendix A \\cite{Rckner-lin.-paper}) for $$\\gamma^{\\sigma}_{\\mu}(t):= \\mu \\circ (\\text{Id}+t \\sigma)^{-1}$$ the following pointwise (in $x \\in \\mathbb{R}^d$) equality for each $\\mu \\in \\mathcal{SP}, \\sigma \\in L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$\n\\begin{align*}\n\\frac{d}{dt}\\nabla^{\\mathcal{SP}}F\\big(\\gamma^{\\sigma}_{\\mu}(t)\\big) &= \\sum_{k=1}^n\\bigg[\\frac{d}{dt}(\\partial_kf)\\big(\\gamma^{\\sigma}_{\\mu}(t)(h_1),\\dots,\\gamma^{\\sigma}_{\\mu}(t)(h_n)\\big)\\bigg]\\nabla h_k \\\\& = \\sum_{k,l=1}^n(\\partial_{kl}f)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\big \\langle \\nabla h_l, \\sigma \\big \\rangle_{L^2(\\mu)} \\nabla h_k \\\\& = \n\\big \\langle (\\nabla^{\\mathcal{SP}})^2F(\\mu),\\sigma \\big \\rangle_{L^2(\\mu)}.\n\\end{align*}Since the gradient $\\nabla^{\\mathcal{SP}}F$ is independent of the particular representation of $F$ and $\\sigma \\in L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$ is arbitrary, also $(\\nabla^{\\mathcal{SP}})^2F$ is independent of the representation of $F$. 
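For the reader's convenience, we make the first step of this computation explicit; it is a routine chain-rule argument under the standing assumption $h_l \\in C^2_b(\\mathbb{R}^d)$. Since $\\gamma^{\\sigma}_{\\mu}(t)(h_l) = \\int_{\\mathbb{R}^d}h_l\\big(x+t\\sigma(x)\\big)d\\mu(x)$, differentiation under the integral sign yields at $t=0$\n\\begin{equation*}\n\\frac{d}{dt}\\Big|_{t=0}\\gamma^{\\sigma}_{\\mu}(t)(h_l) = \\int_{\\mathbb{R}^d}\\nabla h_l(x)\\cdot \\sigma(x)d\\mu(x) = \\big \\langle \\nabla h_l, \\sigma \\big \\rangle_{L^2(\\mu)},\n\\end{equation*}which is exactly the factor $\\big \\langle \\nabla h_l, \\sigma \\big \\rangle_{L^2(\\mu)}$ appearing in the second line of the display above.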
\\\\\n\nConsidering (\\ref{Hess gen mf}), we then set for $F \\in \\mathcal{F}C^2_b(\\mathcal{H})$ and $\\sigma, \\tilde{\\sigma} \\in L^2(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)$\n\\begin{equation}\nHess(F)(\\mu): (\\sigma, \\tilde{\\sigma}) \\mapsto \\sum_{k,l=1}^{n}(\\partial_{kl}f)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\bigg(\\int_{\\mathbb{R}^d}\\sigma \\cdot \\nabla h_l d\\mu\\bigg)\\bigg(\\int_{\\mathbb{R}^d}\\tilde{\\sigma} \\cdot \\nabla h_k d\\mu\\bigg),\n\\end{equation}which is a (symmetric) bilinear form on $T_{\\mu}\\mathcal{SP}$ and rewrite (\\ref{prelim eq}) as\n\\begin{align}\\label{P-FPKE eq}\n\\int_{\\mathcal{SP}}Fd\\Gamma_t - \\int_{\\mathcal{SP}}Fd\\Gamma_0= \\int_0^t\\int_{\\mathcal{SP}} \\big\\langle\\nabla^{\\mathcal{SP}}F, b_s+a_s\\nabla \\big \\rangle_{L^2} + \\frac{1}{2}\\sum_{\\alpha=1}^{d_1} Hess(F)(\\sigma_s^{\\alpha},\\sigma_s^{\\alpha})d\\Gamma_sds\n\\end{align}(with $b_s: (\\mu,x) \\mapsto b(s,\\mu,x)$ and similarly for $a_s$ and $\\sigma_s$). Introducing the second-order operator $\\mathbf{L}^{(2)}$, acting on $F \\in \\mathcal{F}C^2_b(\\mathcal{H})$ via\n$$\\mathbf{L}^{(2)}_tF(\\mu) = \\big\\langle\\nabla^{\\mathcal{SP}}F, b(t,\\mu)+a(t,\\mu)\\nabla \\big \\rangle_{L^2(\\mu)} + \\frac{1}{2}\\sum_{\\alpha=1}^{d_1} Hess(F)\\big(\\sigma^{\\alpha}(t,\\mu),\\sigma^{\\alpha}(t,\\mu)\\big),$$\nwe arrive at the distributional formulation of (\\ref{P-FPKE})\n$$\\partial_t\\Gamma_t = (\\mathbf{L}_t^{(2)})^*\\Gamma_t, \\,\\, t \\leq T,$$as in the introduction.\n\\begin{rem}\n\tEquation (\\ref{P-FPKE}) is the natural analogue of second-order FPK-equations over Euclidean spaces. 
Indeed, for a stochastic equation on $\\mathbb{R}^d$\n\t\t\\begin{equation}\\label{StochEq}\n\t\tdX_t = b(t,X_t)dt+\\sigma(t,X_t)dW_t,\n\t\t\\end{equation}\n\t by Itô's formula, the corresponding linear second-order equation for measures in distributional form is \n\t$$\\partial_t\\mu_t = \\big(\\mathcal{L}^{(2)}_t\\big)^*\\mu_t$$\n\twith $$\\mathcal{L}^{(2)}_tf = \\nabla f\\cdot b_t+\\frac{1}{2}\\big\\langle\\sigma_t,Hess(f)\\sigma_t\\big\\rangle_{\\mathbb{R}^d},$$where $Hess(f)$ denotes the usual Euclidean Hessian matrix of $f \\in C^2(\\mathbb{R}^d)$. In this spirit, it seems natural to consider (\\ref{SNLFPKE}) as a stochastic equation with state space $\\mathcal{SP}$ instead of $\\mathbb{R}^d$ as for (\\ref{StochEq}) and (\\ref{P-FPKE}) as the corresponding linear Fokker-Planck-type equation on $\\mathcal{SP}$.\n\n\\end{rem}\n By the above derivation, for any subprobability solution process $(\\mu_t)_{t \\leq T}$ to (\\ref{SNLFPKE}) the curve $(\\Gamma_t)_{t \\leq T}$, $\\Gamma_t := \\mathbb{P}\\circ \\mu_t^{-1}$ in $\\mathcal{P}(\\mathcal{SP})$ solves (\\ref{P-FPKE}) in the sense of the following definition.\n\\begin{dfn}\n\tA weakly continuous curve $(\\Gamma_t)_{t \\leq T} \\subseteq \\mathcal{P}(\\mathcal{SP})$ is a \\textit{solution to (\\ref{P-FPKE})}, if the integrability condition\n\t\t\\begin{align} \\label{2_int_SP-FPKE}\n\t\t\\int_0^T \\int_{\\mathcal{SP}}||b(t,\\mu)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^d;\\mu)}+||a(t,\\mu)||_{L^1(\\mathbb{R}^d,\\mathbb{R}^{d^2};\\mu)}+ ||\\sigma(t,\\mu)||^2_{L^2(\\mathbb{R}^d,\\mathbb{R}^{d\\times d_1};\\mu)}d\\Gamma_t(\\mu)dt < \\infty\n\t\\end{align}\n\t is fulfilled and for each $F \\in \\mathcal{F}C^2_b(\\mathcal{H})$, (\\ref{P-FPKE eq}) holds for each $t \\in [0,T]$.\n\\end{dfn}\n\\subsubsection*{Transferring (\\ref{SNLFPKE}) and (\\ref{P-FPKE}) to $\\ell^2$}In analogy with the deterministic case, we use the global chart $H: \\mathcal{SP}\\to \\ell^2$ to introduce auxiliary equations on $\\ell^2$ and the space of 
measures on $\\ell^2$, respectively, as follows. Again, we use the notation\n$$A_t:= \\bigg\\{\\mu \\in \\mathcal{SP}: \\int_{\\mathbb{R}^d}|a_{ij}(t,\\mu,x)|+|b_i(t,\\mu,x)|d\\mu(x) < \\infty\\,\\,\\forall \\,1\\leq i,j \\leq d\\bigg\\}, \\quad t \\in [0,T].$$\nFor $i,j \\geq 1$, $\\alpha \\leq d_1$, define the measurable coefficients $B_i$ for those $(t,\\mu)$ with $\\mu \\in A_t$, and $\\Sigma^{\\alpha}_i$ and $A_{ij}$ on $[0,T]\\times \\mathcal{SP}$ by\n\\begin{align*}\nB_i(t,\\mu) &:= \\int_{\\mathbb{R}^d}\\mathcal{L}_{t,\\mu}h_i(x)d\\mu(x), \\quad t \\in [0,T],\\, \\mu \\in A_t,\\\\\n\\Sigma^{\\alpha}_i(t,\\mu) &:= \\int_{\\mathbb{R}^d}\\sigma^{\\alpha}(t,\\mu,x)\\cdot \\nabla h_i(x)d\\mu(x), \\\\ \\Sigma_i(t,\\mu) &:= \\big( \\Sigma^{\\alpha}_i(t,\\mu)\\big)_{\\alpha \\leq d_1} , \\\\ A_{ij}(t,\\mu) &:= \\big \\langle \\Sigma_i, \\Sigma_j \\big \\rangle_{d_1} (t,\\mu),\n\\end{align*}\nand set \n$$B := (B_i)_{i \\geq 1},\\, \\Sigma := (\\Sigma^{\\alpha}_i)_{\\alpha \\leq d_1, i \\geq 1}, \\, A := (A_{ij})_{i,j \\geq 1}.$$ Now, transferring to $\\ell^2$, define $\\bar{B}, \\bar{\\Sigma}$ and $\\bar{A}_{ij}$ on $[0,T] \\times \\ell^2$ component-wise via\n\\begin{align*}\n\\bar{B}_i(t,z) := \n\\begin{cases}\nB_i(t,H^{-1}(z))&, z \\in H(A_t) \\\\\n0 &, \\text{else,}\n\\end{cases}\n\\end{align*}\n\\begin{align*}\n\\bar{\\Sigma}^{\\alpha}_i(t,z) := \n\\begin{cases}\n\\Sigma_i^{\\alpha}(t,H^{-1}(z))&, z \\in H(\\mathcal{SP}) \\\\\n0 &, z \\in \\ell^2 \\backslash H(\\mathcal{SP}),\n\\end{cases}\n\\end{align*}\n$$\\bar{\\Sigma}_i(t,z) := \\big(\\bar{\\Sigma}^{\\alpha}_i(t,z)\\big)_{\\alpha \\leq d_1},$$\n$$\\bar{A}_{ij}(t,z) := \\big \\langle\\bar{\\Sigma}_i,\\bar{\\Sigma}_j\\big \\rangle_{d_1}(t,z).$$\n$\\bar{B}$ and $\\bar{\\Sigma}^{\\alpha}$ are $\\ell^2$-valued, since for $z = H(\\mu)$\n$$|\\bar{B}_i(t,z)| \\leq \\int_{\\mathbb{R}^d}|\\mathcal{L}_{t,\\mu}h_i(x)|d\\mu(x) \\leq C2^{-i},$$\nwhere $C = C(a,b,d)$ is a finite constant independent of $t,z$ and $i \\geq 
1$. A similar argument is valid for each $\\bar{\\Sigma}^{\\alpha}$. Each $\\bar{B}_i$ and $\\bar{\\Sigma}^{\\alpha}_i$ is product-measurable with respect to the $\\ell^2$-topology due to the measurability of $B$ and $\\Sigma^{\\alpha}$. Reminiscent of (\\ref{Rinfty-CE}) from the previous section, we associate to (\\ref{P-FPKE}) the FPK-equation on $\\ell^2$\n\\begin{equation}\\label{ell2-FPKE}\\tag{$\\ell^2$-FPK}\n\\partial_t \\bar{\\Gamma}_t = -\\bar{\\nabla}\\cdot(\\bar{B}(t,z)\\bar{\\Gamma}_t)+\\partial^2_{ij}(\\bar{A}_{ij}(t,z)\\bar{\\Gamma}_t),\n\\end{equation}which we understand in the sense of the following definition, with $\\bar{\\nabla}$ as in (\\ref{Def nabla gradient}). Subsequently, we denote by $\\mathcal{F}C^2_b(\\ell^2)$ the set of all maps $\\bar{F}: \\ell^2 \\to \\mathbb{R}$ of type $\\bar{F} = f \\circ \\pi_n$ for $n \\geq 1$ and $f \\in C^2_b(\\mathbb{R}^n)$. Also, set\n$$D^2\\bar{F}_{ij} := \n\\begin{cases}\n(\\partial^2_{ij}f) \\circ \\pi_n&, i,j \\leq n \\\\\n0&, \\text{ else}.\n\\end{cases}$$Consequently, both summands in (\\ref{14}) below contain only finitely many non-trivial terms. \n\\begin{dfn}\\label{Def sol ell2-FPKE}\n\tA weakly continuous curve $(\\bar{\\Gamma}_t)_{t \\leq T}\\subseteq \\mathcal{P}(\\ell^2)$ is a \\textit{solution to} (\\ref{ell2-FPKE}), if it fulfills the integrability condition\n\t\t\\begin{equation}\\label{2_global_int_l2-FPKE}\n\t\t\\int_0^T\\int_{\\ell^2}|\\bar{B}_i(t,z)| + |\\bar{A}_{ij}(t,z)|d\\bar{\\Gamma}_t(z)dt < \\infty,\\quad \\forall \\,i,j \\geq 1,\n\t\\end{equation}\n\tand for any $\\bar{F} \\in \\mathcal{F}C^2_b(\\ell^2)$, $\\bar{F} = f\\circ \\pi_n$,\n\t\\begin{equation}\\label{14}\n\t\\int_{\\ell^2}\\bar{F}(z)d\\bar{\\Gamma}_t(z) = \\int_{\\ell^2}\\bar{F}(z)d\\bar{\\Gamma}_0(z)+\\int_0^t \\int_{\\ell^2}\\bar{\\nabla} \\bar{F}(z) \\cdot \\bar{B}(s,z)+\\frac{1}{2}D^2\\bar{F}(z):\\bar{A}(s,z)d\\bar{\\Gamma}_s(z)ds\n\t\\end{equation}holds for each $t \\leq T$. 
\n\\end{dfn}\n\n\n\\subsection{Main Result: Stochastic case}\nThe main result of this section is the following superposition principle for solutions to (\\ref{SNLFPKE}) and (\\ref{P-FPKE}), which generalizes Theorem \\ref{main thm det case} to stochastically perturbed equations.\n\\begin{theorem}\\label{main thm stoch case}\nLet $\\sigma$ be bounded on $[0,T]\\times \\mathcal{SP}\\times \\mathbb{R}^d$ and let $(\\Gamma_t)_{t \\leq T}$ be a weakly continuous solution to (\\ref{P-FPKE}). Then, there exists a complete filtered probability space $(\\Omega,\\mathcal{F}, (\\mathcal{F}_t)_{t \\leq T}, \\mathbb{P})$, an adapted $d_1$-dimensional Wiener process $W = (W_t)_{t \\leq T}$ and an $\\mathcal{SP}$-valued adapted vaguely continuous process $\\mu=(\\mu_t)_{t \\leq T}$ such that $(\\mu,W)$ solves (\\ref{SNLFPKE}) and\n\t$$\\mathbb{P}\\circ \\mu_t^{-1} = \\Gamma_t$$\n\tholds for each $t \\in [0,T]$.\\\\\n\tMoreover, if $\\Gamma_0$ is concentrated on $\\mathcal{P}$, i.e. $\\Gamma_0(\\mathcal{P}) = 1$, then the paths $t \\mapsto \\mu_t(\\omega)$ are $\\mathcal{P}$-valued for $\\mathbb{P}$-a.e. $\\omega \\in \\Omega$ and hence even weakly continuous.\n\\end{theorem}\nAs in the proof of Theorem \\ref{main thm det case}, we proceed in three steps. Since parts of the proof are technically more involved than in the deterministic case, we first present the ingredients of each step and afterwards state the proof of Theorem \\ref{main thm stoch case} as a corollary.\\\\\n\\\\\n\\textbf{Step 1: From (\\ref{P-FPKE}) to (\\ref{ell2-FPKE}):} \n\\begin{lem}\\label{Lem of Step1 stochCase}\n\tFor any solution $(\\Gamma_t)_{t \\leq T}$ to (\\ref{P-FPKE}), the curve $\\bar{\\Gamma}_t = \\Gamma_t \\circ H^{-1}$ is a solution to (\\ref{ell2-FPKE}).\n\\end{lem}\n\\begin{proof}\nClearly, $t \\mapsto \\bar{\\Gamma}_t$ is a weakly continuous curve in $\\mathcal{P}(\\ell^2)$ due to the continuity of $H: \\mathcal{SP} \\to \\ell^2$. 
\\eqref{2_global_int_l2-FPKE} holds, since $t \\mapsto \\Gamma_t$ fulfills \\eqref{2_int_SP-FPKE} and since $\\sigma$ is bounded. Moreover, we have for $s,t \\leq T$, $\\bar{F} = f \\circ \\pi_n \\in \\mathcal{F}C^2_b(\\ell^2)$ and $F: \\mu \\mapsto f\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)$\n\\begin{align*}\n& \\int_{\\ell^2}\\bar{\\nabla} \\bar{F}(z) \\cdot \\bar{B}(s,z)+\\frac{1}{2}D^2\\bar{F}(z):\\bar{A}(s,z)d\\bar{\\Gamma}_s(z) \\\\&= \\int_{\\mathcal{SP}}\\sum_{k=1}^n(\\partial_kf)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)B_k(s,\\mu) + \\sum_{\\alpha=1}^{d_1} \\frac{1}{2}\\sum_{k,l=1}^n(\\partial_{kl}f)\\big(\\mu(h_1),\\dots,\\mu(h_n)\\big)\\Sigma_k^{\\alpha}(s,\\mu)\\Sigma_l^{\\alpha}(s,\\mu)d\\Gamma_s(\\mu) \\\\&=\n\\int_{\\mathcal{SP}}\\big \\langle \\nabla^{\\mathcal{SP}}F(\\mu),b(s,\\mu)+a(s,\\mu)\\nabla \\big \\rangle_{L^2(\\mu)}+ \\frac{1}{2}\\sum_{\\alpha=1}^{d_1}Hess(F)\\big(\\sigma^{\\alpha}(s,\\mu), \\sigma^{\\alpha}(s,\\mu)\\big)d \\Gamma_s(\\mu)\n\\end{align*}and likewise\n\\begin{equation*}\n\\int_{\\ell^2}\\bar{F}(z)d\\bar{\\Gamma}_t = \\int_{\\mathcal{SP}}F(\\mu)d\\Gamma_t.\n\\end{equation*}Comparing with (\\ref{P-FPKE eq}), the statement follows.\n\\end{proof}\n\t\n\\textbf{Step 2: From (\\ref{ell2-FPKE}) to the martingale problem ($\\ell^2$-MGP):} We introduce a martingale problem on $\\ell^2$, which is related to (\\ref{ell2-FPKE}) in the sense of Remark \\ref{Rem easy connect stoch ell2} below and is, roughly speaking, the stochastic analogue to (\\ref{Rinfty-ODE}) from the previous section. 
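For orientation, we briefly recall the classical Euclidean counterpart of this notion, which the definition below transfers to $\\ell^2$ with $\\mathcal{F}C^2_b(\\ell^2)$ as the class of test functions: for the stochastic equation (\\ref{StochEq}) on $\\mathbb{R}^d$ with operator $\\mathcal{L}^{(2)}_t$ as in the remark of the previous subsection, a measure $Q \\in \\mathcal{P}(C_T\\mathbb{R}^d)$ solves the associated martingale problem, provided\n\\begin{equation*}\nf(\\omega_t) - \\int_0^t \\mathcal{L}^{(2)}_sf(\\omega_s)ds, \\quad f \\in C^2_b(\\mathbb{R}^d),\\, \\omega \\in C_T\\mathbb{R}^d,\n\\end{equation*}is a $Q$-martingale with respect to the natural filtration on $C_T\\mathbb{R}^d$.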
Recall the notation $e_t$ for the projection $e_t: C_T\\ell^2 \\to \\ell^2$, $e_t: \\gamma \\mapsto \\gamma_t$ for $t \\leq T$.\n\\begin{dfn}\\label{Def ell2-MGP sol}\n\tA measure $\\bar{Q} \\in \\mathcal{P}(C_T\\ell^2)$ is a \\textit{solution to the $\\ell^2$-martingale problem ($\\ell^2$-MGP)}, provided\n\t\t\\begin{equation}\n\t\t\\int_{C_T\\ell^2}\\int_0^T|\\bar{B}_i(t,e_t)|+|\\bar{A}_{ij}(t,e_t)|dtd\\bar{Q} < \\infty,\\quad i,j \\geq 1,\n\t\\end{equation}\n\tand\n\t\\begin{equation}\\label{15}\n\t\\bar{F}\\circ e_t - \\int_0^t \\bar{\\nabla} \\bar{F} \\circ e_s \\cdot \\bar{B}(s,e_s)+\\frac{1}{2}D^2\\bar{F}\\circ e_s : \\bar{A}(s,e_s)ds\n\t\\end{equation}is a $\\bar{Q}$-martingale on $C_T\\ell^2$ with respect to the natural filtration on $C_T\\ell^2$ for any $\\bar{F} \\in \\mathcal{F}C^2_b(\\ell^2)$.\n\\end{dfn}\n\\begin{rem}\\label{Rem easy connect stoch ell2}\n\tBy construction, any such solution $\\bar{Q}$ induces a weakly continuous solution $(\\bar{\\Gamma}_t)_{t \\leq T}$ to (\\ref{ell2-FPKE}) via $\\bar{\\Gamma}_t := \\bar{Q} \\circ e_t^{-1}$. Indeed, this is readily seen by integrating (\\ref{15}) with respect to $\\bar{Q}$ and applying Fubini's theorem.\n\\end{rem}\nIn view of Proposition \\ref{Prop SPprincip ell2} below, we extend the coefficients $\\bar{B}_i, \\bar{\\Sigma}^{\\alpha}_i$ (and hence also $\\bar{A}_{ij}$) to $\\mathbb{R}^{\\infty}$ via\n$$\\bar{B}_i := 0 =: \\bar{\\Sigma}^{\\alpha}_i \\text{ on }[0,T]\\times \\mathbb{R}^{\\infty} \\backslash \\ell^2.$$\nWe still use the notation $\\bar{B}$, $\\bar{\\Sigma}^{\\alpha}$ and $\\bar{A}$ and note that they are $\\mathcal{B}([0,T])\\otimes \\mathcal{B}(\\mathbb{R}^{\\infty})/\\mathcal{B}(\\mathbb{R}^{\\infty})$-measurable due to Remark \\ref{Rem meas P SP} below. 
Due to the same remark, we may regard any solution $(\\bar{\\Gamma}_t)_{t \\leq T}$ to (\\ref{ell2-FPKE}) as a solution to a FPK-equation on $\\mathbb{R}^{\\infty}$ by considering (\\ref{ell2-FPKE}) with the extended coefficients and test functions $\\bar{F} \\in \\mathcal{F}C^2_b(\\ell^2)$ extended to $\\mathbb{R}^{\\infty}$ by considering $\\pi_n$ on $\\mathbb{R}^{\\infty}$ instead of $\\ell^2$. Similarly, the formulation of the martingale problem ($\\ell^2$-MGP) as in Definition \\ref{Def ell2-MGP sol} extends to $\\mathbb{R}^{\\infty}$ in the sense that a measure $\\bar{Q} \\in \\mathcal{P}(C_T\\mathbb{R}^{\\infty})$ is understood as a solution, provided the process (\\ref{15}) is a $\\bar{Q}$-martingale on $C_T\\mathbb{R}^{\\infty}$ with respect to the natural filtration for each $\\bar{F} = f \\circ \\pi_n: \\mathbb{R}^{\\infty} \\to \\mathbb{R}$ as above.\n\\begin{rem}\\label{Rem meas P SP}\n\tWe recall that $\\ell^2 \\in \\mathcal{B}(\\mathbb{R}^{\\infty})$ and $\\mathcal{B}(\\ell^2) = \\mathcal{B}(\\mathbb{R}^{\\infty})_{\\upharpoonright \\ell^2}$. In particular, any probability measure $\\bar{\\Gamma} \\in \\mathcal{P}(\\ell^2)$ uniquely extends to a Borel probability measure on $\\mathbb{R}^{\\infty}$ via $\\bar{\\Gamma}(A) := \\bar{\\Gamma}(A \\cap \\ell^2)$, $A \\in \\mathcal{B}(\\mathbb{R}^{\\infty})$.\n\\end{rem}\n We shall need the following superposition principle from \\cite{TrevisanPhD}, which lifts a solution to a FPK-equation on $\\mathbb{R}^{\\infty}$ to a solution of the associated martingale problem. Note that in \\cite{TrevisanPhD}, the author assumes an integrability condition of order $p>1$ instead of $p=1$ as in \\eqref{2_global_int_l2-FPKE} in order to essentially reduce the proof to the corresponding finite-dimensional result, see \\cite[Thm.2.14]{TrevisanPhD}, which requires such a higher order integrability. 
However, since the latter result was extended to the case of an $L^1$-integrability condition by the same author \\cite[Thm.2.5]{trevisan2016}, it is easy to see that also the infinite-dimensional result \\cite[Thm.7.1]{TrevisanPhD} holds for solutions with $L^1$-integrability as in \\eqref{2_global_int_l2-FPKE}.\n\n\\begin{prop}\\label{Prop SPprincip ell2}\n\t[Superposition principle on $\\mathbb{R}^{\\infty}$, Thm.7.1 \\cite{TrevisanPhD}] For any weakly continuous solution $(\\bar{\\Gamma}_t)_{t \\leq T} \\subseteq \\mathcal{P}(\\mathbb{R}^{\\infty})$ to the $\\mathbb{R}^{\\infty}$-extended version of (\\ref{ell2-FPKE}), there exists $\\bar{Q} \\in \\mathcal{P}(C_T\\mathbb{R}^{\\infty})$, which solves the $\\mathbb{R}^{\\infty}$-extended version of ($\\ell^2$-MGP) such that $\\bar{Q}\\circ e_t^{-1} = \\bar{\\Gamma}_t$ for each $t \\in [0,T]$.\n\\end{prop}\nMoreover, we have the following consequence for the solutions we are interested in. Note that paths $t \\mapsto z_t \\in H(\\mathcal{SP})$ are continuous with respect to the product topology if and only if they are $\\ell^2$-continuous. Hence, we may use the notation $C_TH(\\mathcal{SP})$ unambiguously and consider it as a subset of either $C_T\\mathbb{R}^{\\infty}$ or $C_T\\ell^2$. Since $H(\\mathcal{SP}) \\subseteq \\ell^2$ is closed even with respect to the product topology, $C_TH(\\mathcal{SP})$ belongs to $\\mathcal{B}(C_T\\ell^2)$ and $\\mathcal{B}(C_T\\mathbb{R}^{\\infty})$. \n\\begin{lem}\n\tIf in the situation of the previous proposition each $\\bar{\\Gamma}_t$ is concentrated on the Borel set $H(\\mathcal{SP}) \\subseteq \\mathbb{R}^{\\infty}$, then $\\bar{Q}$ is concentrated on continuous curves in $H(\\mathcal{SP})$. 
In particular, in this case $\\bar{Q}$ may be regarded as an element of $\\mathcal{P}(C_T\\ell^2)$ and a solution to the martingale problem ($\\ell^2$-MGP) as in Definition \\ref{Def ell2-MGP sol}.\n\\end{lem}\n\\begin{proof}\n\tThe closedness of $H(\\mathcal{SP}) \\subseteq \\mathbb{R}^{\\infty}$ yields\n\t$$\\bar{Q}(C_TH(\\mathcal{SP})) = \\bar{Q}\\bigg(\\underset{q \\in [0,T]\\cap \\mathbb{Q}}{\\bigcap}\\{e_q \\in H(\\mathcal{SP})\\}\\bigg) = 1,$$\n\tdue to $\\bar{Q}\\circ e_t^{-1} = \\bar{\\Gamma}_t$ for each $ t \\leq T$. By the observation above this lemma, it follows that\n\t$$\\mathcal{B}(C_T\\ell^2)_{\\upharpoonright C_TH(\\mathcal{SP})} \\subseteq \\mathcal{B}(C_T\\mathbb{R}^{\\infty})_{\\upharpoonright C_TH(\\mathcal{SP})} \\subseteq \\mathcal{B}(C_T\\mathbb{R}^{\\infty})$$and we can therefore consider $\\bar{Q}$ as a probability measure on $\\mathcal{B}(C_T\\ell^2)$ via \n\t$$\\bar{Q}(A) := \\bar{Q}\\big(A\\cap C_TH(\\mathcal{SP})\\big),\\,\\, A \\in \\mathcal{B}(C_T\\ell^2) $$with mass on $C_TH(\\mathcal{SP})$. It is clear that this measure fulfills Definition \\ref{Def ell2-MGP sol}.\n\\end{proof}\nHence, subsequently we may regard $\\bar{Q}$ as in Proposition \\ref{Prop SPprincip ell2} as a solution to ($\\ell^2$-MGP) on either $\\mathbb{R}^{\\infty}$ or $\\ell^2$ without distinguishing the notation. Recall the notation $p_i: \\ell^2 \\to \\mathbb{R}$, $p_i(z) = z_i.$ \n\\begin{lem}\\label{Lem martingale and covar}\n\tLet $\\bar{Q}$ be a solution to the martingale problem ($\\ell^2$-MGP) on $\\ell^2$. Then, for any $i \\geq 1$, the process\n\t\\begin{equation}\\label{16}\n\tM_i(t) := p_i \\circ e_t - p_i \\circ e_0 - \\int_0^t \\bar{B}_i(s,e_s)ds\n\t\\end{equation}\n\tis a real-valued, continuous $\\bar{Q}$-martingale on $C_T\\ell^2$ with respect to the canonical filtration. The covariation $\\langle \\langle M_i, M_j \\rangle \\rangle $ of $M_i$ and $M_j$ is $\\bar{Q}$-a.s. 
given by\n\t\n\t\\begin{equation}\\label{17}\n\t\\langle\\langle M_i, M_j \\rangle\\rangle_t = \\int_0^t \\bar{A}_{ij}(s,e_s)ds, \\,\\, t\\in [0,T].\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tFor $i,j \\geq 1$, let $n \\geq \\text{max}(i,j)$, consider $p^n_i: \\mathbb{R}^n \\to \\mathbb{R}$, $p^n_i(x) = x_i$ and let\n\t$$\\bar{F}^n_i : \\ell^2 \\to \\mathbb{R}, \\,\\, \\bar{F}^n_i(z) = p^n_i \\circ \\pi_n(z).$$ Note that $\\bar{F}^n_i = p_i$ on $\\ell^2$, independent of $n \\geq \\text{max}(i,j)$. For $k \\geq 1$, introduce the stopping time $\\sigma_k :=\\text{inf}\\{t \\in [0,T]: ||e_t||_{\\ell^2} \\geq k \\}$ with respect to the canonical filtration on $C_T\\ell^2$. Clearly, $\\sigma_k \\nearrow +\\infty$ pointwise. Consider $\\eta_k \\in C^{2}_c(\\mathbb{R}^n)$ such that $\\eta_k(x) = 1$ for $|x| \\leq k+1$.\\\\ Since $\\partial_{k}p^n_i = \\delta_{ki}$ and $\\partial_{kl}p^n_i = 0$ for $k,l \\leq n$, we have\n\t$$M_i(t) = \\bar{F}^n_i\\circ e_t - \\int_0^t \\bar{\\nabla} \\bar{F}^n_i \\circ e_s \\cdot \\bar{B}(s,e_s)+\\frac{1}{2}D^2\\bar{F}^n_i\\circ e_s : \\bar{A}(s,e_s)ds$$ and, setting $\\bar{F}^{n,k}_i := (\\eta_kp^n_i) \\circ \\pi_n \\in \\mathcal{F}C^2_b(\\ell^2)$, \n\t$$M_i(\\sigma_k \\wedge t) = \\bar{F}^{n,k}_i\\circ e_{t\\wedge \\sigma_k} - \\int_0^{t\\wedge \\sigma_k} \\bar{\\nabla} \\bar{F}^{n,k}_i \\circ e_s \\cdot \\bar{B}(s,e_s)+\\frac{1}{2}D^2\\bar{F}^{n,k}_i\\circ e_s : \\bar{A}(s,e_s)ds.$$Since the latter is a continuous $\\bar{Q}$-martingale for each $k \\geq 1$, it follows that $M_i$ is a continuous local $\\bar{Q}$-martingale. 
Concerning (\\ref{17}), it suffices to prove that for any $\\bar{F} \\in \\mathcal{F}C^2_b(\\ell^2)$, $\\bar{F} = f \\circ\\pi_n$, we have\n\t \\begin{equation}\\label{Covar aux}\n\t\\langle \\langle M^{\\bar{F}} \\rangle \\rangle_t = \\int_0^t \\big \\langle \\bar{\\nabla}\\bar{F}(e_s), \\bar{A}(s,e_s)\\bar{\\nabla}\\bar{F}(e_s) \\big \\rangle_{\\ell^2}ds, \n\t\\end{equation}with\n\t$$M^{\\bar{F}}_t := \\bar{F}\\circ e_t - \\int_0^t \\bar{\\nabla}\\bar{F}(e_s) \\cdot \\bar{B}(s,e_s) + \\frac{1}{2}D^2\\bar{F}(e_s): \\bar{A}(s,e_s)ds.$$Indeed, from here (\\ref{17}) follows by considering (\\ref{Covar aux}) for $\\bar{F}^{n,k}_i$, localization of the local martingale $M_i$ and polarization for the quadratic (co-)variation. Concerning (\\ref{Covar aux}), it is standard (cf. \\cite[p.73,74]{stroock_1987}) to use Itô's product rule to obtain that\n\t$$t\\mapsto (M^{\\bar{F}}_t)^2 - \\int_0^t \\bar{\\mathbf{L}}_s^{(2)}\\bar{F}^2(e_s)-2\\bar{F}(e_s)\\bar{\\mathbf{L}}_s^{(2)}\\bar{F}(e_s)ds $$\n\tis a continuous $\\bar{Q}$-martingale on $C_T\\ell^2$, where we denote by $\\bar{\\mathbf{L}}_s^{(2)}\\bar{F}(e_s)$ the integrand of the integral term in the definition of $M^{\\bar{F}}$. A straightforward calculation yields\n\t$$\\int_0^t \\bar{\\mathbf{L}}_s^{(2)}\\bar{F}^2(e_s)-2\\bar{F}(e_s)\\bar{\\mathbf{L}}_s^{(2)}\\bar{F}(e_s)ds = \\int_0^t \\big \\langle \\bar{\\nabla}\\bar{F}(e_s), \\bar{A}(s,e_s)\\bar{\\nabla}\\bar{F}(e_s) \\big \\rangle_{\\ell^2}ds,$$\n\twhich completes the proof.\n\\end{proof}\nWe summarize the results of this step in the following proposition.\n\\begin{prop}\\label{Prop summary Step2}\n\tLet $(\\bar{\\Gamma}_t)_{t \\leq T}$ be a weakly continuous solution to (\\ref{ell2-FPKE}) such that $\\bar{\\Gamma}_t(H(\\mathcal{SP})) = 1$ for each $t \\in [0,T]$. 
Then, there exists a solution $\\bar{Q} \\in \\mathcal{P}(C_T\\ell^2)$ to the martingale problem ($\\ell^2$-MGP) such that $\\bar{Q}$ is concentrated on $C_TH(\\mathcal{SP})$ with $\\bar{Q}\\circ e_t^{-1} = \\bar{\\Gamma}_t$ for each $t \\in [0,T]$. Further, the results of Lemma \\ref{Lem martingale and covar} apply to $\\bar{Q}$. \n\\end{prop}\n\n\n\\textbf{Step 3: From ($\\ell^2$-MGP) to (\\ref{SNLFPKE}):} For a given solution $\\bar{Q} \\in \\mathcal{P}(C_T\\ell^2)$ to ($\\ell^2$-MGP), set\n$$\\mathcal{C}:= \\mathcal{B}(C_T\\ell^2) \\bigvee \\mathcal{N}_{\\bar{Q}}$$\nand\n$$\\mathcal{C}_t := \\sigma(e_s, s\\leq t) \\bigvee \\mathcal{N}_{\\bar{Q}}$$\nfor $t \\leq T$, where $\\mathcal{N}_{\\bar{Q}}$ denotes the collection of all subsets of sets $N \\in \\mathcal{B}(C_T\\ell^2)$ with $\\bar{Q}(N) = 0$. Of course, $\\mathcal{C}$ and $\\mathcal{C}_t$ depend on $\\bar{Q}$, but we suppress this dependence in the notation. Without further mentioning, we understand such $\\bar{Q}$ as extended to $\\mathcal{C}$ in the canonical way. Then, $(C_T\\ell^2,\\mathcal{C},(\\mathcal{C}_t)_{t \\leq T}, \\bar{Q})$ is a complete filtered probability space. Clearly, $(t,\\gamma) \\mapsto \\bar{\\Sigma}(t,e_t(\\gamma))$ is $\\mathcal{C}_t$-progressively measurable from $[0,T]\\times C_T\\ell^2$ to $L(\\mathbb{R}^{d_1},\\ell^2)$, the space of bounded linear operators from $\\mathbb{R}^{d_1}$ to $\\ell^2$.\n\n\\begin{rem}\\label{Rem extend prob space}\n\tWe extend $(C_T\\ell^2,\\mathcal{C},(\\mathcal{C}_t)_{t \\leq T}, \\bar{Q})$ as follows. 
Let $(\\Omega', \\mathcal{F}'', (\\mathcal{F}''_t)_{t \\leq T}, P)$ be a complete filtered probability space with a real-valued $\\mathcal{F}''_t$-Wiener process $\\beta$ on it, define\n\t\\begin{equation*}\n\t\\Omega := C_T\\ell^2 \\otimes \\underset{l \\geq 1}{\\bigotimes}\\Omega', \\,\\, \\mathcal{F}' := \\mathcal{C}\\otimes \\underset{l \\geq 1}{\\bigotimes}\\mathcal{F}'', \\,\\, \\mathcal{F}_t' := \\mathcal{C}_t \\otimes \\underset{l \\geq 1}{\\bigotimes}\\mathcal{F}''_t, \\,\\, \\mathbb{P}' := \\bar{Q}\\otimes \\underset{l \\geq 1}{\\bigotimes}P,\n\t\\end{equation*}let $\\mathcal{F}$ and $\\mathcal{F}_t$ be the $\\mathbb{P}'$-completion of $\\mathcal{F}'$ and $\\mathcal{F}_t'$, respectively, and denote the canonical extension of $\\mathbb{P}'$ to $\\mathcal{F}$ by $\\mathbb{P}$. Further, we denote the Wiener process $\\beta$ on the $i$-th copy of $\\Omega'$ by $\\beta_i$ and extend each $\\beta_i$ to $\\Omega$ by $\\beta_i(\\omega) := \\beta_i(\\omega_i)$ for $\\omega = \\gamma \\times (\\omega_i)_{i \\geq 1} \\in \\Omega$. Similarly, we extend each projection $e_t$ from $C_T\\ell^2$ to $\\Omega$ via $e_t(\\omega) := e_t(\\gamma)$ for $\\omega$ as above, but keep the same notation for this extended process. Obviously, $(e_t)_{t \\leq T}$ is a continuous, $\\mathcal{F}_t$-adapted process on $\\Omega$ and each $\\beta_i$ is an $\\mathcal{F}_t$-Wiener process on $\\Omega$ under $\\mathbb{P}$. Moreover, $(e_t)_{t \\leq T}$ and $(\\beta_i)_{i \\geq 1}$ are independent on $\\Omega$ with respect to $\\mathbb{P}$ by construction. 
Further, it is clear that the process $M_i$ as in (\\ref{16}) is a $\\mathbb{P}$-martingale with respect to $\\mathcal{F}_t$ for each $i \\geq 1$ with covariation as in (\\ref{17}) and that $(t,\\gamma) \\mapsto \\bar{\\Sigma}(t,e_t(\\gamma)) \\in L(\\mathbb{R}^{d_1},\\ell^2)$ is $\\mathcal{F}_t$-progressively measurable on $[0,T]\\times \\Omega$.\n\\end{rem}\n\nFinally, we need the following result, which is a special case of Theorem 2, \\cite{Ondrejat_StochInt_Repr}.\n\\begin{prop}\\label{Prop Step3 Ondrejat appl}\n\tLet $\\bar{Q} \\in \\mathcal{P}(C_T\\ell^2)$ be a solution to the martingale problem ($\\ell^2$-MGP). Then, there exists a complete filtered probability space with an adapted $d_1$-dimensional Wiener process $W = (W^{\\alpha})_{\\alpha \\leq d_1}$ and an $\\ell^2$-valued adapted continuous process $X = (X_t)_{t \\leq T}$ such that the law of $X$ on $C_T\\ell^2$ is $\\bar{Q}$ and for $i \\geq 1$ and $t \\in [0,T]$, we have a.s.\n\t\\begin{equation}\\label{extra2}\n\tp_i \\circ X_t - p_i \\circ X_0 - \\int_0^t \\bar{B}_i(s,X_s)ds = \\sum_{\\alpha =1}^{d_1}\\int_0^t \\bar{\\Sigma}^{\\alpha}_i(s,X_s)dW_s^{\\alpha}\n\t\\end{equation}and the exceptional set can be chosen independent of $t$ and $i$.\n\\end{prop}\nTo see this, consider Theorem 2 of \\cite{Ondrejat_StochInt_Repr} with $X = \\ell^2$, $U_0 = \\mathbb{R}^{d_1}$, $D = \\{p_i, i \\geq 1\\}$, the processes $M(p_i)$ given by $M_i$ as in (\\ref{16}) on the probability space $\\Omega$ of Remark \\ref{Rem extend prob space} and \n$$g_s = \\bar{\\Sigma}(s,e_s) \\in L(\\mathbb{R}^{d_1}, \\ell^2).$$ These choices fulfill all requirements of \\cite{Ondrejat_StochInt_Repr}. In this case, the $\\ell^2$-valued process $X$ is given by $X_t = e_t$ on $\\Omega$. 
Since all terms in (\\ref{extra2}) are continuous in $t$, the exceptional set may indeed be chosen independently of $t \\in [0,T]$ and $i \\geq 1$.\\\\ \n\\\\\nThe proof of Theorem \\ref{main thm stoch case} now follows from the above three-step scheme as follows.\\\\\n\\\\\n\\textit{Proof of Theorem \\ref{main thm stoch case}:} Let $(\\Gamma_t)_{t \\leq T}\\subseteq \\mathcal{P}(\\mathcal{SP})$ be a weakly continuous solution to (\\ref{P-FPKE}). By Lemma \\ref{Lem of Step1 stochCase} of Step 1, the weakly continuous curve of Borel probability measures on $\\ell^2$\n$$\\bar{\\Gamma}_t := \\Gamma_t \\circ H^{-1}, \\,\\, t\\in [0,T]$$\nsolves (\\ref{ell2-FPKE}) and each $\\bar{\\Gamma}_t$ is concentrated on $H(\\mathcal{SP})$. By Proposition \\ref{Prop summary Step2} of Step 2, there exists a solution $\\bar{Q} \\in \\mathcal{P}(C_T\\ell^2)$ to the martingale problem ($\\ell^2$-MGP), which is concentrated on $C_TH(\\mathcal{SP})$ such that\n$$\\bar{Q} \\circ e_t^{-1} = \\bar{\\Gamma}_t, \\,\\, t\\in [0,T].$$\nFurther, Lemma \\ref{Lem martingale and covar} applies to $\\bar{Q}$. By Lemma \\ref{Lem martingale and covar} and Proposition \\ref{Prop Step3 Ondrejat appl} of Step 3, there is a $d_1$-dimensional $\\mathcal{F}_t$-adapted Wiener process $W = (W^{\\alpha})_{\\alpha \\leq d_1}$ and an $\\mathcal{F}_t$-adapted process $X$ on some complete filtered probability space $(\\Omega, \\mathcal{F}, (\\mathcal{F}_t)_{t \\leq T}, \\mathbb{P})$, which fulfill (\\ref{extra2}) and $X \\in C_TH(\\mathcal{SP})$ $\\mathbb{P}$-a.s. such that $\\bar{Q}$ is the law of $X$.\\\\\nPossibly redefining $X$ on a $\\mathbb{P}$-negligible set (which preserves (\\ref{extra2}) and its adaptedness, the latter due to the completeness of the underlying filtered probability space), we may assume $X_t(\\omega) = H(\\mu_t(\\omega))$ for some $\\mu_t(\\omega) \\in \\mathcal{SP}$ for each $(t,\\omega) \\in [0,T]\\times \\Omega$. 
The continuity of $H^{-1}: H(\\mathcal{SP}) \\to \\mathcal{SP}$ and $t \\mapsto X_t(\\omega)$ implies vague continuity of \n\\begin{equation}\\label{19}\nt \\mapsto \\mu_t(\\omega) = H^{-1} \\circ X_t(\\omega)\n\\end{equation}\nfor each $\\omega \\in \\Omega$ and $\\mathcal{F}_t$-adaptedness of the $\\mathcal{SP}$-valued process $(\\mu_t)_{t \\leq T}$. Considering (\\ref{extra2}), $X_t = H(\\mu_t)$ and the definition of $\\bar{B}$ and $\\bar{\\Sigma}^{\\alpha}_i$, we obtain, recalling $p_i(H(\\nu)) = \\nu(h_i)$ for each $\\nu \\in \\mathcal{SP}$,\n\\begin{equation*}\n\\mu_t(h_i)-\\mu_0(h_i)- \\int_0^t B_i(s,\\mu_s)ds = \\sum_{\\alpha = 1}^{d_1}\\int_0^t \\Sigma^{\\alpha}_i(s,\\mu_s)dW^{\\alpha}_s, \\,\\, t \\leq T\n\\end{equation*}$\\mathbb{P}$-a.s. for each $i \\geq 1$. From here, it follows by Lemma \\ref{Lem aux H} (i) that $(\\mu_t)_{t \\leq T}$ is a solution to (\\ref{SNLFPKE}) as in Definition \\ref{Def sol SNLFPKE}. Further, \n$$\\mathbb{P}\\circ \\mu_t^{-1} = (\\mathbb{P}\\circ X_t^{-1})\\circ (H^{-1})^{-1} = \\bar{\\Gamma}_t \\circ (H^{-1})^{-1} = \\Gamma_t \\circ H^{-1} \\circ (H^{-1})^{-1} = \\Gamma_t.$$\nIt remains to prove the final assertion of the theorem. To this end, note that $\\Gamma_0(\\mathcal{P}) = 1$ implies $\\bar{\\Gamma}_0(H(\\mathcal{P})) = 1$ and hence $\\mu_0 \\in \\mathcal{P}$ $\\mathbb{P}$-a.s. with $\\mu_0$ as in (\\ref{19}). From here, the assertion follows by Lemma \\ref{Lem consv of mass stochastic}. \\qed\n\\begin{rem}\nThe particular type of noise we consider for \\eqref{SNLFPKE} was partially motivated by \\cite{Coghi2019StochasticNF}, where the natural connection of equations of type \\eqref{SNLFPKE} to interacting particle systems with common noise was investigated. Other types of noise terms may be treated in the future, including a possible extension to infinite-dimensional ones. 
In particular, Proposition \\ref{Prop Step3 Ondrejat appl} via \\cite[Thm.2]{Ondrejat_StochInt_Repr} in the final step of the proof seems capable of such extensions, since the latter is an infinite-dimensional result.\n\\end{rem}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFrom the origin, games in extensive form have been formulated on a tree.\nIn his seminal paper\n\\emph{Extensive Games and the Problem of Information}~\\cite{Kuhn:1953},\nKuhn claimed that ``The use of a geometrical model (\\ldots)\nclarifies the delicate problem of information''. This tells us that the proper handling of information was a strong \nmotivation for Kuhn's extensive games. \nOn the game tree, moves are those vertices that possess alternatives,\nthen moves are partitioned into players moves, themselves partitioned into \ninformation sets (with the constraint that no two moves in an \ninformation set can be on the same play).\nKuhn mentions agents, one agent per information set, \nto ``personalize the interpretation'' but the notion is not central\n(to the point that his definition of perfect recall\n``obviates the use of agents'').\n\nBy contrast, in the so-called Witsenhausen's intrinsic model\n\\cite{Witsenhausen:1971a,Witsenhausen:1975},\nagents play a central role. \nEach agent is equipped with a decision set and a $\\sigma$-field,\nand the same for Nature. Then, Witsenhausen introduces the product set\nand the product $\\sigma$-field. This product set hosts the agents' information subfields. \nThe Witsenhausen's intrinsic model was elaborated \nin the control theory setting, in order to handle how information\nis distributed among agents\nand how it impacts their strategies.\nAlthough not explicitly designed for games, \nWitsenhausen's intrinsic model had, from the start, the potential \nto be adapted to games. 
Indeed, in~\\cite{Witsenhausen:1971a} Witsenhausen \n places his own model in the context of game theory by referring to \nvon~Neumann and Morgenstern~\\cite{vonNeuman-Morgenstern:1947},\nKuhn~\\cite{Kuhn:1953} and Aumann~\\cite{Aumann:1964}.\n\\medskip\n\nIn this paper, we introduce a new representation of games\nthat we call \\emph{games in intrinsic form}. \nGame representations play a key role in their analysis\n(see the illuminating introduction of the book~\\cite{Alos-Ferrer-Ritzberger:2016}),\nand we claim that games in intrinsic form display appealing features.\nIn the philosophy of the tree-based extensive form (Kuhn's view), \nthe temporal ordering is\nhardcoded in the tree structure: one goes from the root to the leaves,\nmaking decisions at the moves, contingent on information, chance and strategies.\nFor Kuhn, the time arrow (tree) comes first; information comes second\n(partition of the move vertices).\nBy contrast, for Witsenhausen, information comes first;\nthe time arrow comes (possibly) second, \nunder a proper causality assumption contingent on the information structure. \n\nNot having a hardcoded temporal ordering makes mathematical representations\nless constrained, hence more general.\nMoreover, Witsenhausen's framework makes representations more intrinsic.\nAs an illustration, let us consider a game where two players play once but at the same\ntime. Formulating it on a tree requires arbitrarily deciding which of the\ntwo plays first. 
This is not the case for games in intrinsic form,\nwhere each player\/agent is equipped with an information subfield\nand strategies that are measurable with respect to the latter;\nwriting the system of two equations that express decisions as \nthe output of strategies leads to a unique outcome,\nwithout having to solve one equation first, and the other second.\n\nThe tree representation of games has its pros and cons.\nOn the one hand, trees are perfect for following, step by step, how a game is played,\nas any strategy profile induces a unique play:\none goes from the root to the leaves, passing from one node\nto the next by an edge that depends on the strategy profile.\nOn the other hand, in games with information,\n information sets are represented as ``unions'' of tree nodes\nthat must satisfy restrictive axioms, and such\n unions do not comply in a natural way with the tree structure,\n which can render the game analysis delicate\n \\cite{Alos-Ferrer-Ritzberger:2016,Bonanno:2004,Brandenburger:2007}.\nBy contrast, the notion of Witsenhausen's intrinsic games (W-games) does not require\nan explicit description of the play temporality, and the intrinsic form replaces\nthe tree structure with a product structure, more amenable to mathematical analysis. 
\nWhile the introduction of the model may seem involved, we argue that the\nresulting structure is a powerful mathematical tool, because there are many\nsituations in which it is easier to reason and discuss with mathematical\nformulas than with trees.\n\nWe illustrate our claim with a proof of Kuhn's celebrated equivalence theorem for games\nin intrinsic form.\nIndeed, as a first step in a broader research program, \n we show that equivalence between mixed and behavioral\n strategies holds under perfect recall for W-games.\nMore precisely, our proof relies on an equivalence between behavioral,\nmixed and a new notion of product-mixed strategies.\nThe latter form a subclass of mixed strategies.\nIn the spirit of~\\cite{Aumann:1964}, \nin a product-mixed strategy, each agent (corresponding to a time index in~\\cite{Aumann:1964}) \ngenerates strategies from a random device that is independent of all the\nother agents. We prove that, under perfect recall for W-games,\nany mixed strategy of a player is not only equivalent to a behavioral strategy,\nbut also to a product-mixed strategy where all the agents under control \nof the player randomly select their pure strategy independently of the other agents.\n\\medskip\n
\nThen, in Sect.~\\ref{Finite_games_in_intrinsic_form}, we propose a formal\ndefinition of games in intrinsic form (W-games), and then discuss three notions of \n``randomization'' of pure strategies --- mixed, product-mixed and behavioral.\nFinally, we derive an analogue of Kuhn's equivalence theorem for games in intrinsic form\nin Sect.~\\ref{Kuhn_Theorem}.\nIn Appendix~\\ref{Background_on_fields_atoms_and_partitions}, \nwe present background material on fields, atoms and partitions,\nas these notions lie at the core of Witsenhausen's intrinsic model\nin the finite case.\nThroughout the paper, we adopt the convention that a player is female (hence using ``she'' and\n``her''), whereas an agent is male (``he'', ``his'').\n\n\n\n\\section{Witsenhausen's intrinsic model (the finite case)}\n\\label{Witsenhausen_intrinsic_model}\n\nIn this paper, we tackle the issue of information in the context of finite games.\nFor this purpose, we will present the so-called intrinsic model of Witsenhausen\n\\cite{Witsenhausen:1975,Carpentier-Chancelier-Cohen-DeLara:2015}\nbut with finite sets rather than with infinite ones \nas in the original exposition.\nWe refer the reader to \nAppendix~\\ref{Background_on_fields_atoms_and_partitions}\nfor background material on fields, atoms and partitions.\n\nIn~\\S\\ref{Finite_Witsenhausen_intrinsic_model},\nwe present the finite version of Witsenhausen's\nintrinsic model, where we highlight the role of the configuration field\nthat contains the information subfields of all agents.\nIn~\\S\\ref{Examples}, we illustrate, on a few examples, the ease with which \none can model information in strategic contexts, \nusing subfields of the configuration field.\nFinally, we present in~\\S\\ref{Solvability_Causality} \nthe notions of solvability and causality. 
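Since all the objects of the finite model are finite sets, its two pillars, the product configuration space and measurability of a strategy with respect to an information subfield, lend themselves to a concrete computational preview. The following Python sketch is our own illustration (the set names are hypothetical, not from the model): it builds a two-agent configuration space and tests measurability as constancy on the atoms of an information partition, in the spirit of Proposition~\ref{pr:measurability} below.

```python
from collections import defaultdict

# Toy configuration space H = Omega x U_a x U_b for two agents "a" and "b"
# (hypothetical names; in the model, H is the product of Nature and decision sets).
Omega = ["w1", "w2"]
U_a, U_b = [0, 1], [0, 1]
H = [(w, ua, ub) for w in Omega for ua in U_a for ub in U_b]

# Information of agent "a": he observes only the state of Nature, so the
# atoms of his information field are the fibers of the first coordinate.
atoms_a = defaultdict(list)
for h in H:
    atoms_a[h[0]].append(h)
partition_a = list(atoms_a.values())

def is_measurable(strategy, partition):
    """In the finite case, a map on H is measurable with respect to an
    information field iff it is constant on each atom of its partition."""
    return all(len({strategy(h) for h in atom}) == 1 for atom in partition)

# A strategy for "a" that depends only on omega is admissible...
print(is_measurable(lambda h: 0 if h[0] == "w1" else 1, partition_a))  # True
# ...whereas one peeking at agent "b"'s decision is not.
print(is_measurable(lambda h: h[2], partition_a))  # False
```

The same atom-based test extends verbatim to any finite number of agents, since an information field is entirely determined by its partition of the configuration space.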
\n\n\n\\subsection{Finite Witsenhausen's intrinsic model (W-model)}\n\\label{Finite_Witsenhausen_intrinsic_model}\n\nWe present the finite version of Witsenhausen's\nintrinsic model, introduced some five decades ago in the control community\n\\cite{Witsenhausen:1971a,Witsenhausen:1975}. \n\n\\begin{definition}(adapted from \\cite{Witsenhausen:1971a,Witsenhausen:1975})\n\nA \\emph{finite W-model} is a collection\n$\\bp{\n\\AGENT,\n\\np{\\Omega, \\tribu{\\NatureField}}, \n\\sequence{\\CONTROL_{\\agent}, \\tribu{\\Control}_{\\agent}}{\\agent \\in \\AGENT}, \n\\sequence{\\tribu{\\Information}_{\\agent}}{\\agent \\in \\AGENT} \n}$, where \n\\begin{itemize}\n\\item\n$\\AGENT$ is a finite set, whose elements are called \\emph{agents};\n\\item \n\\( \\Omega \\) is a finite set which represents all uncertainties;\nany $\\omega \\in \\Omega$ is called a \\emph{state of Nature};\n$\\tribu{\\NatureField}$ is the complete field over~\\( \\Omega \\);\n\\item \nfor any \\( \\agent \\in \\AGENT \\), $\\CONTROL_{\\agent}$ is a finite set, \nthe \\emph{set of decisions} for agent~$\\agent$;\n$\\tribu{\\Control}_{\\agent}$ is the complete field over~$\\CONTROL_{\\agent}$;\n\\item\nfor any \\( \\agent \\in \\AGENT \\), \\( \\tribu{\\Information}_{\\agent} \\)\nis a subfield of the following product field\n\\begin{equation}\n \\tribu{\\Information}_{\\agent} \\subset \n {\\oproduit{\\bigotimes \\limits_{b \\in \\AGENT} \\tribu{\\Control}_{b}}{\\tribu{\\NatureField}}}\n \\eqsepv\n \\forall \\agent \\in \\AGENT \n \\label{eq:information_field_agent}\n\\end{equation}\n\nand is called the \\emph{information field} of the agent~$\\agent$.\n\\end{itemize}\n\\label{de:W-model}\n\\end{definition}\n\n\\begin{subequations}\nThe \\emph{configuration space} is the product space \n(called \\emph{hybrid space} by Witsenhausen, hence the $\\HISTORY$ notation)\n\\begin{equation}\n \\label{eq:HISTORY}\n \\HISTORY = \\produit{ \\prod\\limits_{\\agent \\in \\AGENT} \\CONTROL_{\\agent}}{\\Omega} \n 
\\eqfinp \n\\end{equation}\nAs all fields \\( \\tribu{\\NatureField} \\) and\n\\( \\sequence{\\tribu{\\Control}_{\\agent}}{\\agent \\in \\AGENT} \\) are complete, \nthe product \\emph{configuration field}\n\\begin{equation}\n \\tribu{\\History} =\\oproduit{{\\bigotimes \\limits_{\\agent \\in \\AGENT} \\tribu{\\Control}_{\\agent}}}{\\tribu{\\NatureField}}\n \\label{eq:history_field}\n\\end{equation}\nis also the complete field of~\\( \\HISTORY \\).\nA \\emph{configuration} \\( \\history \\in \\HISTORY \\) is denoted by\n\\begin{equation}\n \\history=\\bp{\\omega, \\sequence{\\control_{\\agent}}{\\agent \\in \\AGENT}}\n \\iff\n \\history_\\emptyset = \\omega\n \\text{ and }\n \\history_{\\agent} =\\control_{\\agent}\n \\eqsepv\n \\forall \\agent \\in \\AGENT\n \\eqfinp\n \\label{eq:history}\n\\end{equation}\n\\end{subequations}\n\nIn lieu of the information field~\\( \\tribu{\\Information}_{\\agent} \\)\nin~\\eqref{eq:information_field_agent},\nit will be convenient to consider the equivalence relation~$\\sim_{\\agent}$,\non the configuration space~$\\HISTORY$, defined in such a way that \nthe equivalence classes \\( \\bracket{\\cdot}_{\\agent} \\subset \\HISTORY \\) \ncoincide with the atoms of~\\( \\tribu{\\Information}_{\\agent} \\),\nthat is, with the elements of the partition\n\\( \\crochet{\\tribu{\\Information}_{\\agent}} \\) in~\\eqref{eq:atom_set}:\n\\begin{equation}\n \\bp{\\forall \\history', \\history'' \\in \\HISTORY} \\quad\n \\history' \\sim_{\\agent} \\history'' \\;\\Leftrightarrow\\;\n \\history'' \\in \\bracket{\\history'}_{\\agent}\\;\\Leftrightarrow\\;\n \\exists G \\in \\crochet{\\tribu{\\Information}_{\\agent}},\n \\, \\{ \\history', \\history'' \\} \\subset G\n \\eqfinp\n \\label{eq:bracket_HISTORY}\n\\end{equation}\nThus defined, the subset \\( \\bracket{\\history}_{\\agent} \\subset \\HISTORY \\)\nis the unique atom~$G$ in \\( \\crochet{\\tribu{\\Information}_{\\agent}} \n\\subset \\tribu{\\Information}_{\\agent} \\)\nthat contains the 
configuration~$\\history$.\n\nWe will need the following equivalent characterization of measurable\nmappings, which is a slight reformulation of~\\cite[Proposition~3.35]{Carpentier-Chancelier-Cohen-DeLara:2015}.\n\n\\begin{proposition}(adapted from \\cite[Proposition~3.35]{Carpentier-Chancelier-Cohen-DeLara:2015})\n \\label{pr:measurability}\n \n Let \\( \\rho : (\\HISTORY,\\tribu{\\History}) \\to ({\\mathbb D},\\tribu{D}) \\) \n be a mapping, \n where ${\\mathbb D}$ is a set and $\\tribu{D}$ is a $\\sigma$-field over~${\\mathbb D}$.\n We suppose that the $\\sigma$-field~$\\tribu{D}$ contains all the singletons.\n Then, for any agent~$\\agent \\in \\AGENT$, the following statements are equivalent:\n \\begin{subequations}\n \\begin{equation}\n \\rho^{-1} (\\tribu{D}) \\subset \\tribu{\\Information}_{\\agent} \n \\eqfinv\n \\end{equation}\n \\begin{equation}\n \\bp{ \\forall \\history', \\history'' \\in \\HISTORY } \\quad\n \\history' \\sim_{\\agent} \\history'' \\implies \n \\rho\\np{\\history'}=\\rho\\np{\\history''}\n \\eqfinv\n \\label{eq:strategy_atoms_rho}\n \\end{equation}\n \\begin{equation}\n \\begin{split}\n \\textrm{the set-valued mapping } \\hat\\rho :\n \\crochet{\\tribu{\\Information}_{\\agent}}\\rightrightarrows{\\mathbb D} \n \\eqsepv \\textrm{defined by} \\\\\n \\hat\\rho\\np{G_{\\agent}}= \\nset{ \\rho\\np{\\history}}%\n { \\history \\in G_{\\agent}}\n \\eqsepv\n \\forall G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}},\n \\textrm{ is a mapping.}\n \\end{split}\n \\label{eq:measurability-and-set-valued-map}\n \\end{equation}\n \\end{subequations}\n In any of these equivalent cases, we say that \n the mapping~\\( \\rho \\) is \\emph{$\\tribu{\\Information}_{\\agent}$-measurable},\n and, for all $G_{\\agent}\\in \\crochet{\\tribu{\\Information}_{\\agent}}$, we denote by \\( \\rho\\np{G_{\\agent}} \\)\n the unique element of~${\\mathbb D}$ in $\\hat\\rho\\np{G_{\\agent}}$, that is, \n \n \n \n \\begin{equation}\n \\bp{ \\forall r \\in 
\\rho\\np{\\HISTORY}\n \\eqsepv \\forall G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} }\\quad\n \\rho\\np{G_{\\agent}} =r \\iff\n \\hat\\rho\\np{{G_{\\agent}}} = \\na{r}\n \\eqfinp\n \\label{eq:common_value_rho}\n \\end{equation}\n Then, using the extended notation above~\\eqref{eq:bracket_HISTORY}, we have the property \n \\begin{equation}\n \\rho \\text{ is }\\tribu{\\Information}_{\\agent}\\text{-measurable}\n \\implies \n \\rho\\np{ \\bracket{\\history}_{\\agent} } =\n \\rho\\np{\\history} \n \\eqsepv \\forall \\history \\in \\HISTORY\n \\eqfinp\n \\label{eq:common_value_equivalence_class_rho}\n \\end{equation}\n\\end{proposition}\n\nNow that we have characterized measurable mappings with respect to \nagents' information subfields, we introduce the notion of \npure W-strategy.\n\n\\begin{definition}(\\cite{Witsenhausen:1971a,Witsenhausen:1975})\n \\label{de:W-strategy}\n \n A \\emph{pure W-strategy} of agent~$\\agent \\in \\AGENT$ is a mapping\n \n \\begin{subequations}\n \n \\begin{equation}\n \\policy_{\\agent} : (\\HISTORY,\\tribu{\\History}) \\to\n (\\CONTROL_{\\agent},\\tribu{\\Control}_{\\agent})\n \\end{equation}\n from configurations to decisions,\n which is measurable with respect to the information\n field~$\\tribu{\\Information}_{\\agent}$ of agent~$\\agent$, that is,\n \\begin{equation}\n \\label{eq:decision_rule}\n \\policy_{\\agent}^{-1} (\\tribu{\\Control}_{\\agent})\n \\subset \\tribu{\\Information}_{\\agent} \n \\eqfinp\n \\end{equation}\n \n \\end{subequations}\n \n \\begin{subequations}\n \n We denote by $\\POLICY_{\\agent}$ the set of all pure W-strategies of agent $\\agent \\in \\AGENT$.\n \n A \\emph{pure W-strategies profile}~$\\policy$ is a family \n \\begin{equation}\n \\policy = \\sequence{\\policy_{\\agent}}{\\agent \\in \\AGENT} \n \\in \\prod_{\\agent \\in \\AGENT} \\POLICY_{\\agent} \n \\label{eq:W-strategy_profile}\n \\end{equation}\n of pure W-strategies, one per agent~$\\agent \\in \\AGENT$.\n The \\emph{set of pure 
W-strategies profiles} is \n \\begin{equation}\n \\POLICY= \\prod_{\\agent \\in \\AGENT} \\POLICY_{\\agent} \n \\eqfinp\n \\label{eq:W-STRATEGY}\n \\end{equation}\n \\end{subequations}\n\\end{definition}\nCondition~\\eqref{eq:decision_rule} expresses the property that any \n(pure) W-strategy of agent~$\\agent$ may only depend upon the\ninformation~$\\tribu{\\Information}_{\\agent}$ available to the agent. \n\nIn what follows, we will need some notations.\nFor any nonempty subset $\\mathbb{B} \\subset \\AGENT$ of agents, we define \n\\begin{subequations}\n \n \\begin{align}\n \\tribu{\\Control}_\\mathbb{B} \n &= \n \\bigotimes \\limits_{b \\in \\mathbb{B}} \\tribu{\\Control}_b\n \\otimes\n \\bigotimes \\limits_{\\agent \\not\\in \\mathbb{B}} \\{ \\emptyset, \\CONTROL_{\\agent} \\}\n \\subset\n \\bigotimes \\limits_{\\agent \\in \\AGENT} \\tribu{\\Control}_{\\agent}\n \\eqfinv\n \\label{eq:sub_control_field_BGENT}\n \\\\\n \\tribu{\\History}_\\mathbb{B} \n &= \n \\tribu{\\NatureField} \\otimes \\tribu{\\Control}_\\mathbb{B}\n = \\tribu{\\NatureField} \\otimes \\bigotimes \\limits_{b \\in \\mathbb{B}} \\tribu{\\Control}_b\n \\otimes\n \\bigotimes \\limits_{\\agent \\not\\in \\mathbb{B}} \\{ \\emptyset, \\CONTROL_{\\agent} \\}\n \\subset \\tribu{\\History}\n \\eqfinv\n \\label{eq:sub_history_field_BGENT}\n \\\\\n \\history_\\mathbb{B} \n &=\n \\sequence{\\history_b}{b \\in \\mathbb{B}}\n \\in \\prod \\limits_{b \\in \\mathbb{B}} \\CONTROL_b\n \\eqsepv \\forall \\history \\in \\HISTORY\n \\eqfinv\n \\label{eq:sub_history_BGENT}\n \\\\\n \\policy_\\mathbb{B} \n &=\n \\sequence{\\policy_b}{b \\in \\mathbb{B}}\n \\in \\prod \\limits_{b \\in \\mathbb{B}} \\POLICY_{b} \n \\eqsepv \\forall \\policy \\in \\POLICY\n \\eqfinp\n \\label{eq:sub_wstrategy_BGENT}\n \\end{align}\n \n \n \n \n\\end{subequations}\n\n\n\\subsection{Examples}\n\\label{Examples}\n\nWe illustrate, on a few examples, the ease with which \none can model information in strategic contexts, \nusing subfields of 
the configuration field.\nEven if we have presented the finite version of Witsenhausen's intrinsic model\nin~\\S\\ref{Finite_Witsenhausen_intrinsic_model},\nwe take the opportunity here to show its potential to describe \ninfinite decision and Nature sets.\n\n\\subsubsubsection{Sequential decisions}\nSuppose an individual has to take decisions (say, an element of $\\RR^n$) \nat every discrete time step in the set\\footnote{For any integers $a \\leq b$, $\\ic{a,b}$ denotes the subset\n $\\na{a,a+1,\\ldots,b-1,b}$.} \n\\( \\ic{1,\\horizon{-}1} \\), where $\\horizon \\geq 1$ is an integer.\nThe situation will be modeled with (possibly) Nature set and field\n\\( \\np{\\Omega, \\tribu{\\NatureField}} \\), \nand with $\\horizon$~agents in $\\AGENT=\\ic{0,\\horizon{-}1}$, \nand their corresponding sets, $\\CONTROL_t= \\RR^n$, and fields,\n$\\tribu{\\Control}_t = \\borel{\\RR^{n}} $\n(the Borel $\\sigma$-field of~$\\RR^n$), for $t \\in \\AGENT$.\nThen, one builds up the product set \n$\\HISTORY=\\produit{\\prod_{t=0}^{\\horizon{-}1} \\CONTROL_{t} }{\\Omega}$ and \nthe product field $\\tribu{\\History}= \\oproduit{%\n \\bigotimes_{t=0}^{\\horizon{-}1} \\tribu{\\Control}_{t} }{\\tribu{\\NatureField}}$.\nEvery agent \\( t \\in \\ic{0,\\horizon{-}1} \\) is equipped with an \ninformation field \\( \\tribu{\\Information}_{t} \\subset \\tribu{\\History} \\).\nThen, we show how we can express four information patterns:\nsequentiality, memory of past information, \nmemory of past actions, perfect recall.\nThe inclusions \\( \\tribu{\\Information}_{t} \\subset \n\\tribu{\\History}_{\\{0,\\ldots,t{-}1\\}} = \\oproduit{%\n\\bigotimes_{s=0}^{t-1} \\tribu{\\Control}_{s} \\otimes \n\\bigotimes_{s=t}^{\\horizon{-}1} \\{ \\emptyset, \\CONTROL_{s} \\}}{\\tribu{\\NatureField}} \\),\nfor \\( t\\in \\ic{0,\\horizon{-}1} \\), \nexpress that every agent can remember no more than his past actions\n(sequentiality); \nmemory of past information is represented by the inclusions 
\n\\(\\tribu{\\Information}_{t-1} \\subset \\tribu{\\Information}_{t} \\),\nfor \\( t\\in \\ic{1,\\horizon{-}1} \\);\nmemory of past actions is represented by the inclusions \n\\( \\oproduit{\\bigotimes_{s=0}^{t-1} \\tribu{\\Control}_{s} \\otimes \n\\bigotimes_{s=t}^{\\horizon{-}1} \\{ \\emptyset, \\CONTROL_{s} \\}}%\n{ \\{ \\emptyset, \\Omega \\} } = \n\\oproduit{ \\tribu{\\Control}_{\\{0,\\ldots,t{-}1\\}}}%\n{ \\{ \\emptyset, \\Omega \\} } \n\\subset \\tribu{\\Information}_{t} \\),\nfor \\( t\\in \\ic{1,\\horizon{-}1} \\);\nperfect recall is represented by the inclusions \n\\( \\tribu{\\Information}_{t-1} \\vee \\bp{ \\oproduit{ \\tribu{\\Control}_{\\{0,\\ldots,t{-}1\\}}}%\n{ \\{ \\emptyset, \\Omega \\} } }\n\\subset \\tribu{\\Information}_{t} \\),\nfor \\( t\\in \\ic{1,\\horizon{-}1} \\).\n\nTo represent $N$~players --- each~$\\player$ of whom makes a sequence of decisions,\none for each period~$t \\in \\ic{0,\\horizon_\\player{-}1}$ --- \nwe use $\\prod_{\\player=1}^N \\horizon_\\player$~agents, labelled by\n\\( (\\player,t) \\in \\bigcup_{\\player'=1}^N \\bp{ \\na{\\player'} \\mathord{\\times} \\ic{0,\\horizon_{\\player'}{-}1}} \\).\nWith obvious notations, the inclusions \n\\(\\tribu{\\Information}_{\\np{\\player,t-1}} \\subset \\tribu{\\Information}_{\\np{\\player,t}} \\)\nexpress memory of one's own past information,\nwhereas the inclusions \n\\( \\bigvee_{\\player'=1}^N\n\\bp{\\oproduit{\\bigotimes_{s=0}^{t-1} \\tribu{\\Control}_{s} \\otimes \n \\bigotimes_{s=t}^{\\horizon_{\\player'}-1} \\{ \\emptyset, \\CONTROL_{s} \\}}\n { \\{ \\emptyset, \\Omega \\} }}\n\\subset \\tribu{\\Information}_{\\np{\\player,t}} \\),\nexpress memory of all players past actions.\n\n\\subsubsubsection{Principal-Agent models} \n\nA branch of Economics studies so-called \\emph{Prin\\-ci\\-pal-Agent} models with \ntwo decision makers (agents) --- \n the Principal~$\\Principal$ (\\emph{leader}) who makes decisions $\\control_{\\Principal} \\in \\CONTROL_{\\Principal}$,\n where the 
set~$\\CONTROL_{\\Principal}$ is equipped with a\n $\\sigma$-field~$\\tribu{\\Control}_{\\Principal}$,\n and the Agent~$\\Agent$ (\\emph{follower}) who makes decisions $\\control_{\\Agent} \\in \\CONTROL_{\\Agent}$,\n where the set~$\\CONTROL_{\\Agent}$ is equipped with a\n $\\sigma$-field~$\\tribu{\\Control}_{\\Agent}$ ---\nand with Nature, corresponding to \\emph{private information (or type)} of the\nAgent~$\\Agent$, taking values in a set~$\\Omega$, \nequipped with a $\\sigma$-field~$\\tribu{\\NatureField}$.\n\n\\emph{Hidden type} (leading to adverse selection or to signaling) is represented by any information structure \nwith the property that, on the one hand,\n\\begin{equation}\n \\tribu{\\Information}_{\\Principal} \\subset \n \\oproduit{ \n \\{\\emptyset,\\CONTROL_{\\Principal}\\} \n \\otimes \n \\underbrace{\\tribu{\\Control}_{\\Agent} }_{\\textrm{\\makebox[0pt]{\\hspace{1cm}$\\Agent$'s action possibly observed}}}\n }%\n {\\underbrace{ \\{\\emptyset,\\Omega \\} }_{\\textrm{\\makebox[0pt]{$\\Agent$ type not observed}}}}\n \\eqfinv \n\\end{equation}\nthat is, \nthe Principal~$\\Principal$ does not know the Agent~$\\Agent$ type,\nbut can possibly observe the Agent~$\\Agent$ action,\nand, on the other hand, that\n\\begin{equation}\n \\oproduit{ \n \\{\\emptyset,\\CONTROL_{\\Principal}\\} \n \\otimes\n \\{\\emptyset,\\CONTROL_{\\Agent}\\} \n }{%\n \\underbrace{ \\tribu{\\NatureField} }_{ \\textrm{\\makebox[0pt]{known inner type}}}}\n \\subset \\tribu{\\Information}_{\\Agent}\n \\eqfinv \n \\label{eq:Agent_Information_type}\n\\end{equation}\nthat is, the Agent~$\\Agent$ knows the state of nature (his type).\n\n\n\\emph{Hidden action} (leading to moral hazard) is represented by any information structure \nwith the property that, on the one hand, \n\\begin{equation}\n \\tribu{\\Information}_{\\Principal} \\subset \n \\oproduit{ \n \\{\\emptyset,\\CONTROL_{\\Principal}\\} \n \\otimes \n \\underbrace{ \\{\\emptyset,\\CONTROL_{\\Agent}\\} 
}_{\\textrm{\\makebox[0pt]{\\hspace{2cm}cannot observe $\\Agent$'s action}}}\n }{%\n \\underbrace{ \\tribu{\\NatureField} }_{ \\textrm{\\makebox[0pt]{\\hspace{1cm}possibly knows $\\Agent$ type} } }}\n \\eqfinv \n\\end{equation}\nthat is, the Principal~$\\Principal$ does not know the Agent~$\\Agent$ action,\nbut can possibly observe the Agent~$\\Agent$ type\nand, on the other hand, that\nthe inclusion~\\eqref{eq:Agent_Information_type} holds true, \nthat is, the agent~$\\Agent$ knows the state of nature (his type). \n\n\n\n\\subsubsubsection{Stackelberg leadership model}\nIn Stackelberg games, the leader~$\\Principal$ makes a decision\n\\( \\control_{\\Principal} \\in \\CONTROL_{\\Principal} \\)\n--- based at most upon the partial observation of the state \n\\( \\omega \\in \\Omega \\) of Nature ---\nand the follower~$\\Agent$ makes a decision\n\\( \\control_{\\Agent} \\in \\CONTROL_{\\Agent} \\)\n--- based at most upon the partial observation of the state of Nature \n\\( \\omega \\in \\Omega \\), and upon the leader decision\n\\( \\control_{\\Principal} \\in \\CONTROL_{\\Principal} \\).\nThis kind of information structure is expressed with the following inclusions \nof fields:\n\\begin{equation}\n\\tribu{\\Information}_{\\Principal} \n\\subset \n\\oproduit{%\n\\{\\emptyset,\\CONTROL_{\\Principal}\\}\n\\otimes\n\\{\\emptyset,\\CONTROL_{\\Agent}\\} \n}{%\n \\tribu{\\NatureField} }\n\\mtext{ and }\n\\tribu{\\Information}_{\\Agent} \n\\subset \n\\oproduit{%\n\\tribu{\\Control}_{\\Principal} \n\\otimes\n\\{\\emptyset,\\CONTROL_{\\Agent}\\} \n}{%\n \\tribu{\\NatureField} }\n\\eqfinp \n\\label{eq:Stackelberg_Information} \n\\end{equation}\nEven if the players are called leader and follower, \nthere is no explicit time arrow in~\\eqref{eq:Stackelberg_Information}.\nIt is the information structure that reveals the time arrow.\nIndeed, if we label the leader~$\\Principal$ as~$t=0$ (first player)\nand the follower~$\\Agent$ as~$t=1$ (second player), \nthe 
inclusions~\\eqref{eq:Stackelberg_Information} become the inclusions \n\\( \\tribu{\\Information}_{0} \\subset \\oproduit{%\n\\{ \\emptyset, \\CONTROL_{0} \\} \\otimes \n\\{ \\emptyset, \\CONTROL_{1} \\}}{\\tribu{\\NatureField}} \\),\nand \n\\( \\tribu{\\Information}_{1} \\subset \\oproduit{%\n\\tribu{\\Control}_{0} \\otimes \n\\{ \\emptyset, \\CONTROL_{1} \\}}{\\tribu{\\NatureField}} \\):\nthe sequence \\( \\tribu{\\Information}_{0}, \\tribu{\\Information}_{1} \\)\nof information fields \nis ``adapted'' to the filtration\n\\( \\oproduit{%\n\\{ \\emptyset, \\CONTROL_{0} \\} \\otimes \n\\{ \\emptyset, \\CONTROL_{1} \\}}{\\tribu{\\NatureField}}\n\\subset \\oproduit{%\n\\tribu{\\Control}_{0} \\otimes \n\\{ \\emptyset, \\CONTROL_{1} \\}}{\\tribu{\\NatureField}} \\).\nBut if we label the leader~$\\Principal$ as~$t=1$ \nand the follower~$\\Agent$ as~$t=0$,\nthe new sequence of information fields would not be \n``adapted'' to the new filtration.\nIt is the information structure that prevents \nthe follower from playing first, but makes it possible\nfor the leader to play first and the follower to play second.\n\n\n\n\\subsection{Solvability and causality}\n\\label{Solvability_Causality}\n\nIn the Kuhn formulation, Witsenhausen says that ``For any\ncombination of policies one can find the corresponding outcome by\nfollowing the tree along selected branches, and this is an explicit\nprocedure'' \\cite{Witsenhausen:1971a}. 
\nIn the Witsenhausen formulation, there is no such explicit procedure\nas, for any combination of policies, there may be none, one, or many solutions to \nthe closed-loop equations; these equations express the decision of one agent as \nthe output of his strategy, supplied with Nature's outcome and with all agents' decisions.\nThis is why Witsenhausen needs a property of solvability,\nwhereas Kuhn does not need it as it is hardcoded in the tree structure.\nThen, Witsenhausen defines the notion of causality (which parallels that of\na tree) and proves in~\\cite{Witsenhausen:1971a} that solvability holds true under causality.\nYet, in~\\cite[Theorem~2]{Witsenhausen:1971a},\nWitsenhausen exhibits an example of a noncausal W-model that is solvable.\n\n\n\\subsubsection{Solvability}\n\nWith any given pure W-strategies profile $\\policy = \\sequence{\\policy_{\\agent}}{\\agent \\in \\AGENT}\n\\in \\prod_{\\agent \\in \\AGENT} \\POLICY_{\\agent} $ we associate the set-valued\nmapping\n\\begin{align}\n{\\cal M}_\\policy: \\Omega\n & \\rightrightarrows \\prod_{b \\in \\AGENT} \\CONTROL_{b} \n\\label{eq:SetValuedReducedSolutionMap}\n \\\\\n \\omega\n & \\mapsto \\Bset{ \\sequence{\\control_b}{b \\in \\AGENT} \\in \n \\prod_{b \\in \\AGENT} \\CONTROL_{b} }{%\n\\control_{\\agent} = \\policy_{\\agent}\\bp{\\omega,\n\\sequence{\\control_b}{b \\in \\AGENT} }\n \\eqsepv \\forall \\agent \\in \\AGENT}\n \\eqfinp\n \\nonumber\n\\end{align}\n\nWith this definition, we slightly reformulate below\nhow Witsenhausen introduced the property of solvability.\n\n\\begin{definition}(\\cite{Witsenhausen:1971a,Witsenhausen:1975})\n \\begin{subequations}\n \n The \\emph{solvability property} holds true for the W-model of Definition~\\ref{de:W-model}\n when,\n for any pure W-strategies profile $\\policy = \\sequence{\\policy_{\\agent}}{\\agent \\in \\AGENT}\n \\in \\prod_{\\agent \\in \\AGENT} \\POLICY_{\\agent} $, \nthe set-valued mapping~${\\cal 
M}_{\\policy}$\nin~\\eqref{eq:SetValuedReducedSolutionMap}\n is a mapping whose domain is $\\Omega$, that is,\n the cardinal of ${\\cal M}_{\\policy}\\np{\\omega}$ \nis equal to one, for any state of nature $\\omega \\in \\Omega$. \n\n Thus, under the solvability property, for any state of nature $\\omega \\in \\Omega$, \n there exists one, and only one, decision profile\n $\\sequence{\\control_b}{b \\in \\AGENT} \\in \n \\prod_{b \\in \\AGENT} \\CONTROL_{b}$\n \n which is a solution of the \\emph{closed-loop equations}\n \\begin{equation}\n \\control_{\\agent} = \\policy_{\\agent}\\bp{\\omega, \\sequence{\\control_b}{b \\in \\AGENT}}\n \\eqsepv\n \\forall \\agent \\in \\AGENT\n \\eqfinp\n \\label{eq:solution_map_IFF}\n \\end{equation}\n In this case, we define the \\emph{solution map} \n \\begin{equation}\n M_\\policy: \\Omega \\rightarrow \\prod_{b \\in \\AGENT} \\CONTROL_{b}\n\n \\label{eq:solution_map}\n \\end{equation}\n as the unique element contained in the image set\n${\\cal M}_{\\policy}\\np{\\omega}$ that is, \nfor all $\\sequence{\\control_b}{b \\in \\AGENT} \\in \n \\prod_{b \\in \\AGENT} \\CONTROL_{b}$, \n $M_\\policy\\np{\\omega} = \n\\sequence{\\control_b}{b \\in \\AGENT} \\iff\n {\\cal M}_{\\policy}\\np{\\omega} = \\na{\\sequence{\\control_b}{b \\in \\AGENT}}$.\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\end{subequations}\n \\label{de:solvability}\n\\end{definition}\n\n\n\n\n\\subsubsection{Configuration-orderings}\n\nIn his articles \\cite{Witsenhausen:1971a,Witsenhausen:1975},\nWitsenhausen introduces a notion of causality that relies on \nsuitable configuration-orderings.\nHere, we introduce our own notations, because they make possible a compact formulation \nof the causality property and, later, of perfect recall.\n\\medskip\n\n\\begin{subequations}\nFor any finite set~\\( {\\mathbb D} \\), let \\( \\cardinal{{\\mathbb D}} \\) denote the cardinal\nof~\\( {\\mathbb D} \\). 
\n Thus, \\( | \\AGENT | \\) denotes the cardinal of the set~\\( \\AGENT \\), that is,\n\\( | \\AGENT | \\) is the number of agents.\nFor $k \\in \\ic{1,| \\AGENT |}$, let $\\Sigma^k$ denote the set of\n$k$-orderings, that is, injective\nmappings from $\\ic{1, k}$ to $\\AGENT$:\n\\begin{equation}\n \\Sigma^k=\\defset{ \\kappa: \\ic{1,k} \\to \\AGENT }%\n{ \\kappa \\mtext{ is an injection} }\n\\eqfinp \n\\label{eq:ORDER_k}\n\\end{equation}\nThe set \\( \\Sigma^{| \\AGENT |} \\) is the set of \\emph{total orderings} of\nagents in $\\AGENT$, that is, bijective\nmappings from $\\ic{1,| \\AGENT |}$ to $\\AGENT$\n(in contrast with \\emph{partial orderings} in~$\\Sigma^k$ for $k < | \\AGENT |$).\nFor any $k \\in \\ic{1, | \\AGENT |}$, any ordering $\\kappa \\in \\Sigma^k$,\nand any integer $\\ell \\le k$, \\( \\kappa_{\\vert \\{ 1, \\ldots, \\ell \\}} \\)\nis the restriction of the ordering~$\\kappa$ to the first $\\ell$~integers.\nFor any $k \\in \\ic{1,| \\AGENT |}$, there is a natural mapping $\\psi_k$ \n\\begin{align}\n\\psi_k: \\Sigma^{| \\AGENT |} \\rightarrow \\Sigma^k \n\\eqsepv \n\\rho \\mapsto \\rho_{\\vert \\{ 1, \\ldots, k \\} } \n\\eqfinv \n\\label{eq:cut}\n\\end{align}\nwhich is the restriction of any (total) ordering of~$\\AGENT$ \nto~$\\ic{1,k}$.\nWe define the \\emph{set of orderings} by\n\\begin{equation}\n \\Sigma= \\bigcup_{ k \\in \\ic{0,| \\AGENT |}} \\Sigma^k\n\\quad \\text{ where } \\Sigma^0 = \\{ \\emptyset \\} \n\\eqfinp \n\\label{eq:ORDER}\n\\end{equation}\nFor any \\( k \\in \\ic{1,| \\AGENT |} \\), and any $k$-ordering~$\\kappa \\in \\Sigma^k$,\nwe define the \\emph{range} $\\range{\\kappa}$ of the ordering~$\\kappa$ as the\nsubset \n\\begin{align}\n \\range{\\kappa}\n&=\n\\ba{ \\kappa(1), \\ldots, \\kappa(k) }\n\\subset \\AGENT\n\\eqsepv \\forall \\kappa \\in \\Sigma^k\n\\eqfinv\n \\label{range_kappa}\n \\intertext{the \\emph{cardinal} $\\cardinal{\\kappa}$ of the ordering $\\kappa$ as\n the integer}\n\\cardinal{\\kappa}\n&=k\n\\in \\ic{1, | 
\\AGENT |}\n\\eqsepv \\forall \\kappa \\in \\Sigma^k\n\\eqfinv\n \\label{cardinal_kappa}\n \\intertext{the \\emph{last element} $\\LastElement{\\kappa}$ of the ordering $\\kappa$ as the agent}\n\\LastElement{\\kappa}\n&=\\kappa(k)\n\\in \\AGENT\n\\eqsepv \\forall \\kappa \\in \\Sigma^k\n\\eqfinv\n \\label{LastElement_kappa}\n \\intertext{the \\emph{restriction} $\\FirstElements{\\kappa}$ of the ordering $\\kappa$ to the first $k{-}1$ elements}\n\\FirstElements{\\kappa}\n&= \\kappa_{\\vert \\{ 1, \\ldots, k-1 \\}} \\in \\Sigma^{k-1}\n\\eqsepv \\forall \\kappa \\in \\Sigma^k\n\\eqfinp\n \\label{FirstElements_kappa}\n\\end{align}\n\\end{subequations}\nWith the notations introduced,\nany ordering $\\kappa \\in \\Sigma \\setminus \\{ \\emptyset \\}$\ncan be written as $\\kappa = \\np{\\FirstElements{\\kappa}, \\LastElement{\\kappa}}$,\nwith the convention that \n $\\kappa = \\np{\\LastElement{\\kappa}}$ when $\\kappa \\in \\Sigma^1$.\n\n\\begin{definition}(\\cite{Witsenhausen:1971a,Witsenhausen:1975})\nA \\emph{configuration-ordering} is a mapping \n$\\varphi: \\HISTORY \\to \\Sigma^{| \\AGENT |}$ from configurations towards total orderings.\nWith any configuration-ordering~$\\varphi$, \nand any ordering $\\kappa \\in \\Sigma$,\nwe associate the subset \\( \\HISTORY_{\\kappa}^{\\varphi} \\subset \\HISTORY \\)\nof configurations defined by\n\\begin{equation}\n \\HISTORY_{\\kappa}^{\\varphi} =\n \\defset{\\history \\in \\HISTORY}{\\psi_{\\cardinal{\\kappa}}\\bp{\\varphi(\\history)} =\\kappa} \n \\eqsepv \\forall \\kappa \\in \\Sigma\n \\eqfinp\n \\label{eq:HISTORY_k_kappa}\n\\end{equation}\nBy convention, we put \n\\( \\HISTORY_{\\emptyset}^{\\varphi} = \\HISTORY \\). 
\n\\label{de:configuration-ordering}\n\\end{definition}\nAlong each configuration $\\history \\in \\HISTORY$, the agents are ordered by\n$\\varphi(\\history) \\in \\Sigma^{| \\AGENT |}$.\nThe set~$\\HISTORY_{\\kappa}^{\\varphi}$ in~\\eqref{eq:HISTORY_k_kappa} \ncontains all the configurations \nfor which the agent~\\( \\kappa(1) \\) is acting first, \nthe agent~\\( \\kappa(2) \\) is acting second, \\ldots, till \nthe last agent~\\( \\LastElement{\\kappa}=\\kappa(\\cardinal{\\kappa}) \\) acting at stage~$\\cardinal{\\kappa}$.\n\n\n\\subsubsection{Causality}\n\nIn his article \\cite{Witsenhausen:1971a},\nWitsenhausen introduces a notion of causality \nand he proves that causal systems are solvable.\n\nThe following definition can be interpreted as follows.\nIn a causal W-model, there exists a configuration-ordering with the following\nproperty: when an agent is called to play --- as he is the last one in an\nordering --- what he knows cannot depend on decisions made by agents\nthat are not his predecessors \n(in the range of the ordering under consideration).\n\n\\begin{definition}(\\cite{Witsenhausen:1971a,Witsenhausen:1975})\n \\label{de:causality}\n A W-model (as in Definition~\\ref{de:W-model}) \n is \\emph{causal} if there exists (at least) one\n configuration-ordering $\\varphi: \\HISTORY \\to \\Sigma^{| \\AGENT |}$ \n with the property that \n \\begin{equation}\n \\label{eq:causality_a}\n \\HISTORY_{\\kappa}^{\\varphi} \\cap \\History \\in \n \n \\tribu{\\History}_{\\range{\\FirstElements{\\kappa}}}\n \\eqsepv \n \\forall \\History \\in \\tribu{\\Information}_{\\LastElement{\\kappa}}\n \\eqsepv \n \\forall \\kappa \\in \\Sigma \n \\eqfinp\n \\end{equation}\n\\end{definition}\nOtherwise said, once we know the first $\\cardinal{\\kappa}$~agents, \nthe information of the (last) agent~$\\LastElement{\\kappa}$ depends at most\n on the decisions of the (previous) agents in the range~$\\range{\\FirstElements{\\kappa}}$.\nIn~\\eqref{eq:causality_a}, the 
subset~$\\HISTORY_{\\kappa}^{\\varphi} \\subset \\HISTORY$ of configurations \nhas been defined in~\\eqref{eq:HISTORY_k_kappa},\nthe last agent~$\\LastElement{\\kappa}$ in~\\eqref{LastElement_kappa},\nthe partial ordering~$\\FirstElements{\\kappa}$ in~\\eqref{FirstElements_kappa},\nthe range~$\\range{\\FirstElements{\\kappa}}$ in~\\eqref{range_kappa},\nand --- using the definition~\\eqref{eq:sub_history_field_BGENT} \nof the subfield~$\\tribu{\\History}_{\\mathbb{B}}$ of~\\( \\tribu{\\History} \\),\nwith the subset~$\\mathbb{B}=\\range{\\FirstElements{\\kappa}}$ of agents \ndefined in~\\eqref{FirstElements_kappa} and~\\eqref{range_kappa} --- \nthe subfield $\\tribu{\\History}_{\\range{\\FirstElements{\\kappa}}}$ \nof~\\( \\tribu{\\History} \\) is \n\\begin{equation}\n \\tribu{\\History}_{\\range{\\FirstElements{\\kappa}}}\n =\n \\tribu{\\NatureField} \\otimes \n\\bigotimes \\limits_{\\agent \\in \\range{\\FirstElements{\\kappa}}} \\tribu{\\Control}_{\\agent}\n \\otimes\n \\bigotimes \\limits_{b \\not\\in \\range{\\FirstElements{\\kappa}} } \\{\n \\emptyset, \\CONTROL_b \\}\n \\subset \\tribu{\\History}\n \\eqfinp\n\\label{eq:causality_b}\n\\end{equation}\n\nWitsenhausen's intrinsic model deals with agents, information and strategies,\nbut not with players and preferences.\nWe now turn to extending Witsenhausen's intrinsic model to games.\n\n\n\\section{Finite games in intrinsic form}\n\\label{Finite_games_in_intrinsic_form}\n\nWe are now ready to embed Witsenhausen's intrinsic model\ninto game theory.\nIn~\\S\\ref{Definition_of_a_finite_game_in_intrinsic_form},\nwe introduce a formal definition of a finite game in intrinsic form (W-game),\nand in~\\S\\ref{Mixed_and_behavioral_strategies}\nwe introduce three notions of \n``randomization'' of pure strategies --- mixed, product-mixed and behavioral.\nIn~\\S\\ref{Strategy_equivalence},\nwe discuss relations between product-mixed and behavioral W-strategies.\n\nIn what follows, when ${\\mathbb D}$ is a finite set, 
we denote by \n\\( \\Delta\\np{{\\mathbb D}} \\) the set of probability distributions over~${\\mathbb D}$.\nWhen needed, the set~\\( \\Delta\\np{{\\mathbb D}} \\) can be equipped with \nthe Borel topology and\nthe Borel $\\sigma$-field, as \\( \\Delta\\np{{\\mathbb D}} \\)\nis homeomorphic to the simplex~$\\Sigma_{\\cardinal{{\\mathbb D}}}$ of~$\\RR^{\\cardinal{{\\mathbb D}}}$, \nand is thus homeomorphic to a closed\nsubset of a finite dimensional space.\n\n\\subsection{Definition of a finite game in intrinsic form (W-game)}\n\\label{Definition_of_a_finite_game_in_intrinsic_form}\n\nWe introduce a formal definition of a finite game in intrinsic form (W-game).\n\n\\begin{definition}\n A \\emph{finite W-game}\n \\( \\Bp{ \n \\bp{ \n \\sequence{\\AGENT^{\\player}}{\\player \\in \\PLAYER}, \n \\np{\\Omega, \\tribu{\\NatureField}},\n \\sequence{\\CONTROL_{\\agent}, \\tribu{\\Control}_{\\agent},\n \\tribu{\\Information}_{\\agent}}{\\agent \\in \\bigcup_{\\player \\in \\PLAYER} \\AGENT^{\\player}} },\n (\\precsim^{\\player})_{\\player \\in \\PLAYER}\n }\n \\),\n or a \\emph{finite game in intrinsic form},\n is made of \n \\begin{itemize}\n \\item \n a family \\( \\sequence{\\AGENT^{\\player}}{\\player \\in \\PLAYER} \\),\nwhere the set~$\\PLAYER$ of \\emph{players} is finite,\n of pairwise disjoint nonempty sets \n whose union $\\AGENT= \\bigcup_{\\player \\in \\PLAYER} \\AGENT^{\\player}$ is the set \n of \\emph{agents}; each subset~$\\AGENT^{\\player}$ is interpreted as \n the subset of executive agents of the \n \\emph{player} \\( \\player \\in \\PLAYER \\),\n \\item \n a finite W-model \n \\( \\bp{ \n \\AGENT,\n (\\Omega, \\tribu{\\NatureField}), \n \\sequence{\\CONTROL_{\\agent}, \\tribu{\\Control}_{\\agent},\n \\tribu{\\Information}_{\\agent}}{\\agent \\in \\AGENT} } \\),\n as in Definition~\\ref{de:W-model},\n \n \n \n \\item \n for each player $\\player \\in \\PLAYER$,\n a preference relation~$\\precsim^{\\player}$ \n on the set of mappings \n\\( \\Omega \\to 
\\Delta\\np{ \\prod_{b \\in \\AGENT} \\CONTROL_{b} } \\).\n \\end{itemize}\n \n A finite W-game is said to be solvable (resp. causal)\n if the underlying W-model\n is solvable as in Definition~\\ref{de:solvability}\n (resp. causal as in Definition~\\ref{de:causality}).\n \n \\label{de:W-game}\n\\end{definition}\n\nWe comment on the preference relations~$\\precsim^{\\player}$\non the set of mappings \n\\( \\Omega \\to \\Delta\\np{ \\prod_{b \\in \\AGENT} \\CONTROL_{b} } \\).\nOur definition covers\n(as in~\\cite{Blume-Brandenburger-Dekel:1991})\nthe most traditional preference relation~$\\precsim^{\\player}$,\nwhich is the numerical \\emph{expected utility} preference.\nIn the latter, each player~$\\player \\in \\PLAYER$\nis endowed, on the one hand, with a \\emph{criterion} (payoff), that is,\na measurable function $\\criterion_\\player: (\\HISTORY, \\tribu{\\History}) \n\\rightarrow [-\\infty,+\\infty[$,\nand, on the other hand, with a \\emph{belief}, that is,\na probability distribution\n $\\nu^\\player: \\tribu{\\NatureField} \\rightarrow [0, 1]$\nover the states of Nature $(\\Omega, \\tribu{\\NatureField})$.\nThen, given \\( K_i : \\Omega \\to \\Delta\\np{ \\prod_{b \\in \\AGENT}\n \\CONTROL_{b} } \\), $i=1,2$, one says that\n \\( K_1 \\precsim^{\\player} K_2 \\) if \n\\begin{align*}\n \\int_{\\Omega} \\nu^\\player\\np{d\\omega}\n &\n \\int_{ \\prod_{b \\in \\AGENT} \\CONTROL_{b} } \n \\criterion_\\player\\bp{\\omega, \\sequence{\\control_b}{b \\in \\AGENT}}\n K_1\\bp{\\omega, d\\sequence{\\control_b}{b \\in \\AGENT}}\n \\\\\n & \\leq\n \\int_{\\Omega} \\nu^\\player\\np{d\\omega}\n \\int_{ \\prod_{b \\in \\AGENT} \\CONTROL_{b} } \n \\criterion_\\player\\bp{\\omega, \\sequence{\\control_b}{b \\in \\AGENT}}\n K_2\\bp{\\omega, d\\sequence{\\control_b}{b \\in \\AGENT}}\n \\eqfinp\n\\end{align*}\n\nNote also that Definition~\\ref{de:W-game} includes Bayesian games,\nby specifying a product structure for $\\Omega$ --- where some factors represent\ntypes of 
players, and one factor represents chance --- and by considering\nadditional probability distributions.\n \n\n\n\\subsection{Mixed, product-mixed and behavioral strategies}\n\\label{Mixed_and_behavioral_strategies}\n\nWe introduce three notions of \n``randomization'' of pure strategies: mixed, product-mixed and behavioral.\n\nThe notion of mixed strategy comes from the study of games in normalized form,\nwhere each player has to select a pure strategy,\nthe collection of which determines a unique outcome. \nIf we allow the players to select their pure strategy at random, \nthe lottery they use is called a mixed strategy. \nFor an extensive game, a mixed strategy can be interpreted in the following sense.\nFirst, the player selects a pure strategy using the lottery. \nSecond, the game is played. When the player is called by the umpire,\nshe plays the action specified by the selected pure strategy for the current information set. \n\nObserve that there is only one die roll per player. This die roll determines the reactions of the player for every situation of the game. \nIt would be more natural to let the player roll a die every time she has to\nplay, leading to the notion of behavioral strategy. 
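The contrast between one roll per player (mixed) and one roll per move (behavioral) can be sketched numerically. The example below is hypothetical (one player with two information sets `G1` and `G2`, binary actions, made-up lottery weights, not from the text): a product-form mixed strategy over pure strategies induces, by marginalization, a behavioral strategy that generates the same distribution over decision profiles.

```python
from itertools import product

ACTIONS = [0, 1]

# Hypothetical per-information-set lotteries for one player with two moves.
pi_G1 = {0: 0.25, 1: 0.75}
pi_G2 = {0: 0.4, 1: 0.6}

# Product-mixed strategy: ONE lottery over pure strategies, rolled once per play.
# A pure strategy is a pair (action at G1, action at G2).
mixed = {(a1, a2): pi_G1[a1] * pi_G2[a2]
         for a1, a2 in product(ACTIONS, repeat=2)}

# Behavioral strategy recovered by marginalizing the mixed lottery over the
# pure strategies that prescribe a given action at a given information set.
beta_G1 = {a: sum(p for s, p in mixed.items() if s[0] == a) for a in ACTIONS}
beta_G2 = {a: sum(p for s, p in mixed.items() if s[1] == a) for a in ACTIONS}

assert all(abs(beta_G1[a] - pi_G1[a]) < 1e-12 for a in ACTIONS)
assert all(abs(beta_G2[a] - pi_G2[a]) < 1e-12 for a in ACTIONS)

# Rolling a fresh lottery at each move reproduces the mixed distribution
# over decision profiles (independence across information sets).
for a1, a2 in product(ACTIONS, repeat=2):
    assert abs(mixed[(a1, a2)] - beta_G1[a1] * beta_G2[a2]) < 1e-12
```

The equality in the last loop holds here because the mixed lottery is a product across information sets; the formal definitions and the equivalence results are given below for W-games.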
\n\nA fundamental question in game theory is to identify settings in which those two\nviews (mixed strategy and behavioral strategy) are equivalent.\nTo formulate this question in the W-game framework, \nwe will give formal definitions of these two notions of randomization.\nWe will also add a third one, that we call\nproduct-mixed strategy, and which \nis in the spirit of Aumann~\\cite{Aumann:1964}, \nas each agent (corresponding to a time index in~\\cite{Aumann:1964}) \n``generates'' strategies from a random device that is independent of all the\nother agents.\n\n\n\\subsubsection{Mixed W-strategies}\n\nFor any agent~$\\agent \\in \\AGENT$, the set~\\( \\POLICY_{\\agent} \\) \nof pure W-strategies for agent~$\\agent$ (see Definition~\\ref{de:W-strategy})\nis finite, hence the set \\( \\Delta\\np{\\POLICY_{\\agent} } \\)\nof probability distributions over~\\( \\POLICY_{\\agent} \\)\nis homeomorphic to \n$\\Sigma_{|\\POLICY_{\\agent}|}$, the simplex of\n$\\RR^{|\\POLICY_{\\agent}|}$, and is thus homeomorphic to a closed\nsubset of a finite dimensional space.\nSo is the space \\( \\Delta\\np{\\POLICY} \\)\nof probability distributions over the set~\\( \\POLICY \\)\nof pure W-strategies profiles. 
\nWe will also consider the sets\n\\begin{equation}\n \\POLICY^{\\player} = \n\\prod_{\\agent \\in \\AGENT^{\\player}} \\POLICY_{\\agent}\n\\eqsepv \\forall \\player \\in \\PLAYER\n\\label{eq:W-strategies_profiles_player}\n\\end{equation}\nof pure W-strategies profiles, player by player, and the set\n\\( \\Delta\\np{\\POLICY^{\\player}} \\) of probability distributions\nover~\\( \\POLICY^{\\player} \\).\n\n\n\\begin{subequations}\n\\begin{definition}\nWe consider a finite W-game, as in Definition~\\ref{de:W-game}.\nA \\emph{mixed W-strategy}\nfor player~$\\player \\in \\PLAYER$ \nis an element~$\\mu^{\\player}$ of \\( \\Delta\\np{\\POLICY^{\\player} } \\),\nthe set of probability distributions\nover the set~\\( \\POLICY^{\\player} \\) in~\\eqref{eq:W-strategies_profiles_player}\nof W-strategies of the executive agents in~$\\AGENT^{\\player}$.\nThe set of \\emph{mixed W-strategies profiles} is\n\\( \\prod_{\\player \\in \\PLAYER} \\Delta\\np{\\POLICY^{\\player}} \\). \n\\label{de:mixed_W-strategy}\n\\end{definition}\nA mixed W-strategies profile is denoted by \n\\begin{equation}\n\\mu = \\np{\\mu^{\\player}}_{\\player \\in \\PLAYER}\n\\in \\prod_{\\player \\in \\PLAYER} \\Delta\\bp{ \\POLICY^{\\player} }\n\\eqfinv\n\\label{eq:mixed_W-strategies_profiles}\n\\end{equation}\nand, when we focus on player~$\\player$, we write \n\\begin{equation}\n\\mu = \n\\couple{{\\mu}^{-\\player}}{{\\mu}^{\\player}} \\in\n\\Delta\\bp{ \\POLICY^{\\player} } \\times\n\\prod_{\\player'\\neq \\player} \\Delta\\bp{ \\POLICY^{\\player'} }\n\\eqfinp\n\\label{eq:mixed_W-strategies_profiles_player}\n\\end{equation}\n\\end{subequations}\n\n\n\n\\begin{subequations}\n\\begin{definition}\nWe consider a solvable finite W-game (see Definition~\\ref{de:W-game}),\nand \\( \\mu = \\np{\\mu^{\\player}}_{\\player \\in \\PLAYER}\n\\in \\prod_{\\player \\in \\PLAYER} \\Delta\\np{ \\POLICY^{\\player} } \\)\na mixed W-strategies profile as in~\\eqref{eq:mixed_W-strategies_profiles}.\nFor any \\( 
\\omega\\in\\Omega\\), we denote by \n\\begin{equation}\n \\QQ^{\\omega}_{\\mu} =\n \\QQ^{\\omega}_{\\np{\\mu^{\\player}}_{\\player \\in \\PLAYER}} = \n\\bp{ \\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player} }\n\\circ \\bp{ M\\np{\\omega,\\cdot}}^{-1}\n\\in \\Delta\\bp{ \\prod_{b \\in \\AGENT} \\CONTROL_{b} }\n\\label{eq:push_forward_probability_a}\n\\end{equation}\nthe pushforward probability,\non the space \\( \\bp{\\prod_{b \\in \\AGENT} \\CONTROL_{b},\n\\bigotimes \\limits_{b \\in \\AGENT} \\tribu{\\Control}_{b} } \\)\nof the product probability distribution~\\( \\bigotimes_{\\player \\in\n \\PLAYER}\\mu^{\\player} \\)\non~\\( \\prod_{\\player \\in \\PLAYER} \\POLICY^{\\player} = \\POLICY \\)\nby the mapping\n\\begin{equation}\n M\\np{\\omega,\\cdot} : \\POLICY \\to \n\\prod_{b \\in \\AGENT} \\CONTROL_{b} \n \\eqsepv \n\\policy \\mapsto M_{\\policy}(\\omega) \n\\eqfinv\n\\label{eq:push_forward_probability_b}\n\\end{equation}\nwhere \\( M_{\\policy} \\) is the \nsolution map~\\eqref{eq:solution_map}, \nwhich exists by the solvability assumption.\n\\end{definition}\n\\label{eq:push_forward_probability}\n\\end{subequations}\n\nBy~\\eqref{eq:solution_map_IFF}, which defines the solution map,\nand by definition of a pushforward probability,\nwe have, for any configuration~\\( \\bp{\\omega, \n\\sequence{\\control_b}{b\\in\\AGENT} } \\in\\HISTORY \\),\n\\begin{align*}\n \\QQ^{\\omega}_{\\mu}\n \\bp{\\sequence{\\control_b}{b\\in\\AGENT} }\n & =\n \\bp{\\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player}}\n \\Bp{ M\\np{\\omega,\\cdot}^{-1}\n \\bp{\\sequence{\\control_b}{b\\in\\AGENT} } } \n \\\\\n &=\n \\prod_{\\player \\in \\PLAYER} \\mu^{\\player}\n \\bgp{\\Bset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\policy_{\\agent}\\bp{\\omega, \n \\sequence{\\control_b}{b\\in\\AGENT} } =\\control_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }}\n \\eqfinp\n\\end{align*}\n\n\n\n\\subsubsection{Product-mixed 
W-strategies}\n\nIn a mixed W-strategy, the executive agents of player~$\\player \\in \\PLAYER$ \ncan be correlated because the probability~$\\mu^{\\player}$\nin Definition~\\ref{de:mixed_W-strategy}\nis a joint probability on the product space\n\\( \\POLICY^{\\player} =\\prod_{\\agent \\in \\AGENT^{\\player}} \\POLICY_{\\agent}\n\\).\nWe now introduce product-mixed W-strategies, where \nthe executive agents of player~$\\player \\in \\PLAYER$ are independent\nin the sense that the probability~$\\mu^{\\player}$ is the\nproduct of individual probabilities,\neach of them on the individual space~\\( \\POLICY_{\\agent} \\)\nof the strategies of one agent~$\\agent$. \n\n\\begin{definition}\n \\label{de:product-mixed_W-strategy}\n We consider a finite W-game, as in Definition~\\ref{de:W-game}.\n A \\emph{product-mixed W-strategy} for player~$\\player \\in \\PLAYER$ is\n an element \n \\( \\pi^{\\player}=\n \\sequence{ \\pi^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}} \\) of \n \\( \\prod_{\\agent \\in \\AGENT^{\\player}} \\Delta\\np{\\POLICY_{\\agent}} \\).\n\\end{definition}\nThe product-mixed W-strategy \n\\( \\sequence{ \\pi^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}} \\)\ninduces a product probability\\footnote{%\nBy an abuse of notation, we will sometimes write \n\\( \\pi^{\\player}=\n\\otimes_{\\agent\\in\\AGENT^{\\player}}\\pi^{\\player}_{\\agent} \\).\n\\label{ft:product-mixed_W-strategy}}\n\\(\n\\otimes_{\\agent\\in\\AGENT^{\\player}}\\pi^{\\player}_{\\agent} \\)\non the set~\\( \\POLICY^{\\player} \\),\nwhich is a mixed W-strategy\nas in Definition~\\ref{de:mixed_W-strategy}.\n\n\n\n\n\\subsubsection{Behavioral W-strategies}\n\nWe formalize the intuition of behavioral strategies in W-games\nby the following definition of behavioral W-strategies.\n\n\\begin{definition}\nWe consider a finite W-game, as in Definition~\\ref{de:W-game}.\nA \\emph{behavioral W-strategy}\nfor player~$\\player \\in \\PLAYER$ \nis a family \n\\( \\beta^{\\player} = 
\n\\sequence{\\beta^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}} \\),\nwhere \n\\begin{equation}\n\\beta^{\\player}_{\\agent} :\n\\HISTORY \\times \\tribu{\\Control}_{\\agent} \\to [0,1] \n\\eqsepv \n\\np{\\history, \\Control_{\\agent}} \\mapsto \\beta^{\\player}_{\\agent}\n\\conditionalySet{\\Control_{\\agent}}{\\history} \n\\end{equation}\nis an \\( \\tribu{\\Information}_{\\agent} \\)-measurable stochastic kernel\nfor each \\( \\agent \\in \\AGENT^{\\player} \\), that is,\nif one of the two equivalent statements holds true:\n\\begin{enumerate}\n\\item \n\\label{it:behavioral_W-strategy_abstract}\non the one hand, \nthe function\n\\( \\history \\in \\HISTORY \\mapsto \\beta^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\) \nis \\( \\tribu{\\Information}_{\\agent} \\)-measurable,\nfor any \\( \\control_{\\agent}\\in\\CONTROL_{\\agent} \\) and,\non the other hand, \neach \\( \\beta^{\\player}_{\\agent}\n\\conditionalySet{ \\cdot }{ \\history } \\) is \na probability distribution on the finite set~\\( \\CONTROL_{\\agent} \\),\nfor any \\( \\history \\in \\HISTORY \\),\n\\item \n\\label{it:behavioral_W-strategy}\non the one hand, \n\\( \\history' \\sim_{\\agent} \\history'' \\implies\n\\beta^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history'}\n=\n\\beta^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history''}\n \\), for any \\( \\control_{\\agent}\\in\\CONTROL_{\\agent} \\)\nand, on the other hand,\nfor any \\( \\history \\in \\HISTORY \\),\nwe have \\( \\beta^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\geq 0 \\),\n\\( \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \\),\nand\n\\( \\sum_{ \\control_{\\agent}\\in\\CONTROL_{\\agent} } \n\\beta^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} = 1 \\).\n\\end{enumerate}\n\\label{de:behavioral_W-strategy}\n\\end{definition}\nThe equivalences come from the fact that the sets \\( \\HISTORY \\) and \\( 
\\CONTROL_{\\agent} \\)\nare finite and equipped with their respective complete fields, and\nby Proposition~\\ref{pr:measurability}, \nand especially~\\eqref{eq:strategy_atoms_rho}. \n\n\n\\subsection{Relations between product-mixed and behavioral W-strategies}\n\\label{Strategy_equivalence}\n\nHere, we show that product-mixed and behavioral W-strategies\nare ``equivalent'' in the sense that\na product-mixed W-strategy naturally induces a \nbehavioral W-strategy, and that a behavioral W-strategy can be ``realized''\nas a product-mixed W-strategy\n(see Figure~\\ref{fig:randomization}).\n\n\\subsubsection*{From product-mixed to behavioral W-strategies}\n\nWe prove that a product-mixed W-strategy naturally induces a \nbehavioral W-strategy.\n\n\\begin{proposition}\n \\label{pr:PMtoB}\n We consider a finite W-game, as in Definition~\\ref{de:W-game},\n and a player \\( \\player \\in \\PLAYER \\).\n\n For any product-mixed W-strategy\n \\( \\pi^{\\player}=\n \\sequence{ \\pi^{\\player}_{\\agent}}{\\agent \\in\n \\AGENT^{\\player}} \\in \\prod_{\\agent \\in \\AGENT^{\\player}} \\Delta\\np{\\POLICY_{\\agent}} \\), \n as in Definition~\\ref{de:product-mixed_W-strategy}, \n we define, for any agent~\\( \\agent\\in\\AGENT^{\\player} \\), \n \\begin{equation}\n \\PMtoB{\\pi}^{\\player}_{\\agent}\n \\conditionaly{\\control_{\\agent}}{\\history}\n =\n \\pi^{\\player}_{\\agent}\n \\Bp{\n \\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}}}\n \n \\eqsepv \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \n \\eqsepv \\forall \\history \\in \\HISTORY \n \\eqfinp\n \\label{eq:PMtoB}\n \\end{equation}\n Then, \\( \\PMtoB{\\pi}^{\\player}=\n \\sequence{ \\PMtoB{\\pi}^{\\player}_{\\agent}}{\\agent \\in\n \\AGENT^{\\player}} \\) is a behavioral W-strategy, \n as in Definition~\\ref{de:behavioral_W-strategy}.\n\\end{proposition}\n\n\n\\begin{proof}\nLet be given a product-mixed W-strategy\n\\( \\pi^{\\player}=\n\\sequence{ 
\\pi^{\\player}_{\\agent}}{\\agent \\in\n \\AGENT^{\\player}} \\in \\prod_{\\agent \\in \\AGENT^{\\player}} \\Delta\\np{\\POLICY_{\\agent}} \\). \n\nTo prove that~\\eqref{eq:PMtoB} defines a \nbehavioral W-strategy, \nwe have to show (see Item~\\ref{it:behavioral_W-strategy_abstract}\nin Definition~\\ref{de:behavioral_W-strategy}), \non the one hand, that the function\n\\( \\history \\in \\HISTORY \\mapsto \\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\) \nis \\( \\tribu{\\Information}_{\\agent} \\)-measurable,\nfor any \\( \\control_{\\agent}\\in\\CONTROL_{\\agent} \\) and,\non the other hand, that \neach \\( \\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionalySet{ \\cdot }{ \\history } \\) is \na probability on the finite set \\( \\CONTROL_{\\agent} \\),\nfor any \\( \\history \\in \\HISTORY \\).\nFor this purpose, we will use the more practical characterization of \nItem~\\ref{it:behavioral_W-strategy}\nin Definition~\\ref{de:behavioral_W-strategy}.\n\nLet us fix \\( \\control_{\\agent}\\in\\CONTROL_{\\agent} \\).\nLet \\( \\history', \\history'' \\in \\HISTORY \\) be such that\n\\( \\history' \\sim_{\\agent} \\history'' \\),\nwhere we recall that the classes of the\nequivalence relation~$\\sim_{\\agent}$ in~\\eqref{eq:bracket_HISTORY}\nare exactly the atoms in~\\( \\crochet{\\tribu{\\Information}_{\\agent}} \\).\nBy~\\eqref{eq:common_value_equivalence_class_rho}, we have that \n\\( \\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n{\\policy_{\\agent}\\np{\\history'}=\\control_{\\agent}}\n=\n\\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n{\\policy_{\\agent}\\np{\\history''}=\\control_{\\agent}}\n\\). 
\nTherefore, from the expression~\\eqref{eq:PMtoB},\nwe have obtained that \n\\( \\history' \\sim_{\\agent} \\history'' \\implies\n\\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history'}\n=\n\\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history''}\n \\), hence that the function\n\\( \\history \\in \\HISTORY \\mapsto \\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\) \nis \\( \\tribu{\\Information}_{\\agent} \\)-measurable,\nby Proposition~\\ref{pr:measurability}, \nand especially~\\eqref{eq:strategy_atoms_rho}. \n\nBy the expression~\\eqref{eq:PMtoB},\nwe have that \n\\( \\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\geq 0 \\),\n\\( \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \\)\nand, since \\( \\pi^{\\player}_{\\agent} \\)\nis a probability distribution on~\\( \\POLICY_{\\agent} \\),\nthat \n\\( \\sum_{ \\control_{\\agent}\\in\\CONTROL_{\\agent} } \n\\PMtoB{\\pi}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} = 1 \\).\n\\medskip\n\nThis ends the proof.\n\\end{proof}\n\n\n\\subsubsection*{From behavioral to product-mixed W-strategies}\n\nWe prove that a behavioral W-strategy can be ``realized''\nas a product-mixed W-strategy.\n\n\\begin{proposition}\n \\label{pr:BtoPM} \n We consider a finite W-game, as in Definition~\\ref{de:W-game},\n and a player \\( \\player \\in \\PLAYER \\).\n\n For any behavioral W-strategy \\( \\beta^{\\player} = \n \\sequence{\\beta^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}}\n \\), as in Definition~\\ref{de:behavioral_W-strategy},\n there exists a product-mixed W-strategy\n \\( \\BtoPM{\\beta}^{\\player} = \n \\sequence{\\BtoPM{\\beta}^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}}\\),\n as in Definition~\\ref{de:product-mixed_W-strategy}, \n with the property that, for any agent~\\( \\agent \\) in \\(\\AGENT^{\\player}\\), we have\n \\begin{equation}\n 
\\BtoPM{\\beta}^{\\player}_{\\agent}\n \\Bp{\\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}}}\n = \n \\beta^{\\player}_{\\agent}\n \\conditionaly{\\control_{\\agent}}{\\history}\n \n \\eqsepv \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \n \\eqsepv \\forall \\history \\in \\HISTORY \n \\eqfinp\n \\label{eq:BtoPM}\n \\end{equation}\n\\end{proposition}\n\\begin{subequations}\n \\begin{proof} \n We consider a fixed agent~\\( \\agent\\in\\AGENT^{\\player} \\).\n\n On the one hand,\n by Proposition~\\ref{pr:measurability}, we get that\n \n \n \\begin{equation*}\n \\policy_{\\agent} \\in \\POLICY_{\\agent} \\iff\n \\exists \\sequence{\\control_{\\agent}^{G_{\\agent}}}{%\n G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}} \n \\in \n \\CONTROL_{\\agent}^{\\crochet{\\tribu{\\Information}_{\\agent}}}\n \\eqsepv \n \\policy_{\\agent}\\np{\\history}=\\control_{\\agent}^{G_{\\agent}}\n \\eqsepv \\forall G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}\n \\eqsepv \\forall \\history \\in G_{\\agent}\n \\eqfinv\n \n \\end{equation*}\nwhere \\( \\CONTROL_{\\agent}^{\\crochet{\\tribu{\\Information}_{\\agent}}} \\) is the\nset of mappings from~\\( \\crochet{\\tribu{\\Information}_{\\agent}} \\) to~\\(\n\\CONTROL_{\\agent} \\).\nTherefore, the following mapping is a bijection:\n \\begin{equation}\n \\Psi : \\POLICY_{\\agent} \\to \n \\CONTROL_{\\agent}^{\\crochet{\\tribu{\\Information}_{\\agent}}}\n \\eqsepv\n \\policy_{\\agent} \\mapsto \n \\sequence{\\policy_{\\agent}\\np{G_{\\agent}}}{%\n G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}} \n \\eqfinp\n \\label{eq:BtoPM_proof_bijection} \n \\end{equation}\n We denote the inverse bijection by\n \\begin{equation}\n \\Phi=\\Psi^{-1} : \n \\CONTROL_{\\agent}^{\\crochet{\\tribu{\\Information}_{\\agent}}}\n \\to \\POLICY_{\\agent} \n \\eqfinp\n \\label{eq:BtoPM_proof_bijection_inverse} \n \\end{equation}\n\n On the other hand, \n by 
Item~\\ref{it:behavioral_W-strategy_abstract}\n in Definition~\\ref{de:behavioral_W-strategy}, each \n \\( \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ \\history } \\) is \n a probability on the finite set \\( \\CONTROL_{\\agent} \\),\n for any \\( \\history \\in \\HISTORY \\).\n As the mapping \\( \\history \\in \\HISTORY \\mapsto \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ \\history } \\) is \n \\( \\tribu{\\Information}_{\\agent} \\)-measurable,\n by Definition~\\ref{de:behavioral_W-strategy}, \n the notation \\( \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } \\) \n makes sense by~\\eqref{eq:common_value_rho}.\n\n We equip the finite set \n \\( \\CONTROL_{\\agent}^{\\crochet{\\tribu{\\Information}_{\\agent}}} \\)\n with the product probability\n \\( \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } \\),\n and we define the pushforward probability\n \\begin{equation}\n \\BtoPM{\\beta}^{\\player}_{\\agent}\n = \\Bp{ \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } }\n \\circ \\Phi^{-1} \\eqsepv\n \\label{eq:BtoPM_proof_probability}\n \\end{equation}\n on the finite set~\\( \\POLICY_{\\agent} \\).\n Then, we calculate, for any \\( \\history \\in \\HISTORY \\), \n \\begin{align*}\n \\BtoPM{\\beta}^{\\player}_{\\agent}\n &\n \\Bp{\\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}}}\n \\\\\n &= \n \\Bp{ \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } }\n \\Bp{ \\Phi^{-1} \\bp{ \\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}} }}\n \\tag{by 
definition~\\eqref{eq:BtoPM_proof_probability}}\n \\\\\n &=\n \\Bp{ \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } }\n \\Bp{ \\Psi \\bp{\\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}} }}\n \\tag{as \\( \\Phi^{-1}=\\Psi \\) by~\\eqref{eq:BtoPM_proof_bijection_inverse}}\n \\\\\n &=\n \\Bp{ \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } }\n \\Bp{ \\{ \\control_{\\agent} \\} \\times\n \\CONTROL_{\\agent}^{ \\crochet{\\tribu{\\Information}_{\\agent}}\\backslash \\bracket{\\history}_{\\agent} }\n }\n \\intertext{because, by definition of the mapping~$\\Psi$\n in~\\eqref{eq:BtoPM_proof_bijection},\n any \\( \\policy_{\\agent} \\in\n \\defset{ \\policy'_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy'_{\\agent}\\np{\\history}=\\control_{\\agent}} \\) \n takes the value~$\\control_{\\agent}$ \n on the atom~\\( \\bracket{\\history}_{\\agent} \\)\n (by definition of the set~\\( \\POLICY_{\\agent} \\)\n in Definition~\\ref{de:W-strategy} and by~\\eqref{eq:common_value_equivalence_class_rho}),\n and any possible value in~\\( \\CONTROL_{\\agent} \\) for all the \n other atoms in \\( \\crochet{\\tribu{\\Information}_{\\agent}}\n \\backslash \\bracket{\\history}_{\\agent} \\) }\n \n &= \n \\beta^{\\player}_{\\agent}\n \\conditionaly{ \\control_{\\agent} }{ \\bracket{\\history}_{\\agent} }\n \\times\n \\prod_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}\n \\backslash \\bracket{\\history}_{\\agent} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\CONTROL_{\\agent} }{ G_{\\agent} }\n \\tag{by definition of the product probability\n \\( \\bigotimes_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} } \n \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\cdot }{ G_{\\agent} } \\) 
}\n \\\\\n &= \n \\beta^{\\player}_{\\agent}\n \\conditionaly{ \\control_{\\agent} }{ \\bracket{\\history}_{\\agent} }\n \\times\n \\prod_{ G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}\n \\backslash \\bracket{\\history}_{\\agent} } \n 1\n \\tag{as \\( \\beta^{\\player}_{\\agent}\n \\conditionalySet{ \\CONTROL_{\\agent} }{ G_{\\agent} }=1 \\)\n for all \\( G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}} \\)}\n \\\\\n &= \n \\beta^{\\player}_{\\agent}\n \\conditionaly{ \\control_{\\agent} }{ \\bracket{\\history}_{\\agent} }\n \\\\\n &= \n \\beta^{\\player}_{\\agent}\n \\conditionaly{\\control_{\\agent}}{\\history}\n \\end{align*}\n as the function \\( \\history \\in \\HISTORY \\mapsto \n \\beta^{\\player}_{\\agent}\n \\conditionaly{\\control_{\\agent}}{\\history} \\) is \n \\( \\tribu{\\Information}_{\\agent} \\)-measurable,\n by Item~\\ref{it:behavioral_W-strategy_abstract}\n in Definition~\\ref{de:behavioral_W-strategy}, \n and using~\\eqref{eq:common_value_equivalence_class_rho}.\n \\medskip\n\n This ends the proof. 
\n \\end{proof}\n\\end{subequations}\n\n\n\n\n\\section{Kuhn's equivalence theorem}\n\\label{Kuhn_Theorem}\n\nNow, we are equipped to give, for games in intrinsic form, a statement and a proof of the \ncelebrated equivalence theorem of Kuhn: when a player enjoys perfect recall, \nfor any mixed W-strategy, there is an equivalent behavioral strategy.\n\n\\begin{figure}\n \\centering\n \\[\n \\begin{tikzcd}[column sep=\"2cm\",row sep=\"4cm\",cells={nodes={draw=gray}},labels={font=\\everymath\\expandafter{\\the\\everymath\\textstyle}}]\n & \\substack{\\textrm{\\normalsize mixed} \\\\ \\textrm{\\normalsize W-strategies}}\n \n \\arrow{dr}[description]{{\\textrm{Proposition~\\ref{pr:MtoB_PerfectRecall} under perfect recall}}}\n &\n \\\\\n \\substack{\\textrm{\\normalsize product mixed} \\\\ \\textrm{\\normalsize W-strategies}}\n \\arrow[ru,bend left=10,\"\\textrm{injection}\" ]\n \\arrow[rr,bend left=10,\"\\textrm{Proposition~\\ref{pr:PMtoB}}\"]\n &\n &\n \\substack{\\textrm{\\normalsize behavioral} \\\\ \\textrm{\\normalsize W-strategies}}\n \\arrow[ll,bend left=10,\"\\textrm{Proposition~\\ref{pr:BtoPM}}\"] \n \n \\end{tikzcd}\n \\]\n \\caption{\\label{fig:randomization}Three Propositions that relate three notions of\n randomization of strategies}\n\\end{figure}\n\nIn this section, we consider a causal finite W-game\n(see Definition~\\ref{de:W-game}), \nthat is, one whose underlying W-model (as in Definition~\\ref{de:W-model}) \nis causal (see Definition~\\ref{de:causality}), \nwith a suitable configuration-ordering $\\varphi: \\HISTORY \\to \\Sigma^{| \\AGENT |}$.\nIn~\\S\\ref{Perfect_recall}, we introduce a formal definition of perfect recall \nin a causal finite game in intrinsic form. 
\nIn~\\S\\ref{Preliminary_results} (Proposition~\\ref{pr:MtoB_PerfectRecall}), we show that \nany mixed W-strategy induces a behavioral W-strategy under perfect recall\n(see Figure~\\ref{fig:randomization}).\nFinally, in~\\S\\ref{Main_result} (Theorem~\\ref{th:Kuhn}), we give a statement and\na proof of Kuhn's equivalence theorem for games in intrinsic form. \n\n\n\\subsection{Definition of perfect recall for causal W-games}\n\\label{Perfect_recall}\n\nFor any agent~\\( \\agent\\in\\AGENT \\), \nwe define the \\emph{choice field}~\\( \\tribu{C}_{\\agent} \n\\subset \\tribu{\\History} \\) by\n \\begin{equation}\n\\tribu{C}_{\\agent} = \\tribu{\\Control}_{\\agent} \\bigvee \n\\tribu{\\Information}_{\\agent} \\eqsepv \\forall \\agent \\in \\AGENT\n\\eqfinp\n \\label{eq:ChoiceField}\n \\end{equation}\nThus defined, the choice field of an agent contains both what the agent did\nand what he knew when making the decision.\n\nThe following definition of perfect recall is new.\n\\begin{subequations}\n %\n\\begin{definition}\nWe consider a causal finite W-game for which the underlying W-model \nis causal with the configuration-ordering $\\varphi: \\HISTORY \\to \\Sigma^{| \\AGENT |}$.\n\n We say that a player \\( \\player \\in \\PLAYER \\) enjoys \\emph{perfect recall} \nif, for any ordering \\( \\kappa \\in \\Sigma \\)\nsuch that~\\( \\LastElement{\\kappa} \\in \\AGENT^{\\player} \\)\n(that is, the last agent is an executive of the player), we have \n \\begin{equation}\n\\HISTORY_{\\kappa}^{\\varphi} \\cap \\History \\in \n\\tribu{\\Information}_{\\LastElement{\\kappa}}\n\\eqsepv \n\\forall \\History \\in \\tribu{C}_{\\range{\\FirstElements{\\kappa}} \\cap\n \\AGENT^{\\player}} \n\\eqfinv \n \\label{eq:PerfectRecall}\n \\end{equation}\nwhere the subset~$\\HISTORY_{\\kappa}^{\\varphi} \\subset \\HISTORY$ of configurations \nhas been defined in~\\eqref{eq:HISTORY_k_kappa},\nthe last agent~$\\LastElement{\\kappa}$ in~\\eqref{LastElement_kappa},\nthe partial 
ordering~$\\FirstElements{\\kappa}$ in~\\eqref{FirstElements_kappa},\nthe range~$\\range{\\FirstElements{\\kappa}}$ in~\\eqref{range_kappa},\nand where\n\\begin{equation}\n\\tribu{C}_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} =\n\\bigvee \\limits_{\\substack{\\agent \\in \\range{\\FirstElements{\\kappa}}\\\\\n\\agent \\in \\AGENT^{\\player}}}\n\\tribu{C}_{\\agent} \n\\eqfinv\n \\label{eq:PastChoiceField}\n\\end{equation}\nwith the choice subfield \\( \\tribu{C}_{\\agent} \\subset \\tribu{\\History} \\) given by~\\eqref{eq:ChoiceField}.\n \\label{de:PerfectRecall}\n\\end{definition}\n\\end{subequations}\nWe interpret the above definition \nas follows.\nA player enjoys perfect recall when any of her executive\nagents --- when called to play as the last one in an ordering --- \nknows at least what was done and what was known by those of the \nexecutive agents that are both his predecessors \n(in the range of the ordering under consideration)\nand executive agents of the player.\n\n\n\\subsection{Mixed W-strategy induces behavioral W-strategy under perfect recall}\n\\label{Preliminary_results}\n\nAs a preparatory result for the proof of Kuhn's equivalence theorem, we show that \nany mixed W-strategy induces a behavioral W-strategy under perfect recall.\n\n\\begin{proposition}\nWe consider a causal finite W-game\n(see Definition~\\ref{de:W-game}),\nfor which the underlying W-model (see Definition~\\ref{de:W-model}) \nis causal (see Definition~\\ref{de:causality}),\nwith the configuration-ordering $\\varphi: \\HISTORY \\to \\Sigma^{| \\AGENT |}$.\n\nWe consider a player \\( \\player \\in \\PLAYER \\),\nequipped with a mixed W-strategy \\( \\mu^{\\player} \\in\n\\Delta\\np{\\POLICY^{\\player}} \\) and \nsupposed to enjoy perfect recall (see Definition~\\ref{de:PerfectRecall}).\n\nThen, for each agent \\( \\agent \\in \\AGENT^{\\player} \\), \nthe following formula\\footnote{%\nWith the convention that \n\\( 
\\MtoB{\\mu}^{\\player}_{\\agent}\\np{ \\{\\control_{\\agent}\\} \\mid\n \\history}=0 \\) if the denominator\nis zero (in which case the numerator, which is smaller, is also zero),\nand using the notations\n\\( \\policy=\\sequence{\\policy_{\\agent}}{\\agent \\in \\AGENT^{\\player}}\n\\in\\POLICY^{\\player} \\), \n\\eqref{eq:sub_history_BGENT} for \\( \\history_\\mathbb{B} \\)\nand~\\eqref{eq:sub_wstrategy_BGENT} for \\( \\policy_\\mathbb{B} \\), \nwith \\( \\mathbb{B} = \\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player} \\),\nwhere\nthe last agent~$\\LastElement{\\kappa}$ has been defined in~\\eqref{LastElement_kappa},\nthe partial ordering~$\\FirstElements{\\kappa}$ in~\\eqref{FirstElements_kappa}\nand the range~$\\range{\\FirstElements{\\kappa}}$ in~\\eqref{range_kappa}.\n} \n\\begin{equation}\n \\begin{split}\n \\MtoB{\\mu}^{\\player}_{\\agent}\\conditionaly{\\control_{\\agent}}{\\history}\n= \\\\\n\\frac{ \\mu^{\\player} \\defset{ \\policy\\in\\POLICY^{\\player} }{ \n\\exists \\kappa \\in \\Sigma,\n\\, \\history\\in\\HISTORY_{\\kappa}^{\\varphi} ,\n\\, \\LastElement{\\kappa}=\\agent,\n\\, \\policy_{\\agent}\\np{\\history}=\\control_{\\agent},\n\\, \\policy_{ \\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player} }\\np{\\history} \n= \\history_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} } }%\n{ \\mu^{\\player} \\defset{ \\policy\\in\\POLICY^{\\player} }{ \\exists \\kappa \\in \\Sigma,\n\\, \\history\\in\\HISTORY_{\\kappa}^{\\varphi} ,\n\\, \\LastElement{\\kappa}=\\agent,\n\\, \\policy_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}}\\np{\\history} \n= \\history_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} } } \n \\end{split}\n\\label{eq:MtoB_PerfectRecall}\n\\end{equation}\ndefines an \\( \\tribu{\\Information}_{\\agent} \\)-measurable stochastic kernel\n\\( \\MtoB{\\mu}^{\\player}_{\\agent}\\).\nAs a consequence, the family \n\\( \\MtoB{\\mu}^{\\player} = 
\n\\sequence{\\MtoB{\\mu}^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}} \\)\nis a behavioral W-strategy, as in Definition~\\ref{de:behavioral_W-strategy}.\n\\label{pr:MtoB_PerfectRecall}\n\\end{proposition}\n\n\\begin{subequations}\n %\n\\begin{proof}\nWe consider a player \\( \\player \\in \\PLAYER \\),\nequipped with a mixed W-strategy \\( \\mu^{\\player} \\in\n\\Delta\\np{\\POLICY^{\\player}} \\) and \nsupposed to enjoy perfect recall as in Definition~\\ref{de:PerfectRecall},\nand an agent~\\( \\agent\\in\\AGENT^{\\player} \\).\n\\medskip\n\nBy~\\eqref{eq:MtoB_PerfectRecall}, it is easy to see that, \nfor any \\( \\history \\in \\HISTORY \\),\nwe have \\( \\MtoB{\\mu}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\geq 0 \\),\n\\( \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \\),\nand\n\\( \\sum_{ \\control_{\\agent}\\in\\CONTROL_{\\agent} } \n\\MtoB{\\mu}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} = 1 \\).\nTherefore, by Item~\\ref{it:behavioral_W-strategy_abstract}\nin Definition~\\ref{de:behavioral_W-strategy}, \nit remains to prove that \nthe function \\( \\history \\in \\HISTORY \\mapsto \n\\MtoB{\\mu}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\) is \n\\( \\tribu{\\Information}_{\\agent} \\)-measurable.\nThe proof is in several steps.\n\\medskip\n\n\\noindent$\\bullet$\nAs the agent~\\( \\agent\\in\\AGENT^{\\player} \\) is fixed,\nit is easily seen that the family \\( \\sequence{ \\HISTORY_{\\kappa}^{\\varphi} }{\n\\kappa \\in \\Sigma, \\, \\LastElement{\\kappa}=\\agent } \\),\nwhere the subset~$\\HISTORY_{\\kappa}^{\\varphi} \\subset \\HISTORY$ of configurations \nhas been defined in~\\eqref{eq:HISTORY_k_kappa}, is made of\n(possibly empty) disjoint sets whose union is~\\( \\HISTORY \\).\nIndeed, on the one hand, for any \\( \\history\\in \\HISTORY \\), we have that \n \\( \\history\\in \\HISTORY_{\\kappa_0}^{\\varphi} \\) where \n\\( \\kappa_0=\\psi_k\\np{\\rho} \\), 
with\n\\( \\rho=\\varphi\\np{\\history} \\) \nand $k$ the unique integer such that \\( \\rho(k)=\\agent \\).\nOn the other hand, if we had \\( \\HISTORY_{\\kappa_1}^{\\varphi}\n\\cap \\HISTORY_{\\kappa_2}^{\\varphi} \\neq \\emptyset \\),\nwith \\( \\LastElement{\\kappa_1}=\\LastElement{\\kappa_2}=\\agent \\), \nthen \\( \\history\\in \\HISTORY_{\\kappa_1}^{\\varphi}\n\\cap \\HISTORY_{\\kappa_2}^{\\varphi} \\) would be such that\n\\( \\kappa_1=\\kappa_2=\\psi_k\\np{\\rho} \\), with\n\\( \\rho=\\varphi\\np{\\history} \\) \nand $k$ the unique integer such that \\( \\rho(k)=\\agent \\).\nThus, \\( \\kappa_1 \\neq \\kappa_2 \\implies \\HISTORY_{\\kappa_1}^{\\varphi}\n\\cap \\HISTORY_{\\kappa_2}^{\\varphi} = \\emptyset \\).\n\\medskip\n\n\\noindent$\\bullet$\nAs the family \\( \\sequence{ \\HISTORY_{\\kappa}^{\\varphi} }{\n \\kappa \\in \\Sigma, \\, \\LastElement{\\kappa}=\\agent } \\) is made of\n disjoint sets whose union is~\\( \\HISTORY \\), \nwe rewrite~\\eqref{eq:MtoB_PerfectRecall} as\n\\begin{align}\n \\MtoB{\\mu}^{\\player}_{\\agent}\n &\\conditionaly{\\control_{\\agent}}{\\history}\n \\nonumber\n \\\\\n &=\\frac{{\\displaystyle \\mathop{\\sum}_{\\substack{ \\kappa \\in \\Sigma\\\\\\LastElement{\\kappa}=\\agent}}}\n \\mu^{\\player} \\defset{ \\policy\\in\\POLICY^{\\player} }{ \n \n \n \\history\\in\\HISTORY_{\\kappa}^{\\varphi} \n \\eqsepv \\policy_{\\agent}\\np{\\history}=\\control_{\\agent}\n \\eqsepv \\policy_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}}\\np{\\history} \n = \\history_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} } }%\n {{\\displaystyle \\sum_{ \\kappa \\in \\Sigma, \\LastElement{\\kappa}=\\agent}} \n \\mu^{\\player} \\defset{ \\policy\\in\\POLICY^{\\player} }{ \n \\history\\in\\HISTORY_{\\kappa}^{\\varphi} \n \\eqsepv \\policy_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}}\\np{\\history} \n = \\history_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} } }\n \\nonumber\n\\\\\n &=\n \\frac{ 
\\sum_{ \\kappa \\in \\Sigma, \\LastElement{\\kappa}=\\agent}\n \\mu^{\\player}\\ba{\\Phi_\\kappa\\np{\\history,\\control_{\\agent}}} }%\n { \\sum_{ \\kappa \\in \\Sigma, \\LastElement{\\kappa}=\\agent} \n \\sum_{ \\control_{\\agent}\\in\\CONTROL_{\\agent} }\n \\mu^{\\player}\\ba{\\Phi_\\kappa\\np{\\history,\\control_{\\agent}}} }\n \\eqfinv\n \\nonumber\n \n \\intertext{where, for any \\( \\history\\in\\HISTORY \\),\n \\( \\control_{\\agent}\\in\\CONTROL_{\\agent} \\) and \\( \\kappa \\in \\Sigma \\) such that \n \\( \\LastElement{\\kappa}=\\agent \\), we have defined the following subset of strategies}\n \\Phi_\\kappa\\np{\\history,\\control_{\\agent}} \n &=\n \\label{eq:proof_Phi_kappa}\n \\\\\n &\n \\defset{ \\policy\\in\\POLICY^{\\player} }{\n \\history\\in\\HISTORY_{\\kappa}^{\\varphi} \n \\eqsepv \\policy_{\\agent}\\np{\\history}=\\control_{\\agent}\n \\eqsepv \\policy_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}}\\np{\\history} \n = \\history_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}} } \n \\subset \\POLICY^{\\player} \n \\eqfinp\n \\nonumber\n\\end{align}\nWe will prove, in three steps, that \\( \\Phi_{\\kappa}\\np{\\history,\\control_{\\agent}} \\) in~\\eqref{eq:proof_Phi_kappa} \ntakes the same (set) value for any \\( \\history \\in G_{\\agent} \\),\nwhere $G_{\\agent}$ is an atom of~\\( \\tribu{\\Information}_{\\agent} \\).\n\\medskip\n\n\\noindent$\\bullet$\nLet $G_{\\agent} \\subset \\HISTORY$ be an atom of~\\( \\tribu{\\Information}_{\\agent} \\).\nWe prove that there exists a unique \\( \\kappa_0 \\in \\Sigma \\) \nsuch that \\( \\LastElement{\\kappa_0}=\\agent \\)\nand \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\),\nthat is, we prove that \n\\begin{equation}\nG_{\\agent} \\in \\crochet{ \\tribu{\\Information}_{\\agent} }\n\\implies \\exists ! 
\\, \\kappa_0 \\in \\Sigma \\eqsepv\n\\LastElement{\\kappa_0}=\\agent \\eqsepv \nG_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi}\n\\eqfinp\n \\label{eq:proof_unique_kappa_0}\n\\end{equation}\n\nFirst, we show that, for any \\( \\kappa \\in \\Sigma \\) such that \n\\( \\LastElement{\\kappa}=\\agent \\), we have \neither \\( G_{\\agent} \\subset \\HISTORY_{\\kappa}^{\\varphi} \\)\nor \\( G_{\\agent} \\cap \\HISTORY_{\\kappa}^{\\varphi} = \\emptyset \\).\nIndeed, by~\\eqref{eq:PerfectRecall} with \n\\( \\History = \\HISTORY \\in \\tribu{C}_{\\range{\\FirstElements{\\kappa}} \\cap \\AGENT^{\\player}}\\), \nwe obtain that \\( \\HISTORY_{\\kappa}^{\\varphi} \n\\in \\tribu{\\Information}_{\\agent} \\).\nTherefore, either \\( G_{\\agent} \\cap \\HISTORY_{\\kappa}^{\\varphi} = \\emptyset \\) \nor \\( G_{\\agent} \\subset \\HISTORY_{\\kappa}^{\\varphi} \\)\nby~\\eqref{eq:atom_property} since\n$G_{\\agent}$ is an atom of~\\( \\tribu{\\Information}_{\\agent} \\).\nSecond, we have seen that the family \\( \\sequence{ \\HISTORY_{\\kappa}^{\\varphi} }{\n\\kappa \\in \\Sigma, \\, \\LastElement{\\kappa}=\\agent } \\) is made of\ndisjoint sets whose union is~\\( \\HISTORY \\).\n\nBy combining both results, we conclude that there exists a unique \\( \\kappa_0 \\in \\Sigma \\) \nsuch that \\( \\LastElement{\\kappa_0}=\\agent \\)\nand \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\).\n\\medskip\n\n\\noindent$\\bullet$\nAs a consequence of~\\eqref{eq:proof_unique_kappa_0}, we have that \n\\( \\Phi_\\kappa\\np{\\history,\\control_{\\agent}} =\\emptyset \\),\nfor any \\( \\history\\in G_{\\agent} \\), and \nfor any \\( \\kappa \\in \\Sigma \\) such that \\( \\kappa \\neq \\kappa_0 \\)\nand \\( \\LastElement{\\kappa}=\\agent \\).\nIt only remains to prove that \\( \\Phi_{\\kappa_0}\\np{\\history,\\control_{\\agent}} \\) \nin~\\eqref{eq:proof_Phi_kappa} takes the same (set) value\nfor any \\( \\history \\in G_{\\agent} \\). 
For this purpose, \nwe consider $\\history', \\history'' \\in \\HISTORY $ which belong to \nthe atom~$G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}$, that is,\n\\( \\{\\history', \\history''\\} \\subset G_{\\agent} \\),\nand we establish two preliminary results.\n\nFirst, we prove that \n\\( \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\).\nFor this purpose, we define the subset \n\\( \\History'=\\defset{ \\history \\in \\HISTORY }{\n\\history_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\n=\\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} } \n\\subset \\HISTORY \\) and we show in two steps that \n\\( \\history'' \\in \\History' \\), hence that \n\\( \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\), that is,\nwe show that \n\\begin{equation}\n\\{\\history', \\history''\\} \\subset G_{\\agent} \n\\implies\n \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\n\\eqfinp\n \\label{eq:proof_recall_decisions}\n\\end{equation}\n\n\\begin{description}\n\\item[-]\nWe show that \n\\( \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \\in \\tribu{\\Information}_{\\agent} \\).\nBy definition of the field~\\( \\tribu{\\Control}_{\\range{\\FirstElements{\\kappa_0}}\n \\cap \\AGENT^{\\player}} \\) in~\\eqref{eq:sub_control_field_BGENT} \nwith \\( \\mathbb{B} = \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\),\nand because each field~\\( \\tribu{\\Control}_{b} \\), for\n\\( b \\in \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\), is complete,\nhence has the singletons as atoms, \nwe have that \\( \\History' \\in \\tribu{\\Control}_{\\range{\\FirstElements{\\kappa_0}} \\cap 
\\AGENT^{\\player}} \\).\nAs \\( \\tribu{\\Control}_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \n\\subset \n\\tribu{C}_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\)\nby~\\eqref{eq:PastChoiceField}, we use the perfect recall\nassumption~\\eqref{eq:PerfectRecall}\nwith \\( \\History = \\History' \\),\nand obtain that \\( \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \n\\in \\tribu{\\Information}_{\\LastElement{\\kappa_0}}=\\tribu{\\Information}_{\\agent}\n\\) since \\( \\LastElement{\\kappa_0}=\\agent \\).\n\\item[-]\nAs \\( \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \\in\n\\tribu{\\Information}_{\\agent} \\)\nand $G_{\\agent}$ is an atom of~\\( \\tribu{\\Information}_{\\agent} \\),\nwe have either \n\\( G_{\\agent} \\cap \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' =\\emptyset \\)\nor \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \\)\nby~\\eqref{eq:atom_property}.\nNow, as \\( \\history' \\in \\History' \\) and \n\\( \\history' \\in G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\),\nwe get that \n\\( G_{\\agent} \\cap \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \\neq \\emptyset \\),\nand therefore \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\cap \\History' \\).\nSince \\( \\history'' \\in G_{\\agent} \\), we deduce that \n\\( \\history'' \\in \\History' \\), hence that \n\\( \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\).\n\\end{description}\n\nSecond, we prove that, for any \\( b \\in \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\)\nand any atom $G_b$ of~\\( \\tribu{\\Information}_{b} \\),\nwe have either \n\\( \\{\\history', \\history''\\} \\cap G_b =\\emptyset\\)\nor \\( \\{\\history', \\history''\\} \\subset G_b \\), that is, \nwe prove that\n\\begin{equation}\n \\begin{split}\n\\Bp{ b \\in \\range{\\FirstElements{\\kappa_0}} 
\\cap \\AGENT^{\\player} \n\\text{ and }\nG_b \\in \\crochet{ \\tribu{\\Information}_{b} } }\n\\implies \\\\\n\\Bp{ \\{\\history', \\history''\\} \\cap G_b =\\emptyset\n\\text{ or }\n\\{\\history', \\history''\\} \\subset G_b }\n\\eqfinp\n \\end{split}\n \\label{eq:proof_recall_information}\n\\end{equation}\nFor this purpose, we consider \n\\( b \\in \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\)\nand \\( G_b \\in \\crochet{ \\tribu{\\Information}_{b} } \\).\n\nAs \\( \\tribu{\\Information}_{b} \\subset \n\\tribu{C}_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\)\nby~\\eqref{eq:PastChoiceField}, we use the perfect recall\nassumption~\\eqref{eq:PerfectRecall}\nwith \\( \\History = G_b \\),\nand obtain that \\( \\HISTORY_{\\kappa_0}^{\\varphi} \\cap G_b\n\\in \\tribu{\\Information}_{\\LastElement{\\kappa_0}}=\\tribu{\\Information}_{\\agent}\n\\) since \\( \\LastElement{\\kappa_0}=\\agent \\) by~\\eqref{eq:proof_unique_kappa_0}.\nAs a consequence, as \\( \\HISTORY_{\\kappa_0}^{\\varphi} \\cap G_b\n\\in \\tribu{\\Information}_{\\agent} \\)\nand $G_{\\agent}$ is an atom of~\\( \\tribu{\\Information}_{\\agent} \\),\nwe have either \n\\( G_{\\agent} \\cap \\HISTORY_{\\kappa_0}^{\\varphi} \\cap G_b =\\emptyset \\)\nor \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\cap G_b \\)\nby~\\eqref{eq:atom_property}.\nSince \\( \\{\\history', \\history''\\} \\subset G_{\\agent} \\) by assumption,\nwhere \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\)\nby~\\eqref{eq:proof_unique_kappa_0}, we conclude\nthat either \\( \\{\\history', \\history''\\} \\subset G_b \\)\nor \\( \\{\\history', \\history''\\} \\cap G_b =\\emptyset\\).\n\\medskip\n\n\\noindent$\\bullet$\nWe consider an atom~$G_{\\agent} \\in \\crochet{\\tribu{\\Information}_{\\agent}}$,\nand we finally prove that \\( \\Phi_{\\kappa_0}\\np{\\history,\\control_{\\agent}} \\) \nin~\\eqref{eq:proof_Phi_kappa} takes the same (set) value\nfor any \\( \\history \\in 
G_{\\agent} \\).\nFor this purpose, we consider $\\history'$ and $\\history''$ which belong to \nthe atom~$G_{\\agent}$ of~\\( \\tribu{\\Information}_{\\agent} \\), that is,\n\\( \\{\\history', \\history''\\} \\subset G_{\\agent} \\),\nand we show that \n\\( \\policy\\in\\Phi_{\\kappa_0}\\np{\\history',\\control_{\\agent}} \\implies\n\\policy\\in\\Phi_{\\kappa_0}\\np{\\history'',\\control_{\\agent}} \\).\n\nThus, we take \\( \\policy\\in\\Phi_{\\kappa_0}\\np{\\history',\\control_{\\agent}} \\)\nin~\\eqref{eq:proof_Phi_kappa} --- that is, \\(\\policy\\in\\POLICY^{\\player} \\) is such that \n\\( \\history'\\in\\HISTORY_{\\kappa_0}^{\\varphi} \\), \n\\( \\policy_{\\agent}\\np{\\history'}=\\control_{\\agent} \\) \nand \\( \\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history'} \n= \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\) ---\nand we are going to prove in three steps that \n\\( \\history''\\in\\HISTORY_{\\kappa_0}^{\\varphi} \\), \n\\( \\policy_{\\agent}\\np{\\history''}=\\control_{\\agent} \\) \nand \\( \\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history''} \n= \\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\).\n\\begin{description}\n\\item[-]\nFirst, we have that \n\\( \\history''\\in\\HISTORY_{\\kappa_0}^{\\varphi} \\)\nsince \n\\( \\{\\history', \\history''\\} \\subset G_{\\agent} \\) by assumption,\nand \\( G_{\\agent} \\subset \\HISTORY_{\\kappa_0}^{\\varphi} \\) \nby~\\eqref{eq:proof_unique_kappa_0}.\n\\item[-]\nSecond, since \\( \\{\\history', \\history''\\} \\subset G_{\\agent} \\) by assumption\nand since the strategy \\( \\policy_{\\agent} \\) \nis \\( \\tribu{\\Information}_{\\agent} \\)-measurable, we have that\n\\( \\policy_{\\agent}\\np{\\history'}=\\policy_{\\agent}\\np{\\history''} \\)\nby~\\eqref{eq:strategy_atoms_rho}, \nhence \\( \\policy_{\\agent}\\np{\\history''}=\\control_{\\agent} \\) since\n\\( 
\\policy_{\\agent}\\np{\\history'}=\\control_{\\agent} \\) by assumption.\n\\item[-]\nThird, we have shown that, for any \\( b \\in \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\)\nand any atom $G_b$ of~\\( \\tribu{\\Information}_{b} \\),\nwe have either \\( \\{\\history', \\history''\\} \\subset G_b \\)\nor \\( \\{\\history', \\history''\\} \\cap G_b =\\emptyset\\)\nby~\\eqref{eq:proof_recall_information}.\nAs a consequence, the pair \\( \\{\\history', \\history''\\} \\)\nis included in one of the atoms that form \nthe partition~\\( \\crochet{\\tribu{\\Information}_{b}} \\).\nTherefore, we obtain that \n\\( \\policy_{b}\\np{\\history'} =\n\\policy_{b}\\np{\\history''} \\) by~\\eqref{eq:strategy_atoms_rho},\nhence \\( \\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history'} =\n\\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history''}\n\\), using the notation~\\eqref{eq:sub_wstrategy_BGENT} for \\( \\policy_\\mathbb{B} \\), \nwith \\( \\mathbb{B} = \\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player} \\).\nAs we obtained in~\\eqref{eq:proof_recall_decisions}\nthat \\( \\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\), we conclude that \n\\( \\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history''} =\n\\policy_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}\\np{\\history'} =\n\\history'_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}}=\n\\history''_{\\range{\\FirstElements{\\kappa_0}} \\cap \\AGENT^{\\player}} \\).\n\\end{description}\nAs a consequence, we have just proved that\n\\( \\policy\\in\\Phi_{\\kappa_0}\\np{\\history'',\\control_{\\agent}} \\),\nhence that \n\\( \\Phi_{\\kappa_0}\\np{\\history',\\control_{\\agent}} =\n\\Phi_{\\kappa_0}\\np{\\history'',\\control_{\\agent}} \\) \nwhenever \n\\( \\{\\history', 
\\history''\\} \\subset G_{\\agent} \\).\n\\medskip\n\n\\noindent$\\bullet$\nLet $G_{\\agent}$ be an atom of~\\( \\tribu{\\Information}_{\\agent} \\).\nFinally, since \\( \\Phi_{\\kappa}\\np{\\history,\\control_{\\agent}} \\) \nin~\\eqref{eq:proof_Phi_kappa} takes the same (set) value\nfor any \\( \\history \\in G_{\\agent} \\), \nthe expression~\\eqref{eq:MtoB_PerfectRecall} takes the same value\nfor any \\( \\history \\in G_{\\agent} \\), and thus the function\n\\( \\history \\in \\HISTORY \\mapsto \n\\MtoB{\\mu}^{\\player}_{\\agent}\n\\conditionaly{\\control_{\\agent}}{\\history} \\) is \n\\( \\tribu{\\Information}_{\\agent} \\)-measurable.\n\\medskip\n\nThis ends the proof.\n\\end{proof}\n\\end{subequations}\n\n\\subsection{Kuhn's equivalence theorem for causal finite games in intrinsic form}\n\\label{Main_result}\n\nFinally, we give a statement and a proof of Kuhn's equivalence theorem for games in intrinsic form. \n\n\\begin{theorem}\nWe consider a causal finite W-game\n(see Definition~\\ref{de:W-game}),\nand a player \\( \\player \\in \\PLAYER \\) \nsupposed to enjoy perfect recall (see Definition~\\ref{de:PerfectRecall}).\n\nThen, for any mixed W-strategy \\( \\mu^{\\player} \\in\n\\Delta\\np{\\POLICY^{\\player}} \\),\nthere exists a product-mixed W-strategy\n\\( \\pi^{\\player}=\n\\sequence{ \\pi^{\\player}_{\\agent}}{\\agent \\in\n \\AGENT^{\\player}} \\in \\prod_{\\agent \\in \\AGENT^{\\player}} \\Delta\\np{\\POLICY_{\\agent}} \\), \nas in Definition~\\ref{de:product-mixed_W-strategy}, \nsuch that\\footnote{%\nSee Footnote~\\ref{ft:product-mixed_W-strategy}\nfor the abuse of notation \\( \\pi^{\\player}=\n\\otimes_{\\agent\\in\\AGENT^{\\player}}\\pi^{\\player}_{\\agent} \\).}\n\\begin{equation}\n \\QQ^{\\omega}_{\\couple{{\\mu}^{-\\player}}{{\\mu}^{\\player}}} \n= \\QQ^{\\omega}_{\\couple{{\\mu}^{-\\player}}{{\\pi}^{\\player}}}\n\\eqsepv \\forall {\\mu}^{-\\player} \n\\in \\prod_{\\player'\\neq \\player} \\Delta\\bp{ \\POLICY^{\\player'} } \n\\eqsepv 
\\forall \\omega\\in\\Omega\n\\eqfinv\n\\end{equation}\nwhere the probability distribution \\( \\QQ^{\\omega}_{\\mu} \n\\in \\Delta\\bp{\\prod_{b \\in \\AGENT} \\CONTROL_{b}} \\)\nhas been defined in~\\eqref{eq:push_forward_probability}.\n\\label{th:Kuhn}\n\\end{theorem}\n\n\n\\begin{subequations}\n \n \\begin{proof}\n The proof is in three steps. \n \\medskip\n\n \\noindent$\\bullet$\n First, as all the assumptions of\n Proposition~\\ref{pr:MtoB_PerfectRecall} are satisfied,\n there exists a behavioral W-strategy\n \\( \\MtoB{\\mu}^{\\player} = \n \\sequence{\\MtoB{\\mu}^{\\player}_{\\agent}}{\\agent \\in \\AGENT^{\\player}} \\), \n as in Definition~\\ref{de:behavioral_W-strategy},\n which satisfies~\\eqref{eq:MtoB_PerfectRecall}.\n \n By Proposition~\\ref{pr:BtoPM}, we define \n the product-mixed W-strategy\n \\( \\pi^{\\player}=\n \\BtoPM{\\MtoB{\\mu}}^{\\player} \\), that is,\n with the property~\\eqref{eq:BtoPM} that, for any agent~\\( \\agent\\in\\AGENT^{\\player} \\), \n \\begin{equation}\n \n \\pi^{\\player}_{\\agent}\n \\defset{ \\policy_{\\agent}\\in\\POLICY_{\\agent}}%\n {\\policy_{\\agent}\\np{\\history}=\\control_{\\agent}}\n = \n \\MtoB{\\mu}^{\\player}_{\\agent}\n \\conditionaly{\\control_{\\agent}}{\\history}\n \n \\eqsepv \\forall \\control_{\\agent}\\in\\CONTROL_{\\agent} \n \\eqsepv \\forall \\history \\in \\HISTORY \n \\eqfinp\n \\label{eq:Kuhn_proof_BtoPM}\n \\end{equation}\n \\medskip\n\n \\noindent$\\bullet$\n Second, we prove that\\footnote{%\n See Footnote~\\ref{ft:product-mixed_W-strategy}\n for the abuse of notation \\( \\pi^{\\player}=\n \\otimes_{\\agent\\in\\AGENT^{\\player}}\\pi^{\\player}_{\\agent} \\).}\n \\begin{align}\n \\mu^{\\player}\n & \\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\nonumber\n \\\\\n &=\n \\ProductMixedStrategy^{\\player}\n \\defset{ 
\\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\eqsepv \\forall \\history \\in \\HISTORY \n \\eqfinp \n \\label{eq:proof_Kuhn_mixedstrategy_ProductBehavioral}\n \\end{align}\n In what follows, we consider a configuration~\\( \\history\\in\\HISTORY \\)\n and the total ordering~\\( \\rho=\\varphi\\np{\\history} \\in\n \\Sigma^{| \\AGENT |} \\).\n We label the set~$\\AGENT^{\\player}$ of agents of the player~$\\player$ \n by the stage at which each of them plays as follows:\n \\begin{equation}\n \\AGENT^{\\player}=\\{ \\rho(j_1) ,\\ldots, \\rho(j_{N}) \\} \n \\text{ with } j_1 < \\cdots < j_{N}\n \n \\eqfinp \n \\label{eq:labelling}\n \\end{equation}\n \n \n \n With this, we have\\footnote{Using the notation $\\ic{n}=\\na{1,\\ldots,n}$ to shorten some expressions.}\n \\begin{align*}\n \\mu^{\\player}\n &\n \\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\\\\n &=\n \\mu^{\\player}\n \\defset{ \\sequence{\\policy_{\\rho(j_k)}}{k\\in\\ic{N}}\n \\in \\prod_{k=1}^{N} \\POLICY_{\\rho(j_k)} }%\n { \\lambda_{\\rho(j_k)}\\np{\\history} =\\history_{\\rho(j_k)}\n \\eqsepv \\forall k\\in \\ic{N}}\n \\tag{as \\( \\AGENT^{\\player}=\\{ \\rho(j_1) ,\\ldots,\n \\rho(j_{N}) \\} \\)\n by~\\eqref{eq:labelling}}\n \\\\\n &=\n \\prod_{n=1}^{N}\n \\frac{\n \\mu^{\\player}\n \\defset{ \\sequence{\\policy_{\\rho(j_k)}}{k\\in\\ic{n}}\n \\in {\\displaystyle \\mathop{\\prod}_{k=1}^{n}} \\POLICY_{\\rho(j_k)} }%\n { \\lambda_{\\rho(j_k)}\\np{\\history} =\\history_{\\rho(j_k)}\n \\eqsepv \\forall k \\in \\ic{n}}\n }{%\n \\mu^{\\player}\n \\defset{ \\sequence{\\policy_{\\rho(j_k)}}{k\\in\\ic{n{-}1}}\n \\in {\\displaystyle \\prod_{k=1}^{n-1}}\\POLICY_{\\rho(j_k)} }%\n { 
\\lambda_{\\rho(j_k)}\\np{\\history} =\\history_{\\rho(j_k)}\n \\eqsepv \\forall k\\in\\ic{n-1}}\n }\n \\intertext{where, if the smaller term (the one to be found two equality lines above) is zero, every fraction\n is supposed to take the value zero, and, if the smaller term is positive, so\n are all the terms and no denominator is zero}\n \n &=\n \\prod_{n=1}^{N} \n \\MtoB{\\mu}^{\\player}_{\\rho(j_n)}%\n \\conditionaly{\\history_{\\rho(j_n)}}{\\history}\n \n \\intertext{by~\\eqref{eq:MtoB_PerfectRecall}, because \n \\( \\range{\\psi_{j_n}\\np{\\rho}} \\cap \\AGENT^{\\player}\n =\n \\{ \\rho(j_n) \\} \\cup \\bp{\n \\range{\\psi_{j_{n-1}}\\np{\\rho}} \\cap \\AGENT^{\\player} } \\) \n by definition~\\eqref{eq:cut} of the restriction mapping~$\\psi$,\n and by definition of the sequence \\( j_1 < \\cdots < j_{N} \\) \n in~\\eqref{eq:labelling}, which is such that \n \\( \\AGENT^{\\player}=\\{ \\rho(j_1) ,\\ldots, \\rho(j_{N}) \\} \\)}\n \n &=\n \\prod_{n=1}^{N} \n \\ProductMixedStrategy^{\\player}_{\\rho(j_n)}\n \\defset{ \\policy_{\\rho(j_n)} \\in \\POLICY_{\\rho(j_n)} }%\n {\\policy_{\\rho(j_n)} \\np{\\history}=\\history_{\\rho(j_n)} }\n \\tag{by~\\eqref{eq:Kuhn_proof_BtoPM}}\n \\\\\n &=\n \\Bp{ \\bigotimes_{n=1}^{N} \n \\ProductMixedStrategy^{\\player}_{\\rho(j_n)} }\n \\defset{ \\sequence{\\policy_{\\rho(j_k)}}{k\\in\\ic{N}} \n \\in \\prod_{k=1}^{N} \\POLICY_{\\rho(j_k)} }%\n { \\lambda_{\\rho(j_k)}\\np{\\history} =\\history_{\\rho(j_k)}\n \\eqsepv \\forall k\\in\\ic{N}}\n \\tag{by definition of the product probability}\n \n \n \n \n \\\\\n &=\n \\Bp{ \\bigotimes_{n=1}^{N} \n \\ProductMixedStrategy^{\\player}_{\\rho(j_n)} }\n \n \\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent} }\n \n \\tag{as \\( \\AGENT^{\\player}=\\{ \\rho(j_1) ,\\ldots,\n \\rho(j_{N}) \\} \\) in~\\eqref{eq:labelling}}\n \\end{align*}\n Thus, we have 
proved~\\eqref{eq:proof_Kuhn_mixedstrategy_ProductBehavioral}.\n \\medskip\n\n\n \\noindent$\\bullet$\n Third, for any configuration~\\( \\history=\\bp{\\omega, \n \\sequence{\\control_b}{b\\in\\AGENT} } \\in\\HISTORY \\), we have \n\n \\begin{align*}\n \\QQ^{\\omega}_{\\mu}\n \\bp{\\sequence{\\control_b}{b\\in\\AGENT} }\n & =\n \\bp{\\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player}}\n \\Bp{ M\\np{\\omega,\\cdot}^{-1}\n \\bp{\\sequence{\\control_b}{b\\in\\AGENT} } } \n \\tag{by definition~\\eqref{eq:push_forward_probability_a} \n of~\\( \\QQ^{\\omega}_{\\mu} \\) }\n \\\\\n &=\n \\bp{ \\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player} }\n \\defset{\\policy\\in\\POLICY}{ M\\np{\\omega,\\policy} \n = \\sequence{\\control_b}{b\\in\\AGENT} }\n \\tag{by definition of a pushforward probability} \n \\\\\n &=\n \\bp{ \\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player} }\n \\defset{\\policy\\in\\POLICY}{ \\lambda_{\\agent}\\bp{\\omega, \n \\sequence{\\control_b}{b\\in\\AGENT} } =\\control_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT }\n \\tag{by~\\eqref{eq:push_forward_probability_b} \n and~\\eqref{eq:solution_map_IFF} which define\n the mapping~\\( M\\np{\\omega,\\cdot}\\)} \n \\\\\n &=\n \\prod_{\\player \\in \\PLAYER} \\mu^{\\player}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\bp{\\omega, \n \\sequence{\\control_b}{b\\in\\AGENT} } =\\control_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\intertext{by definition of the product probability \n \\( \\bigotimes_{\\player \\in \\PLAYER}\\mu^{\\player} \\)\n on the product space\n \\( \\prod_{\\player \\in \\PLAYER} \\POLICY^{\\player} \\) }\n \n &=\n \\prod_{\\player \\in \\PLAYER} \\mu^{\\player}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history}=\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n 
\\intertext{(because \\( \\history=\\bp{\\omega, \n \\sequence{\\control_b}{b\\in\\AGENT} } \\) \n and where we have used notation~\\eqref{eq:history})}\n \n &=\n \\prod_{\\player' \\neq \\player} \\mu^{\\player'}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player'} }\n \\in\\POLICY^{\\player'} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player'} }\n \\\\\n &\\phantom{==} \\times\n \\mu^{\\player}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\tag{where we have singled out the player~$\\player$}\n \\\\\n &=\n \\prod_{\\player' \\neq \\player} \\mu^{\\player'}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player'} }\n \\in\\POLICY^{\\player'} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player'} }\n \\\\\n &\\phantom{==} \\times\n \\ProductMixedStrategy^{\\player}\n \n\\defset{ \\sequence{\\policy_{\\agent}}{\\agent\\in\\AGENT^{\\player} }\n \\in\\POLICY^{\\player} }%\n { \\lambda_{\\agent}\\np{\\history} =\\history_{\\agent}\n \\eqsepv \\forall \\agent\\in \\AGENT^{\\player} }\n \\tag{by~\\eqref{eq:proof_Kuhn_mixedstrategy_ProductBehavioral}}\n \\\\\n &=\n \\QQ^{\\omega}_{\\couple{{\\mu}^{-\\player}}{{\\pi}^{\\player}}}\n \\bp{\\sequence{\\control_b}{b\\in\\AGENT} }\n \\eqfinv\n \\end{align*}\n by reverting to the top equality with \n \\( \\mu^{\\player} \\) replaced by \n \\( \\ProductMixedStrategy^{\\player} \\). 
\n \\medskip\n\n This ends the proof.\n \\end{proof}\n \n\\end{subequations}\n\n\n\n\\section{Discussion}\n\nMost games in extensive form are formulated on a tree.\nHowever, whereas trees are perfect for following, step by step, how a game is played,\nthey can be delicate to manipulate when information sets are added\nand must satisfy restrictive axioms to comply with the tree structure\n\\cite{Alos-Ferrer-Ritzberger:2016,Bonanno:2004,Brandenburger:2007}.\nIn this paper, we have introduced the notion of games in intrinsic form,\nwhere the tree structure is replaced with a product structure,\nwhich is more amenable to mathematical analysis. \nFor this, we have adapted Witsenhausen's intrinsic model --- \na model with Nature, agents and their decision sets, and\nwhere information is represented by $\\sigma$-fields --- to games.\nIn contrast to games in extensive form formulated on a tree,\nWitsenhausen's intrinsic games (W-games) do not require\nan explicit description of the temporality of play. \nNot having a hardcoded temporal ordering makes mathematical representations more intrinsic.\n\nAs part of a larger research program, we have focused here on Kuhn's equivalence\ntheorem.\nFor this purpose, we have defined the property of perfect recall for a player \nof a causal W-game (that is, without referring to a tree structure),\nand we have introduced three different definitions of\n``randomized'' strategies in the W-games setting\n--- mixed, product-mixed and behavioral. 
\nThen, we have shown that, under perfect recall for a player,\nany of her possible mixed strategies can be replaced by a behavioral strategy,\nwhich is the statement of Kuhn's equivalence theorem.\nMoreover, we have shown that any of her possible mixed strategies can also be replaced by a\nproduct-mixed strategy, that is, a mixed strategy under which her executive\nagents are probabilistically independent.\n\nWe add to the existing literature on the representation of extensive games \nby proposing a representation that is more general than the tree-based ones as,\nfor instance, it allows one to describe noncausal situations.\nIndeed, Witsenhausen showed that there are noncausal W-models that are nevertheless\nsolvable. \n\nFurthermore, our paper illustrates that the intrinsic form is well equipped to\nhandle proofs with mathematical formulas, without resorting to tree-based\narguments that can be cumbersome when handling information. \nWe hence believe that the intrinsic form constitutes a valuable new tool for the analysis of games \nwith information. \n\nThe current work is the first output of a larger research program \nthat addresses games in intrinsic form.\nWe are currently working on the embedding of tree-based games in extensive\nform into W-games (by a mapping that associates each information set with an\nagent), and on the restricted class of W-games that can be \nembedded in tree-based games.\nFurther research includes extensions to measurable decision sets \nand to an infinite number of agents or players.\nWe will also investigate what can be said about subgame perfect equilibria and\nbackward induction, as well as Bayesian games.\n\\medskip\n\n\\textbf{Acknowledgements}.\n\nWe thank Dietmar Berwanger and Tristan Tomala for\nfruitful discussions, and for their valuable comments on a first version of this\npaper. 
This research benefited from the support of the FMJH Program PGMO and\nfrom the support to this program from EDF.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nAn information system designed for the analysis of social media must consider a common set of properties that characterize all social media data.\n\\begin{itemize}[leftmargin=*]\n \\item Information elements in social media are essentially \\textit{heterogeneous} in nature -- users, posts, images, external URL references, although related, all bear different kinds of information. \n \\item Most social information is \\textit{temporal} -- a timestamp is associated with user events like the creation of or response to a post, as well as system events like user account creation, deactivation and deletion. The system should therefore allow both temporal as well as time-agnostic analyses.\n \\item Information in social media evolves fast. In one study \\citep{zhu2013modeling}, it was shown that the number of users of a social media platform is a power function of time. More recently, \\citep{antonakaki2018utilizing} showed that Twitter's growth is supralinear and follows Leskovec's model of graph evolution \\citep{leskovec2007graph}. Therefore, an analyst may first have to perform \\textit{exploration tasks} on the data before figuring out their analysis plan. \n \\item Social media has significant textual content, sometimes with specific entity markers (e.g., mentions) and topic markers (e.g., hashtags). Therefore any information element derived from text (e.g., named entities, topics, sentiment scores) may also be used for analysis. 
To be practically useful, the system must accommodate semantic synonyms -- \\#KamalaHarris, @KamalaHarris and ``Kamala Harris'' refer to the same entity.\n \\item Relationships between information items in social media data must capture not only canonical relationships like (\\texttt{tweet-15 mentions user-392}) but also a wide variety of computed relationships over base entities (users, posts, $\\ldots$) and text-derived information (e.g., named entities). \n\\end{itemize}\nIt is also imperative that such an information system support three styles of analysis tasks:\n\\begin{enumerate}\n \\item \\textbf{Search}, where the user specifies a content predicate without specifying the structure of the data. For example, seeking the number of tweets related to Kamala Harris should count tweets where she is the author, as well as tweets where any synonym of ``Kamala Harris'' is in the tweet text.\n \\item \\textbf{Query}, where the user specifies query conditions based on the structure of the data. For example, tweets with \\texttt{create\\_date} between 9 and 9:30 am on January 6th, 2021, with \\texttt{text} containing the string ``Pence'' and that were \\texttt{favorited} at least 100 times during the same time period.\n \\item \\textbf{Discovery,} where the user may or may not know the exact predicates on the data items to be retrieved, but can specify analytical operations (together with some post-filters) whose results will provide insights into the data. For example, we call a query like \\texttt{Perform \\textit{community detection} on all tweets on January 6, 2021 and return the users from the largest community} a discovery query.\n\\end{enumerate}\nIn general, a real-life analytics workload will freely combine these modalities as part of a user's information exploration process. \n\nIn this paper, we present a general-purpose graph-based model for social media data and a subgraph discovery algorithm atop this data model. 
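To make the three modalities concrete, the short Python sketch below contrasts them on a toy corpus. The record fields, the synonym normalization, and the use of connected components as a crude stand-in for community detection are all illustrative assumptions, not part of the AWESOME system.

```python
from collections import Counter
from datetime import datetime

# Toy corpus; the field names are illustrative, not the AWESOME schema.
tweets = [
    {"id": 1, "author": "alice", "text": "Kamala Harris speaks today",
     "ts": datetime(2021, 1, 6, 9, 10), "favorited": 150},
    {"id": 2, "author": "KamalaHarris", "text": "Thank you all",
     "ts": datetime(2021, 1, 6, 9, 20), "favorited": 900},
    {"id": 3, "author": "bob", "text": "Pence arrives at the Capitol",
     "ts": datetime(2021, 1, 6, 9, 15), "favorited": 120},
]

# Search: a content predicate matched anywhere (author or text), modulo synonyms.
norm = lambda s: s.lower().replace(" ", "").replace("#", "").replace("@", "")
search_hits = [t for t in tweets
               if norm("Kamala Harris") in norm(t["text"])
               or norm(t["author"]) == norm("Kamala Harris")]

# Query: structured predicates on known fields.
lo, hi = datetime(2021, 1, 6, 9, 0), datetime(2021, 1, 6, 9, 30)
query_hits = [t for t in tweets
              if lo <= t["ts"] <= hi and "Pence" in t["text"]
              and t["favorited"] >= 100]

# Discovery: an analytical operation plus a post-filter -- here, connected
# components of a mention graph, keeping only the largest component.
mentions = [("alice", "KamalaHarris"), ("bob", "mike"), ("mike", "carol")]
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:          # iterative find with path compression
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for a, b in mentions:
    parent[find(a)] = find(b)      # union the two components
roots = Counter(find(u) for u in list(parent))
largest = max(roots, key=roots.get)
community = sorted(u for u in list(parent) if find(u) == largest)
```

On this toy corpus, the search predicate matches two tweets (one by text, one by author), the structured query matches only the ``Pence'' tweet, and the discovery post-filter returns the three-user component.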
Physically, the data model is implemented on AWESOME \\citep{gupta:awesome:2016}, an analytical platform designed to enable large-scale social media analytics over continuously acquired data from social media APIs. The platform, developed as a polystore system, natively supports relational, graph and document data, and hence enables a user to perform complex analyses that include arbitrary combinations of search, query and discovery operations. We use the term \\textbf{\\textit{query-driven discovery}} to reflect the scenario where the user does not want to run the discovery algorithm on a large and continuously collected body of data; rather, the user knows a starting point that can be specified as an expressive query (illustrated later), and puts bounds on the discovery process so that it terminates within an acceptable time limit.\n\n\\medskip \n\n\\noindent \\textbf{Contributions.} This paper makes the following contributions. (a) It offers a new formulation for the subgraph interestingness problem for social media; (b) based on this formulation, it presents a discovery algorithm for social media; (c) it demonstrates the efficacy of the algorithm on multiple data sets.\n\n\\medskip\n\n\\noindent \\textbf{Organization of the paper.} The rest of the paper is organized as follows. Section \\ref{sec:related} describes the related research on interesting subgraph finding as investigated by researchers in Knowledge Discovery, Information Management, as well as Social Network Mining. Section \\ref{sec:prelim} presents the abstract data model over which the discovery algorithm operates, and the basic definitions that establish the domain of discourse for the discovery process. Section \\ref{sec:generate} presents our method of generating candidate subgraphs that will be tested for interestingness. Section \\ref{sec:discovery} first presents our interestingness metrics and then the testing process based on these metrics. 
Section \\ref{sec:experiments} describes the experimental validation of our approach on multiple data sets. Section \\ref{sec:conclusion} presents concluding discussions.\n\n\\section{Related Work}\n\\label{sec:related}\nThe problem of finding ``interesting'' information in a data set is not new. \\citep{what-makes-patterns-interesting:1996} described that an ``interestingness measure'' can be ``objective'' or ``subjective''. A measure is ``objective'' when it is computed solely based on the properties of the data. In contrast, a ``subjective'' measure must take into account the user's perspective. They propose that (a) a pattern is interesting if it is ``surprising'' to the user (\\textit{unexpectedness}) and (b) a pattern is interesting if the user can act on it\nto his advantage (\\textit{actionability}). Of these criteria, actionability is hard to determine algorithmically; unexpectedness, on the other hand, can be viewed as a departure from the user's beliefs. For example, a user may believe that the 24-hour occurrence patterns of all hashtags are nearly identical. In this case, a discovery would be to find a set of hashtags and sample dates for which this belief is violated. \nFollowing \\citep{geng2006interestingness}, there are three possibilities regarding how a system is informed of a user's knowledge and beliefs: (a) the user provides a formal specification of his or her knowledge, and after obtaining the mining results, the system chooses which unexpected patterns to present to the user \\citep{BingLiu:1999}; (b) according to the user's interactive feedback, the system removes uninteresting patterns \\citep{not-interesting:1999}; and (c) the system applies the user's specifications as constraints during the mining process to narrow down the search space and provide fewer results. 
Our work roughly corresponds to the third strategy.\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=6.5cm, height=6.5cm]{fig-1-core-peri.png}\n\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=6.5cm, height=6.5cm]{fig-1-networking-connection.png}\n \n \\end{minipage}\n \\begin{minipage}{.4\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{fig-1-top-25-bar.png}\n \\end{minipage}\n \\begin{minipage}{.4\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{degree-hist.png}\n \\end{minipage}\n \\caption{An example of an interesting subgraph. Tweets on Indian politics form a sparse center in a dense periphery (separately shown in the top right). The bottom two figures show the hashtag distribution of the center and the degree distribution of the center (log scale) respectively.}\n \\label{fig:sparse}\n\\end{figure*}\nEarly research on finding interesting subgraphs focused primarily on finding interesting substructures. This body of research found interestingness in two directions: (a) finding frequently occurring subgraphs in a collection of graphs (e.g., chemical structures) \\citep{kuramochi2001frequent, yan2003closegraph, thoma2010discriminative} and (b) finding regions of a large graph that have high edge density \\citep{lee2010survey, sariyuce2015finding, epasto2015efficient, wen2017efficient} compared to other regions in the graph. Note that while a dense region in the graph can definitely be interesting, the inverse situation, where a sparsely connected region is surrounded by an otherwise dense periphery, can be equally interesting for an application. \n\nWe illustrate the situation in Figure \\ref{fig:sparse}. 
The primary data set is a collection of tweets on COVID-19 vaccination, but this specific graph shows a sparse core on Indian politics that is loosely connected to nodes of an otherwise dense periphery on the primary topic. Standard network features like the hashtag histogram and the node degree histogram do not reveal this substructure, requiring us to explore new methods of discovery. \n\nA serious limitation of the above class of work is that the interestingness criteria do not take into account \\textit{node content} (resp. edge content), which may be present in a property graph data model \\citep{angles2018property} where nodes and edges have attributes. \\citep{bendimerad2019mining} presents several discovery algorithms for graphs with vertex attributes and edge attributes. They perform both structure-based graph clustering and subspace clustering of attributes to identify interesting (in their domain, ``anomalous'') subgraphs.\n\nOn the ``subjective'' side of the interestingness problem, one approach considers interesting subgraphs as a subgraph matching problem \\citep{shan2019dynamic}. Its general idea is to compute all matching subgraphs that satisfy a user query and then rank the results based on the rarity and the likelihood of the associations among entities in the subgraphs. In contrast, \\citep{adriaens2019subjectively} uses the notion of ``subjective interestingness'', which roughly corresponds to finding subgraphs whose connectivity properties (e.g., the average degree of the vertices) are distinctly different from those of an ``expected'' \\textit{background} graph. This approach uses a constrained optimization problem that maximizes an objective function over the \\textit{information content} ($IC$) and the \\textit{description length} ($DL$) of the desired subgraph pattern. \n\nOur work is conceptually most inspired by \\citep{bendimerad2019subj}, which explores the subjective interestingness problem for attributed graphs. 
Their main contribution centers around CSEA (Cohesive Subgraph with Exceptional Attributes) patterns that inform the user that a given set of attributes has exceptional values throughout a set of vertices in the graph. The subjective interestingness is given by $$S(U,S) = \\frac{IC(U,S)}{DL(U)}$$ where $U$ is a subset of nodes and $S$ is a set of restrictions on the value domains of the attributes. The system models the prior beliefs of the user as the Maximum Entropy distribution subject to any stated prior beliefs the user may hold about the data (e.g., the distribution of an attribute value). The information content $IC(U,S)$ of a CSEA pattern $(U,S)$ is formalized as the negative logarithm of the probability that the pattern is present under the background distribution. The description length\nof $U$ is based on the intersection of all neighborhoods in a subset $X \\subseteq N (U)$,\nalong with the set of ``exceptions'': vertices that are in the intersection but not part of\n$U$. However, we have a completely different, more database-centric formulation of the background and of the user's beliefs.\n\n\\section{The Problem Setup}\n\\label{sec:prelim}\n\n\\subsection{Data Model}\n\\label{sec:dataModel}\nOur abstract model of social media data takes the form of a \\textit{heterogeneous information network} (an information network with multiple types of nodes and edges), which we view as a temporal property graph $G$. Let $N$ be the node set and $E$ be the edge set of $G$. $N$ can be viewed as a disjoint union of different subsets (called \\textit{node types}) -- users $U$, posts $P$, topic markers (e.g., hashtags) $H$, term vocabulary $V$ (the set of all terms appearing in a corpus), and references (e.g., URLs) $R(\\tau)$, where $\\tau$ represents the type of resource (e.g., image, video, web site $\\ldots$). Each node type has a different set of properties (attributes) $\\bar{A}(.)$. 
We denote the attributes of $U$ as $\\bar{A}(U) = a_1(U), a_2(U) \\ldots$ such that $a_i(U)$ is the $i$-th attribute of $U$. An attribute of a node type may be temporal -- a post $p \\in P$ may have a temporal attribute called \\texttt{creationDate}. Edges in this network can be directional and have a single \\textit{edge type}. The following is a set of \\textit{base (but not exhaustive) edge types}:\n\\begin{itemize}[leftmargin=1em, label=\\scriptsize{$-$}]\n \\item \\texttt{writes}: $U \\mapsto P$\n \\item \\texttt{uses}: $P \\mapsto H$\n \\item \\texttt{mentions}: $P \\mapsto U$ maps a post $p$ to a user $u$ if $u$ is mentioned in $p$\n \\item \\texttt{repostOf}: $P \\mapsto P$ maps a post $p_2$ to a post $p_1$ if $p_2$ is a repost of $p_1$. This implies that $ts(p_1) < ts(p_2)$, where $ts$ is the timestamp attribute\n \\item \\texttt{replyTo\/comment}: $P \\mapsto P$ maps a post $p_2$ to a post $p_1$ if $p_2$ is a reply to $p_1$. This implies that $ts(p_1) < ts(p_2)$, where $ts$ is the timestamp attribute\n \\item \\texttt{contains}: $P \\mapsto V \\times \\mathcal{N}$, where $\\mathcal{N}$ is the set of natural numbers and represents the count of a token $v \\in V$ in a post $p \\in P$\n\\end{itemize}\n We realistically assume that the inverse of these mappings can be computed, i.e., if $v_1, v_2 \\in V$ are terms, we can perform a \\texttt{contains}$^{-1}(v_1 \\wedge \\neg v_2)$ operation to yield all posts that used $v_1$ but not $v_2$.\n \nThe AWESOME information system allows users to construct \\textit{derived or computed edge types} depending on the specific discovery problem they want to solve. 
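As a concrete (and purely illustrative) Python sketch of this model, typed edges can be stored as triples and an inverse mapping such as \texttt{contains}$^{-1}$ computed by scanning them; none of the identifiers or data below come from the AWESOME implementation.

```python
# Toy typed-edge store. Each edge is (edge_type, tail, head);
# heads of `contains` edges are (token, count) pairs.
edges = [
    ("writes", "u1", "p1"), ("writes", "u2", "p2"),
    ("uses", "p1", "#covid"), ("mentions", "p1", "u2"),
    ("contains", "p1", ("vaccine", 2)), ("contains", "p1", ("mask", 1)),
    ("contains", "p2", ("vaccine", 1)),
]

def contains_inv(v_keep, v_drop):
    """contains^{-1}(v_keep AND NOT v_drop): posts using v_keep but not v_drop."""
    def has(post, token):
        return any(t == "contains" and a == post and b[0] == token
                   for t, a, b in edges)
    posts = {a for t, a, _ in edges if t == "contains"}
    return {p for p in posts if has(p, v_keep) and not has(p, v_drop)}
```

Here `contains_inv("vaccine", "mask")` returns only the post that uses ``vaccine'' without ``mask''; a production system would of course answer such queries from an inverted index rather than an edge scan.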
For example, they can construct a standard hashtag co-occurrence graph using a non-recursive aggregation rule in Datalog \\citep{consens1993low}:\n\\begin{equation*}\n \\begin{aligned}\n HC(h_1, h_2, count(p)) &\\longleftarrow \\\\\n &p \\in P, h_1 \\in H, h_2 \\in H \\\\ \n &uses(p, h_1), uses(p, h_2) \n \\end{aligned}\n\\end{equation*}\nWe interpret $HC(h_1, h_2, count(p))$ as an edge between nodes $h_1$ and $h_2$, with $count(p)$ as an attribute of the edge. In our model, a computed edge has the form $E_T(N_b, N_e, \\bar{B})$, where $E_T$ is the edge type, $N_b, N_e$ are the tail and head nodes of the edge, and $\\bar{B}$ designates a flat schema of edge properties. The number of such computed edge types can be arbitrarily large, and the types themselves arbitrarily complex, for different subgraph discovery problems. A more complex computed edge may look like $UMUHD(u_1,u_2,mCount,d, h)$, where an edge from $u_1$ to $u_2$ is constructed if user $u_1$ creates a post that contains hashtag $h$ and mentions user $u_2$, a total number of $mCount$ times on day $d$. Note that in this case, hashtag $h$ is a property of edge type $UMUHD$ and is not a topic marker node of graph $G$. \n\n\\subsection{Specifying User Knowledge}\n\\label{sec:hquery}\nSince the discovery process is query-driven, the system has no \\textit{a priori} information about the user's interests, prior knowledge and expectations, if any, for a specific discovery task. Hence, given a data corpus, the user needs to provide this information through a set of parameterized queries. We call these queries ``heterogeneous'' because they can place conditions on arbitrary node (resp. edge) properties, on the network structure, and on the text properties of the graph. \n\n\\medskip\n\n\\noindent \\textbf{User Interest Specification.} The discovery process starts with the specification of the user's universe of discourse, identified with a query $Q_0$. 
We provide some illustrative examples of user interest with queries of increasing complexity.\n\n\\noindent \\textit{Example 1.} ``All tweets related to COVID-19 between 06\/01\/2020 and 07\/31\/2020 that refer to Hydroxychloroquine''. In this specification, the condition ``related to COVID-19'' amounts to finding tweets containing any $k$ terms from a user-provided list, and the condition on ``Hydroxychloroquine'' is expressed as a fuzzy search.\n\\begin{figure}\n\\includegraphics[width=8.5cm, height=6.5cm]{hydroxyclq.png}\n\\vspace{-2cm}\n \\caption{Hydroxychloroquine and COVID related Hashtags}\n \\label{fig:hydroxi}\n\\end{figure}\nFigure \\ref{fig:hydroxi} shows the top hashtags related to this search -- note that the hashtag co-occurrence graph around COVID-19 and hydroxychloroquine includes ``FakeNewsMedia''.\n\n\\noindent \\textit{Example 2.} ``All tweets from users who mention \\texttt{Trump} in their user profile and \\texttt{Fauci} in at least $n$ of their tweets''. Notice that this query is about users with a certain behavioral pattern -- it captures \\textit{all} tweets from users who have a combination of specific profile features and tweet content. 
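A behavioral specification like Example 2 can be sketched directly in Python; the toy profiles, tweets and the threshold $n$ below are illustrative assumptions, not data from our corpus.

```python
# Toy data; profile strings, tweets and the threshold n are illustrative.
profiles = {"u1": "Veteran. Trump 2020.", "u2": "cat lover", "u3": "Trump fan"}
tweets = [
    ("u1", "Fauci said masks work"), ("u1", "Fauci testified today"),
    ("u1", "nice weather"), ("u2", "Fauci on TV"), ("u3", "hello world"),
]
n = 2

# Count, per user, the tweets that mention Fauci.
fauci_counts = {}
for user, text in tweets:
    if "Fauci" in text:
        fauci_counts[user] = fauci_counts.get(user, 0) + 1

# Users matching BOTH the profile condition and the tweet-content condition ...
selected_users = {u for u, bio in profiles.items()
                  if "Trump" in bio and fauci_counts.get(u, 0) >= n}
# ... and, as Example 2 emphasizes, *all* their tweets, not only the matching ones.
selected_tweets = [(u, t) for u, t in tweets if u in selected_users]
```

On this toy data only one user qualifies, yet all three of that user's tweets are returned, including the one that never mentions Fauci.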
\n\n\\noindent \\textit{Example 3.} ``All tweets from users $U_1$ whose tweets appear in the 3-neighborhood of \\texttt{\\#ados} in the hashtag co-occurrence graph, taken together with all tweets of users $U_2$ who belong to the user-mention-user networks of these users (i.e., $U_1$)'', where \\texttt{\\#ados} refers to ``American Descendant of Slaves'', which represents an African American cause.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{kamala-ht-vol.png}\n\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{kamala-ht.png}\n \n \\end{minipage}\n\n \\caption{Two views of the \\texttt{\\#ados} cluster of hashtags }\n \\label{fig:ados}\n\\end{figure*}\n\n\n\\noindent The end result of $Q_0$ is a collection of posts that we call the \\textit{initial post set} $P_0$. Using the posts in $P_0$, the system creates a background graph as follows.\n\n\\medskip\n\n\\noindent \\textbf{Initial Background Graph.} The initial background graph $G_0$ is the graph derived from $P_0$ over which the discovery process runs. However, to define the initial graph, we first develop the notion of a \\textit{conversation}.\n\n\\begin{definition}[\\textbf{Semantic Neighborhood}.] $N(p)$, the semantic neighborhood of a post $p$, is the graph connecting $p$ to the instances of $U \\cup H \\cup P \\cup V$ that directly relate to $p$.\n\\end{definition}\n\n\\begin{definition}[\\textbf{Conversation Context}.] The conversation context $C(p)$ of post $p$ is a subgraph satisfying the following conditions:\n\\begin{enumerate}[leftmargin=*]\n \\item $P_1$: The set of posts reachable to\/from $p$ along the relationships \\texttt{repostOf, replyTo} belongs to $C(p)$. 
\n \\item $P_2$: The union of posts in the semantic neighborhoods of $P_1$ belongs to $C(p)$.\n \\item $E$: The induced subgraph of $P_1 \\cup P_2$ belongs to $C(p)$.\n \\item Nothing else belongs to $C(p)$.\n\\end{enumerate}\n\\end{definition}\n\\noindent Clearly, we can assert that $C(p)$ is a connected graph and that $N(p)~ \\sqsubset_g~ C(p)$, where $\\sqsubset_g$ denotes a subgraph relationship.\n\n\\begin{definition}[\\textbf{Initial Background Graph}.] The initial background graph $G_0$ is the merger of all conversation contexts $C(p_i), p_i \\in P_0$, together with all computed edges induced by the nodes of $\\cup_i C(p_i)$.\n\\end{definition}\nThe initial background graph itself can be a gateway to finding interesting properties of the graph. To illustrate this, consider the graph obtained from Example 3: Figure \\ref{fig:ados} presents two views of the \\texttt{\\#ados} cluster of hashtags from January 2021. The left chart shows the time vs. count of the hashtags, while the right chart shows the dominant hashtags of the same period in this cluster. The strong peak in the timeline was due to an intense discussion, revealed by topic modeling, on the creation of an office on African American issues. The occurrence of this peak is interesting because most of the social media conversation in this time period was focused on the Capitol attack on January 6.\n\n\nGiven the graph $G_0$, we discover subgraphs $S_i \\subset G_0$ whose content and structure are distinctly different from those of $G_0$. However, unlike previous approaches, we apply a generate-and-test paradigm for discovery. 
The generate-step (Section \\ref{sec:generate}) uses a graph-cube-like technique \\citep{zhao2011graph} to generate candidate subgraphs that might be interesting, and the test-step (Section \\ref{sec:testing}) computes whether (a) a candidate is sufficiently distinct from $G'$, and (b) the candidates are sufficiently distinct from each other.\n\n\\noindent \\textbf{Subgraph Interestingness.} For a subgraph $S_i$ to be considered a candidate, it must satisfy the following conditions.\n\n\\noindent \\textbf{C1.} $S_i$ must be connected and should satisfy a size threshold $\\theta_n$, the minimal number of nodes.\n\n\\noindent \\textbf{C2.} Let $A_{ij}$ (resp. $B_{ik}$) be the set of \\textit{local} properties of node $j$ (resp. edge $k$) of subgraph $S_i$. A property is called ``local'' if it is not a network property like vertex degree. All nodes (resp. edges) of $S_i$ must satisfy some user-specified predicate $\\phi_N$ (resp. $\\phi_E$) specified over $A_{ij}$ (resp. $B_{ik}$). For example, a node predicate might require that all ``post'' nodes in the subgraph must have a re-post count of at least 300, while an edge predicate may require that all hashtag co-occurrence relationships must have a weight of at least 10. A user-defined constraint on the candidate subgraph improves the interpretability of the result. Typical subjective interestingness techniques \\citep{van2016subjective, adriaens2019subjectively} use only structural features of the network and do not consider attribute-based constraints, which limits their pragmatic utility.\n\n\\noindent \\textbf{C3.} For each text-valued attribute $a$ of $A_{ij}$, let $C(a)$ be the collection of the values of $a$ over all nodes of $S_i$, and let $\\mathcal{D}(C(a))$ be a textual diversity metric computed over $C(a)$. For $S_i$ to be interesting, it must have at least one attribute $a$ such that $\\mathcal{D}(C(a))$ does not have the usual power-law distribution expected in social networks. 
Zheng et al.~\\citep{zheng2019social} used \\textit{vocabulary diversity} and \\textit{topic diversity} as textual diversity measures.\n\n\n\\section{Candidate Subgraph Generation}\n\\label{sec:generate}\nSection \\ref{sec:hquery} describes the creation of the initial background graph $G_0$ that serves as the domain of discourse for discovery. Depending on the number of initial posts $P_0$ resulting from the initial query, the size of $G_0$ might be too large -- in this case the user can specify followup queries on $G_0$ to narrow down the scope of discovery. We call this narrowed-down graph of interest $G'$ -- if no followup queries were used, $G' = G_0$. The next step is to generate some candidate subgraphs that will be tested for interestingness. \n\n\\noindent \\textbf{Node Grouping.} A node group is a subset of \\textit{nodes($G'$)} where all nodes in a group have some similar property. We generalize the \\textit{groupby} operation, commonly used in relational database systems, to heterogeneous information networks. To describe the generalization, let us assume $R(A, B, C, D, \\ldots)$ is a relation (table) with attributes $A, B, C, D, \\ldots$. A \\textit{groupby} operation takes as input (a) a subset of \\textit{grouping attributes} (e.g., $A, B$), (b) a \\textit{grouped attribute} (e.g., $C$), and (c) an \\textit{aggregation function} (e.g., \\textit{count}). The operation first computes each distinct cross-product value of the grouping attributes (in our example, $A \\times B$) and creates a list of all values of the grouped attribute corresponding to each distinct value of the grouping attributes, and then applies the aggregation function to the list. Thus, the result of the \\textit{groupby} operation is a single aggregated value for each distinct cross-product value of the grouping attributes. \n\nTo apply this operation to a social network graph, we recognize that there are two distinct ways of defining the ``grouping-object''. 
\\\\\n(1) Node properties can be used directly, just like in the relational case. For example, for tweets a grouping condition might be \\texttt{getDate} \\texttt{(Tweet.created\\_at)} $\\wedge$ \\texttt{bin(Tweet.favoriteCount, 100)}, where the \\texttt{getDate} function extracts the date of a tweet and the \\texttt{bin} function creates buckets of size 100 from the favorite count of each tweet. \\\\\n(2) The grouping-object is a subgraph pattern. For example, the subgraph pattern\\\\\n\\texttt{(:tweet\\{date\\})-[:uses]->(:hashtag\\{text\\})} \\hfill{(P1)}\\\\\nexpressed in a Cypher-like syntax \\citep{francis2018cypher} (as implemented in the Neo4J graph data management system), states that all ``tweet'' nodes having the same posting date, together with every distinct hashtag text, will be placed in a separate group. \\\\\nIn either case, the result is a set of node groups, designated here as $N_i$. Notice that while (1) produces disjoint groups of tweets, (2) produces a ``soft'' (fuzzy) partitioning of the tweets and hashtags due to the many-to-many relationship between them: the same tweet node can belong to two different groups because it has multiple hashtags, and a hashtag node can belong to multiple groups because tweets from different dates may have used the same hashtag. While the grouping condition specification language can express more complex grouping conditions, in this paper we will use simpler cases to highlight the efficacy of the discovery algorithm. 
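A minimal sketch of grouping style (1), assuming tweets are plain dictionaries and using getDate- and bin-style key functions as in the example above (the attribute names and data layout are illustrative assumptions; the system's grouping language is richer):

```python
from collections import defaultdict

def group_nodes(nodes, key_fns):
    """Generalized groupby: nodes with equal grouping-key tuples form one group."""
    groups = defaultdict(list)
    for n in nodes:
        groups[tuple(f(n) for f in key_fns)].append(n["id"])
    return dict(groups)

# toy tweet nodes (assumed shape)
tweets = [
    {"id": "t1", "created_at": "2021-01-06T10:00", "favoriteCount": 120},
    {"id": "t2", "created_at": "2021-01-06T22:15", "favoriteCount": 180},
    {"id": "t3", "created_at": "2021-01-07T09:30", "favoriteCount": 20},
]
get_date = lambda t: t["created_at"][:10]        # getDate(Tweet.created_at)
bin100   = lambda t: t["favoriteCount"] // 100   # bin(Tweet.favoriteCount, 100)
groups = group_nodes(tweets, [get_date, bin100])
```

Style (2) would instead match a subgraph pattern per group, which, as noted, can place the same node in several groups.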
\n\n\\noindent \\textbf{Graph Construction.} To complete the \\textit{groupby} operation, we also need to specify the aggregation function in addition to the grouping-object and the grouped-object. This function takes the form of a graph construction operation that constructs a subgraph $S_i$ by expanding on the node set $N_i$. Different expansion rules can be specified, leading to the formation of different graphs. Here we list three rules that we have found fairly useful in practice.\n\n\\noindent \\textbf{G1.} Identify all the \\texttt{tweet} nodes in $N_i$. Construct a \\textit{relaxed induced subgraph} of the \\texttt{tweet}-labeled nodes in $N_i$. The subgraph is induced because it only uses tweets contained within $N_i$, and it is \\textit{relaxed} because it contains all nodes \\textit{directly associated} with these tweet nodes, such as author, hashtags, URLs, and mentioned-users. \n\n\\noindent \\textbf{G2.} Construct a \\textit{mention network} from within the tweet nodes in $N_i$ -- the mention network initially connects all \\texttt{tweet} and \\texttt{user}-labeled nodes. Extend the network by including all nodes \\textit{directly associated} with these tweet nodes.\n\n\\noindent \\textbf{G3.} A third construction relaxes the grouping constraint. We first compute either \\textbf{G1} or \\textbf{G2}, and then extend the graph by including the first-order neighborhood of mentioned users or hashtags. While this clearly breaks the initial group boundaries, a network thus constructed includes tweets of similar themes (through hashtags) or audience (through mentions).\n\n\\noindent \\textbf{Automated Group Generation.} In a practical setting, as shown in Section \\ref{sec:experiments}, the parameters for the node grouping operation can be specified by a user or generated automatically. Automatic generation of grouping-objects is based on the considerations described below. 
To keep the autogeneration manageable, we will only consider one or two attributes for attribute grouping and only a single edge for subgraph patterns.\n\\begin{itemize}[leftmargin=*]\n \\item Since temporal shifts in social media themes and structure are almost always of interest, the posting timestamp is always a grouping variable. For our purposes, we set the granularity to a day by default, although a user can change it.\n \\item Since the frequency of most nontemporal attributes (like hashtags) follows a mixture of a double-Pareto lognormal distribution and a power law \\citep{gupta:osn:2020}, we adopt the following strategy.\n \\begin{itemize}[label={$\\circ$}]\n \\item Let $f(A)$ be the distribution of attribute $A$, and $\\kappa(f(A))$ be the curvature of $f(A)$. If $A$ is a discrete variable, we find $a^*$, the maximum curvature (elbow) point of $f(A)$, numerically \\citep{antunes2018knee}.\n \\item We compute $A'$, the set of values of attribute $A$ to the left of $a^*$, for all attributes, and choose the attribute where the cardinality of $A'$ is maximum. In other words, we choose the attribute with the highest number of pre-elbow values. \n \\end{itemize}\n \\item We adopt a similar strategy for subgraph patterns. If $T_1(a_i)\\stackrel{L}{\\longrightarrow}T_2(b_j)$ is an edge where $T_1, T_2$ are node labels, $a_i, b_j$ are node properties, and $L$ is an edge label, then $a_i$ and $b_j$ will be selected based on the conditions above. Since the number of edge labels is fairly small in our social media data, we evaluate the estimated cardinality of the edge for all such triples and select the one with the lowest cardinality. \n\\end{itemize}\n\n\\section{The Discovery Process}\n\\label{sec:discovery}\n\n\\subsection{Measures of Relative Interestingness}\n\\label{sec:interestingness}\nWe compute the interestingness of a subgraph $S$ with reference to a background graph $G_b$ (e.g., $G'$); the measure consists of a structural as well as a content component. 
We first discuss the structural component. To compare a subgraph $S_i$ with the background graph, we first compute a set of network properties $P_j$ (see below) for nodes (or edges) and then compute the frequency distribution $f(P_j(S_i))$ of these properties over all nodes (resp. edges) of (a) subgraphs $S_i$, and (b) the reference graph (e.g., $G'$). A distance between $f(P_j(S_i))$ and $f(P_j(G_b))$ is computed using the Jensen--Shannon divergence (JSD). In the following, we use $\\Delta(f_1,f_2)$ to refer to the JS-divergence of distributions $f_1$ and $f_2$. \n\\medskip\n\\noindent \\textbf{Eigenvector Centrality Disparity:} The testing process starts by comparing the distributions of high-centrality nodes between the networks. While there is no shortage of centrality measures in the literature, we choose eigenvector centrality \\citep{das2018study}, defined below, to represent the dominant nodes. Let $A = (a_{i,j})$ be the adjacency matrix of a graph. The eigenvector centrality $x_{i}$ of node $i$ is given by: $$x_i = \\frac{1}{\\lambda} \\sum_k a_{k,i} \\, x_k$$ where $\\lambda \\neq 0$ is a constant. \n The rationale for this choice follows from earlier studies \\citep{Bonacich2007-mx,Ruhnau2000-jy,Yan2014-dn}, which establish that since the eigenvector centrality can be seen as a weighted sum of direct and indirect connections, \n it represents the true structure of the network more faithfully than other centrality measures.
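The centrality just defined can be computed by simple power iteration; the following pure-Python sketch uses an assumed toy adjacency matrix (in practice a graph library would be used):

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration for x_i = (1/lambda) * sum_k a_{k,i} x_k.
    Iterating with (I + A) keeps the same eigenvectors but avoids
    oscillation on bipartite graphs."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [x[i] + sum(adj[k][i] * x[k] for k in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

# toy undirected star graph (assumed example): node 0 joined to 1, 2, 3
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
x = eigenvector_centrality(adj)  # the hub node dominates
```

For the star, the hub scores $1/\sqrt{2}$ and each leaf $1/\sqrt{6}$ under the Euclidean norm, matching the intuition that dominant nodes aggregate direct and indirect connections.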
\n Further, \\citep{Ruhnau2000-jy} proved that the eigenvector-centrality under the Euclidean norm can be transformed into node-centrality, a property not exhibited by other common measures.\n Let the distributions of eigenvector centrality of subgraphs $A$ and $B$ be $\\beta_a$ and $\\beta_b$ respectively, and that of the background graph be $\\beta_t$. Then \n $|\\Delta_e(\\beta_t, \\beta_a)| > \\theta $ indicates that $A$ is sufficiently structurally distinct from $G_b$, while \n $|\\Delta_e(\\beta_t, \\beta_a)| > |\\Delta_e(\\beta_t, \\beta_b)|$ indicates that $A$ contains significantly more or significantly fewer influential nodes than $B$. \n\n\\medskip\n\\noindent \\textbf{Topical Navigability Disparity:} Navigability measures ease of flow. If subgraph $S$ is more navigable than subgraph $S'$, then there will be more traffic through $S$ compared to $S'$. However, the likelihood of seeing a higher flow through a subgraph depends not just on the structure of the network, but on extrinsic covariates like time and topic. So, a subgraph is interesting in terms of navigability if, for some values of a covariate, its navigability measure is different from that of a background subgraph. \n\nInspired by its application in biology \\citep{seguin2018navigation}, traffic analysis \\citep{scellato2010traffic}, and network attack analysis \\citep{lekha2020central}, we use \\textit{edge betweenness centrality} \\citep{das2018study} as the generic (non-topic) measure of navigability. Let $\\alpha_{ij}$ be the number of shortest paths from node $i$ to node $j$, and $\\alpha_{ij}(k)$ be the number of those paths that pass through edge $k$. Then the edge-betweenness centrality is $$C_{eb}(k)= \\sum_{(i,j)\\in V} \\frac{\\alpha_{ij}(k)}{\\alpha_{ij}}$$\nBy this definition, the edge betweenness centrality is the portion of all-pairs shortest paths that pass through an edge. 
Since the edge betweenness centrality of an edge $e$ measures the proportion of shortest paths that pass through $e$, a subgraph $S$ with a higher proportion of high-valued edge betweenness centrality may be more \\textit{navigable} than $G_b$, or than any other subgraph $S'$ having a lower proportion of such edges; that is, information propagates more readily through $S$ than through the whole background network or through such subgraphs. Let the distributions of the edge betweenness centrality of two subgraphs $A$ and $B$ be $c_1$ and $c_2$ respectively, and that of the reference graph be $c_0$. Then, $|\\Delta_b(c_0, c_1)| < |\\Delta_b(c_0, c_2)|$ means the second subgraph is more navigable than the first.\n\nTo associate navigability with topics, we detect topic clusters over the background graph and the subgraph being inspected. The exact method for topic cluster finding is independent of the use of topical navigability. In our setting, we have used topic modeling and dense region detection in hashtag co-occurrence networks. For each topic cluster, we identify posts (within the subgraph) that belong to the cluster. If the number of posts is greater than a threshold, we compute the navigability disparity.\n\n\\medskip\n\n\\noindent \\textbf{Propagativeness Disparity:} The concept of propagativeness builds on the concept of navigability. Propagativeness attempts to capture how strongly the network is spreading information through a navigable subgraph $S$. We illustrate the concept with a network constructed over tweets where a political personality (Senator Kamala Harris in this example) is mentioned in January 2021. 
The three rows in Figure \\ref{fig:kamala} show the network characteristics of the subregions of this graph, related, respectively, to the themes of \\texttt{\\#ados} and ``black lives matter'' (Row 1), the Capitol insurrection (Row 2), and socioeconomic issues related to COVID-19, including stimulus funding and business reopening (Row 3). In earlier work \\citep{zheng2019social}, we have shown that a well-known propagation mechanism for tweets is to add user-mentions to improve the reach of a message -- hence the user-mention subgraph is indicative of propagative activity. In Figure \\ref{fig:kamala}, we compare the hashtag activity (measured by the Hashtag subgraph) and the mention activity (size of the mention graph) in these three subgraphs. Figure \\ref{fig:kamala} (e) shows a low and fairly steady size of the user mention activity in relation to the hashtag activity on the same topic, and these two indicators are not strongly correlated. Further, Figure \\ref{fig:kamala} (f) shows that the mean and standard deviation of node degree of hashtag activity are fairly close, and the average degree of the user co-mention (two users mentioned in the same tweet) graph is relatively steady over the period of observation -- showing low propagativeness. In contrast, Row 1 and Row 2 show sharper peaks. But the curve in Figure \\ref{fig:kamala} (c) declines and has low, uncorrelated user mention activity. Hence, for this topic, although there is a lot of discussion (leading to high navigability edges), the propagativeness is quite low. In comparison, Figure \\ref{fig:kamala} (a) shows a strong peak and a stronger correlation between the two curves, indicating more propagativeness. 
The higher standard deviation in the co-mention node degree over time (Figure \\ref{fig:kamala} (b)) also indicates more propagation around this topic compared to the others.\n\nWe capture propagativeness using current flow betweenness centrality \\citep{brandes2005centrality}, which is based on Kirchhoff's current laws. We combine this with the average neighbor degree of the nodes of $S$ to measure the spreading propensity of $S$. The current flow betweenness centrality is the portion of all-pairs shortest paths that pass through a node, and the average neighbor degree is the average degree of the neighborhood of each node. If a subgraph has higher current flow betweenness centrality plus a higher average neighbor degree, the network should have faster communicability. Let $\\alpha_{ij}$ be the number of shortest paths from node $i$ to $j$, and $\\alpha_{ij}(n)$ be the number of those paths that pass through node $n$. Then the current flow betweenness centrality is: $$C_{nb}(n)= \\sum_{(i,j)\\in V} \\frac{\\alpha_{ij}(n)}{\\alpha_{ij}}$$\n\nSuppose the distributions of the current flow betweenness centrality of two subgraphs $A$ and $B$ are $p_1$ and $p_2$ respectively, and the distribution of the reference graph is $p_t$. Also, let the distribution of $\\beta_{n}$, the average neighbor degree of node $n$, for subgraphs $A$ and $B$ be $\\gamma_1$ and $\\gamma_2$ respectively, and the reference distribution be $\\gamma_t$. If the condition\n $$\\Delta(p_t, p_1) * \\Delta(\\gamma_t, \\gamma_1) < \\Delta(p_t, p_2) * \\Delta(\\gamma_t, \\gamma_2)$$\nholds, we can conclude that subgraph $B$ is a faster propagating network than subgraph $A$. This measure is of interest in social media based on the observation that misinformation\\/disinformation propagation groups either try to increase the average neighbor degree by adding fake nodes or try to involve influential nodes with high edge centrality to propagate the message faster \\citep{besel2018full}. 
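The propagativeness test above reduces to comparing products of Jensen--Shannon divergences between binned distributions. A minimal sketch with a pure-Python JSD follows; the histograms are illustrative assumptions, not measured data:

```python
from math import log2

def jsd(p, q):
    """Jensen--Shannon divergence (base 2) of two discrete distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda x, y: sum(a * log2(a / b) for a, b in zip(x, y) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

# toy binned distributions: current-flow betweenness (p) and average
# neighbor degree (g) for the reference graph (t) and subgraphs A (1), B (2)
p_t, g_t = [0.5, 0.3, 0.2], [0.6, 0.3, 0.1]
p_1, g_1 = [0.5, 0.3, 0.2], [0.6, 0.3, 0.1]  # A matches the reference
p_2, g_2 = [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]  # B diverges from the reference
faster_B = jsd(p_t, p_1) * jsd(g_t, g_1) < jsd(p_t, p_2) * jsd(g_t, g_2)
```

Under the stated condition, subgraph B would be flagged as the faster-propagating one here, since its divergence product against the reference is larger.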
\n\n\\medskip\n \\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{ados-kamal.png}\n\n \\caption{\\#ADOS and Black Lives Matter }\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{ados-kamal-deg.png}\n \n \\caption{AVG Degree Distributions and Std Dev }\n \\end{minipage}\n\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{insurrection-kamal.png}\n \\caption{Insurrection and Capitol Attack}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{insurrection-kamal-deg.png}\n \n \\caption{AVG Degree Distributions and Std Dev}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5cm]{stimulus-kamal.png}\n \\caption{Socioeconomic Issues During COVID-19 }\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5cm]{stimulus-kamal-deg.png}\n \\caption{AVG Degree Distributions and Std Dev}\n \\end{minipage}\n \\caption{Various Metrics for Daily Partitioned Hashtag Co-occurrence and User-mention Graphs from the Political Personality Dataset. $Q_0$ retrieves tweets where Kamala Harris is mentioned in a hashtag, text body, or user mention.}\n \\label{fig:kamala}\n\\end{figure*}\n\\noindent \\textbf{Subgroups within a Candidate Subgraph:} \n \n The purpose of the last metric is to determine whether a candidate subgraph identified using the previous measures needs to be further decomposed into smaller subgraphs. We use subgraph centrality \\citep{estrada2005subgraph} and coreness of nodes as our metrics. \n \n \n \n The subgraph centrality measures the number of subgraphs a vertex participates in, and the core number of a node is the largest value $k$ of a $k$-core containing that node. 
So a subgraph for which the core number and subgraph centrality distributions are right-skewed compared to the background subgraph is (i) either split around high-coreness nodes, or (ii) reported to the user as a mixture of diverse topics. \nThe node grouping, per-group subgraph generation, and candidate subgraph identification process is presented in Algorithm \\ref{alg:graph-metrics}. In the algorithm, function \\textit{cut2bin} extends the standard cut function, which compares the histograms of two distributions whose domains (X-values) must overlap, and produces equi-width bins to ensure that the two histograms (i.e., frequency distributions) have compatible bins.\n\n\\subsection{The Testing Process}\n\\label{sec:testing}\n\n\\begin{algorithm}\n\\scriptsize\n\\caption{Graph Construction Algorithm}\n\\label{alg:graph-metrics}\n\\SetKwProg{ComputeMetrics}{Function \\emph{ComputeMetrics}}{}{end}\nINPUT : $Q_{out}$ Output of the query, $L$ Graph construction rules, $gv$ grouping variable, $th_{size}$ is the minimum size of the subgraph\\;\n\\SetKwProg{gmetrics}{Function \\emph{gmetrics}}{}{end}\n\\SetKwProg{CompareHistograms}{Function \\emph{CompareHistograms}}{}{end}\n\\gmetrics{($Q_{out}$, $L$, $groupVar$)}{\nG[]$\\leftarrow$ ConstructGraph($Q_{out}$, $L$)\\;\n$T \\leftarrow$ []\\;\n\\For{$g \\in G $}{\n $t_{\\alpha} \\leftarrow$ ComputeMetrics(g)\\; \n $T.push(t_{\\alpha})$\\;\n \n }\nreturn $T$\n}\n\\ComputeMetrics{(Graph g)}{\n$m\\leftarrow[]$\\;\n$m.push(eigenVectorCentrality(g))$\\;\n.........\n$m.push(coreNumber(g))$\\;\nreturn $m$\n}\n\\CompareHistograms{(List $t_{1}$, List $x_{2}$)}{\n$bin_{edges} \\leftarrow$ getBinEdges($x_{2}$)\\;\n$s_g \\leftarrow cut2bin(x_2, bin_{edges})$\\;\n$t_g \\leftarrow cut2bin(t_1, bin_{edges})$\\;\n\n$\\beta_{js} \\leftarrow distance.jensenShannon(t_g, s_g)$\\;\n$h_t \\leftarrow histogram(t_g, s_g,bin_{edges} )$\\;\nreturn $\\beta_{js}, h_t, bin_{edges}$\\;\n}\n\\end{algorithm}\n\n\n\n\n\n\n\\begin{algorithm}\n 
\\scriptsize\n\\caption{Graph Discovery Algorithm}\n\\label{alg:discovery-algo}\n\\SetKwProg{discover}{Function \\emph{discover}}{}{end}\n\\KwIn{Set $\\sigma$ of divergence values for all subgraphs}\n\\KwOut{Feature vectors $v_1$, $v_2$, $v_3$, List for re-partition recommendations $l$}\n$ev$ : eigenvector centrality\\;\n$ec$ : edge current flow betweenness centrality\\;\n$nc$ : current flow betweenness centrality\\;\n$sc$ : subgraph centrality\\;\n$\\mu$ : core number\\;\n$z$ : average neighbor degree\\;\n\\discover{($\\sigma$)}{\n\\For{any two sets of divergence values $\\sigma_1$ and $\\sigma_2$ from $\\sigma$}{\n\\If{$\\sigma_2(ev) > \\sigma_1(ev)$}{\n $v_1(\\sigma_2) = v_1(\\sigma_2) + 1$\\;\n \\If{$\\sigma_2(ec) > \\sigma_1(ec)$}{\n $v_2(\\sigma_2) = v_2(\\sigma_2) + 1$\\;\n \\If{($\\sigma_2(nc)+ \\sigma_2(\\mu)) > (\\sigma_1(nc) + \\sigma_1(\\mu)$)}{\n $v_3(\\sigma_2) = v_3(\\sigma_2) + 1$\\;\n }\n \\If{($\\sigma_2(sc)+ \\sigma_2(z)) > (\\sigma_1(sc) + \\sigma_1(z)$)}{\n $l(\\sigma_2) = 1$\\;\n }\n }\n }\n }\n}\n\\end{algorithm} \nThe discovery algorithm's input is the list of divergence values of two candidate sets computed against the same reference graph. It produces four lists at the end. Each of the first three lists captures one specific factor of interestingness of the subgraph. The most interesting subgraph should be present in all three vectors. \nIf the subgraph has many cores and is sufficiently dense, then the system considers the subgraph to be \\textit{uninterpretable} and sends it for re-partitioning. \nTherefore, the fourth list contains the subgraphs that should be partitioned again. Currently, our repartitioning strategy is to take subsets of the original keyword list provided by the user at the beginning of the discovery process to re-initiate the discovery process for the dense, uninterpretable subgraph.\\\\\n\n\\noindent Each metric produces a value for each participating node of its input. 
However, to compare two different candidates in terms of the metrics mentioned above, we need to convert them to comparable histograms by applying a binning function that depends on the data type of the grouping function. \n\\\\\n\\noindent \\textit{Bin Formation (cut2bin):} Cut is a conventional operator (available in R, Matlab, Pandas, etc.) that segments and sorts data values into bins. cut2bin is an extension of the standard cut function that compares the histograms of two distributions whose domains (X-values) must overlap. The cut function accepts as input a set of node property values (e.g., the centrality metrics), and optionally a set of edge boundaries for the bins. It returns the histograms of the distributions. Using cut, we first produce $n$ equi-width bins from the distribution with the narrower domain. Then we extract the bin edges from the result and use them as the input bin edges to create the wider distribution's cut. This ensures that the histograms are compatible. In case one of the distributions is known to be a reference distribution (the distribution from the background graph) against which the second distribution is compared, we use the reference distribution for equi-width binning and bin the second distribution relative to the first.\n\\\\\n\\noindent The $CompareHistograms$ function uses the \\textit{cut2bin} function to produce the histograms, and then computes the JS divergence on the comparable histograms. It returns the set of divergence values for each metric of a subgraph, which is the input of the discovery algorithm. The function requires the user to specify which of the compared graphs should be considered the reference -- this is required to ensure that our method is scalable for large background graphs (which are typically much larger than the interesting subgraphs). 
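A minimal sketch of the cut2bin idea: derive equi-width bin edges from the reference sample and bin the second sample with those same edges so the two histograms are comparable (pure Python; the actual implementation's interface is an assumption):

```python
def cut2bin(ref, other, nbins=5):
    """Equi-width bins from the reference sample; the second sample is
    binned with the same edges so the two histograms are compatible."""
    lo, hi = min(ref), max(ref)
    width = (hi - lo) / nbins or 1.0
    edges = [lo + i * width for i in range(nbins + 1)]
    def hist(xs):
        h = [0] * nbins
        for x in xs:
            i = max(0, min(int((x - lo) / width), nbins - 1))  # clamp outliers
            h[i] += 1
        return h
    return hist(ref), hist(other), edges

# toy centrality samples: background-graph values vs. a candidate subgraph
h_ref, h_sub, edges = cut2bin([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [2, 2, 3, 9])
```

The two histograms returned here share the same edges and can be fed directly to a JS-divergence computation.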
If the background graph is very large, we take several random subgraphs from it, chosen to be representative, before the actual comparisons are conducted. To this end, we adopt the well-known random walk strategy.\n \n\\noindent In the algorithm, $v_1$, $v_2$, and $v_3$ are the three vectors that store the interestingness factors of the subgraphs, and $l$ is the list for repartitioning. For two subgraphs, if one of them qualifies for $v_1$, that subgraph contains higher centrality than the other; in that case, the algorithm increases the corresponding entry of the vector by one. Similarly, it increases the value of $v_2$ by one if the same candidate has high navigability. Finally, it increases $v_3$ if the candidate has higher propagativeness. The algorithm selects the top-$k$ scores of candidates from each vector, and marks them interesting. \n\n\n\n\n\n\\begin{table*}[]\n\\caption{Dataset Descriptions }\n\\label{tab:dataset}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\textbf{Data Set} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Total \\\\ Collection\\\\ Size\\end{tabular}} & \\textit{\\textbf{\\begin{tabular}[c]{@{}l@{}}Sub\\\\ Query\\end{tabular}}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Network \\\\ Type\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Total \\\\ Tweets\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Unique \\\\ nodes\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Unique \\\\ edges\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Self\\\\ Loop\\end{tabular}} & \\textbf{Density} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Avg\\\\ Degree\\end{tabular}} \\\\ \\hline\n\\textit{\\begin{tabular}[c]{@{}l@{}}Kamala \\\\ Harris\\end{tabular}} & 12469480 & \\textit{\\begin{tabular}[c]{@{}l@{}}Capitol \\\\ Attack\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 164397 & 1398 & 7801 & 16 & 0.0025 & 4.3 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user\\\\ 
co-mention\\end{tabular} & 164397 & 8012 & 48604 & 87 & 0.00012 & 3.19 \\\\ \\hline\n\\textit{} & & \\textit{\\#ADOS} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 158419 & 3671 & 10738 & 29 & 0.0015 & 5.8 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 158419 & 30829 & 39865 & 49 & 8.3 & 2.5 \\\\ \\hline\n\\textit{} & & \\textit{\\begin{tabular}[c]{@{}l@{}}Economic \\\\ Issues\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 36678 & 1278 & 1828 & 4 & 0.0022 & 2.8 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 36678 & 6971 & 11584 & 19 & 0.0004 & 3.4 \\\\ \\hline\n\\textit{Joe Biden} & 45258151 & \\textit{\\begin{tabular}[c]{@{}l@{}}Capitol \\\\ Attack\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 676898 & 7728 & 21422 & 50 & 0.00071 & 5.49 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 676898 & 82046 & 101646 & 130 & 3.0146 & 2.473 \\\\ \\hline\n\\textit{} & & \\textit{\\#ADOS} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 183765 & 3007 & 11008 & 29 & 0.002 & 3.85 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 158419 & 29547 & 40932 & 56 & 9.3 & 2.7 \\\\ \\hline\n\\textit{} & & \\textit{\\begin{tabular}[c]{@{}l@{}}Economic \\\\ Issues\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 138754 & 2961 & 5733 & 10 & 0.0013 & 3.87 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 138754 & 21417 & 19691 & 23 & 8.5 & 1.83 \\\\ \\hline\n\\textit{Vaccine} & 24172676 & \\textit{\\begin{tabular}[c]{@{}l@{}}Vaccine\\\\ Anti-vax\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 1000000 & 18809 & 24195 & 44 & 
2.52 & 2.5 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 1000000 & 203211 & 41877 & 46 & 2.02 & 0.4 \\\\ \\hline\n\\textit{} & & \\textit{Covid Test} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 1000000 & 26671 & 45378 & 69 & 0.00012 & 3.4 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 1000000 & 188761 & 83656 & 109 & 4.67 & 0.886 \\\\ \\hline\n & & economy & \\begin{tabular}[c]{@{}l@{}}hashtag \\\\ co-occur\\end{tabular} & 917890 & 3002 & 4395 & 9 & 0.0009 & 2.9 \\\\ \\hline\n & & & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 917890 & 20590 & 8528 & 13 & 4.023 & 0.8 \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\subsection{Data Sets}\n\\label{sec:datasets}\nData sets used for the experiments are tweets collected using the Twitter Streaming API using a set of domain-specific, hand-curated keywords. We used three data sets, all collected between the 1st and the 31st of January, 2021. The first two sets are for political personalities. The Kamala Harris data set was collected by using variants of Kamala Harris's name together with her Twitter handle and hashtags constructed from her name. The second data set was similarly constructed for Joe Biden. The third data set was collected during the COVID-19 pandemic. The keywords were selected based on popularly co-occurring terms from Google Trends. We selected the terms manually to ensure that they are related to the pandemic and vaccine related issues (and not, for example, partisan politics). Table \\ref{tab:dataset} presents a quantitative summary on the three data sets. We used a set of subqueries to find subsets from each of our datasets. These subqueries are temporal, and represent trending terms that stand for the emerging issues. 
For the first two datasets, we used three themes to construct the background graphs: a) the Capitol attack and insurrection, b) the Black Lives Matter and ADOS movements, and c) the American economic crisis and recovery efforts. For the vaccine data set, we selected two subsets of posts, one for vaccine-related concerns, anti-vaccine movements, and related issues, and the second for COVID testing and infection-related issues. The vaccine data set is larger, and the content is more diverse than the first two. \nAll the data sets are publicly available from The Awesome Lab.\\footnote{https:\/\/code.awesome.sdsc.edu\/awsomelabpublic\/datasets\/int-springer-snam\/}\n\nIn the experiments, we constructed two subgraphs from each subquery. The first is the \"hashtag co-occurrence\" graph, where each hashtag is a node, and two hashtags are connected through an edge if they co-occur in a tweet. The second is the \"user co-mention\" graph, where each user is a node, and there is an edge between two nodes if a tweet mentioned them jointly. Intuitively, the hashtag co-occurrence subgraph captures topic prevalence and propagation, whereas the co-mention subgraph captures the tendency to influence and propagate messages to a larger audience. Our goal is to discover surprises (and lack thereof) in these two aspects for our data sets.\n\nWe note that the dataset chosen is from a month when the US experienced a major event in the form of the Capitol Attack, and a new administration was sworn in. This explains why the number of tweets in the ``Capitol Attack'' subgraph is high for both politicians during this period, and not surprisingly it is also the most discussed topic, as evidenced by the high average node degree. Therefore, this ``selection bias'' sets our expectation for subjective interestingness -- given the specific period we have chosen, this issue will dominate most social media conversations in the USA. 
We also observe the low ratio of the number of unique nodes to the number of tweets, signifying the high number of retweets, which signals a form of information propagation over the network. The propagativeness of the network during this eventful period is also evidenced by the fact that the unique node count of a co-mention network is 75\\%--88\\% higher on average compared to the hashtag co-occur network of the same class. In Section \\ref{sec:analysis}, we show how our interestingness technique performs on this dataset.\n\n\n\\begin{figure*}[t] \n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-capitol-attack-ht-hist.png}\n \\caption{Capitol Attack}\n \\label{fig:hist-results-harris-capitol}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-ados-ht-hist.png}\n \\caption{\\#ADOS}\n \\label{fig:hist-results-harris-ados}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-economy-ht-hist.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-harris-eco}\n \\end{minipage}%\n\n \\caption{Top Hashtag Distributions of the Kamala Harris Data set}\n \\label{fig:hist-results-harris}\n\\end{figure*}\n\\begin{figure*}[t] \n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-capitol-attack-ht-hist.png}\n \\caption{Capitol Attack}\n \\label{fig:hist-results-biden-capitol}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-ados-ht-hist.png}\n \\caption{\\#ADOS Issues}\n \\label{fig:hist-results-biden-ados}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-economy-ht-hist.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-biden-eco}\n \\end{minipage}%\n\n \\caption{Top Hashtag Distributions of the Joe Biden Data set 
}\n \\label{fig:hist-results-biden}\n\\end{figure*} \n\\begin{figure*}[t]\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-vax-anti-vax.png}\n \\caption{Vaccine anti-vaccine Issues}\n \\label{fig:hist-results-vac-vac-anti-vax}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-covid-test.png}\n \\caption{COVID-19 Test}\n \\label{fig:hist-results-vac-covid}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-economy.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-vac-eco}\n \\end{minipage}%\n \\caption{Top Hashtag Distributions of the Vaccine Data set}\n \\label{fig:hist-results-vac}\n\\end{figure*} \n\\subsection{Experimental Setup}\n\\label{sec:setup}\nThe experimental setup has three steps: a) data collection and archival storage, b) indexing and storing the data, and c) executing analytical pipelines. We used the AWESOME project's continuous tweet ingestion system, which collects tweets through the Twitter 1\\% API using a set of hand-picked keywords. We used the AWESOME Polystore for indexing, storing, and searching the data. For computation, we used the Nautilus facility of the Pacific Research Platform (PRP). Our hardware configurations are as follows: the AWESOME server has 64 GB of memory and 32 cores, and the Nautilus nodes have 32 cores and 64 GB of memory. The data ingestion process requires a large amount of memory, and the requirement varies with the density of the data. Similarly, centrality computation is a CPU-bound process. The performance optimizations that we implemented are outside the scope of this paper. 
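To make the subgraph construction from Section \ref{sec:datasets} concrete, the following is a minimal standard-library sketch, not the AWESOME implementation; the tweet field names (\texttt{hashtags}, \texttt{mentions}) are illustrative assumptions about the input schema:

```python
# Sketch (assumed schema, not the AWESOME pipeline) of the two
# experimental subgraphs: hashtag co-occurrence and user co-mention.
from collections import Counter
from itertools import combinations

def cooccurrence_edges(tweets, field):
    """Return {(a, b): weight} where a and b (with a < b) appear
    together in the same tweet; the weight counts such tweets."""
    edges = Counter()
    for tweet in tweets:
        for pair in combinations(sorted(set(tweet.get(field, []))), 2):
            edges[pair] += 1
    return edges

tweets = [
    {"hashtags": ["capitol", "insurrection"], "mentions": ["u1", "u2"]},
    {"hashtags": ["capitol", "ados"], "mentions": ["u1", "u3"]},
]
hashtag_edges = cooccurrence_edges(tweets, "hashtags")  # hashtag co-occurrence
mention_edges = cooccurrence_edges(tweets, "mentions")  # user co-mention
```

The resulting weighted edge lists can then be loaded into any graph library before computing the centrality-based disparity measures discussed below.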
\n\\medskip\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-egc-kamal-ht.png}\n\n \\caption{Eigenvector Centrality Disparity Hashtag Network}\n \\label{fig:result-pipline-harris-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-nav-kamal-ht.png}\n \\caption{Topical Navigability Disparity Hashtag Network}\n \\label{fig:result-pipline-harris-ht-nav}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-prop-kamal-ht.png}\n\n \\caption{Propagativeness Disparity Hashtag Network }\n \\label{fig:result-pipline-harris-ht-prop}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-core-kamal-mention.png}\n \\caption{Eigenvector Centrality Disparity co-mention Network}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-nav-kamal-mention.png}\n\n \\caption{Topical Navigability Disparity co-mention Network}\n \\label{fig:result-pipline-harris-co-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-prop-kamal-mention.png}\n \\caption{Propagativeness Disparity co-mention Network}\n \\label{fig:result-pipline-harris-co-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Kamala Harris\" data set.}\n \\label{fig:result-pipline-harris}\n\\end{figure*}\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-egc-biden-ht.png}\n\n \\caption{Eigenvector Centrality Disparity Hashtag Network}\n \\label{fig:result-pipline-biden-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n 
\\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-nav-biden.png}\n \\caption{Topical Navigability Disparity Hashtag Network}\n \\label{fig:result-pipline-biden-ht-nav}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-prop-biden-ht.png}\n\n \\caption{Propagativeness Disparity Hashtag Network }\n \\label{fig:result-pipline-biden-ht-prop}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-egc-biden-mention.png}\n \\caption{Eigenvector Centrality Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-egc}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-nav-biden-mention.png}\n\n \\caption{Topical Navigability Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-prop-biden-mention.png}\n \\caption{Propagativeness Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Joe Biden\" data set.}\n \\label{fig:result-pipline-biden}\n\\end{figure*}\n\\begin{figure*}\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-egc-covid-ht.png}\n\n \\caption{Eigenvector Centrality Disparity: \"Vaccine\" Hashtag Network}\n \\label{fig:result-pipline-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-nav-covid-ht.png}\n \\caption{Topical Navigability Disparity: Vaccine Hashtag Network}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, 
height=5.5cm]{figures-covid-result-prop-covid-ht.png}\n \\caption{Propagativeness Disparity: \"Vaccine\" Hashtag Network }\n \\label{fig:result-pipline-ht-vax}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-egc-covid-mention.png}\n \\caption{Eigenvector Centrality Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-egc}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-nav-covid-mention.png}\n\n \\caption{Topical Navigability Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-prop-covid-mention.png}\n \\caption{Propagativeness Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Vaccine\" data set.}\n \\label{fig:result-pipline-vax}\n\\end{figure*}\n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-ados.png}\n \\caption{\\#ADOS}\n \\label{fig:core-peri-kamal-ados}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-capitol.png}\n \\caption{Capitol attack}\n \\label{fig:core-peri-kamal-cap}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harrris-economy.png}\n \\caption{Economic issues}\n \\label{fig:core-peri-kamal-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-random.png}\n \\caption{Random Graph}\n \\label{fig:core-peri-kamal-random}\n \\end{minipage}%\n \\caption{Core and Periphery 
visualization of \"Kamala Harris\" Data set}\n \\label{fig:core-peri-kamal}\n\\end{figure*} \n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-bidon-ados.png}\n \\caption{\\#ADOS}\n \\label{fig:core-peri-biden-ados}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-capitol.png}\n \\caption{Capitol attack}\n \\label{fig:core-peri-biden-capitol}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-economy.png}\n \\caption{Economic issues}\n \\label{fig:core-peri-biden-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-random.png}\n \\caption{Random Data}\n \\label{fig:core-peri-biden-random}\n \\end{minipage}%\n \\caption{Core and Periphery visualization of \"Joe Biden\" Data set}\n \\label{fig:core-peri-biden}\n\\end{figure*} \n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-vax-ht.png}\n \\caption{Vaccine Issues}\n \\label{fig:core-peri-vax-vax-anti-vax}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-test-ht.png}\n \\caption{COVID-19 Test }\n \\label{fig:core-peri-vax-test}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-economy.png}\n \\centering\n \\caption{Economic issues}\n \\label{fig:core-peri-vax-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-random.png}\n \\caption{Random Graph}\n \\label{fig:core-peri-vax-random}\n \\end{minipage}%\n \\centering\n \\caption{Core and Periphery visualization of the Vaccine Data set}\n 
\\label{fig:core-peri-vax}\n\\end{figure*}\n\\subsection{Result Analysis}\n\\label{sec:analysis}\n\\textbf{Kamala Harris Network}: Figure \\ref{fig:result-pipline-harris} represents the Kamala Harris network. This network is interesting because even in the context of the Capitol Attack and the Senate approval of the election results, it is dominated by \\#ADOS conversations, not by the other political issues, including economic policies. An analysis of the three interestingness measures for this network reveals the following.\nThe Eigenvector Centrality Disparity for the hashtag network (Figure \\ref{fig:result-pipline-harris-ht-egc}) shows that all the groups generated by the subqueries are equally distributed, while the random graph has a higher volume with similar trends. Hence, these three subqueries are equally important. However, there are a few spikes on \\#ADOS and \"Capitol Attack\" that indicate the possibility of interestingness. Figure \\ref{fig:result-pipline-harris-ht-nav} shows that \"economic issues\" has a spike at lower-centrality nodes but drops to zero as centrality increases, while \"Capitol attack\" and \"\\#ADOS\" have many more spikes at different centrality levels. Hence, we conclude that this network is much more navigable for \"Capitol attack\" and \"\\#ADOS\". Figure \\ref{fig:result-pipline-harris-ht-prop} shows the Propagativeness Disparity; it is clear from this plot that African American issues dominated the conversation here, and any subtopic related to them, including non-US issues like ``Tigray'' (on the Ethiopian genocide), propagated quickly in this network. \n\\\\\n\\noindent\n\\textbf{Biden Network}: While the predominance of \\#ADOS issues might still be expected for the Kamala Harris data set, we discovered it to be a ``dominant enough'' subgraph in the Joe Biden data set as well, represented in Figure \\ref{fig:result-pipline-biden}. 
The eigenvector centrality disparity (Figure \\ref{fig:result-pipline-biden-ht-egc}) shows that the three subgroups are equally dominant in this network. The navigability of the network (Figure \\ref{fig:result-pipline-biden-ht-nav}) shows that it is navigable for all three subgraphs. It has two big spikes for \"economic issues\" and \"Capitol attack,\" plus many mid-sized spikes for \"\\#ADOS\". Interestingly, the propagativeness (Figure \\ref{fig:result-pipline-biden-ht-prop}) shows that the network is strongly propagative for the economic issues and the \\#ADOS issue, which shows up in both the Capitol Attack and the ADOS subgroups. Interestingly, the \"Joe Biden\" data's co-mention network shows more propagativeness than the hashtag co-occur network, which indicates that exploring the co-mention subgraph will be useful. We also note the occurrence of certain completely unexpected topics (e.g., COVIDIOT, AUSvIND -- Australia vs. India) within the ADOS group, while Economic Issues for Biden do not exhibit surprising results.\n\\\\\n\\noindent\n\\textbf{Vaccine Network}: In the vaccine network, we found that \"economic issues\" and \"covid tests\" are more propagative than \"vaccine and anti-vaccine\" related topics (Figure \\ref{fig:result-pipline-vax}). The surprising result here is that the \"vaccine - anti-vaccine\" topics show a strong correlation with \"Economy\" in the other two charts. We observe that while the vaccine issues are navigable through the network, this topic cluster is not very propagative. In contrast, in the co-mention network, vaccine and anti-vaccine issues are both very navigable and strongly propagative. 
Further, the propagativeness in the co-mention network for the COVID-test subquery shows many spikes at different levels, which signifies that, for testing-related issues, the network serves as a vehicle of message propagation and influence. \n\n\\subsection{Result Validation}\n\\label{sec:validation}\n\nThere is not a lot of work in the literature on interesting subgraph finding. Additionally, there are no benchmark data sets against which our ``interestingness finding'' technique can be compared. This prompted us to evaluate the results using a core-periphery analysis as indicated earlier in Figure \\ref{fig:sparse}. The idea is to demonstrate that the parts of the network claimed to be interesting stand out in comparison to the network of a random sample of comparable size from the background graph. These results are presented in Figures \\ref{fig:core-peri-kamal}, \\ref{fig:core-peri-biden}, and \\ref{fig:core-peri-vax}. In each of these cases, we have shown a representative random graph in the rightmost subfigure to represent the background graph. To us, the large and dense core formation in Figure \\ref{fig:core-peri-kamal}(b) is an expected, non-surprising result. However, the lack of core formation in Figure \\ref{fig:core-peri-kamal}(c) is interesting because it shows that while there was a sizeable network for economics-related terms for Kamala Harris, the topics never ``gelled'' into a core-forming conversation and showed little propagation. Figure \\ref{fig:core-peri-kamal}(a) is somewhat more interesting because the density of the periphery is far lower than that of the random graph, while the core has about the same density as the random graph. The core-periphery separation is much more prominent in the first three plots of Figure \\ref{fig:core-peri-biden}. Unlike the Kamala Harris random graph, Figure \\ref{fig:core-peri-biden}(d) shows that the random graph of this data set itself has a very large dense core and a moderately dense periphery. 
Among the three subfigures, the small (but lighter) core of Figure \\ref{fig:core-peri-biden}(b) has the maximum difference compared to the random graph, although we find Figure \\ref{fig:core-peri-biden}(a) to be conceptually more interesting. For the vaccine data set, Figure \\ref{fig:core-peri-vax}(c) is closest to the random graph, showing that most conversations around the topic touch economic issues, while the discussion on the vaccine itself (Figure \\ref{fig:core-peri-vax}(a)) and that of COVID-testing (Figure \\ref{fig:core-peri-vax}(b)) are more focused and propagative. \n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we presented a general technique to discover interesting subgraphs in social networks, with the intent that social network researchers from different domains would find it a viable research tool. The results obtained from the tool will help researchers probe deeper into analyzing the underlying phenomena that lead to the surprising results discovered by the tool. While we used Twitter as our example data source, similar analysis can be performed on other social media, where the content features would be different. Further, in this paper, we have used a few centrality measures to compute divergence-based features -- but the system is designed to plug in other measures as well. \n\\FloatBarrier\n\\begin{acknowledgements}\nThis work was partially supported by NSF Grants \\#1909875 and \\#1738411. 
We also acknowledge SDSC cloud support, particularly Christine Kirkpatrick \\& Kevin Coakley, for generous help and support in collecting and managing our tweet collection.\n\\end{acknowledgements}\n\\bibliographystyle{spbasic} \n\n\\section{Introduction}\n\\label{sec:intro}\nAn information system designed for the analysis of social media must consider a common set of properties that characterize all social media data.\n\\begin{itemize}[leftmargin=*]\n \\item Information elements in social media are essentially \\textit{heterogeneous} in nature -- users, posts, images, and external URL references, although related, all bear different kinds of information. \n \\item Most social information is \\textit{temporal} -- a timestamp is associated with user events like the creation of or response to a post, as well as system events like user account creation, deactivation and deletion. The system should therefore allow both temporal as well as time-agnostic analyses.\n \\item Information in social media evolves fast. In one study \\citep{zhu2013modeling}, it was shown that the number of users of a social media platform is a power function of time. More recently, \\citep{antonakaki2018utilizing} showed that Twitter's growth is supralinear and follows Leskovec's model of graph evolution \\citep{leskovec2007graph}. Therefore, an analyst may first have to perform \\textit{exploration tasks} on the data before figuring out their analysis plan. \n \\item Social media has significant textual content, sometimes with specific entity markers (e.g., mentions) and topic markers (e.g., hashtags). Therefore, any information element derived from text (e.g., named entities, topics, sentiment scores) may also be used for analysis. 
To be practically useful, the system must accommodate semantic synonyms -- \\#KamalaHarris, @KamalaHarris and ``Kamala Harris'' refer to the same entity.\n \\item Relationships between information items in social media data must capture not only canonical relationships like (\\texttt{tweet-15 mentions user-392}) but also a wide variety of computed relationships over base entities (users, posts, $\\ldots$) and text-derived information (e.g., named entities). \n\\end{itemize}\nIt is also imperative that such an information system support three styles of analysis tasks:\n\\begin{enumerate}\n \\item \\textbf{Search}, where the user specifies a content predicate without specifying the structure of the data. For example, seeking the number of tweets related to Kamala Harris should count tweets where she is the author, as well as tweets where any synonym of ``Kamala Harris'' is in the tweet text.\n \\item \\textbf{Query}, where the user specifies query conditions based on the structure of the data. For example, tweets with \\texttt{create\\_date} between 9 and 9:30 am on January 6th, 2021, with \\texttt{text} containing the string ``Pence'', and that were \\texttt{favorited} at least 100 times during the same time period.\n \\item \\textbf{Discovery,} where the user may or may not know the exact predicates on the data items to be retrieved, but can specify analytical operations (together with some post-filters) whose results will provide insights into the data. For example, we call a query like \\texttt{Perform \\textit{community detection} on all tweets on January 6, 2021 and return the users from the largest community} a discovery query.\n\\end{enumerate}\nIn general, a real-life analytics workload will freely combine these modalities as part of a user's information exploration process. \n\nIn this paper, we present a general-purpose graph-based model for social media data and a subgraph discovery algorithm atop this data model. 
Physically, the data model is implemented on AWESOME \\citep{gupta:awesome:2016}, an analytical platform designed to enable large-scale social media analytics over continuously acquired data from social media APIs. The platform, developed as a polystore system, natively supports relational, graph and document data, and hence enables a user to perform complex analyses that include arbitrary combinations of search, query and discovery operations. We use the term \\textbf{\\textit{query-driven discovery}} to reflect the scenario where the user does not want to run the discovery algorithm on a large and continuously collected body of data; rather, the user knows a starting point that can be specified as an expressive query (illustrated later), and puts bounds on the discovery process so that it terminates within an acceptable time limit.\n\n\\medskip \n\n\\noindent \\textbf{Contributions.} This paper makes the following contributions. (a) It offers a new formulation for the subgraph interestingness problem for social media; (b) based on this formulation, it presents a discovery algorithm for social media; (c) it demonstrates the efficacy of the algorithm on multiple data sets.\n\n\\medskip\n\n\\noindent \\textbf{Organization of the paper.} The rest of the paper is organized as follows. Section \\ref{sec:related} describes the related research on interesting subgraph finding as investigated by researchers in Knowledge Discovery, Information Management, as well as Social Network Mining. Section \\ref{sec:prelim} presents the abstract data model over which the discovery algorithm operates and the basic definitions to establish the domain of discourse for the discovery process. Section \\ref{sec:generate} presents our method of generating candidate subgraphs that will be tested for interestingness. Section \\ref{sec:discovery} first presents our interestingness metrics and then the testing process based on these metrics. 
Section \\ref{sec:experiments} describes the experimental validation of our approach on multiple data sets. Section \\ref{sec:conclusion} presents concluding discussions.\n\n\\section{Related Work}\n\\label{sec:related}\nThe problem of finding ``interesting'' information in a data set is not new. \\citep{what-makes-patterns-interesting:1996} described that an ``interestingness measure'' can be ``objective'' or ``subjective''. A measure is ``objective'' when it is computed solely based on the properties of the data. In contrast, a ``subjective'' measure must take into account the user's perspective. They propose that (a) a pattern is interesting if it is \"surprising\" to the user (\\textit{unexpectedness}) and (b) a pattern is interesting if the user can act on it to their advantage (\\textit{actionability}). Of these criteria, actionability is hard to determine algorithmically; unexpectedness, on the other hand, can be viewed as a departure from the user's beliefs. For example, a user may believe that the 24-hour occurrence patterns of all hashtags are nearly identical. In this case, a discovery would be to find a set of hashtags and sample dates for which this belief is violated. \nFollowing \\citep{geng2006interestingness}, there are three possibilities regarding how a system is informed of a user's knowledge and beliefs: (a) the user provides a formal specification of his or her knowledge, and after obtaining the mining results, the system chooses which unexpected patterns to present to the user \\citep{BingLiu:1999}; (b) according to the user's interactive feedback, the system removes uninteresting patterns \\citep{not-interesting:1999}; and (c) the system applies the user's specifications as constraints during the mining process to narrow down the search space and provide fewer results. 
Our work roughly corresponds to the third strategy.\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=6.5cm, height=6.5cm]{fig-1-core-peri.png}\n\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=6.5cm, height=6.5cm]{fig-1-networking-connection.png}\n \n \\end{minipage}\n \\begin{minipage}{.4\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{fig-1-top-25-bar.png}\n \\end{minipage}\n \\begin{minipage}{.4\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{degree-hist.png}\n \\end{minipage}\n \\caption{An example of an interesting subgraph. Tweets on Indian politics form a sparse center in a dense periphery (separately shown in the top right). The bottom two figures show the hashtag distribution of the center and the degree distribution of the center (log scale), respectively.}\n \\label{fig:sparse}\n\\end{figure*}\nEarly research on finding interesting subgraphs focused primarily on finding interesting substructures. This body of research found interestingness in two directions: (a) finding frequently occurring subgraphs in a collection of graphs (e.g., chemical structures) \\citep{kuramochi2001frequent, yan2003closegraph, thoma2010discriminative} and (b) finding regions of a large graph that have high edge density \\citep{lee2010survey, sariyuce2015finding, epasto2015efficient, wen2017efficient} compared to other regions in the graph. Note that while a dense region in the graph can definitely be interesting, the inverse situation, where a sparsely connected region is surrounded by an otherwise dense periphery, can be equally interesting for an application. \n\nWe illustrate the situation in Figure \\ref{fig:sparse}. 
The primary data set is a collection of tweets on COVID-19 vaccination, but this specific graph shows a sparse core on Indian politics that is loosely connected to nodes of an otherwise dense periphery on the primary topic. Standard network features like the hashtag histogram and the node degree histogram do not reveal this substructure, requiring us to explore new methods of discovery. \n\nA serious limitation of the above class of work is that the interestingness criteria do not take into account \\textit{node content} (resp. edge content), which may be present in a property graph data model \\citep{angles2018property} where nodes and edges have attributes. \\citep{bendimerad2019mining} presents several discovery algorithms for graphs with vertex attributes and edge attributes. They perform both structure-based graph clustering and subspace clustering of attributes to identify interesting (in their domain, ``anomalous'') subgraphs.\n\nOn the ``subjective'' side of the interestingness problem, one approach considers interesting subgraphs as a subgraph matching problem \\citep{shan2019dynamic}. Their general idea is to compute all matching subgraphs that satisfy a user query and then rank the results based on the rarity and the likelihood of the associations among entities in the subgraphs. In contrast, \\citep{adriaens2019subjectively} uses the notion of ``subjective interestingness'', which roughly corresponds to finding subgraphs whose connectivity properties (e.g., the average degree of vertices) are distinctly different from those of an ``expected'' \\textit{background} graph. This approach uses a constrained optimization problem that maximizes an objective function over the \\textit{information content} ($IC$) and the \\textit{description length} ($DL$) of the desired subgraph pattern. \n\nOur work is conceptually most inspired by \\citep{bendimerad2019subj}, which explores the subjective interestingness problem for attributed graphs. 
Their main contribution centers around CSEA (Cohesive Subgraph with Exceptional Attributes) patterns, which inform the user that a given set of attributes has exceptional values throughout a set of vertices in the graph. The subjective interestingness is given by $$S(U,S) = \\frac{IC(U,S)}{DL(U)}$$ where $U$ is a subset of nodes and $S$ is a set of restrictions on the value domains of the attributes. The system models the prior beliefs of the user as the Maximum Entropy distribution subject to any stated prior beliefs the user may hold about the data (e.g., the distribution of an attribute value). The information content $IC(U,S)$ of a CSEA pattern $(U,S)$ is formalized as the negative logarithm of the probability that the pattern is present under the background distribution. The description of $U$ is the intersection of all neighborhoods in a subset $X \\subseteq N(U)$, along with the set of ``exceptions'' -- vertices that are in the intersection but not part of $U$. However, we have a completely different, more database-centric formulation of the background and the user's beliefs.\n\n\\section{The Problem Setup}\n\\label{sec:prelim}\n\n\\subsection{Data Model}\n\\label{sec:dataModel}\nOur abstract model of social media data takes the form of a \\textit{heterogeneous information network} (an information network with multiple types of nodes and edges), which we view as a temporal property graph $G$. Let $N$ be the node set and $E$ be the edge set of $G$. $N$ can be viewed as a disjoint union of different subsets (called \\textit{node types}) -- users $U$, posts $P$, topic markers (e.g., hashtags) $H$, term vocabulary $V$ (the set of all terms appearing in a corpus), and references (e.g., URLs) $R(\\tau)$, where $\\tau$ represents the type of resource (e.g., image, video, web site $\\ldots$). Each type of node has a different set of properties (attributes) $\\bar{A}(.)$. 
We denote the attributes of $U$ as $\\bar{A}(U) = a_1(U), a_2(U) \\ldots$ such that $a_i(U)$ is the $i$-th attribute of $U$. An attribute of a node type may be temporal -- a post $p \\in P$ may have a temporal attribute called \\texttt{creationDate}. Edges in this network can be directional and have a single \\textit{edge type}. The following is a set of \\textit{base (but not exhaustive) edge types}:\n\\begin{itemize}[leftmargin=1em, label=\\scriptsize{$-$}]\n \\item \\texttt{writes}: $U \\mapsto P$\n \\item \\texttt{uses}: $P \\mapsto H$\n \\item \\texttt{mentions}: $P \\mapsto U$ maps a post $p$ to a user $u$ if $u$ is mentioned in $p$\n \\item \\texttt{repostOf}: $P \\mapsto P$ maps a post $p_2$ to a post $p_1$ if $p_2$ is a repost of $p_1$. This implies that $ts(p_2) > ts(p_1)$ where $ts$ is the timestamp attribute\n \\item \\texttt{replyTo\/comment}: $P \\mapsto P$ maps a post $p_2$ to a post $p_1$ if $p_2$ is a reply to $p_1$. This implies that $ts(p_2) > ts(p_1)$ where $ts$ is the timestamp attribute\n \\item \\texttt{contains}: $P \\mapsto V \\times \\mathcal{N}$ where $\\mathcal{N}$ is the set of natural numbers and represents the count of a token $v \\in V$ in a post $p \\in P$\n\\end{itemize}\n We realistically assume that the inverse of these mappings can be computed, i.e., if $v_1, v_2 \\in V$ are terms, we can perform a \\texttt{contains}$^{-1}(v_1 \\wedge \\neg v_2)$ operation to yield all posts that used $v_1$ but not $v_2$.\n \nThe AWESOME information system allows users to construct \\textit{derived or computed edge types} depending on the specific discovery problem they want to solve. 
For example, they can construct a standard hashtag co-occurrence graph using a non-recursive aggregation rule in Datalog \\citep{consens1993low}:\n\\begin{equation*}\n \\begin{aligned}\n HC(h_1, h_2, count(p)) &\\longleftarrow \\\\\n &p \\in P, h_1 \\in H, h_2 \\in H \\\\ \n &uses(p, h_1), uses(p, h_2) \n \\end{aligned}\n\\end{equation*}\nWe interpret $HC(h_1, h_2, count(p))$ as an edge between nodes $h_1$ and $h_2$ and $count(p)$ as an attribute of the edge. In our model, a computed edge has the form: $E_T(N_b, N_e, \\bar{B})$ where $E_T$ is the edge type, $N_b, N_e$ are the tail and head nodes of the edge, and $\\bar{B}$ designates a flat schema of edge properties. The number of such computed edges can be arbitrarily large and complex for different subgraph discovery problems. A more complex computed edge may look like: $UMUHD(u_1,u_2,mCount,d, h)$, where an edge from $u_1$ to $u_2$ is constructed if user $u_1$ creates a post that contains hashtag $h$, and mentions user $u_2$ a total number of $mCount$ times on day $d$. Note that in this case, hashtag $h$ is a property of edge type $UMUHD$ and is not a topic marker node of graph $G$. \n\n\\subsection{Specifying User Knowledge}\n\\label{sec:hquery}\nSince the discovery process is query-driven, the system has no \\textit{a priori} information about the user's interests, prior knowledge and expectations, if any, for a specific discovery task. Hence, given a data corpus, the user needs to provide this information through a set of parameterized queries. We call these queries ``heterogeneous'' because conditions can be placed on arbitrary node (resp. edge) properties, network structure, and text properties of the graph. \n\n\\medskip\n\n\\noindent \\textbf{User Interest Specification.} The discovery process starts with the specification of the user's universe of discourse identified with a query $Q_0$. 
We provide some illustrative examples of user interest with queries of increasing complexity.\n\n\\noindent \\textit{Example 1.} ``All tweets related to COVID-19 between 06\/01\/2020 and 07\/31\/2020 that refer to Hydroxychloroquine''. In this specification, the condition ``related to COVID-19'' amounts to finding tweets containing any $k$ terms from a user-provided list, and the condition on ``Hydroxychloroquine'' is expressed as a fuzzy search.\n\\begin{figure}\n\\includegraphics[width=8.5cm, height=6.5cm]{hydroxyclq.png}\n\\vspace{-2cm}\n \\caption{Hydroxychloroquine and COVID related Hashtags}\n \\label{fig:hydroxi}\n\\end{figure}\nFigure \\ref{fig:hydroxi} shows the top hashtags related to this search -- note that the hashtag co-occurrence graph around COVID-19 and hydroxychloroquine includes ``FakeNewsMedia''.\n\n\\noindent \\textit{Example 2.} ``All tweets from users who mention \\texttt{Trump} in their user profile and \\texttt{Fauci} in at least $n$ of their tweets''. Notice that this query is about users with a certain behavioral pattern -- it captures \\textit{all} tweets from users who have a combination of specific profile features and tweet content. 
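Such behavioral-pattern queries compose a profile predicate with an aggregate over tweet content. A minimal sketch of the Example 2 selection over a toy in-memory corpus; the `profiles`/`posts` dicts and the helper name are hypothetical stand-ins for the system's actual query interface:

```python
# Toy corpus: user id -> profile text, and user id -> list of tweet texts.
# Purely illustrative data, not from the paper's collection.
profiles = {
    "u1": "Proud supporter of Trump",
    "u2": "Epidemiologist and data nerd",
    "u3": "Trump 2020",
}
posts = {
    "u1": ["Fauci said masks work", "Fauci on TV again", "nice weather"],
    "u2": ["Fauci interview thread", "stay safe"],
    "u3": ["golf today"],
}

def example2_tweets(profiles, posts, n):
    """All tweets from users with 'Trump' in their profile and 'Fauci'
    in at least n of their tweets (the Example 2 predicate)."""
    result = {}
    for uid, profile in profiles.items():
        if "Trump" not in profile:
            continue
        if sum("Fauci" in t for t in posts.get(uid, [])) >= n:
            result[uid] = posts[uid]
    return result

selected = example2_tweets(profiles, posts, n=2)
```

Note that the query returns *all* tweets of a qualifying user, not just the tweets that mention Fauci, matching the behavioral-pattern semantics above.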
\n\n\\noindent \\textit{Example 3.} ``All tweets from users $U_1$ whose tweets use hashtags appearing in the 3-hop neighborhood of \\texttt{\\#ados} in the hashtag co-occurrence graph, together with all tweets of users $U_2$ who belong to the user-mention-user networks of these users (i.e., $U_1$)'', where \\texttt{\\#ados} refers to ``American Descendant of Slaves'', which represents an African American cause.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{kamala-ht-vol.png}\n\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=6.5cm]{kamala-ht.png}\n \n \\end{minipage}\n\n \\caption{Two views of the \\texttt{\\#ados} cluster of hashtags }\n \\label{fig:ados}\n\\end{figure*}\n\n\n\\noindent The end result of $Q_0$ is a collection of posts that we call the \\textit{initial post set} $P_0$. Using the posts in $P_0$, the system creates a background graph as follows.\n\n\\medskip\n\n\\noindent \\textbf{Initial Background Graph.} The initial background graph $G_0$ is the graph derived from $P_0$ over which the discovery process runs. However, to define the initial graph, we first develop the notion of a \\textit{conversation}.\n\n\\begin{definition}[\\textbf{Semantic Neighborhood}.] $N(p)$, the semantic neighborhood of a post $p$, is the graph connecting $p$ to the instances of $U \\cup H \\cup P \\cup V$ that directly relate to $p$.\n\\end{definition}\n\n\\begin{definition}[\\textbf{Conversation Context}.] The conversation context $C(p)$ of post $p$ is a subgraph satisfying the following conditions:\n\\begin{enumerate}[leftmargin=*]\n \\item $P_1$: The set of posts reachable to\/from $p$ along the relationships \\texttt{repostOf, replyTo} belongs to $C(p)$. 
\n \\item $P_2$: The union of posts in the semantic neighborhood of $P_1$ belongs to $C(p)$.\n \\item $E$: The induced subgraph of $P_1 \\cup P_2$ belongs to $C(p)$\n \\item Nothing else belongs to $C(p)$.\n\\end{enumerate}\n\\end{definition}\n\\noindent Clearly, we can assert that $C(p)$ is a connected graph and that $N(p)~ \\sqsubset_g~ C(p)$ where $\\sqsubset_g$ denotes a subgraph relationship.\n\n\\begin{definition}[\\textbf{Initial Background Graph}.] The initial background graph $G_0$ is a merger of all conversation contexts $C(p_i), p_i \\in P_0$, together with all computed edges induced by the nodes of $\\cup_i C(p_i)$\n\\end{definition}\nThe initial background graph itself can be a gateway to finding interesting properties of the graph. We illustrate this with the graph obtained from our Example 3. Figure \\ref{fig:ados} presents two views of the \\texttt{\\#ados} cluster of hashtags from January 2021. The left chart shows the time vs. count of the hashtags while the right chart shows the dominant hashtags of the same period in this cluster. The strong peak in the timeline was due to an intense discussion, revealed by topic modeling, on the creation of an office on African American issues. The occurrence of this peak is interesting because most of the social media conversation in this time period was focused on the Capitol attack on January 6.\n\n\nGiven the graph $G_0$, we discover subgraphs $S_i \\subset G_0$ whose content and structure are distinctly different from those of $G_0$. However, unlike previous approaches, we apply a generate-and-test paradigm for discovery. 
The generate-step (Section \\ref{sec:generate}) uses a graph-cube-like technique \\citep{zhao2011graph} to generate candidate subgraphs that might be interesting, and the test-step (Section \\ref{sec:testing}) computes whether (a) the candidate is sufficiently distinct from $G'$, and (b) the candidates are sufficiently distinct from each other.\n\n\\noindent \\textbf{Subgraph Interestingness.} For a subgraph $S_i$ to be considered as a candidate, it must satisfy the following conditions.\n\n\\noindent \\textbf{C1.} $S_i$ must be connected and should satisfy a size threshold $\\theta_n$, the minimal number of nodes.\n\n\\noindent \\textbf{C2.} Let $A_{ij}$ (resp. $B_{ik}$) be the set of \\textit{local} properties of node $j$ (resp. edge $k$) of subgraph $S_i$. A property is called ``local'' if it is not a network property like vertex degree. All nodes (resp. edges) of $S_i$ must satisfy some user-specified predicate $\\phi_N$ (resp. $\\phi_E$) specified over $A_{ij}$ (resp. $B_{ik}$). For example, a node predicate might require that all ``post'' nodes in the subgraph must have a re-post count of at least 300, while an edge predicate may require that all hashtag co-occurrence relationships must have a weight of at least 10. A user-defined constraint on the candidate subgraph improves the interpretability of the result. Typical subjective interestingness techniques \\citep{van2016subjective, adriaens2019subjectively} use only structural features of the network and do not consider attribute-based constraints, which limits their pragmatic utility.\n\n\\noindent \\textbf{C3.} For each text-valued attribute $a$ of $A_{ij}$, let $C(a)$ be the collection of the values of $a$ over all nodes of $S_i$, and let $\\mathcal{D}(C(a))$ be a textual diversity metric computed over $C(a)$. For $S_i$ to be interesting, it must have at least one attribute $a$ such that $\\mathcal{D}(C(a))$ does not have the usual power-law distribution expected in social networks. 
Zheng et al. \\citep{zheng2019social} used \\textit{vocabulary diversity} and \\textit{topic diversity} as textual diversity measures.\n\n\n\\section{Candidate Subgraph Generation}\n\\label{sec:generate}\nSection \\ref{sec:hquery} describes the creation of the initial background graph $G_0$ that serves as the domain of discourse for discovery. Depending on the number of initial posts $P_0$ resulting from the initial query, the size of $G_0$ might be too large -- in this case the user can specify followup queries on $G_0$ to narrow down the scope of discovery. We call this narrowed-down graph of interest $G'$ -- if no followup queries were used, $G' = G_0$. The next step is to generate some candidate subgraphs that will be tested for interestingness. \n\n\\noindent \\textbf{Node Grouping.} A node group is a subset of \\textit{nodes($G'$)} where all nodes in a group have some similar property. We generalize the \\textit{groupby} operation, commonly used in relational database systems, to heterogeneous information networks. To describe the generalization, let us assume $R(A, B, C, D, \\ldots)$ is a relation (table) with attributes $A, B, C, D, \\ldots$ A \\textit{groupby} operation takes as input (a) a subset of \\textit{grouping attributes} (e.g., $A, B$), (b) a \\textit{grouped attribute} (e.g., $C$) and (c) an \\textit{aggregation function} (e.g., \\textit{count}). The operation first computes each distinct cross-product value of the grouping attributes (in our example, $A \\times B$), creates a list of all values of the grouped attribute corresponding to each distinct value of the grouping attributes, and then applies the aggregation function to the list. Thus, the result of the \\textit{groupby} operation is a single aggregated value for each distinct cross-product value of the grouping attributes. \n\nTo apply this operation to a social network graph, we recognize that there are two distinct ways of defining the ``grouping-object''. 
\\\\\n(1) Node properties can be directly used just like in the relational case. For example, for tweets a grouping condition might be \\texttt{getDate} \\texttt{(Tweet.created\\_at)} $\\wedge$ \\texttt{bin(Tweet.favoriteCount, 100)}, where the \\texttt{getDate} function extracts the date of a tweet and the \\texttt{bin} function creates buckets of size 100 from the favorite count of each tweet. \\\\\n(2) The grouping-object is a subgraph pattern. For example, the subgraph pattern\\\\\n\\texttt{(:tweet\\{date\\})-[:uses]->(:hashtag\\{text\\})} \\hfill{(P1)}\\\\\nexpressed in a Cypher-like syntax \\citep{francis2018cypher} (implemented in the Neo4J graph data management system), states that all \\texttt{tweet} nodes having the same posting date, together with every distinct hashtag text, will be placed in a separate group. \\\\\nIn either case, the result is a set of node groups, designated here as $N_i$. Notice that while (1) produces disjoint groups of tweets, (2) produces a ``soft'' partitioning of tweets and hashtags due to the many-to-many relationship between tweets and hashtags: the same tweet node can belong to two different groups because it has multiple hashtags, and similarly, a hashtag node can belong to multiple groups because tweets from different dates may have used the same hashtag. While the grouping condition specification language can express more complex grouping conditions, in this paper we will use simpler cases to highlight the efficacy of the discovery algorithm. 
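The soft partitioning produced by a pattern like P1 can be sketched as follows; the tweet records are illustrative, and the grouping helper is a hypothetical stand-in for the system's generalized groupby operator:

```python
from collections import defaultdict

# Sketch of grouping by (posting date, hashtag text), mirroring pattern P1.
# A tweet carrying several hashtags lands in several groups, which is the
# "soft" (overlapping) partitioning discussed in the text.
tweets = [
    {"id": 1, "date": "2021-01-06", "hashtags": ["capitol", "ados"]},
    {"id": 2, "date": "2021-01-06", "hashtags": ["ados"]},
    {"id": 3, "date": "2021-01-07", "hashtags": ["ados"]},
]

def group_by_date_hashtag(tweets):
    groups = defaultdict(set)          # (date, hashtag) -> node group N_i
    for t in tweets:
        for h in t["hashtags"]:
            groups[(t["date"], h)].add(t["id"])
    return dict(groups)

groups = group_by_date_hashtag(tweets)
```

Here tweet 1 belongs to two groups (it has two hashtags), and the hashtag `ados` appears in groups for two different dates, so neither tweets nor hashtags are partitioned disjointly.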
\\noindent \\textbf{Graph Construction.} To complete the \\textit{groupby} operation, we also need to specify the aggregation function in addition to the grouping-object and the grouped-object. This function takes the form of a graph construction operation that constructs a subgraph $S_i$ by expanding on the node set $N_i$. Different expansion rules can be specified, leading to the formation of different graphs. Here we list three rules that we have found fairly useful in practice.\n\n\\noindent \\textbf{G1.} Identify all the \\texttt{tweet} nodes in $N_i$. Construct a \\textit{relaxed induced subgraph} of the \\texttt{tweet}-labeled nodes in $N_i$. The subgraph is induced because it only uses tweets contained within $N_i$, and it is \\textit{relaxed} because it contains all nodes \\textit{directly associated} with these tweet nodes, such as author, hashtags, URLs, and mentioned-users. \n\n\\noindent \\textbf{G2.} Construct a \\textit{mention network} from within the tweet nodes in $N_i$ -- the mention network initially connects all \\texttt{tweet} and \\texttt{user}-labeled nodes. Extend the network by including all nodes \\textit{directly associated} with these tweet nodes.\n\n\\noindent \\textbf{G3.} A third construction relaxes the grouping constraint. We first compute either \\textbf{G1} or \\textbf{G2}, and then extend the graph by including the first-order neighborhood of mentioned users or hashtags. While this clearly breaks the initial group boundaries, a network thus constructed includes tweets of similar themes (through hashtags) or audience (through mentions).\n\n\\noindent \\textbf{Automated Group Generation.} In a practical setting, as shown in Section \\ref{sec:experiments}, the parameters for the node grouping operation can be specified by a user, or they can be generated automatically. Automatic generation of grouping-objects is based on the considerations described below. 
To keep the autogeneration manageable, we only consider one or two attributes for attribute grouping and only a single edge for subgraph patterns.\n\\begin{itemize}[leftmargin=*]\n \\item Since temporal shifts in social media themes and structure are almost always of interest, the posting timestamp is always a grouping variable. For our purposes, we set the granularity to a day by default, although a user can change it.\n \\item Since the frequency of most nontemporal attributes (like hashtags) has a mixture distribution of a double-Pareto lognormal distribution and a power law \\citep{gupta:osn:2020}, we adopt the following strategy.\n \\begin{itemize}[label={$\\circ$}]\n \\item Let $f(A)$ be the distribution of attribute $A$, and $\\kappa(f(A))$ be the curvature of $f(A)$. If $A$ is a discrete variable, we find $a^*$, the maximum curvature (elbow) point of $f(A)$, numerically \\citep{antunes2018knee}.\n \\item We compute $A'$, the values of attribute $A$ to the left of $a^*$, for all attributes and choose the attribute where the cardinality of $A'$ is maximum. In other words, we choose attributes that have the highest number of pre-elbow values. \n \\end{itemize}\n \\item We adopt a similar strategy for subgraph patterns. If $T_1(a_i)\\stackrel{L}{\\longrightarrow}T_2(b_j)$ is an edge where $T_1, T_2$ are node labels, $a_i, b_j$ are node properties and $L$ is an edge label, then $a_i$ and $b_j$ will be selected based on the conditions above. Since the number of edge labels is fairly small in our social media data, we evaluate the estimated cardinality of the edge for all such triples and select the one with the lowest cardinality. \n\\end{itemize}\n\n\\section{The Discovery Process}\n\\label{sec:discovery}\n\n\\subsection{Measures for Relative Interestingness}\n\\label{sec:interestingness}\nWe compute the interestingness of a subgraph $S$ in reference to a background graph $G_b$ (e.g., $G'$); the measure consists of a structural as well as a content component. 
We first discuss the structural component. To compare a subgraph $S_i$ with the background graph, we first compute a set of network properties $P_j$ (see below) for nodes (or edges) and then compute the frequency distribution $f(P_j(S_i))$ of these properties over all nodes (resp. edges) of (a) the subgraph $S_i$, and (b) the reference graph (e.g., $G'$). A distance between $f(P_j(S_i))$ and $f(P_j(G_b))$ is computed using the Jensen--Shannon divergence (JSD). In the following, we use $\\Delta(f_1,f_2)$ to refer to the JS-divergence of distributions $f_1$ and $f_2$. \n\\medskip\n\\noindent \\textbf{Eigenvector Centrality Disparity:} The testing process starts by comparing the distributions of high-centrality nodes across the networks. While there is no shortage of centrality measures in the literature, we choose eigenvector centrality \\citep{das2018study}, defined below, to represent the dominant nodes. Let $A = (a_{i,j})$ be the adjacency matrix of a graph. The eigenvector centrality $x_{i}$ of node $i$ is given by: $$x_i = \\frac{1}{\\lambda} \\sum_k a_{k,i} \\, x_k$$ where $\\lambda \\neq 0$ is a constant. \n The rationale for this choice follows from earlier studies \\citep{Bonacich2007-mx,Ruhnau2000-jy,Yan2014-dn}, which establish that since the eigenvector centrality can be seen as a weighted sum of direct and indirect connections, \n it represents the true structure of the network more faithfully than other centrality measures. 
\n Further, \\citep{Ruhnau2000-jy} proved that the eigenvector-centrality under the Euclidean norm can be transformed into node-centrality, a property not exhibited by other common measures.\n Let the distributions of eigenvector centrality of subgraphs $A$ and $B$ be $\\beta_a$ and $\\beta_b$ respectively, and that of the background graph be $\\beta_t$. Then \n $|\\Delta_e(\\beta_t, \\beta_a)| > \\theta $ indicates that $A$ is sufficiently structurally distinct from $G_b$, and \n $|\\Delta_e(\\beta_t, \\beta_a)| > |\\Delta_e(\\beta_t, \\beta_b)|$ indicates that $A$ contains significantly more or significantly fewer influential nodes than $B$. \n\n\\medskip\n\\noindent \\textbf{Topical Navigability Disparity:} Navigability measures ease of flow. If subgraph $S$ is more navigable than subgraph $S'$, then there will be more traffic through $S$ compared to $S'$. However, the likelihood of seeing a higher flow through a subgraph depends not just on the structure of the network, but on extrinsic covariates like time and topic. So, a subgraph is interesting in terms of navigability if, for some values of a covariate, its navigability measure is different from that of a background subgraph. \n\nInspired by its application in biology \\citep{seguin2018navigation}, traffic analysis \\citep{scellato2010traffic}, and network attack analysis \\citep{lekha2020central}, we use \\textit{edge betweenness centrality} \\citep{das2018study} as the generic (non-topic) measure of navigability. Let $\\alpha_{ij}$ be the number of shortest paths from node $i$ to node $j$, and $\\alpha_{ij}(k)$ the number of those paths that pass through edge $k$. Then the edge-betweenness centrality is $$C_{eb}(k)= \\sum_{i,j\\in V} \\frac{\\alpha_{ij}(k)}{\\alpha_{ij}}$$\nBy this definition, the edge betweenness centrality is the portion of all-pairs shortest paths that pass through an edge. 
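Both disparity tests reduce to the same recipe: compute a per-node (or per-edge) centrality, bin the values into compatible histograms, and take the JS divergence between them. A self-contained sketch on toy graphs, where the adjacency dicts and bin edges are illustrative assumptions rather than the paper's data:

```python
import bisect
import math

def eigenvector_centrality(adj, iters=200):
    """Power iteration on (A + I); the +I shift avoids oscillation on
    bipartite graphs such as stars."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = math.sqrt(sum(s * s for s in nxt.values()))
        x = {v: s / norm for v, s in nxt.items()}
    return x

def histogram(values, edges):
    """Normalized histogram over shared bin edges; out-of-range values
    are clamped into the end bins so the histograms stay compatible."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        i = min(max(bisect.bisect_right(edges, v) - 1, 0), len(counts) - 1)
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2) between two histograms."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy graphs: a star (one dominant hub) vs. a 5-cycle (all nodes equal).
star = {"c": ["l1", "l2", "l3", "l4"],
        "l1": ["c"], "l2": ["c"], "l3": ["c"], "l4": ["c"]}
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

edges = [0.0, 0.25, 0.5, 0.75, 1.0]
h_star = histogram(eigenvector_centrality(star).values(), edges)
h_cycle = histogram(eigenvector_centrality(cycle).values(), edges)
disparity = js_divergence(h_star, h_cycle)
```

The star's hub pulls probability mass into a higher-centrality bin, so its histogram diverges from the cycle's; identical graphs give zero divergence.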
Since the edge betweenness centrality of edge $e$ measures the proportion of shortest paths that pass through $e$, a subgraph $S$ with a higher proportion of high-valued edge betweenness centralities may be more \\textit{navigable} than $G_b$ or than another subgraph $S'$; that is, information propagation is higher through $S$ compared to the whole background network or, for that matter, any other subgraph of the network having a lower proportion of edges with high edge betweenness centrality. Let the distributions of the edge betweenness centrality of two subgraphs $A$ and $B$ be $c_1$ and $c_2$ respectively, and that of the reference graph be $c_0$. Then, $|\\Delta_b(c_0, c_1)| < |\\Delta_b(c_0, c_2)|$ means the second subgraph is more navigable than the first.\n\nTo associate navigability with topics, we detect topic clusters over the background graph and the subgraph being inspected. The exact method for topic cluster finding is independent of the use of topical navigability. In our setting, we have used topic modeling and dense region detection in hashtag cooccurrence networks. For each topic cluster, we identify posts (within the subgraph) that belong to the cluster. If the number of posts is greater than a threshold, we compute the navigability disparity.\n\n\\medskip\n\n\\noindent \\textbf{Propagativeness Disparity:} The concept of propagativeness builds on the concept of navigability. Propagativeness attempts to capture how strongly the network is spreading information through a navigable subgraph $S$. We illustrate the concept with a network constructed over tweets where a political personality (Senator Kamala Harris in this example) is mentioned in January 2021. 
The three rows in Figure \\ref{fig:kamala} show the network characteristics of the subregions of this graph, related, respectively, to the themes of \\texttt{\\#ados} and ``black lives matter'' (Row 1), the Capitol insurrection (Row 2), and socioeconomic issues related to COVID-19, including stimulus funding and business reopening (Row 3). In earlier work \\citep{zheng2019social}, we have shown that a well-known propagation mechanism for tweets is to add user-mentions to improve the reach of a message -- hence the user-mention subgraph is indicative of propagative activity. In Figure \\ref{fig:kamala}, we compare the hashtag activity (measured by the hashtag subgraph) and the mention activity (size of the mention graph) in these three subgraphs. Figure \\ref{fig:kamala} (e) shows a low and fairly steady size of the user mention activity in relation to the hashtag activity on the same topic, and these two indicators are not strongly correlated. Further, Figure \\ref{fig:kamala} (f) shows that the mean and standard deviation of the node degree of hashtag activity are fairly close, and the average degree of the user co-mention (two users mentioned in the same tweet) graph is relatively steady over the period of observation -- showing low propagativeness. In contrast, Row 1 and Row 2 show sharper peaks. But the curve in Figure \\ref{fig:kamala} (c) declines and has low, uncorrelated user mention activity. Hence, for this topic, although there is a lot of discussion (leading to high navigability edges), the propagativeness is quite low. In comparison, Figure \\ref{fig:kamala} (a) shows a strong peak and a stronger correlation between the two curves, indicating more propagativeness. 
The higher standard deviation in the co-mention node degree over time (Figure \\ref{fig:kamala} (b)) also indicates more propagation around this topic compared to the others.\n\nWe capture propagativeness using current flow betweenness centrality \\citep{brandes2005centrality}, which is based on Kirchhoff's current laws. We combine this with the average neighbor degree of the nodes of $S$ to measure the spreading propensity of $S$. The current flow betweenness centrality is the portion of all-pairs shortest paths that pass through a node, and the average neighbor degree is the average degree of the neighborhood of each node. If a subgraph has higher current flow betweenness centrality plus a higher average neighbor degree, the network should have faster communicability. Let $\\alpha_{ij}$ be the number of shortest paths from node $i$ to $j$ and $\\alpha_{ij}(n)$ the number of those paths that pass through node $n$. Then the current flow betweenness centrality is: $$C_{nb}(n)= \\sum_{i,j\\in V} \\frac{\\alpha_{ij}(n)}{\\alpha_{ij}}$$\n\nSuppose the distributions of the current flow betweenness centrality of two subgraphs $A$ and $B$ are $p_1$ and $p_2$ respectively, and the distribution of the reference graph is $p_t$. Also, let the distributions of $\\beta_{n}$, the average neighbor degree of node $n$, for the subgraphs $A$ and $B$ be $\\gamma_1$ and $\\gamma_2$ respectively, and the reference distribution be $\\gamma_t$. If the condition\n $$\\Delta(p_t, p_1) * \\Delta(\\gamma_t, \\gamma_1) < \\Delta(p_t, p_2) * \\Delta(\\gamma_t, \\gamma_2)$$\nholds, we can conclude that subgraph $B$ is a faster propagating network than subgraph $A$. This measure is of interest in social media based on the observation that misinformation\/disinformation propagation groups either try to increase the average neighbor degree by adding fake nodes or try to involve influential nodes with high edge centrality to propagate the message faster \\citep{besel2018full}. 
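Assuming the divergences of the two distributions have already been computed against the same reference graph, the propagativeness comparison reduces to comparing products of divergences. A minimal sketch; the helper names, adjacency dict, and numeric divergence values are illustrative placeholders:

```python
def avg_neighbor_degree(adj, node):
    """Average degree over the neighbors of `node` in adjacency dict `adj`."""
    nbrs = adj[node]
    return sum(len(adj[u]) for u in nbrs) / len(nbrs)

def more_propagative(cand_a, cand_b):
    """Each candidate is (name, d_flow, d_nbr), where d_flow and d_nbr are
    JS divergences of current-flow betweenness and average neighbor degree
    against the reference graph; the larger product marks the
    faster-propagating subgraph."""
    return max([cand_a, cand_b], key=lambda c: c[1] * c[2])[0]

# Toy 3-node star: the hub's neighbors are leaves, the leaves' neighbor is the hub.
star = {"c": ["l1", "l2"], "l1": ["c"], "l2": ["c"]}

# Placeholder divergence values for two candidate subgraphs A and B.
winner = more_propagative(("A", 0.12, 0.30), ("B", 0.25, 0.40))
```

Multiplying the two divergences means a candidate must deviate from the reference on *both* signals (flow centrality and neighborhood degree) to score highly, matching the combined condition above.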
\n\n\\medskip\n \\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{ados-kamal.png}\n\n \\caption{\\#ADOS and Black Lives Matter }\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{ados-kamal-deg.png}\n \n \\caption{AVG Degree Distributions and Std Dev }\n \\end{minipage}\n\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{insurrection-kamal.png}\n \\caption{Insurrection and Capitol Attack}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{insurrection-kamal-deg.png}\n \n \\caption{AVG Degree Distributions and Std Dev}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5cm]{stimulus-kamal.png}\n \\caption{Socioeconomic Issues During COVID-19 }\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5cm]{stimulus-kamal-deg.png}\n \\caption{AVG Degree Distributions and Std Dev}\n \\end{minipage}\n \\caption{Various Metrics for Daily Partitioned Hashtag Co-occurrence and User Mention Graphs from the Political Personality Dataset. $Q_0$ retrieves tweets where Kamala Harris is mentioned in a hashtag, text body or user mention.}\n \\label{fig:kamala}\n\\end{figure*}\n\\noindent \\textbf{Subgroups within a Candidate Subgraph:} \n \n The purpose of the last metric is to determine whether a candidate subgraph identified using the previous measures needs to be further decomposed into smaller subgraphs. We use subgraph centrality \\citep{estrada2005subgraph} and the coreness of nodes as our metrics. \n \n \n \n The subgraph centrality measures the number of subgraphs a vertex participates in, and the core number of a node is the largest value $k$ of a $k$-core containing that node. 
So a subgraph for which the core number and subgraph centrality distributions are right-skewed compared to those of the background graph is (i) either split around high-coreness nodes, or (ii) reported to the user as a mixture of diverse topics. \nThe node grouping, per-group subgraph generation and candidate subgraph identification process is presented in Algorithm \\ref{alg:graph-metrics}. In the algorithm, the function \\textit{cut2bin} extends the standard cut function, which compares the histograms of two distributions whose domains (X-values) must overlap, and produces equi-width bins to ensure that the two histograms (i.e., frequency distributions) have compatible bins.\n\n\\subsection{The Testing Process}\n\\label{sec:testing}\n\n\\begin{algorithm}\n\\scriptsize\n\\caption{Graph Construction Algorithm}\n\\label{alg:graph-metrics}\n\\SetKwProg{ComputeMetrics}{Function \\emph{ComputeMetrics}}{}{end}\nINPUT : $Q_{out}$ Output of the query, $L$ Graph construction rules, $groupVar$ grouping variable, $th_{size}$ is the minimum size of the subgraph\\;\n\\SetKwProg{gmetrics}{Function \\emph{gmetrics}}{}{end}\n\\SetKwProg{CompareHistograms}{Function \\emph{CompareHistograms}}{}{end}\n\\gmetrics{($Q_{out}$, $L$, $groupVar$)}{\nG[]$\\leftarrow$ ConstructGraph($Q_{out}$, $L$)\\;\n$T \\leftarrow$ []\\;\n\\For{$g \\in G $}{\n $t_{\\alpha} \\leftarrow$ ComputeMetrics(g)\\; \n $T.push(t_{\\alpha})$\\;\n \n }\nreturn $T$\n}\n\\ComputeMetrics{(Graph g)}{\n$m\\leftarrow[]$\\;\n$m.push(eigenVectorCentrality(g))$\\;\n.........\n$m.push(coreNumber(g))$\\;\nreturn $m$\n}\n\\CompareHistograms{(List $t_{1}$, List $x_{2}$)}{\n$bin_{edges} \\leftarrow$ getBinEdges($x_{2}$)\\;\n$s_g \\leftarrow cut2bin(x_2, bin_{edges})$\\;\n$t_g \\leftarrow cut2bin(t_1, bin_{edges})$\\;\n\n$\\beta_{js} \\leftarrow distance.jensenShannon(t_g, s_g)$\\;\n$h_t \\leftarrow histogram(t_g, s_g,bin_{edges} )$\\;\nreturn $\\beta_{js}, h_t, bin_{edges}$\\;\n}\n\\end{algorithm}\n\n\n\n\n\n\n\\begin{algorithm}\n
\\scriptsize\n\\caption{Graph Discovery Algorithm}\n\\label{alg:discovery-algo}\n\\SetKwProg{discover}{Function \\emph{discover}}{}{end}\n\\KwIn{Set of divergence values of all subgraphs, $\\sigma$}\n\\KwOut{Feature vectors $v_1$, $v_2$, $v_3$, and a list of re-partition recommendations $l$}\n$ev$ : eigenvector centrality\\;\n$ec$ : edge current flow betweenness centrality\\;\n$nc$ : current flow betweenness centrality\\;\n$sc$ : subgraph centrality\\;\n$\\mu$ : core number\\;\n$z$ : average neighbor degree\\;\n\\discover{($\\sigma$)}{\n\\For{any two sets of divergence values $\\sigma_1$ and $\\sigma_2$}{\n\\If{$\\sigma_2(ev) > \\sigma_1(ev)$}{\n $v_1(\\sigma_2) = v_1(\\sigma_2) + 1$\\;\n \\If{$\\sigma_2(ec) > \\sigma_1(ec)$}{\n $v_2(\\sigma_2) = v_2(\\sigma_2) + 1$\\;\n \\If{($\\sigma_2(nc)+ \\sigma_2(\\mu)) > (\\sigma_1(nc) + \\sigma_1(\\mu)$)}{\n $v_3(\\sigma_2) = v_3(\\sigma_2) + 1$\\;\n }\n \\If{($\\sigma_2(sc)+ \\sigma_2(z)) > (\\sigma_1(sc) + \\sigma_1(z)$)}{\n $l(\\sigma_2) = 1$\\;\n }\n }\n }\n }\n}\n\\end{algorithm} \nThe discovery algorithm's input is the list of divergence values of two candidate sets computed against the same reference graph. It produces four lists at the end. Each of the first three lists captures one specific factor of interestingness of the subgraphs. The most interesting subgraphs should be present in all three vectors. \nIf a subgraph has many cores and is sufficiently dense, then the system considers the subgraph to be \\textit{uninterpretable} and sends it for re-partitioning. \nTherefore, the fourth list contains the subgraphs that should be partitioned again. Currently, our repartitioning strategy is to take subsets of the original keyword list provided by the user at the beginning of the discovery process to re-initiate the discovery process for the dense, uninterpretable subgraph.\\\\\n\n\\noindent Each metric produces a value for each participating node of the input. 
However, to compare two different candidates in terms of the metrics mentioned above, we need to convert them to comparable histograms by applying a binning function that depends on the data type of the grouping function. \n\\\\\n\\noindent \\textit{Bin Formation (cut2bin):} Cut is a conventional operator (available in R, Matlab, Pandas, etc.) that segments and sorts data values into bins. The cut2bin function is an extension of the standard cut function for comparing the histograms of two distributions whose domains (X-values) must overlap. The cut function accepts as input a set of node property values (e.g., the centrality metrics), and optionally a set of edge boundaries for the bins, and returns the histogram of the distribution. Using the cut, we first produce $n$ equi-width bins from the distribution with the narrower domain. We then extract the bin edges from the result and use them as the input bin edges to create the wider distribution's cut. This ensures that the histograms are compatible. In case one of the distributions is known to be a reference distribution (a distribution from the background graph) against which the second distribution is compared, we use the reference distribution for equi-width binning and bin the second distribution relative to the first.\n\\\\\n\\noindent The $CompareHistograms$ function uses the \\textit{cut2bin} function to produce the histograms, and then computes the JS divergence on the comparable histograms. The $CompareHistograms$ function returns the set of divergence values for each metric of a subgraph, which is the input of the discovery algorithm. The function requires the user to specify which of the compared graphs should be considered as the reference -- this is required to ensure that our method is scalable for large background graphs (which are typically much larger than the interesting subgraphs). 
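A minimal sketch of this binning-and-comparison step, assuming NumPy and SciPy (the function names mirror the pseudocode above but the implementation details are our own):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def cut2bin(values, bin_edges=None, n_bins=10):
    """Histogram `values`; reuse `bin_edges` so two histograms share bins."""
    values = np.asarray(values, dtype=float)
    if bin_edges is None:
        # Equi-width bins derived from this (reference) distribution.
        _, bin_edges = np.histogram(values, bins=n_bins)
    hist, _ = np.histogram(values, bins=bin_edges)
    return hist, bin_edges

def compare_histograms(target, reference):
    """Divergence between per-node metric distributions on shared bins."""
    s_g, edges = cut2bin(reference)            # reference sets the bin edges
    t_g, _ = cut2bin(target, bin_edges=edges)  # target binned relative to it
    # SciPy returns the JS *distance* (square root of the divergence).
    return jensenshannon(t_g, s_g)
```

Note that `np.histogram` silently drops target values falling outside the reference bin edges; the requirement that the two domains overlap keeps this loss small.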
If the background graph is very large, we take several random subgraphs from this graph to ensure they are representative before the actual comparisons are conducted. To this end, we adopt the well-known random walk strategy.\n \n\\noindent In the algorithm, $v_1$, $v_2$ and $v_3$ are the three vectors that store the interestingness factors of the subgraphs, and $l$ is the list for repartitioning. For two subgraphs, if one of them qualifies for $v_1$, that subgraph has higher centrality than the other; in that case, the algorithm increases the corresponding entry of the vector by one. Similarly, it increases the value of $v_2$ by one if the same candidate has higher navigability, and it increases $v_3$ if the candidate has higher propagativeness. The algorithm selects the top-$k$ scoring candidates from each vector and marks them interesting. \n\n\n\n\n\n\\begin{table*}[]\n\\caption{Dataset Descriptions }\n\\label{tab:dataset}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\textbf{Data Set} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Total \\\\ Collection\\\\ Size\\end{tabular}} & \\textit{\\textbf{\\begin{tabular}[c]{@{}l@{}}Sub\\\\ Query\\end{tabular}}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Network \\\\ Type\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Total \\\\ Tweets\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Unique \\\\ nodes\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Unique \\\\ edges\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Self\\\\ Loop\\end{tabular}} & \\textbf{Density} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Avg\\\\ Degree\\end{tabular}} \\\\ \\hline\n\\textit{\\begin{tabular}[c]{@{}l@{}}Kamala \\\\ Harris\\end{tabular}} & 12469480 & \\textit{\\begin{tabular}[c]{@{}l@{}}Capitol \\\\ Attack\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 164397 & 1398 & 7801 & 16 & 0.0025 & 4.3 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user\\\\ 
co-mention\\end{tabular} & 164397 & 8012 & 48604 & 87 & 0.00012 & 3.19 \\\\ \\hline\n\\textit{} & & \\textit{\\#ADOS} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 158419 & 3671 & 10738 & 29 & 0.0015 & 5.8 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 158419 & 30829 & 39865 & 49 & 8.3 & 2.5 \\\\ \\hline\n\\textit{} & & \\textit{\\begin{tabular}[c]{@{}l@{}}Economic \\\\ Issues\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 36678 & 1278 & 1828 & 4 & 0.0022 & 2.8 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 36678 & 6971 & 11584 & 19 & 0.0004 & 3.4 \\\\ \\hline\n\\textit{Joe Biden} & 45258151 & \\textit{\\begin{tabular}[c]{@{}l@{}}Capitol \\\\ Attack\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 676898 & 7728 & 21422 & 50 & 0.00071 & 5.49 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 676898 & 82046 & 101646 & 130 & 3.0146 & 2.473 \\\\ \\hline\n\\textit{} & & \\textit{\\#ADOS} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 183765 & 3007 & 11008 & 29 & 0.002 & 3.85 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 158419 & 29547 & 40932 & 56 & 9.3 & 2.7 \\\\ \\hline\n\\textit{} & & \\textit{\\begin{tabular}[c]{@{}l@{}}Economic \\\\ Issues\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 138754 & 2961 & 5733 & 10 & 0.0013 & 3.87 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 138754 & 21417 & 19691 & 23 & 8.5 & 1.83 \\\\ \\hline\n\\textit{Vaccine} & 24172676 & \\textit{\\begin{tabular}[c]{@{}l@{}}Vaccine\\\\ Anti-vax\\end{tabular}} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 1000000 & 18809 & 24195 & 44 & 
2.52 & 2.5 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 1000000 & 203211 & 41877 & 46 & 2.02 & 0.4 \\\\ \\hline\n\\textit{} & & \\textit{Covid Test} & \\begin{tabular}[c]{@{}l@{}}Hashtag \\\\ co-occur\\end{tabular} & 1000000 & 26671 & 45378 & 69 & 0.00012 & 3.4 \\\\ \\hline\n\\textit{} & & \\textit{} & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 1000000 & 188761 & 83656 & 109 & 4.67 & 0.886 \\\\ \\hline\n & & economy & \\begin{tabular}[c]{@{}l@{}}hashtag \\\\ co-occur\\end{tabular} & 917890 & 3002 & 4395 & 9 & 0.0009 & 2.9 \\\\ \\hline\n & & & \\begin{tabular}[c]{@{}l@{}}user \\\\ co-mention\\end{tabular} & 917890 & 20590 & 8528 & 13 & 4.023 & 0.8 \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\subsection{Data Sets}\n\\label{sec:datasets}\nThe data sets used for the experiments are tweets collected via the Twitter Streaming API using a set of domain-specific, hand-curated keywords. We used three data sets, all collected between the 1st and the 31st of January, 2021. The first two sets concern political personalities. The Kamala Harris data set was collected by using variants of Kamala Harris's name together with her Twitter handle and hashtags constructed from her name. The second data set was similarly constructed for Joe Biden. The third data set was collected during the COVID-19 pandemic. The keywords were selected based on popularly co-occurring terms from Google Trends. We selected the terms manually to ensure that they are related to the pandemic and vaccine-related issues (and not, for example, partisan politics). Table \\ref{tab:dataset} presents a quantitative summary of the three data sets. We used a set of subqueries to find subsets of each of our datasets. These subqueries are temporal, and represent trending terms that stand for emerging issues. 
For the first two datasets, we used three themes to construct the background graphs: a) the Capitol attack and insurrection, b) the Black Lives Matter and ADOS movements, and c) the American economic crisis and recovery efforts. For the vaccine data set, we selected two subsets of posts, one for vaccine-related concerns, anti-vaccine movements, and related issues, and the second for COVID testing and infection-related issues. The vaccine data set is larger, and its content is more diverse, than the first two. \nAll the data sets are publicly available from The Awesome Lab \\footnote{https:\/\/code.awesome.sdsc.edu\/awsomelabpublic\/datasets\/int-springer-snam\/}\n\nIn the experiments, we constructed two subgraphs from each subquery. The first is the \"hashtag co-occurrence\" graph, where each hashtag is a node, and two hashtags are connected by an edge if they co-occur in a tweet. The second is the \"user co-mention\" graph, where each user is a node, and there is an edge between two nodes if a tweet mentions them jointly. Intuitively, the hashtag co-occurrence subgraph captures topic prevalence and propagation, whereas the co-mention subgraph captures the tendency to influence and propagate messages to a larger audience. Our goal is to discover surprises (and lack thereof) in these two aspects for our data sets.\n\nWe note that the dataset chosen is from a month in which the US experienced a major event in the form of the Capitol Attack, and a new administration was sworn in. This explains why the number of tweets in the ``Capitol Attack'' subgraph is high for both politicians in this week, and not surprisingly it is also the most discussed topic, as evidenced by the high average node degree. Therefore, this ``selection bias'' sets our expectation for subjective interestingness -- given the specific week we have chosen, this issue will dominate most social media conversations in the USA. 
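The hashtag co-occurrence construction can be sketched as follows; the use of networkx is our assumption, hashtag extraction from the tweet payload is assumed to happen upstream, and the user co-mention graph is built analogously from mention lists:

```python
from itertools import combinations

import networkx as nx

def cooccurrence_graph(tag_lists):
    """Nodes are hashtags; an edge links two hashtags appearing in the
    same tweet, weighted by the number of tweets they co-occur in."""
    g = nx.Graph()
    for tags in tag_lists:
        for u, v in combinations(sorted(set(tags)), 2):
            if g.has_edge(u, v):
                g[u][v]["weight"] += 1
            else:
                g.add_edge(u, v, weight=1)
    return g

g = cooccurrence_graph([["ados", "capitolattack"],
                        ["ados", "capitolattack", "economy"]])
assert g["ados"]["capitolattack"]["weight"] == 2
```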
We also observe the low ratio of the number of unique nodes to the number of tweets, signifying the high number of retweets, which signals a form of information propagation over the network. The propagativeness of the network during this eventful week is also evidenced by the fact that the unique node count of a co-mention network is almost 75\\% - 88\\% higher on average compared to the hashtag co-occur network of the same class. In Section \\ref{sec:analysis}, we show how our interestingness technique performs in the face of this dataset.\n\n\n\\begin{figure*}[t] \n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-capitol-attack-ht-hist.png}\n \\caption{Capitol Attack}\n \\label{fig:hist-results-harris-capitol}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-ados-ht-hist.png}\n \\caption{\\#ADOS}\n \\label{fig:hist-results-harris-ados}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{kh-results-economy-ht-hist.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-harris-eco}\n \\end{minipage}%\n\n \\caption{Top Hashtag Distributions of the Kamala Harris Data set }\n \\label{fig:hist-results-harris}\n\\end{figure*}\n\\begin{figure*}[t] \n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-capitol-attack-ht-hist.png}\n \\caption{Capitol Attack}\n \\label{fig:hist-results-biden-capitol}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-ados-ht-hist.png}\n \\caption{\\#ADOS Issues}\n \\label{fig:hist-results-biden-ados}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-biden-economy-ht-hist.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-biden-eco}\n \\end{minipage}%\n\n \\caption{Top Hashtag Distributions of the Joe Biden Data set 
}\n \\label{fig:hist-results-biden}\n\\end{figure*} \n\\begin{figure*}[t]\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-vax-anti-vax.png}\n \\caption{Vaccine anti-vaccine Issues}\n \\label{fig:hist-results-vac-vac-anti-vax}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-covid-test.png}\n \\caption{COVID-19 Test }\n \\label{fig:hist-results-vac-covid}\n \\end{minipage}%\n \\begin{minipage}{.33\\textwidth}\n \\includegraphics[width=6.5cm, height=5.5cm]{figures-covid-result-economy.png}\n \\caption{Economic issues}\n \\label{fig:hist-results-vac-eco}\n \\end{minipage}%\n \\caption{Top Hashtag Distributions of the Vaccine Data set }\n \\label{fig:hist-results-vac}\n\\end{figure*} \n\\subsection{Experimental Setup}\n\\label{sec:setup}\nThe experimental setup has three steps: a) data collection and archival storage, b) indexing and storing the data, and c) executing analytical pipelines. We used the AWESOME project's continuous tweet ingestion system, which collects tweets through the Twitter 1\\% REST API using a set of hand-picked keywords. We used the AWESOME Polystore for indexing, storing, and searching the data. For computation, we used the Nautilus facility of the Pacific Research Platform (PRP). Our hardware configurations are as follows: the AWESOME server has 64 GB of memory and 32 cores, and the Nautilus nodes have 32 cores and 64 GB of memory. The data ingestion process requires a large amount of memory, and this requirement varies with the density of the data. Similarly, centrality computation is a CPU-bound process. The performance optimizations that we implemented are outside the scope of this paper. 
\n\\medskip\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-egc-kamal-ht.png}\n\n \\caption{Eigenvector Centrality Disparity Hashtag Network}\n \\label{fig:result-pipline-harris-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-nav-kamal-ht.png}\n \\caption{Topical Navigability Disparity Hashtag Network}\n \\label{fig:result-pipline-harris-ht-nav}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-prop-kamal-ht.png}\n\n \\caption{Propagativeness Disparity Hashtag Network }\n \\label{fig:result-pipline-harris-ht-prop}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-core-kamal-mention.png}\n \\caption{Eigenvector Centrality Disparity co-mention Network}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-nav-kamal-mention.png}\n\n \\caption{Topical Navigability Disparity co-mention Network}\n \\label{fig:result-pipline-harris-co-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{kh-results-prop-kamal-mention.png}\n \\caption{Propagativeness Disparity co-mention Network}\n \\label{fig:result-pipline-harris-co-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Kamala Harris\" data set.}\n \\label{fig:result-pipline-harris}\n\\end{figure*}\n\\begin{figure*}[t]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-egc-biden-ht.png}\n\n \\caption{Eigenvector Centrality Disparity Hashtag Network}\n \\label{fig:result-pipline-biden-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n 
\\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-nav-biden.png}\n \\caption{Topical Navigability Disparity Hashtag Network}\n \\label{fig:result-pipline-biden-ht-nav}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-prop-biden-ht.png}\n\n \\caption{Propagativeness Disparity Hashtag Network }\n \\label{fig:result-pipline-biden-ht-prop}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-egc-biden-mention.png}\n \\caption{Eigenvector Centrality Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-egc}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-nav-biden-mention.png}\n\n \\caption{Topical Navigability Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{biden-prop-biden-mention.png}\n \\caption{Propagativeness Disparity co-mention Network}\n \\label{fig:result-pipline-biden-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Joe Biden\" data set.}\n \\label{fig:result-pipline-biden}\n\\end{figure*}\n\\begin{figure*}\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-egc-covid-ht.png}\n\n \\caption{Eigenvector Centrality Disparity: \"Vaccine\" Hashtag Network}\n \\label{fig:result-pipline-ht-egc}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-nav-covid-ht.png}\n \\caption{Topical Navigability Disparity: Vaccine Hashtag Network}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, 
height=5.5cm]{figures-covid-result-prop-covid-ht.png}\n \\caption{Propagativeness Disparity: \"Vaccine\" Hashtag Network }\n \\label{fig:result-pipline-ht-vax}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-egc-covid-mention.png}\n \\caption{Eigenvector Centrality Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-egc}\n \\end{minipage}\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-nav-covid-mention.png}\n\n \\caption{Topical Navigability Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-nav}\n \\end{minipage}%\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=7.5cm, height=5.5cm]{figures-covid-result-prop-covid-mention.png}\n \\caption{Propagativeness Disparity: \"Vaccine\" co-mention Network}\n \\label{fig:result-pipline-mention-prop}\n \\end{minipage}\n \\caption{Comparative studies of all sub-queries using the \"Vaccine\" data set.}\n \\label{fig:result-pipline-vax}\n\\end{figure*}\n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-ados.png}\n \\caption{\\#ADOS}\n \\label{fig:core-peri-kamal-ados}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-capitol.png}\n \\caption{Capitol attack}\n \\label{fig:core-peri-kamal-cap}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harrris-economy.png}\n \\caption{Economic issues}\n \\label{fig:core-peri-kamal-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-harris-random.png}\n \\caption{Random Graph}\n \\label{fig:core-peri-kamal-random}\n \\end{minipage}%\n \\caption{Core and Periphery 
visualization of \"Kamala Harris\" Data set}\n \\label{fig:core-peri-kamal}\n\\end{figure*} \n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-bidon-ados.png}\n \\caption{\\#ADOS}\n \\label{fig:core-peri-biden-ados}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-capitol.png}\n \\caption{Capitol attack}\n \\label{fig:core-peri-biden-capitol}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-economy.png}\n \\caption{Economic issues}\n \\label{fig:core-peri-biden-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-biden-random.png}\n \\caption{Random Data}\n \\label{fig:core-peri-biden-random}\n \\end{minipage}%\n \\caption{Core and Periphery visualization of \"Joe Biden\" Data set}\n \\label{fig:core-peri-biden}\n\\end{figure*} \n\\begin{figure*}\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-vax-ht.png}\n \\caption{Vaccine Issues}\n \\label{fig:core-peri-vax-vax-anti-vax}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-test-ht.png}\n \\caption{COVID-19 Test }\n \\label{fig:core-peri-vax-test}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-economy.png}\n \\centering\n \\caption{Economic issues}\n \\label{fig:core-peri-vax-eco}\n \\end{minipage}%\n \\begin{minipage}{.25\\textwidth}\n \\includegraphics[width=5cm, height=3.5cm]{figures-gephi-core-covid-random.png}\n \\caption{Random Graph}\n \\label{fig:core-peri-vax-random}\n \\end{minipage}%\n \\centering\n \\caption{Core and Periphery visualization of the Vaccine Data set}\n 
\\label{fig:core-peri-vax}\n\\end{figure*}\n\\subsection{Result Analysis}\n\\label{sec:analysis}\n\\textbf{Kamala Harris Network}: Figure \\ref{fig:result-pipline-harris} presents the Kamala Harris network. This network is interesting because, even in the context of the Capitol Attack and the Senate approval of the election results, it is dominated by \\#ADOS conversations rather than by the other political issues, including economic policies. An analysis of the three interestingness measures for this network reveals the following.\nThe Eigenvector Centrality Disparity for the hashtag network (Figure \\ref{fig:result-pipline-harris-ht-egc}) shows that all the groups generated by the subqueries are equally distributed, while the random graph has a higher volume with similar trends. Hence, these three subqueries are equally important. However, there are a few spikes for \\#ADOS and \"Capitol Attack\" that indicate the possibility of interestingness. Figure \\ref{fig:result-pipline-harris-ht-nav} shows that \"economic issues\" mostly spikes at low-centrality nodes and drops to zero as centrality increases, while \"Capitol attack\" and \"\\#ADOS\" have many more spikes at different centrality levels. Hence, we conclude that this network is much more navigable for \"Capitol attack\" and \"\\#ADOS\". Figure \\ref{fig:result-pipline-harris-ht-prop} presents the Propagativeness Disparity, and it is clear from this figure that African American issues dominated the conversation here; any related subtopic, including non-US issues like ``Tigray'' (on the Ethiopian genocide), propagated quickly in this network. \n\\\\\n\\noindent\n\\textbf{Biden Network}: While the predominance of \\#ADOS issues might still be expected for the Kamala Harris data set, we discovered it to be a ``dominant enough'' subgraph also in the Joe Biden data set, represented in figure \\ref{fig:result-pipline-biden}. 
The eigenvector centrality disparity (figure \\ref{fig:result-pipline-biden-ht-egc}) shows that the three subgroups are equally dominant in this network. The navigability of the network (figure \\ref{fig:result-pipline-biden-ht-nav}) shows that it is navigable for all three subgraphs: it has two big spikes for \"economic issues\" and \"Capitol attack,\" plus many mid-sized spikes for \"\\#ADOS\". Interestingly, in figure \\ref{fig:result-pipline-biden-ht-prop} the propagativeness shows that the network is strongly propagative for the economic issues and the \\#ADOS issue, which shows up both in the Capitol Attack and the ADOS subgroups. Notably, the \"Joe Biden\" data's co-mention network shows more propagativeness than the hashtag co-occur network, which indicates that exploring the co-mention subgraph will be useful. We also note the occurrence of certain completely unexpected topics (e.g., COVIDIOT, AUSvIND -- Australia vs. India) within the ADOS group, while Economic Issues for Biden do not exhibit surprising results.\n\\\\\n\\noindent\n\\textbf{Vaccine Network}: In the vaccine network, we found that \"economic issues\" and \"covid tests\" are more propagative than \"vaccine and anti-vaccine\" related topics (Figure \\ref{fig:result-pipline-vax}). The surprising result here is that the \"vaccine - anti-vaccine\" topics show a strong correlation with \"Economy\" in the other two charts. We observe that while the vaccine issues are navigable through the network, this topic cluster is not very propagative in the network. In contrast, in the co-mention network, vaccine and anti-vaccine issues are both very navigable and strongly propagative. 
Further, the propagativeness in the co-mention network for the COVID-test shows many spikes at different levels, which signifies that, for testing-related issues, the network serves as a vehicle of message propagation and influence. \n\n\\subsection{Result Validation}\n\\label{sec:validation}\n\nThere is not a lot of work in the literature on interesting subgraph finding. Additionally, there are no benchmark data sets against which our ``interestingness finding'' technique can be compared. This prompted us to evaluate the results using a core-periphery analysis, as indicated earlier in Figure \\ref{fig:sparse}. The idea is to demonstrate that the parts of the network claimed to be interesting stand out in comparison to the network of a random sample of comparable size from the background graph. These results are presented in Figures \\ref{fig:core-peri-kamal}, \\ref{fig:core-peri-biden} and \\ref{fig:core-peri-vax}. In each of these cases, we have shown a representative random graph in the rightmost subfigure to represent the background graph. To us, the large and dense core formation in Figure \\ref{fig:core-peri-kamal}(b) is an expected, non-surprising result. However, the lack of core formation in Figure \\ref{fig:core-peri-kamal}(c) is interesting because it shows that while there was a sizeable network of economics-related terms for Kamala Harris, the topics never ``gelled'' into a core-forming conversation and showed little propagation. Figure \\ref{fig:core-peri-kamal}(a) is somewhat more interesting because the density of the periphery is far lower than in the random graph, while the core has about the same density as the random graph. The core-periphery separation is much more prominent in the first three plots of Figure \\ref{fig:core-peri-biden}. Unlike the Kamala Harris random graph, Figure \\ref{fig:core-peri-biden}(d) shows that the random graph of this data set itself has a very large dense core and a moderately dense periphery. 
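The core-formation comparison used in this validation can be approximated programmatically by contrasting k-core number distributions of a candidate subgraph with those of a size-matched random graph (a sketch assuming networkx; the graph generators here are illustrative stand-ins, and the paper's figures use visual inspection instead):

```python
import networkx as nx

def coreness_profile(g):
    """Count nodes at each k-core level: a heavy concentration at high k
    indicates a dense core; a flat, low-k profile indicates no core."""
    profile = {}
    for k in nx.core_number(g).values():
        profile[k] = profile.get(k, 0) + 1
    return profile

# Illustrative comparison: a clique-structured stand-in for a candidate
# subgraph versus a random graph with the same node and edge counts.
g_topic = nx.caveman_graph(4, 6)   # four 6-cliques: every node has coreness 5
g_rand = nx.gnm_random_graph(g_topic.number_of_nodes(),
                             g_topic.number_of_edges(), seed=1)
assert coreness_profile(g_topic) == {5: 24}
profile_rand = coreness_profile(g_rand)  # typically spread over lower k values
```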
Among the three subfigures, the small (but lighter) core of Figure \\ref{fig:core-peri-biden}(b) differs the most from the random graph, although we find Figure \\ref{fig:core-peri-biden}(a) to be conceptually more interesting. For the vaccine data set, Figure \\ref{fig:core-peri-vax}(c) is closest to the random graph, showing that most conversations around the topic touch economic issues, while the discussion on the vaccine itself (Figure \\ref{fig:core-peri-vax}(a)) and that of COVID-testing (Figure \\ref{fig:core-peri-vax}(b)) are more focused and propagative. \n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we presented a general technique to discover interesting subgraphs in social networks, with the intent that social network researchers from different domains will find it a viable research tool. The results obtained from the tool will help researchers probe deeper into the underlying phenomena that lead to the surprising results the tool discovers. While we used Twitter as our example data source, a similar analysis can be performed on other social media, where the content features would be different. Further, in this paper we have used a few centrality measures to compute divergence-based features -- but the system is designed so that other measures can be plugged in as well. \n\\FloatBarrier\n\\begin{acknowledgements}\nThis work was partially supported by NSF Grants \\#1909875 and \\#1738411. 
We also acknowledge SDSC cloud support, particularly Christine Kirkpatrick \\& Kevin Coakley, for generous help and support in collecting and managing our tweet collection.\n\\end{acknowledgements}\n\\bibliographystyle{spbasic} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{figure*}[t]\n\t\\centering\n\\includegraphics[width=150mm]{ky_1}\n\t\\vspace{-5mm}\n\t\\caption{Schematic diagram of a redox flow battery.}\n\t\\label{fig: sec1_1_1}\n\\end{figure*}\n\nOver the past years, there has been growing interest in renewable energy, since the increase in carbon emissions poses a great threat to the environment. Despite the significance of renewable energy, the intermittent character of its sources, such as solar, wind or water, is a serious drawback that leads to increased uncertainty in the supply of electricity. To address this issue, one promising solution is to regulate power delivery via energy storage technology. Among energy storage systems, vanadium redox flow batteries (VRFBs) attract a lot of attention due to their advantageous features: scalability, low cost and long cycle life \\cite{alotto2014redox}. However, achieving high performance in terms of power density is a critical issue for the cost-effectiveness of VRFBs.\n\nThe polarization losses in VRFBs are mainly caused by ohmic, mass transfer and charge transfer losses \\cite{milshtein2017quantifying}. Several researchers have contributed to reducing overpotentials via a new cell architecture \\cite{aaron2012dramatic} and modified electrode configurations and membranes \\cite{sun1992modification,wang2007investigation,chen2013optimizing}. For charge transfer losses, Li et al.~\\cite{li2013bismuth} proposed electrodes containing nanoparticles that can improve the performance of VRFBs by accelerating charge transfer compared to conventional VRFBs. 
Li et al.~\\cite{li2011graphite} claimed that a reduction of charge transfer losses can be achieved by adding graphite oxide to the electrodes. \n\nFor given electrochemical conditions, the performance of VRFBs depends mainly on mass transfer losses, and the mass transfer effect can be ameliorated by different flow fields \\cite{zhou2017critical}. \nXu et al.~\\cite{xu2013numerical} numerically investigated the performance of VRFBs with several different types of flow fields and found that the VRFB with a serpentine flow field achieved maximum power-based efficiency at the optimal flow rate. Experimental studies demonstrated that interdigitated flow fields can further improve the performance of VRFBs due to the enhanced mass transfer effect \\cite{tsushima2014efficient,darling2014influence,houser2016influence}. Although a few types of flow fields have been found to reduce mass transfer losses, the design of flow fields is still laborious due to the complicated physical and chemical mechanisms in VRFBs.\n\nBased on physical principles and mathematical models, Bends{\\o}e and Kikuchi \\cite{bendsoe1988generating} proposed topology optimization, which is a powerful approach for finding optimal configurations. \nTopology optimization expresses a structural optimization problem as a material distribution problem in a given design domain and then derives a promising configuration on the basis of mathematical programming \\cite{bendsoe2003topology}. 
\nOne of the attractive features of topology optimization is that an innovative configuration can be automatically generated from a blank design domain without the designer's intuition.\nDue to its high degree of design freedom, topology optimization has been applied to various structural optimization problems, e.g., stiffness maximization problems \\cite{bendsoe1988generating,bendsoe1989optimal}, eigenfrequency problems \\cite{diaz1992solutions,ma1995topological}, thermal problems \\cite{li1999shape,iga2009topology}, and electromagnetic problems \\cite{nomura2007structural,yamasaki2011level}; furthermore, applications to practical device designs have also attracted attention, e.g., in micro actuator design \\cite{sigmund2001design}, fuel cell design \\cite{iwai2011power,song20132d} and so on.\n\nFor flow field design problems, Borrvall and Petersson \\cite{borrvall2003topology} proposed a topology optimization method to minimize power dissipation in Stokes flow, and this has been extended to laminar Navier-Stokes flow problems \\cite{gersborg2005topology,olesen2006high,kubo2017level} and turbulence problems \\cite{yoon2016topology,dilgen2018topology}.\nFluid topology optimization has been applied to multiphysics problems such as fluid-structure interaction problems \\cite{yoon2010topology,jenkins2015level}, forced convection problems \\cite{matsumori2013topology,yaji2015topology,yaji2018large}, natural convection problems \\cite{alexandersen2014topology,coffin2016level,alexandersen2016large} and turbulent heat transfer problems \\cite{kontoleontos2013adjoint,dilgen2018density}.\n\n\nRecently, Yaji et al. \\cite{yaji2018topology} proposed a topology optimization method for the design of flow fields in VRFBs. 
\nIn their approach, a simplified reaction formula is introduced in a two-dimensional model instead of the formula for the practical electrochemical reactions.\nThey provided novel flow field configurations of a VRFB and clarified that the optimized configurations tend toward the interdigitated type of flow field.\n\nAs a more comprehensive study, this paper aims to construct topology optimization for flow fields in VRFBs based on a three-dimensional model incorporating electrochemical reaction kinetics. Referring to the models presented by several researchers \\cite{shah2008dynamic,you2009simple,ma2011three}, a three-dimensional numerical model of a negative electrode in a VRFB is introduced, and the electrolyte flow is assumed to be stationary, isothermal Stokes flow for simplicity. \nWe demonstrate that the proposed approach enables the generation of a novel flow field configuration through a numerical example.\nTo confirm the performance of the topology optimized flow field, we investigate the mass transfer effect and overpotential of the topology optimized flow field in comparison with reference flow fields---parallel and interdigitated flow fields.\nIn addition, we discuss the power loss \\cite{blanc2010understanding,xu2013numerical} in terms of polarization loss and pumping power at different operating conditions.\n\nThe remainder of this paper is organized as follows.\nIn Section 2, we introduce the mathematical model and assumptions of a VRFB.\nIn Section 3, we formulate a topology optimization problem that aims to maximize the mass transfer effect of a three-dimensional flow field in the VRFB and construct the optimization algorithm based on the use of mathematical programming and the finite element method (FEM).\nIn Section 4, we provide numerical examples and demonstrate the usefulness of the proposed approach.\nFinally, Section 5 concludes this paper and summarizes the obtained results.\n\n\n\n\\section{Mathematical model}\n\\subsection{Model 
assumptions}\nA schematic diagram of a typical redox flow battery is shown in Fig.~1. \nThe positive and negative electrodes are separated by the ion exchange membrane, which allows only protons to pass through. \nWe suppose that each electrode is composed of a carbon fiber electrode and a flow channel.\nNote that the use of a flow channel reduces the pressure loss in comparison with the case of using only the carbon fiber electrode \\cite{xu2013numerical}.\nWhen the electrolytes stored in the tanks are circulated separately through the positive and negative electrodes by pumps, electric energy is released or stored by the electrochemical reactions in the electrodes. \nThe main reactions can be described as follows:\n\\begin{align}\n & \\text{Positive electrode: } \\text{VO}^{2+} + \\text{H}_{2}\\text{O} \\rightleftharpoons \\text{VO}^{+}_{2} + 2\\text{H}^{+} + \\text{e}^{-} \\\\\n & \\text{Negative electrode: } \\text{V}^{3+} + \\text{e}^{-} \\rightleftharpoons \\text{V}^{2+}\n\\end{align}\n\nFurthermore, only the negative electrode is considered in this work, and the following basic assumptions are made for simplification \\cite{you2009simple}:\n\\begin{enumerate}[1.]\n\\item The electrolyte flow is treated as stationary, incompressible and isothermal Stokes flow.\n\\item The dilute-solution approximation is used in this numerical model.\n\\item The side reactions are neglected in the electrochemical reactions.\n\\item The migration phenomenon is ignored in the species transport process.\n\\end{enumerate}\n\n\\subsection{Governing equations}\nBased on the assumptions presented in the previous section, the governing equations incorporated in the numerical model are introduced here. 
The electrolyte flow passing through the flow channel in VRFBs can be described by the Stokes equation and the continuity equation as follows:\n\\begin{align}\n & -\\nabla p +\\mu\\nabla^{2}\\mathbf{u}= \\mathbf{0}, \\\\\n & \\nabla\\cdot \\mathbf{u} = 0,\n\\end{align} \nwhere $\\mu$ is the viscosity of the electrolyte, and $p(\\mathbf{x})$ and $\\mathbf{u}(\\mathbf{x})$ are the pressure and velocity at position $\\mathbf{x}$, respectively.\n\nIn addition to the flow channel, the porous electrode is also permeated by the electrolyte, and the velocity in the electrode can be expressed by Darcy's law:\n\\begin{align}\n & \\frac{\\mu}{K}\\mathbf{u} = -\\nabla p,\n\\end{align}\nwhere $K$ is the permeability coefficient, which can be described by the Kozeny-Carman equation \\cite{tomadakis2005viscous} as follows:\n\\begin{align}\n & K = \\frac{d^{2}_\\text{f}\\epsilon^{3}}{16K_{\\text{ck}}(1-\\epsilon)^{2}},\n\\end{align}\nwhere $d_{\\text{f}}$ is the fiber diameter, $\\epsilon$ is the porosity of the electrode, and $K_{\\text{ck}}$ is the Kozeny-Carman constant determined by the characteristics of the fibrous material.\nInstead of describing the electrolyte flow by the Stokes equation and Darcy's law separately, the Brinkman equation, which combines the Stokes equation with Darcy's law, can be used to describe the electrolyte flow in a domain containing both open channels and porous media; it is formulated as\n\\begin{align}\n & -\\nabla p + \\mu\\nabla^{2}\\mathbf{u} +\\mathbf{F}= \\mathbf{0},\n \\label{eq:br}\n\\end{align}\nwhere $\\mathbf{F}$ is the body force given by\n\\begin{align}\n\\mathbf{F} = - \\alpha \\mathbf{u},\n\\label{eq:f}\n\\end{align}\nwhere $\\alpha$ is the so-called inverse-permeability, defined as $\\alpha=\\mu\/K$ in the porous medium and $\\alpha=0$ in the pure fluid domain.\n\nWith the electrolyte flow containing vanadium species, the species transport needs to be considered in the numerical model, which can be expressed as follows:\n\\begin{align}\n 
& \\mathbf{u}\\cdot\\nabla c_{i}-D^{\\text{eff}}_{i}\\nabla^{2}c_{i}=-s_{i},\n\\end{align} \nwhere $c_{i}$ is the concentration of vanadium species $i\\in \\{\\text{V}^{2+}, \\text{V}^{3+}\\}$, and $s_{i}$ is the source term of species $i$ due to the electrochemical reactions, with $s_{\\text{V}^{2+}}=j\/F$ and $s_{\\text{V}^{3+}}=-j\/F$, where $j$ and $F$ are the transfer current density and the Faraday constant, respectively. \nIn addition, $D^{\\text{eff}}_{i}$ is the effective diffusion coefficient of species $i$ and is given by the Bruggemann correction as follows:\n\\begin{align}\nD^{\\text{eff}}_{i} = \\epsilon^{1.5}D_{i},\n\\end{align}\nwhere $D_i$ is the diffusion coefficient of species $i$.\nCharge in a VRFB is conserved, since the charge entering the electrolyte is balanced by the charge leaving the electrode:\n\\begin{align}\n & \\nabla\\cdot \\mathbf{i}_\\text{e} + \\nabla\\cdot \\mathbf{i}_\\text{s} = 0,\n \\label{eq:cc}\n\\end{align} \nwhere $\\mathbf{i}_\\text{e}$ is the ionic current density, and $\\mathbf{i}_\\text{s}$ is the electronic current density. When charge flows from the electrolyte to the electrode, electrochemical reactions take place on the electrode surface, which must also be taken into account in the charge conservation. 
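As an illustrative numerical check (not part of the paper's COMSOL model), the two transport closures above, the Kozeny-Carman permeability and the Bruggemann-corrected diffusivity, can be evaluated directly with the electrode parameters listed in Table \\ref{table:sec3_3_1}:

```python
# Illustrative sketch (not the paper's code): evaluating the Kozeny-Carman
# permeability and the Bruggemann-corrected diffusivity in SI units.

def kozeny_carman_permeability(d_f, eps, K_ck):
    # K = d_f^2 * eps^3 / (16 * K_ck * (1 - eps)^2)
    return d_f**2 * eps**3 / (16.0 * K_ck * (1.0 - eps) ** 2)

def bruggemann_diffusivity(D, eps):
    # D_eff = eps^1.5 * D
    return eps**1.5 * D

# Electrode parameters from the parameter tables:
# d_f = 1.76e-5 m, eps = 0.929, K_ck = 4.28
K = kozeny_carman_permeability(1.76e-5, 0.929, 4.28)
```

For the highly porous carbon fiber electrode ($\\epsilon=0.929$), the effective diffusivity is only reduced to about $0.90\\,D_i$, while the permeability comes out of the order of $10^{-10}\\,\\text{m}^2$.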
\nWith the electrochemical reaction kinetics, Eq.~(\\ref{eq:cc}) can be rewritten as\n\\begin{align}\n & \\nabla\\cdot \\mathbf{i}_\\text{e} = -\\nabla\\cdot \\mathbf{i}_\\text{s} = j.\n\\end{align}\nIn addition, since the electrolyte is assumed to be electrically neutral and the migration term is ignored, the total ionic current density can be further expressed in terms of the electric potential in the electrolyte as follows:\n\\begin{align}\n & \\mathbf{i}_\\text{e} = \\sum_{i}\\mathbf{i}_i = -\\kappa^{\\text{eff}}_{\\text{e}}\\nabla\\phi_{\\text{e}},\n \\label{eq:ie}\n \\\\\n & \\kappa^{\\text{eff}}_{\\text{e}} = \\frac{F^2}{RT}\\sum_{i}z^{2}_{i}D^{\\text{eff}}_{i}c_{i},\n \\label{eq:ke}\n\\end{align} \nwhere $\\mathbf{i}_i$ is the ionic current density of species $i$, $\\phi_\\text{e}$ is the electric potential of the electrolyte, $\\kappa^{\\text{eff}}_\\text{e}$ is the effective conductivity of the electrolyte, $R$ is the gas constant, $T$ is the temperature, and $z_{i}$ is the valence.\nThe detailed derivation of Eqs.~(\\ref{eq:ie}) and (\\ref{eq:ke}) can be found in the previous work by Shah et al.~\\cite{shah2008dynamic}.\n\nLikewise, the electronic current density can also be expressed in terms of the electric potential in the electrode, given by Ohm's law, as follows:\n\\begin{align}\n & \\mathbf{i}_\\text{s} = -\\sigma^{\\text{eff}}_{\\text{s}}\\nabla\\phi_{\\text{s}}, \\\\\n & \\sigma^{\\text{eff}}_{\\text{s}} = (1-\\epsilon)^{1.5}\\sigma_{\\text{s}},\n\\end{align} \nwhere $\\sigma_\\text{s}$ is the conductivity of the solid material of the electrode, $\\sigma^\\text{eff}_\\text{s}$ is the effective conductivity of the electrode, and $\\phi_\\text{s}$ is the electric potential of the electrode.\nNote that the effective conductivity of the electrode is also corrected by the Bruggemann correction.\n\nThe transfer current density $j$, which originates from the electrochemical reactions, can be described using the Butler-Volmer equation, as follows:\n\\begin{align}\n&j = 
i_{0}\\left[R^\\text{sb}_{\\text{V}^{3+}}\\exp\\left(-\\frac{\\alpha_\\text{c}F\\eta}{RT}\\right)-R^\\text{sb}_{\\text{V}^{2+}}\\exp\\left(\\frac{\\alpha_\\text{a}F\\eta}{RT}\\right)\\right],\\\\\n&i_{0} = aFk(c_{\\text{V}^{2+}})^{\\alpha_\\text{c}}(c_{\\text{V}^{3+}})^{\\alpha_\\text{a}},\n\\end{align}\nwhere $i_{0}$ is the exchange current density, $\\eta$ is the overpotential, $\\alpha_\\text{c}$ and $\\alpha_\\text{a}$ are the cathodic and anodic transfer coefficients, $k$ is the reaction rate constant, $a$ is the specific surface area of the electrode, and $R^\\text{sb}_{i}=c^\\text{s}_{i}\/c_{i}$ is the ratio of the surface concentration of species $i$ to the bulk concentration in the negative electrode, in which $c^\\text{s}_{i}$ is the species concentration at the surface of the negative electrode.\n\nThe overpotential in the Butler-Volmer equation is the difference between the electrode potential and the electrolyte potential, measured relative to the open-circuit potential:\n\\begin{align}\n&\\eta = \\phi_\\text{s}-\\phi_\\text{e}-U,\n\\end{align}\nwhere $U$ is the open-circuit potential in the negative electrode, which can be estimated by the Nernst equation as follows:\n\\begin{align}\n&U = U_{0}+\\frac{RT}{F}\\ln\\left(\\frac{c_{\\text{V}^{3+}}}{c_{\\text{V}^{2+}}}\\right),\n\\end{align}\nwhere $U_{0}$ is the equilibrium potential.\n\nThe species concentration at the electrode surface $c^\\text{s}_{i}$ differs from the bulk concentration $c_{i}$ owing to the electrochemical reactions taking place at the electrode surface and the conductivity difference between the electrode and the electrolyte. 
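The reaction kinetics above can be sketched as a minimal, illustrative function (not the paper's implementation): the surface-to-bulk ratios $R^\\text{sb}_{i}$ are supplied as inputs, and the default parameter values follow the parameter tables of this paper.

```python
import math

F, Rg = 96485.0, 8.314  # Faraday constant [C/mol], gas constant [J/(mol K)]

def transfer_current_density(phi_s, phi_e, c_v2, c_v3, R2=1.0, R3=1.0,
                             a=1.62e4, k=1.7e-7, alpha_c=0.5, alpha_a=0.5,
                             U0=-0.255, T=298.0):
    # Nernst open-circuit potential and overpotential
    U = U0 + Rg * T / F * math.log(c_v3 / c_v2)
    eta = phi_s - phi_e - U
    # Exchange current density: i0 = a F k c2^alpha_c c3^alpha_a
    i0 = a * F * k * c_v2**alpha_c * c_v3**alpha_a
    # Butler-Volmer: j = i0 [R3 exp(-ac F eta/RT) - R2 exp(aa F eta/RT)]
    return i0 * (R3 * math.exp(-alpha_c * F * eta / (Rg * T))
                 - R2 * math.exp(alpha_a * F * eta / (Rg * T)))
```

At equal bulk concentrations and $\\phi_\\text{s}-\\phi_\\text{e}=U_{0}$, the overpotential vanishes and $j=0$; a more negative electrode potential drives the charging reaction ($j>0$, i.e., net production of $\\text{V}^{2+}$).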
\nAccording to the previous research \\cite{you2009simple}, the surface concentrations $c^\\text{s}_{i}$ are given by\n\\begin{align}\n&c^\\text{s}_{\\text{V}^{2+}}=\\frac{\\overline{P}c_{\\text{V}^{3+}}+(1+\\overline{P})c_{\\text{V}^{2+}}}{1+\\overline{M}+\\overline{P}},\\\\\n&c^\\text{s}_{\\text{V}^{3+}}=\\frac{\\overline{M}c_{\\text{V}^{2+}}+(1+\\overline{M})c_{\\text{V}^{3+}}}{1+\\overline{M}+\\overline{P}},\n\\label{eq:c2s}\n\\end{align}\nwhere $\\overline{M}$ and $\\overline{P}$ are defined as follows:\n\\begin{align}\n&\\overline{M}=\\frac{k}{k_\\text{m}}(c_{\\text{V}^{2+}})^{\\alpha_\\text{c}-1}(c_{\\text{V}^{3+}})^{\\alpha_\\text{a}}\\exp\\left(\\frac{\\alpha_\\text{a}F\\eta}{RT}\\right),\\\\\n&\\overline{P}=\\frac{k}{k_\\text{m}}(c_{\\text{V}^{2+}})^{\\alpha_\\text{c}}(c_{\\text{V}^{3+}})^{\\alpha_\\text{a}-1}\\exp\\left(-\\frac{\\alpha_\\text{c}F\\eta}{RT}\\right),\n\\end{align}\nwhere $k_\\text{m}$ is the mass transfer coefficient, which can be estimated by the following equation \\cite{schmal1986mass}:\n\\begin{align}\n&k_\\text{m} = 1.6 \\times 10^{-4}|\\mathbf{u}|^{0.4}.\n\\end{align}\n\n\n\\subsection{Boundary conditions}\n\nIn this section, the boundary conditions used in the numerical model during the topology optimization process are introduced. \nFigure \\ref{fig: sec2_3_1} shows the schematic diagram of the analysis domain and the boundary settings.\nThe pressure conditions are imposed on the inlet and the outlet, and the no-slip condition is applied on the remaining outer boundaries. 
The expressions are shown below:\n\\begin{alignat}{2}\n&p = p_\\text{in} &&\\ \\ \\text{on the inlet},\\\\\n&p = p_\\text{out} &&\\ \\ \\text{on the outlet},\\\\\n&\\mathbf{u} =\\mathbf{0} &&\\ \\ \\text{on the remaining boundaries},\n\\end{alignat}\nwhere $p_\\text{in}$ is the given pressure value on the inlet, and $p_\\text{out}$ is the given pressure value on the outlet.\n\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=75mm]{ky_2}\n\t\\vspace{-5mm}\n\t\\caption{Schematic diagram of the analysis domain and boundary settings.}\n\t\\label{fig: sec2_3_1}\n\\end{figure}\n\n\\begin{table*}[t]\n\\centering \n\\caption{Parameter settings of the electrode.}\n\t\\vspace{-1mm}\n\\scalebox{0.8}{\n\\begin{tabular}[t]{lllll}\n\\hline\nParameter & Symbol & Value & Unit & Ref.\\\\\n\\hline\nPorosity & $\\epsilon$ & 0.929 & - & \\cite{you2009simple} \\\\\nSpecific surface area & $a$ & $1.62\\times10^{4}$ & $\\text{m}^{-1}$ & \\cite{you2009simple} \\\\\nCarbon fiber diameter & $d_\\text{f}$ & $1.76\\times10^{-5}$ & m & \\cite{you2009simple} \\\\\nElectronic conductivity of solid phase & $\\sigma_\\text{s}$ & $1.0\\times10^3$ & $\\text{S m}^{-1}$ & \\cite{you2009simple} \\\\\nKozeny-Carman constant & $K_\\text{ck}$ & $4.28$ & - & \\cite{you2009simple} \\\\\nLength & $L$ & 0.1 & m & \\cite{xu2013numerical} \\\\\nWidth & $W$ & 0.1 & m & \\cite{xu2013numerical} \\\\\nElectrode thickness & $t_\\text{e}$ & $3.0\\times10^{-3}$ & m & \\cite{xu2013numerical} \\\\\n\\hline\n\\end{tabular}\n}\n\\label{table:sec3_3_1}\n\\end{table*}\n\\begin{table*}[t]\n\\centering \n\\caption{Parameter settings of the electrolyte.}\n\t\\vspace{-1mm}\n\\scalebox{0.8}{\n\\begin{tabular}[t]{lllll}\n\\hline\nParameter & Symbol & Value & Unit & Ref.\\\\\n\\hline\nViscosity & $\\mu$ & $4.928\\times10^{-3}$ & Pa s & \\cite{you2009simple} \\\\\nInitial vanadium $\\text{V}^{2+}$ concentration & $c^\\text{in}_{\\text{V}^{2+}}$ & 750 & $\\text{mol m}^{-3}$ & \\cite{you2009simple} \\\\\nInitial vanadium $\\text{V}^{3+}$ concentration & 
$c^\\text{in}_{\\text{V}^{3+}}$ & 750 & $\\text{mol m}^{-3}$ & \\cite{you2009simple} \\\\\n$\\text{V}^{2+}$ diffusion coefficient & $D_{\\text{V}^{2+}}$ & $2.4\\times10^{-10}$ & $\\text{m}^{2}$ $\\text{s}^{-1}$ & \\cite{ma2011three} \\\\\n$\\text{V}^{3+}$ diffusion coefficient & $D_{\\text{V}^{3+}}$ & $2.4\\times10^{-10}$ & $\\text{m}^{2}$ $\\text{s}^{-1}$ & \\cite{ma2011three} \\\\\nIonic conductivity of electrolyte & $\\kappa_\\text{e}$ & 7.8 & $\\text{S m}^{-1}$ & Estimated \\\\\n\n\\hline\n\\end{tabular}\n}\n\\label{table:sec3_3_2}\n\\end{table*}\n\nFor species conservation, the given concentration of each species is applied on the inlet, and the diffusive flux of each species is set to zero on the outlet. Besides, the no-flux condition is also applied on the remaining boundaries. The boundary conditions for species conservation are shown below:\n\\begin{alignat}{2}\n&c_{i} = c^\\text{in}_{i} &&\\ \\ \\text{on the inlet},\\\\\n&-D^\\text{eff}_{i}\\nabla c_{i}\\cdot\\mathbf{n}=0 &&\\ \\ \\text{on the remaining boundaries}.\n\\label{eq:nof}\n\\end{alignat}\nFor charge conservation, the VRFB is assumed to be operated under a galvanostatic condition. 
During the charging process, the flux conditions in the negative electrode can be described as follows: \n\\begin{alignat}{2}\n&-\\sigma^\\text{eff}_\\text{s}\\nabla \\phi_\\text{s}\\cdot \\mathbf{n} = -{\\frac{I}{A}} &&\\ \\ \\text{on the wall 1},\\\\\n&-\\kappa^\\text{eff}_\\text{e}\\nabla \\phi_\\text{e}\\cdot \\mathbf{n} = {\\frac{I}{A}} &&\\ \\ \\text{on the wall 2},\\\\\n& \\phi_\\text{s} = 0 &&\\ \\ \\text{on the interface},\n\\end{alignat}\nwhere $I$ is the applied current, and $A$ is the electrode surface area.\nNote that the remaining boundary conditions for $\\phi_\\text{s}$ and $\\phi_\\text{e}$ are no-flux conditions, as in Eq.~(\\ref{eq:nof}).\n\n\\section{Topology optimization for flow fields}\n\n\\subsection{Concepts of topology optimization for fluid problems}\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=150mm]{ky_3}\n\t\\vspace{-5mm}\n\\caption{Iteration history of the topology optimized flow field expressed as $\\rho\\ge 0.5$.}\n\t\\label{fig: sec4_1_1}\n\\end{figure*}\n\n\\begin{table*}[t]\n\\centering \n\\caption{Parameter settings of the electrochemical reaction model.}\n\t\\vspace{-1mm}\n\\scalebox{0.8}{\n\\begin{tabular}[t]{lllll}\n\\hline\nParameter & Symbol & Value & Unit & Ref.\\\\\n\\hline\nReaction rate constant & $k$ & $1.7\\times10^{-7}$ & $\\text{m s}^{-1}$ & \\cite{you2009simple} \\\\\nCathodic transfer coefficient & $\\alpha_\\text{c}$ & 0.5 & - & Assumed \\\\\nAnodic transfer coefficient & $\\alpha_\\text{a}$ & 0.5 &- & Assumed \\\\\nEquilibrium potential & $U_{0}$ & -0.255 & V & \\cite{ma2011three} \\\\\n\n\\hline\n\\end{tabular}\n}\n\\label{table:sec3_3_3}\n\\end{table*}\n\nTopology optimization aims to obtain an improved structural design in a specified domain under a given objective function and constraints. 
The main idea is to formulate a topology optimization problem as a material distribution problem, where the expression for the material distribution in a fixed design domain $D$ is defined as follows \\cite{bendsoe2003topology}: \n\\begin{eqnarray}\n\\chi(\\mathbf{x})=\n\\begin{cases}\n1 &\\text{if }\\mathbf{x}\\in \\Omega,\\cr 0 &\\text{if }\\mathbf{x}\\in D\\backslash\\Omega, \\end{cases}\n\\end{eqnarray}\nwhere $\\mathbf{x}$ is the position in $D$, and $\\Omega$ is the design domain in $D$. In this expression, $\\chi(\\mathbf{x}) = 1$ and $\\chi(\\mathbf{x}) = 0$ represent material and void at $\\mathbf{x}$, respectively. \nSince the characteristic function $\\chi$ is discontinuous, topology optimization problems typically require relaxation techniques for their numerical treatment.\nAs a popular and simple way of relaxing topology optimization problems, the density approach \\cite{bendsoe1989optimal} replaces the characteristic function with a continuous function, $0 \\le \\rho(\\mathbf{x}) \\le 1$, which is also used in this paper.\n\nTo determine which points in $D$ should be fluid or solid, following the previous research on fluid topology optimization \\cite{borrvall2003topology}, the body force in Eq.~(\\ref{eq:f}) is redefined using the fictitious body force, $\\mathbf{F}^{\\text{fic}}$, as follows:\n\\begin{align}\n &\\mathbf{F}^{\\text{fic}}= -\\alpha^{\\text{fic}}_{\\rho} \\mathbf{u} \\quad\\text{with }\\ \\alpha^{\\text{fic}}_{\\rho} = \\frac{q(1-\\rho)}{\\rho+q}\\alpha^{\\text{fic}},\n \\label{eq:f_fic}\n\\end{align}\nwhere $\\alpha^{\\text{fic}}$ is the fictitious inverse-permeability used for expressing the solid domain $D\\setminus\\Omega$ as in the previous work \\cite{borrvall2003topology}, and $q$ is a tuning parameter for controlling the convexity of $\\alpha^{\\text{fic}}_{\\rho}$. 
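The interpolation $\\alpha^{\\text{fic}}_{\\rho}$ in Eq.~(\\ref{eq:f_fic}) can be sketched in one line (an illustrative snippet, not the paper's implementation):

```python
def inverse_permeability(rho, alpha_fic, q=0.01):
    # alpha^fic_rho = q (1 - rho) / (rho + q) * alpha^fic
    return q * (1.0 - rho) / (rho + q) * alpha_fic
```

With the paper's setting $q=0.01$, the function drops steeply away from $\\rho=0$: at $\\rho=0.5$ it is already below $2\\%$ of $\\alpha^{\\text{fic}}$, which is how $q$ controls the convexity of the interpolation.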
\nIn addition, $\\rho = 1$ represents the fluid domain with $\\alpha^{\\text{fic}}_{\\rho}=0$, and $\\rho = 0$ represents the solid domain with $\\alpha^{\\text{fic}}_{\\rho}=\\alpha^{\\text{fic}}\\gg 1$. In the solid domain, the fictitious body force is large enough compared with that in the fluid domain that fluid can hardly pass through. Therefore, the fictitious body force is determined at each point in $D$ according to the sensitivity information of the objective function, so that the structural design of the flow channel can be obtained. \nNote that the fictitious inverse-permeability, $\\alpha^{\\text{fic}}$, is different from $\\alpha$ in Eq.~(\\ref{eq:br}).\nThat is, the former is used for expressing the solid domain in the fixed design domain $D$, whereas the latter is used for expressing the porous electrode, which is a non-design domain. \nIn this study, $q$ and $\\alpha^\\text{fic}$ are set to $0.01$ and $5\\alpha$, respectively.\n\n\\subsection{Description of optimization problem}\n\nIn VRFBs, the mobility of ions in the electrolyte is poor compared with that of electrons in the electrode, which means that it is more difficult for the ions to reach the electrode surface. \nWith a flow channel embedded in a VRFB, the mass transfer effect is improved, so that the concentration of reactants at the electrode surface also increases. Therefore, the improvement of the mass transfer effect can be estimated from the concentration of the reactants at the electrode surface. During the charging process, the optimization problem can be defined as a maximization problem of the average concentration of the oxidized reactant at the electrode surface in the negative electrode. 
The optimization problem is formulated as follows:\n\\begin{align}\n&\\begin{array}{ll}\n\\displaystyle\\underset{\\rho}{\\text{maximize }}\\ F = \\int_{D}c^\\text{s}_{\\text{V}^{3+}}\\text{d}\\Omega\\bigg\/\\int_{D}\\text{d}\\Omega,\\\\\n\\text{subject to }\\ 0 \\le \\rho(\\mathbf{x}) \\le 1\\quad\\text{for }\\ \\forall\\mathbf{x}\\in D.\n\\end{array}\n\\label{eq:optp}\n\\end{align}\nNote that the detailed expression of $c^\\text{s}_{\\text{V}^{3+}}$ can be found in Eq.~(\\ref{eq:c2s}).\n\n\\subsection{Numerical implementation}\n\nThe governing equations are solved using the package COMSOL Multiphysics$^{\\textregistered}$, which is based on the FEM. The optimization algorithm is constructed on the basis of mathematical programming and is briefly enumerated as follows:\n\\begin{description}\n\\item[{\\it Step 1.}]The design variables and all of the parameters shown in Tables \\ref{table:sec3_3_1}--\\ref{table:sec3_3_4} are initialized.\n\\item[{\\it Step 2.}]The objective function $F$ in (\\ref{eq:optp}) is evaluated by solving the governing equations via the FEM.\n\\item[{\\it Step 3.}] If the objective function has converged, the iteration terminates. Otherwise, the sensitivities---the gradient of the objective function with respect to the design variables---are calculated.\n\\item[{\\it Step 4.}] The design variables are redistributed in the fixed design domain $D$ using sequential linear programming (SLP), and the iteration returns to Step 2.\n\\end{description}\n\nWe utilize a partial differential equation (PDE)-based filter \\cite{kawamoto2011heaviside} to ensure the smoothness of the design variables \\cite{yaji2018topology}. \nIn addition, we use the adjoint method, which enables the derivation of the sensitivities at a cost independent of the number of design variables. 
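The loop of Steps 1--4 can be sketched as follows; this is a toy illustration in which a simple surrogate objective with an analytic gradient stands in for the FEM solve and the adjoint analysis, and a move-limited update stands in for SLP:

```python
# Toy sketch of the optimization loop (Steps 1-4). The FEM solve and adjoint
# analysis are hypothetical stand-ins; in the paper both are done in COMSOL.

def evaluate_objective(rho):          # Step 2 stand-in: "solve" and evaluate F
    return sum(r * (2.0 - r) for r in rho) / len(rho)

def adjoint_sensitivities(rho):       # Step 3 stand-in: dF/drho, here analytic
    return [(2.0 - 2.0 * r) / len(rho) for r in rho]

rho = [0.5] * 100                     # Step 1: initialize design variables
F_old, move = float("-inf"), 0.1      # move limit plays the role of SLP bounds
for it in range(200):
    F = evaluate_objective(rho)
    if abs(F - F_old) < 1e-9:         # Step 3: convergence check
        break
    dF = adjoint_sensitivities(rho)
    # Step 4: move-limited redistribution of the design variables in [0, 1]
    rho = [min(1.0, max(0.0, r + move * (1 if g > 0 else -1 if g < 0 else 0)))
           for r, g in zip(rho, dF)]
    F_old = F
```

In the actual implementation, Step 2 requires a full three-dimensional FEM solve per iteration, which is why the adjoint method, whose cost does not grow with the number of design variables, is essential.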
\nThe detailed concepts and formulation of the adjoint method can be found in the literature on structural optimization \\cite{haftka1992elements,bendsoe2003topology}.\n\n\\begin{figure*}[t]\n\t\\centering\n\\includegraphics[width=150mm]{ky_4}\n\t\\vspace{-5mm}\n\t\\caption{Comparison results for different flow fields: (a) Flow field configurations; (b) Distributions of vanadium species $c_{\\text{V}^{2+}}$ on the middle plane of the electrode; (c) Distributions of vanadium species $c_{\\text{V}^{3+}}$ on the middle plane of the electrode.}\n\t\\label{fig: sec4_2_2}\n\\end{figure*}\n\\begin{figure*}[t]\n\t\\centering\n\\includegraphics[width=150mm]{ky_5y}\n\t\\vspace{-5mm}\n\t\\caption{Values of the objective function and overpotential at various pressure drops for different flow fields.}\n\t\\label{fig: sec4_2_1}\n\\end{figure*}\n\\begin{figure*}[t]\n\t\\centering\n\\includegraphics[width=150mm]{ky_6}\n\t\\vspace{-5mm}\n\t\\caption{Power loss with different flow fields at $\\epsilon = 0.929$.}\n\t\\label{fig: sec4_3_1}\n\\end{figure*}\n\\begin{figure}[t]\n\t\\centering\n\\includegraphics[width=75mm]{ky_7}\n\t\\vspace{-5mm}\n\t\\caption{Power loss with different flow fields at $\\epsilon = 0.68$ and $I = 10$~A.}\n\t\\label{fig: sec4_3_3}\n\\end{figure}\n\n\n\n\\section{Numerical examples}\n\n\\subsection{Topology optimized flow field}\n\\begin{table}[b]\n\\centering \n\\caption{Operating parameter settings.}\n\t\\vspace{-1mm}\n\\scalebox{0.8}{\n\\begin{tabular}[t]{lllll}\n\\hline\nParameter & Symbol & Value & Unit & Ref.\\\\\n\\hline\nTemperature & $T$ & $298$ & K & \\cite{you2009simple} \\\\\nInlet pressure & $p_\\text{in}$ & $1.0\\times10^3$ & Pa & Assumed \\\\\nOutlet pressure & $p_\\text{out}$ & 0 & Pa & Assumed \\\\\nApplied current & $I$ & 4.0 & A & \\cite{xu2013numerical} \\\\\n\n\\hline\n\\end{tabular}\n}\n\\label{table:sec3_3_4}\n\\end{table}\n\\begin{table}[b]\n\\centering \n\\caption{Pressure drop (Pa) in the different flow 
fields.}\n\t\\vspace{-1mm}\n\\scalebox{0.8}{\n\\begin{tabular}[t]{lllll}\n\\hline\n & 1 mL\/s & 5 mL\/s & 10 mL\/s & 15 mL\/s\\\\\n\\hline\nParallel & 51 & 262 & 541 & 833 \\\\\nInterdigitated & 83 & 427 & 876 & 1343 \\\\\nOptimized & 105 & 531 & 1075 & 1630 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{table:sec4_3_1}\n\\end{table}\n\nFigure \\ref{fig: sec4_1_1} shows the iteration history of the topology optimized flow field, in which the electrolyte domain is expressed as the isosurface of $\\rho\\ge0.5$.\nThe analysis domain is discretized using $2.4\\times 10^5$ hexahedral elements for all variables in this study.\nIn the topology optimized design shown in Fig.~\\ref{fig: sec4_1_1}, as with the interdigitated flow field, some of the flow channels are not connected; this disconnectivity of the flow field enhances the mass transfer effect, so that the mass transfer loss can be further reduced. \n\n\\subsection{Effect of flow field design}\n\nTo compare the concentration distributions of the vanadium species in the different flow fields, Fig.~\\ref{fig: sec4_2_2} shows the geometric models of the flow fields and the distributions of $c_{\\text{V}^{2+}}$ and $c_{\\text{V}^{3+}}$ on the plane located in the middle of the electrode.\nThe geometric parameters of the electrode are listed in Table \\ref{table:sec3_3_1}. \nThe width and thickness of the flow channels are both 3~mm, and the interval between the branches of the flow field is 9~mm. \nAs shown in Fig.~\\ref{fig: sec4_2_2}, the under-rib convection of the parallel flow field is weak, especially on both sides of the electrode, since it is difficult for the parallel flow field to distribute the electrolyte to both sides of the electrode effectively. In contrast, the under-rib convection is strong in the interdigitated and topology optimized flow fields. 
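Since the electrolyte flow is modeled as Stokes/Brinkman flow, the pressure drop should scale almost linearly with the flow rate for each design; as an illustrative consistency check (not an analysis from the paper), the values of Table \\ref{table:sec4_3_1} can be verified against this expectation:

```python
# Pressure drops (Pa) from the table above at flow rates of 1, 5, 10, 15 mL/s.
flow_rates = [1.0, 5.0, 10.0, 15.0]
dP = {
    "parallel":       [51.0, 262.0, 541.0, 833.0],
    "interdigitated": [83.0, 427.0, 876.0, 1343.0],
    "optimized":      [105.0, 531.0, 1075.0, 1630.0],
}

# In the Stokes/Darcy regime, dP/Q should be roughly constant for each design.
for name, drops in dP.items():
    ratios = [p / q for p, q in zip(drops, flow_rates)]
    assert max(ratios) / min(ratios) < 1.1   # within ~10% of linear scaling
```

The mild deviation from linearity (about 9\\% for the parallel design between 1 and 15 mL\/s) is consistent with the flow remaining close to the Stokes regime over the tested range.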
\n\nFigure \\ref{fig: sec4_2_1} shows the value of the objective function in the optimization problem (\\ref{eq:optp}) and the overpotential; the topology optimized and interdigitated flow fields indicate higher performance than the parallel flow field at a constant pressure drop. \nSince the under-rib convection is weak in the parallel flow field, the value of the objective function with the parallel flow field is the lowest. \nMoreover, for all the flow fields, the electrode surface concentration decreases as the pressure drop decreases.\nIn other words, it is more difficult for the species to reach the electrode surface at a low flow rate, which means that the mass transfer effect is dominated by the flow channel.\nAccordingly, the overpotential with the topology optimized flow field is the lowest among the flow fields owing to its strongest mass transfer effect.\nHowever, it should be noted that the overall performances of the optimized flow field and the interdigitated flow field are almost the same in this numerical example.\n\n\\subsection{Power loss}\n\nIn a VRFB system, the polarization loss is caused by the extra voltage needed to drive the electrochemical reactions. 
\nIn addition to the polarization loss, the pumping power should also be taken into account in the evaluation of a VRFB system.\nWe therefore introduce the following evaluation index:\n\\begin{align}\nP_{\\text{loss}} = I\\eta + Q\\Delta P,\n\\label{eq:loss}\n\\end{align}\nwhere $P_{\\text{loss}}$ is the sum of the polarization loss ($I\\eta$) and the pumping power loss ($Q\\Delta P$), $Q$ is the flow rate, and $\\Delta P$ is the pressure drop.\nNote that, for brevity, the evaluation index in Eq.~(\\ref{eq:loss}) is defined as a simple expression using the dominant factors, whereas the performance of VRFB systems is related to various other factors, as reported in previous works \\cite{blanc2010understanding,xu2013numerical}.\n\nFigure \\ref{fig: sec4_3_1} shows the power loss at various flow rates. At the low flow rate of 1 mL$\/$s, the power loss with the topology optimized flow field is the lowest at a current of 4~A or 10~A. Although the pressure drop in the topology optimized flow field, shown in Table \\ref{table:sec4_3_1}, is the highest, the pumping power is negligible compared with the polarization loss, which results in the lowest power loss with the topology optimized flow field. At the high flow rate of 15~mL$\/$s, the parallel flow field shows the lowest power loss at a current of 4~A. However, the power losses of these flow fields are almost the same at a current of 10~A. At a high applied current, the electrochemical reactions at the electrode surface become strong and the polarization loss increases. Therefore, with the strongest mass transfer effect provided by the topology optimized flow field, the difference in the total power loss between these flow fields decreases as the applied current increases at high flow rates, even though the pumping power increases with the flow rate. 
In contrast, at the flow rate of 1~mL$\/$s, the differences in the power loss between the flow fields increase as the applied current increases, because the power loss mainly comes from the polarization loss at a current of 4~A, and the polarization loss becomes even more dominant as the current increases from 4~A to 10~A. The results demonstrate that the topology optimized flow field is more useful for VRFBs at a high applied current.\n\nFigure \\ref{fig: sec4_3_3} shows the comparison results of the power loss under a different operating condition, $\\epsilon=0.68$ and $I=10$~A. For all the flow fields, the power loss increases in comparison with the case of $\\epsilon=0.929$ in Fig.~\\ref{fig: sec4_3_1}. \nAt the flow rate of 1~mL$\/$s, with the weaker mass transfer effect, the power losses of the interdigitated flow field and the topology optimized flow field increase from 0.413 to 0.505 and from 0.405 to 0.470, respectively, as the porosity decreases from 0.929 to 0.68. \nThe differences between the flow fields become larger at low porosity. At the flow rate of 15~mL$\/$s, since the pressure drop in the topology optimized flow field is larger than that of the interdigitated flow field and the polarization loss falls only slowly as the flow rate increases from 1 mL$\/$s to 15 mL$\/$s, the total power loss with the interdigitated flow field is less than that with the topology optimized flow field. For low porosity, the polarization loss and the pumping power are more dominant at low and high flow rates, respectively. \n\n\\section{Conclusion}\nIn this paper, we proposed a topology optimization method for the flow field in a VRFB based on a three-dimensional numerical model. 
\nWe demonstrated that the optimization problem can be formulated as a maximization problem of the electrode surface concentration of the oxidized reactant during the charging process.\nBased on the proposed formulation, we derived a novel flow field configuration and evaluated its performance in comparison with reference flow fields---parallel and interdigitated flow fields.\nAs a result, we verified that a VRFB with the topology optimized flow field achieves almost the same performance as the interdigitated flow field, which has a stronger mass transfer effect than the parallel flow field.\n\nWe further investigated the power loss with the parallel, interdigitated and topology optimized flow fields under different operating conditions.\nThe results demonstrated that the topology optimized flow field is more suitable for a VRFB at a high applied current density.\nWe confirmed that the topology optimized flow field has the potential to be an alternative design candidate when dealing with low-porosity electrodes, as the difference in power loss between the flow field configurations is more pronounced than in the case of a high-porosity electrode. \n\n\\section*{Acknowledgments}\nThis work is partially supported by a research grant from The Mazda Foundation.\n\n\\section{Introduction}\nIt is well known that Lie symmetry theory plays a significant role in the analysis of differential equations [1-5]. The basic idea of this method is that an infinitesimal transformation leaves the solution manifold of the considered differential equation invariant. 
This efficient method, invented by Sophus Lie, is a highly algorithmic process, and it often involves lengthy symbolic computation. The method systematically unifies and extends well-known techniques to construct explicit solutions for differential equations, especially for nonlinear differential equations. In recent years this method has been successfully extended to discrete systems exhibiting solitons, governed by nonlinear partial differential-difference equations and pure difference equations [6-8].\n\nRecently the study of fractional differential equations (FDEs), as generalizations of classical integer order differential equations, has attracted much attention due to their accurate description of nonlinear phenomena in fluid mechanics, viscoelasticity, biology, physics, engineering and other areas of science [9-12]. However, unlike the classical integer order derivative, there exist a number of different definitions of fractional order derivatives and corresponding FDEs. As a result, FDEs of similar form may have significantly different properties, and no single well-defined method is available to analyze them systematically. As a consequence, several different analytical methods such as the differential transform method [13], the Adomian decomposition method [14,15], the invariant subspace method [16], the Green function approach [17] and symmetry analysis [18-26] have been formulated to reduce and solve FDEs.\n\nIn this article, we consider the following time-fractional convection-diffusion equation\n\\begin{equation} \\label{eq:1}\n\\frac{\\partial^\\alpha {u}}{\\partial{t}^\\alpha}=(D(u)u_{x})_{x}+P(u)u_{x},\\quad 0<\\alpha<2,\n\\end{equation}\nwhere $\\frac{\\partial^\\alpha{u}}{\\partial{t}^\\alpha}$ is the Riemann-Liouville fractional derivative of order $\\alpha$ with respect to the variable $t$. 
The definition of this derivative is\n\\begin{align}\\label{eq:2}\n\\frac{\\partial^\\alpha {u(t,x)}}{\\partial{t}^\\alpha}=\\left\\{\n\\begin{array}{lll}\n \\frac{\\partial^n {u}}{\\partial{t}^n} & , & \\alpha=n\\in\\mathbb{N}, \\\\\n \\frac{1}{\\Gamma(n-\\alpha)} \\frac{\\partial^n}{\\partial{t}^n} \\int^{t}_{0}(t-s)^{n-\\alpha-1}u(s,x)ds & , & 0\\leq n-1<\\alpha<n.\n\\end{array}\\right.\n\\end{align}\n\nSuppose that Eq.(1) is invariant under the one-parameter Lie group of point transformations\n\\begin{equation}\nt^*=t+\\epsilon\\tau(t,x,u)+O(\\epsilon^2),\\quad x^*=x+\\epsilon\\xi(t,x,u)+O(\\epsilon^2),\\quad u^*=u+\\epsilon\\eta(t,x,u)+O(\\epsilon^2),\n\\end{equation}\nwith the infinitesimal generator\n\\begin{equation}\nX=\\tau(t,x,u)\\frac{\\partial}{\\partial{t}}+\\xi(t,x,u)\\frac{\\partial}{\\partial{x}}+\\eta(t,x,u)\\frac{\\partial}{\\partial{u}}.\n\\end{equation}\nThe $\\alpha$-th extended infinitesimal associated with the Riemann-Liouville fractional derivative reads [18-20]\n\\begin{equation}\n\\eta^{t}_{\\alpha}=D^{\\alpha}_{t}(\\eta)+\\xi D^{\\alpha}_{t}(u_{x})-D^{\\alpha}_{t}(\\xi u_{x})+D^{\\alpha}_{t}(u D_{t}(\\tau))-D^{\\alpha+1}_{t}(\\tau u)+\\tau D^{\\alpha+1}_{t}(u),\n\\end{equation}\nwhere $D_{t}$ denotes the total derivative with respect to $t$. Using the generalized Leibniz rule\n\\begin{equation*}\nD^{\\alpha}_{t}(f(t)g(t))=\\sum^{\\infty}_{n=0}\\binom{\\alpha}{n}(D^{\\alpha-n}_{t}f(t))(D^{n}_{t}g(t)),\\quad \\alpha>0,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\binom{\\alpha}{n}=\\frac{(-1)^{n-1}\\alpha\\Gamma(n-\\alpha)}{\\Gamma(1-\\alpha)\\Gamma(n+1)},\n\\end{equation*}\nthe above Eq.(5) can be written as\n\\begin{equation*}\n\\eta^{t}_{\\alpha}=D^{\\alpha}_{t}\\eta-\\alpha(D_t\\tau)\\frac{\\partial^{\\alpha}u}{\\partial t^{\\alpha}}-\\sum^{\\infty}_{n=1}\\binom{\\alpha}{n}(D^{n}_{t}\\xi)(D^{\\alpha-n}_{t}u_{x})-\\sum^{\\infty}_{n=1}\\binom{\\alpha}{n+1}(D^{n+1}_{t}\\tau)(D^{\\alpha-n}_{t}u).\n\\end{equation*}\nFurthermore, using the generalized chain rule for a compound function [28]\n\\begin{equation*}\n\\frac{d^{\\alpha}u(v(t))}{dt^{\\alpha}}=\\sum^{\\infty}_{n=0}\\sum^{n}_{k=0}\\binom{n}{k}\\frac{(-v(t))^{k}}{n!}\\frac{\\partial^{\\alpha}(v^{n-k}(t))}{\\partial{t}^{\\alpha}}\\frac{d^{n}u(v(t))}{dv^{n}}\n\\end{equation*}\nalong with the above generalized Leibniz rule with $f(t)=1$,\nthe first term $D^{\\alpha}_{t}\\eta$ in $\\eta^{t}_{\\alpha}$ can be written 
as\n\\begin{equation*}\nD^{\\alpha}_{t}\\eta=\\frac{\\partial^{\\alpha}\\eta}{\\partial{t}^{\\alpha}}+\\eta_{u}\\frac{\\partial^{\\alpha}u}{\\partial{t}^{\\alpha}}-u\\frac{\\partial^{\\alpha}\\eta_{u}}{\\partial{t}^{\\alpha}}+\\sum^{\\infty}_{n=1}\\binom{\\alpha}{n}\\frac{\\partial^{n}\\eta_{u}}{\\partial{t}^{n}}D^{\\alpha-n}_{t}(u)+\\mu,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\mu=\\sum^{\\infty}_{n=2}\\sum^{n}_{m=2}\\sum^{m}_{k=2}\\sum^{k-1}_{r=0}\\binom{\\alpha}{n}\\binom{n}{m}\\binom{k}{r}\\frac{t^{n-\\alpha}}{\\Gamma(n+1-\\alpha)}\\frac{(-u)^{r}}{k!}\\frac{\\partial^{m}u^{k-r}}{\\partial{t}^{m}}\\frac{\\partial^{n-m+k}\\eta}{\\partial{t}^{n-m+k}\\partial{u}^{k}}.\n\\end{equation*}\nTherefore\n\\begin{align*}\n\\eta^{t}_{\\alpha}=&\\frac{\\partial^{\\alpha} \\eta}{\\partial{t}^{\\alpha}}+(\\eta_{u}-\\alpha\nD_{t}\\tau)\\frac{\\partial^{\\alpha}u}{\\partial{t}^{\\alpha}}-u\\frac{\\partial^\\alpha{\\eta_{u}}}{\\partial{t}^{\\alpha}}+\\mu+\n\\sum^{\\infty}_{n=1}\\big[\\binom{\\alpha}{n}\\frac{\\partial^{n}{\\eta_{u}}}{\\partial{t}^{n}}\n-\\binom{\\alpha}{n+1}D^{n+1}_{t}(\\tau)\\big]D^{\\alpha-n}_t u\\\\\n&-\\sum^{\\infty}_{n=1}\\binom{\\alpha}{n}(D^{n}_{t}\\xi)(D_t^{\\alpha-n}u_x).\n\\end{align*}\n\\par\nFor the invariance of Eq.(1) under transformations(3), we have\n\\begin{equation}\n\\frac{\\partial^\\alpha {u^*}}{\\partial{t^*}^\\alpha}=(D(u^*)u^*_{x^*})_{x^*}+P(u^*)u^*_{x^*},0<\\alpha<2\n\\end{equation}\nfor any solution $u=u(t,x)$ of Eq.(1). 
Expanding Eq.(6) about $\\epsilon=0$, making use of the infinitesimals and their extensions, equating the coefficients of $\\epsilon$, and neglecting terms of higher powers of $\\epsilon$, we obtain the following invariant equation of Eq.(1)\n\\begin{align}\n[\\eta^{t}_{\\alpha}-(P'(u)u_{x}+D''(u)(u_{x})^{2}+D'(u)u_{xx})\\eta\n-(P(u)+2u_{x}D'(u))\\eta^{x}-D(u)\\eta^{xx}]|_{Eq.(1)}=0.\n\\end{align}\nHere we assume that $D(u)$ and $P(u)$ are not equal to zero; otherwise Eq.(1) would be another equation that has been considered in [20]. Substituting the expressions for $\\eta^{t}_{\\alpha}$, $\\eta^{x}$ and $\\eta^{xx}$ into the above equation and equating various powers of derivatives of $u$ to zero, we obtain an overdetermined system of linear equations:\n\\begin{align}\n&\\xi_{t}=\\xi_{u}=\\tau_{x}=\\tau_{u}=\\eta_{uu}=0,\\nonumber\\\\\n&P(u)(\\xi_{x}-\\alpha\\tau_{t})-P'(u)\\eta-2D'(u)\\eta_{x}-D(u)(2\\eta_{xu}-\\xi_{xx})=0,\\nonumber\\\\\n&D''(u)\\eta+D'(u)(\\eta_{u}-2\\xi_{x}+\\alpha\\tau_{t})=0,\\nonumber\\\\\n&D(u)(2\\xi_{x}-\\alpha\\tau_{t})-D'(u)\\eta=0,\\\\\n&\\frac{\\partial^{\\alpha}{\\eta}}{\\partial{t}^{\\alpha}}-u\\frac{\\partial^{\\alpha}{\\eta_{u}}}{\\partial{t}^{\\alpha}}-P(u)\\eta_{x}-D(u)\\eta_{xx}=0,\\nonumber\\\\\n&\\binom{\\alpha}{n}\\frac{\\partial^{n}{\\eta_{u}}}{\\partial{t}^{n}}-\\binom{\\alpha}{n+1}D^{n+1}_{t}(\\tau)=0,\\quad n=1,2,\\cdots\\nonumber\n\\end{align}\n\nIn order to solve the above system, we consider different conditions on $D(u)$ and $P(u)$ and obtain the corresponding Lie symmetries of Eq.(1). If $\\alpha=1$, Eq.(1) becomes a partial differential equation which has been considered by Oron and Rosenau [29] and by Edwards [30]. Therefore we take $\\alpha\\in (0,2)$ and $\\alpha\\neq1$ in this paper.\\\\\n\\textbf{Case 1 }$D(u)$ and $P(u)$ arbitrary\\\\\nSolving the determining equations (8), we obtain the explicit form of the infinitesimals\n$$\\xi=a_1, \\tau=0, \\eta=0,$$\nwhere $a_1$ is an arbitrary constant. 
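Before listing the generators case by case, we record a standard identity of the Riemann-Liouville calculus that is used repeatedly in the reductions below (a routine computation from definition (2); this aside is ours):

```latex
% Riemann-Liouville derivative of a power function (valid for \beta > -1):
\begin{equation*}
\frac{\partial^{\alpha}t^{\beta}}{\partial{t}^{\alpha}}
=\frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}\,t^{\beta-\alpha}.
\end{equation*}
% For \beta = \alpha-1 (and \beta = \alpha-2 when 1 < \alpha < 2) the Gamma
% function in the denominator has a pole, so the derivative vanishes:
\begin{equation*}
\frac{\partial^{\alpha}t^{\alpha-1}}{\partial{t}^{\alpha}}
=\frac{\Gamma(\alpha)}{\Gamma(0)}\,t^{-1}=0,
\qquad
\frac{\partial^{\alpha}t^{\alpha-2}}{\partial{t}^{\alpha}}
=\frac{\Gamma(\alpha-1)}{\Gamma(-1)}\,t^{-2}=0.
\end{equation*}
```

In particular, $t^{\alpha-1}$ and, for $1<\alpha<2$, $t^{\alpha-2}$ are annihilated by $\partial^{\alpha}_{t}$; this is the source of the invariant solutions obtained below.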
Hence the infinitesimal generator is\n\\begin{equation}\\label{eq:9}\nX_{1}=\\frac{\\partial}{\\partial{x}}.\n\\end{equation}\n\\textbf{Case 2 }$D(u)=u^{k}(k\\neq0)$, $P(u)=\\beta(\\beta=\\pm1)$\\\\\nSimilarly, under this assumption the following infinitesimals are obtained,\n$$\\xi=a_1+a_2x, \\tau=a_2\\frac{t}{\\alpha}, \\eta=a_2\\frac{u}{k},$$\nwhere $a_1$ and $a_2$ are arbitrary constants. Hence in this case Eq.(1) admits a two-parameter group with infinitesimal generators\n\n\\begin{equation}\\label{eq:10}\nX_{1}=\\frac{\\partial}{\\partial{x}},\\quad X_{2}=x\\frac{\\partial}{\\partial{x}}+\\frac{t}{\\alpha}\\frac{\\partial}{\\partial{t}}+\\frac{u}{k}\\frac{\\partial}{\\partial{u}},\n\\end{equation}\nwhich form a basis of the 2-dimensional Lie algebra admitted by Eq.(1). \n\nThe other cases are listed in Table 1.\n\n\n\\begin{table}[h]\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|}\n \\hline\n No. & $D(u)$ &$P(u)$ & $\\xi$ & $\\tau$ & $\\eta$ \\\\\n \\hline\n 3 & $u^{k}(k\\neq0,-2,\\frac{2\\alpha}{1-\\alpha})$ & $\\beta u^{k}(\\beta=\\pm1)$& $a_1$ & $a_2t$& $-a_2\\frac{\\alpha u}{k}$ \\\\\\hline\n 4 & $u^{k}(k\\neq0)$ & $\\beta u^{\\gamma}(\\beta=\\pm1,\\gamma\\neq k)$ & $a_1+a_2x$& $\\frac{2\\gamma-k}{\\alpha(\\gamma-k)}a_2t$ & $-\\frac{a_2u}{\\gamma-k}$ \\\\\\hline\n 5& $u^{-2}$ & $\\beta u^{-2}(\\beta=\\pm1)$& $a_1+a_3e^{-\\beta x}$&$a_2t$&$\\frac{a_2}{2}\\alpha u+a_3\\beta u e^{-\\beta x}$\\\\\\hline\n 6& $u^{\\frac{2\\alpha}{1-\\alpha}}$ & $ \\beta u^{\\frac{2\\alpha}{1-\\alpha}}(\\beta=\\pm1)$&$a_1$&$a_2t+a_3t^2$&$\\frac{a_2(\\alpha-1)}{2}u+a_3(\\alpha-1)tu$ \\\\\\hline\n\n 7& $1$ & $\\beta(\\beta=\\pm1)$& $a_1$&$0$ &\\tabincell{l}{$a_2u+h(t,x),$ \\\\ where $h(t,x)$ satisfies \\\\ $\\frac{\\partial^{\\alpha}{h(t,x)}}{\\partial{t}^\\alpha}=\\beta h_{x}+h_{xx}$}\\\\\\hline\n 8 & $1$ & $\\beta u^{\\gamma}(\\beta=\\pm1,\\gamma\\neq0)$ & $a_1+a_2x$ & $\\frac{2a_2}{\\alpha}t$ & 
$-\\frac{a_2}{\\gamma}u$\\\\\n \\hline\n \\end{tabular}\n \\caption{Infinitesimals of Eq.(1)}\n\n\\end{table}\n\n In the above table, $a_1,a_2,a_3$ are three arbitrary parameters. Like case 1 and case 2, the infinitesimal generators of Eq.(1) in different cases can easily obtained. Next we will use infinitesimal generators to deduce the similarity reductions and construct invariant solutions of Eq.(1). The definition of group invariant solution of FDEs has given in [22]. Here we use it directly.\n \\section{Similarity reductions of Eq.(1)}\n\\textbf{Case 1.}$D(u)$ arbitrary,$P(u)$ arbitrary\\\\\nThe infinitesimal generator is $X_{1}=\\frac{\\partial}{\\partial{x}}$. The characteristic equations become\n\\begin{equation*}\n\\frac{dx}{1}=\\frac{dt}{0}=\\frac{du}{0},\n\\end{equation*}\nwhich have two invariants $t,u.$\nThus, the similarity transformation is\n\\begin{equation}\\label{eq:18}\nu=\\varphi(t).\n\\end{equation}\nSubstitution of (11) into Eq.(1) leads to $\\varphi(t)$ satisfying the reduced fractional ordinary differential equation\n\\begin{equation}\\label{eq:19}\n\\frac{d^\\alpha {\\varphi(t)}}{d{t}^\\alpha}=0.\n\\end{equation}\nHence,the group invariant solutions of Eq.(1) are given by\n\\begin{align*}\nu=\\left\\{\n\\begin{array}{lll}\n c_{1}t^{\\alpha-1} & , & 0<\\alpha<1, \\\\\n c_{1}t^{\\alpha-1}+ c_{2}t^{\\alpha-2} & , &1<\\alpha<2.\n\\end{array}\\right.\n\\end{align*}\nwhere $c_{1}$,$c_{2}$ are arbitrary constants.\n\n\\textbf{Case 2.}$D(u)=u^{k}(k\\neq0,-2,\\frac{2\\alpha}{1-\\alpha})$,$P(u)=\\beta(\\beta=\\pm1)$ \\\\\nThe characteristic equation corresponding to $X_{2}=x\\frac{\\partial}{\\partial{x}}+\\frac{t}{\\alpha}\\frac{\\partial}{\\partial{t}}+\\frac{u}{k}\\frac{\\partial}{\\partial{u}}$ is\n\\begin{equation*}\n\\frac{dx}{x}=\\frac{\\alpha dt}{t}=\\frac{kdu}{u}.\n\\end{equation*}\nSolving the above equation, we get the similarity transformation\n\\begin{equation}\nu=t^{\\frac{\\alpha}{k}}\\varphi(\\zeta), 
\\zeta=xt^{-\\alpha}.\n\\end{equation}\nSubstituting transformation(13) into Eq.(1) leads to\n\\begin{equation}\\label{eq:22}\n\\frac{\\partial^\\alpha(t^{\\frac{\\alpha}{k}}\\varphi(\\zeta))}{\\partial{t}^{\\alpha}}=t^{\\frac{\\alpha}{k}-\\alpha}(\\varphi^{k}\\frac{d^2\\varphi}{d\\zeta^2}+ k\\varphi^{k-1}(\\frac{d\\varphi}{d\\zeta})^{2}+\\beta\\frac{d\\varphi}{d\\zeta}).\n\\end{equation}\nBecause $\\alpha\\in(0,2)$ and $\\alpha\\neq1$, according to the definition of the Riemann-Liouville fractional derivative, we should consider $0<\\alpha<1$ and $1<\\alpha<2$ separately.\n\nWhen $0<\\alpha<1$, for the similarity transformation(13) becomes\n\\begin{equation}\\label{eq:23}\n\\frac{\\partial^\\alpha(t^{\\frac{\\alpha}{k}}\\varphi(\\zeta))}{\\partial{t}^{\\alpha}}=\\frac{1}{\\Gamma(1-\\alpha)}\\frac{\\partial}{\\partial{t}}\\int^{t}_{0}(t-s)^{-\\alpha}s^{\\frac{\\alpha}{k}}\\varphi(xs^{-\\alpha})ds.\n\\end{equation}\nLet $\\theta=\\frac{t}{s}$, then Eq.(15) can be written as\n\\begin{equation*}\n\\begin{aligned}\n\\frac{\\partial^\\alpha(t^{\\frac{\\alpha}{k}}\\varphi(\\zeta))}{\\partial{t}^{\\alpha}}&=\\frac{1}{\\Gamma(1-\\alpha)}\\frac{\\partial}{\\partial{t}}\\int^{\\infty}_{1}(t-\\frac{t}{\\theta})^{-\\alpha}(\\frac{t}{\\theta})^{\\frac{\\alpha}{k}}\\varphi(\\zeta 
\\theta^{\\alpha})\\frac{t}{\\theta^{2}}d\\theta\\\\\n&=\\frac{\\partial}{\\partial{t}}[t^{\\frac{\\alpha}{k}-\\alpha+1}\\frac{1}{\\Gamma(1-\\alpha)}\\int^{\\infty}_{1}(\\theta-1)^{-\\alpha}\\theta^{\\alpha-\\frac{\\alpha}{k}-2}\\varphi(\\zeta\\theta^{\\alpha})d\\theta]\\\\\n&=\\frac{\\partial}{\\partial{t}}[t^{\\frac{\\alpha}{k}-\\alpha+1}(K^{1+\\frac{\\alpha}{k},1-\\alpha}_{\\frac{1}{\\alpha}}\\varphi)(\\zeta)]\\\\\n&=t^{\\frac{\\alpha}{k}-\\alpha}(1+\\frac{\\alpha}{k}-\\alpha-\\alpha\\zeta\\frac{d}{d\\zeta})[(K^{1+\\frac{\\alpha}{k},1-\\alpha}_{\\frac{1}{\\alpha}}\\varphi)(\\zeta)]\\\\\n&=t^{\\frac{\\alpha}{k}-\\alpha}[(P^{1+\\frac{\\alpha}{k}-\\alpha,\\alpha}_{\\frac{1}{\\alpha}}(\\varphi))(\\zeta)],\n\\end{aligned}\n\\end{equation*}\nwhere $P^{\\tau,\\alpha}_{\\beta}$ is Erdelyi-Kober fractional derivative operator and its definition is\n\\begin{align*}\n(P^{\\tau,\\alpha}_{\\beta}\\varphi)(\\zeta)=\\prod^{n-1}_{j=0}(\\tau+j-\\frac{1}{\\beta}\\zeta\\frac{d}{d\\zeta})[K^{\\tau+\\alpha,n-\\alpha}_{\\beta}(\\varphi)(\\zeta)],\n\\zeta>0, \\beta>0, \\alpha>0,\nn=\\left\\{\n\\begin{array}{lll}\n[\\alpha]+1&,&\\alpha\\notin N,\\\\\n\\alpha&,&\\alpha\\in N.\n\\end{array}\\right.\n\\end{align*}\nhere\n\\begin{align*}\n(K^{\\tau,\\alpha}_{\\beta}\\varphi)(\\zeta)=\\left\\{\n\\begin{array}{lll}\n\\frac{1}{\\Gamma(\\alpha)}\\int^{\\infty}_{1}(\\theta-1)^{\\alpha-1}\\theta^{-(\\tau+\\alpha)}\\varphi(\\zeta\\theta^{\\frac{1}{\\beta}})d\\theta&,&\\alpha>0,\\\\\n\\varphi(\\zeta)&,&\\alpha=0.\n\\end{array}\\right.\n\\end{align*}\nWhen $1<\\alpha<2$, we can also obtain $$\\frac{\\partial^\\alpha(t^{\\frac{\\alpha}{k}}\\varphi(\\zeta))}{\\partial{t}^{\\alpha}}=t^{\\frac{\\alpha}{k}-\\alpha}[(P^{1+\\frac{\\alpha}{k}-\\alpha,\\alpha}_{\\frac{1}{\\alpha}}(\\varphi))(\\zeta)],$$\nby using the same method.\nThen Eq.(1) can be reduced into an ordinary differential equation of fractional 
order\n\\begin{equation}\\label{eq:24}\n(P^{1+\\frac{\\alpha}{k}-\\alpha,\\alpha}_{\\frac{1}{\\alpha}}\\varphi)(\\zeta)=\\varphi^{k}\\frac{d^{2}\\varphi}{d\\zeta^{2}}+ k\\varphi^{k-1}(\\frac{d\\varphi}{d\\zeta})^{2}+\\beta\\frac{d\\varphi}{d\\zeta}.\n\\end{equation}\nAs for other cases, Eq.(1) can also be reduced by the similarity transformations corresponding to other infinitesimal generators. The results are as follows.\n\n\\textbf{Case 3.} $D(u)=u^{k}(k\\neq0)$,$P(u)=\\beta u^{k}(\\beta=\\pm1)$\\\\\nThe similarity transformation $u=t^{-\\frac{\\alpha}{k}}\\psi(\\zeta)$ along with the similarity variable $\\zeta=x$ reduces Eq.(1) to the nonlinear ordinary differential equation of the form\n\n\\begin{equation}\\label{eq:29}\n\\frac{d^2\\psi}{d \\zeta^2}+ k\\psi^{-1}(\\frac{d\\psi}{d\\zeta})^2+\\beta\\frac{d\\psi}{d\\zeta}-\\frac{\\Gamma(1-\\frac{\\alpha}{k})}{\\Gamma(1-\\frac{\\alpha}{k}-\\alpha)}\\psi^{1-k}(\\zeta)=0,\n\\end{equation}\nwhich is corresponding to the infinitesimal generator $t\\frac{\\partial}{\\partial t}-\\frac{\\alpha u}{k}\\frac{\\partial}{\\partial u}.$\\\\\n\\textbf{Case 4.}$D(u)=u^{k}(k\\neq0)$,$P(u)=\\beta u^{\\gamma}(\\beta=\\pm1,\\gamma\\neq k)$\n\nThe similarity transformation $u=x^{\\frac{1}{k-\\gamma}}H(\\omega)$ along with the similarity variable $\\omega=tx^{-b},b=\\frac{2\\gamma-k}{\\alpha(\\gamma-k)}$ reduces Eq.(1) to the nonlinear ordinary differential equation of fractional order of the form\n\\begin{equation}\n\\begin{array}{r@{~}l}\n&(P^{1-\\alpha,\\alpha}_{-1}H)(\\omega)=\\omega^{\\alpha}[\\frac{(\\gamma+1)}{(k-\\gamma)^{2}}H^{k+1}-b\\omega(\\frac{k+\\gamma+2}{k-\\gamma}-b)H^{k}\\frac{dH}{d\\omega}\\\\\n&+b^{2}\\omega^{2}(kH^{k-1}(\\frac{dH}{d\\omega})^{2}+H^{k}\\frac{d^2H}{d\\omega^2})+\\frac{\\beta}{k-\\gamma}H^{\\gamma+1}-b\\beta\\omega H^{\\gamma}\\frac{dH}{d\\omega}],\n\\end{array}\n\\end{equation}\nwhich is corresponding to the infinitesimal 
generator\n$x\\frac{\\partial}{\\partial{x}}+\\frac{2\\gamma-k}{\\alpha(\\gamma-k)}t\\frac{\\partial}{\\partial{t}}+\\frac{u}{k-\\gamma}\\frac{\\partial}{\\partial{u}}$.\\\\\n\\textbf{Case 5.} $D(u)= u^{-2}$,$P(u)=\\beta u^{-2}(\\beta=\\pm1)$\n\nThe similarity transformation $u=e^{\\beta x}G(\\zeta)$ along with the similarity variable $\\zeta=t$ reduces Eq.(1) to the nonlinear ordinary differential equation of fractional order of the form\n\\begin{equation}\\label{eq:30}\n\\frac{d^\\alpha G(\\zeta)}{d{\\zeta}^{\\alpha}}=0,\n\\end{equation}\nwhich is corresponding to the infinitesimal generator $e^{-\\beta x}(\\frac{\\partial}{\\partial{x}}+\\beta u\\frac{\\partial}{\\partial{u}})$.\n\nBecause the solutions of Eq.(19) is\n\\begin{align*}\nG(\\zeta)=\\left\\{\n\\begin{array}{lll}\n c_{3}\\zeta^{\\alpha-1} & , & 0<\\alpha<1, \\\\\n c_{3}\\zeta^{\\alpha-1}+ c_{4}\\zeta^{\\alpha-2} & , &1<\\alpha<2.\n\\end{array}\\right.\n\\end{align*}\nwhere $c_{3}$,$c_{4}$ are arbitrary constants, the group invariant solution of Eq.(1) is\n\\begin{align*}\nu=\\left\\{\n\\begin{array}{lll}\n c_{3}t^{\\alpha-1}e^{\\beta x} & , & 0<\\alpha<1, \\\\\n (c_{3}t^{\\alpha-1}+ c_{4}t^{\\alpha-2})e^{\\beta x} & , &1<\\alpha<2.\n\\end{array}\\right.\n\\end{align*}\\\\\n\\textbf{Case 6.}$D(u)= u^{\\frac{2\\alpha}{1-\\alpha}}$,$P(u)=\\beta u^{\\frac{2\\alpha}{1-\\alpha}}(\\beta=\\pm1)$\n\nThe similarity transformation $u=t^{\\alpha-1}F(\\zeta)$ along with the similarity variable $\\zeta=x$ reduces Eq.(1) to the nonlinear ordinary differential equation of the form\n\\begin{equation}\\label{eq:32}\n\\frac{d^2F}{d\\zeta^2}+\\frac{2\\alpha}{1-\\alpha}F^{-1}(\\frac{dF}{d\\zeta})^{2}+\\beta \\frac{dF}{d\\zeta}=0.\n\\end{equation}\nwhich is corresponding to the infinitesimal generator\n$t^{2}\\frac{\\partial}{\\partial{t}}+(\\alpha-1)tu\\frac{\\partial}{\\partial{u}}$.\\\\\n\\textbf{Case 7.}$D(u)=1$ and $P(u)=\\beta(\\beta=\\pm1)$\n\nThe similarity transformation $u=e^{x}Q(\\zeta)$ along with the 
similarity variable $\\zeta=t$ reduces Eq.(1) to the nonlinear ordinary differential equation of fractional order of the form\n\\begin{equation}\\label{eq:35}\n\\frac{d^\\alpha Q(\\zeta)}{d{\\zeta}^\\alpha}=(1+\\beta)Q(\\zeta),\n\\end{equation}\nwhich is corresponding to the infinitesimal generator\n$\\frac{\\partial}{\\partial{x}}+u\\frac{\\partial}{\\partial{u}}.$\\\\\nBecause the solution of Eq.(21) is $$Q(\\zeta)=\\zeta^{\\alpha-1}E_{\\alpha,\\alpha}[(1+\\beta)\\zeta^{\\alpha}],$$the group invariant solution of Eq.(1) is $$u=t^{\\alpha-1}e^{x}E_{\\alpha,\\alpha}[(1+\\beta)t^{\\alpha}],$$\\\\\nwhere $E_{\\alpha,\\beta}(z)$ is the Mittag-Leffler function[11].\\\\\n\\textbf{Case 8.}$D(u)=1$ and $P(u)=\\beta u^{\\gamma}(\\beta=\\pm1,\\gamma\\neq0)$\n\nThe similarity transformation $u=t^{-\\frac{\\alpha}{2\\gamma}}\\phi(\\sigma)$ along with the similarity variable $\\sigma=xt^{-\\frac{\\alpha}{2}}$ reduces Eq.(1) to the nonlinear ordinary differential equation of fractional order of the form\n\\begin{equation}\\label{eq:36}\n(P^{1-\\frac{\\alpha}{2\\gamma}-\\alpha,\\alpha}_{\\frac{2}{\\alpha}}\\phi)(\\sigma)=\\frac{d^{2}\\phi}{d\\sigma^{2}}+\\beta\\phi^{\\gamma}\\frac{d\\phi}{d\\sigma},\n\\end{equation}\nwhich is corresponding to the infinitesimal generator\n$x\\frac{\\partial}{\\partial{x}}+\\frac{2t}{\\alpha}\\frac{\\partial}{\\partial{t}}-\\frac{u}{\\gamma}\\frac{\\partial}{\\partial{u}}$.\\\\\n\n\\section{Summary and discussion}\nIn this paper, we illustrate the application of Lie symmetry analysis to study time-fractional convection-diffusion equation. We consider the group classification of this equation for two variable functions. Eight cases are discussed. In every case Lie point symmetries are derived and\nsimilarity reductions of this equation are performed by means of non-trivial Lie point symmetry. In some cases, the time fractional convection-diffusion equation can be transformed into a nonlinear ODE of fractional order. 
In the other cases, it can be reduced to a nonlinear ODE of integer order, and explicit invariant solutions are given where available. In addition, it is easily shown that the infinitesimal generators admitted by the time-fractional convection-diffusion equation in each case form a Lie algebra whose dimension is determined by the number of parameters in the transformations. It is worth remarking that case 7 differs from the other cases: there the Lie algebra is infinite dimensional, whereas in all other cases it is finite dimensional.\n\nLie symmetry analysis can also be applied to other time-fractional differential equations. However, few general conclusions are available on the symmetry properties of whole classes of fractional equations; this is a possible direction for future work.\n\\section*{Acknowledgements}\n\nThis work is supported by the Natural Science Foundation of Zhejiang Province (Grant No. Y6100611) and the National Natural Science Foundation of China (Grant No. 11371323).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nWhitham in his 1967 paper \\cite{Whitham1967} (see also \\cite{Whitham;book}) put forward\n\\begin{equation}\\label{eqn:Whitham0}\nu_t+c_\\text{ww}(|\\partial_x|)u_x+\\frac32\\sqrt{\\frac{g}{h}} uu_x=0\n\\end{equation}\nto address, qualitatively, breaking and peaking of waves in shallow water. Here and throughout, $u(x,t)$ is related to the displacement of the fluid surface from the rest state at position~$x$ and time~$t$,\nand $c_{\\rm{\\tiny ww}}(|\\partial_x|)$ is a Fourier multiplier operator, defined as \n\\begin{equation}\\label{def:c0}\n\\reallywidehat{c_\\text{ww}(|\\partial_x|)f}(k)=\\sqrt{\\frac{g\\tanh(kh)}{k}}\\widehat{f}(k),\n\\end{equation}\nwhere $g$ is the gravitational constant and $h$ the undisturbed fluid depth. 
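Since the multiplier \eqref{def:c0} acts diagonally on Fourier modes, it is determined by the scalar function $c_\text{ww}(k)=\sqrt{g\tanh(kh)/k}$. A minimal sketch of evaluating it, including the removable singularity at $k=0$ (the function name and parameter defaults are ours, not from the paper):

```python
import numpy as np

def c_ww(k, g=9.81, h=1.0):
    """Phase speed sqrt(g*tanh(k*h)/k); the k -> 0 limit is sqrt(g*h)."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    out = np.full(k.shape, np.sqrt(g * h))   # long-wave limit sqrt(g*h)
    nz = k != 0
    out[nz] = np.sqrt(g * np.tanh(k[nz] * h) / k[nz])
    return out

# For k*h << 1, c_ww(k) agrees with the KdV dispersion sqrt(g*h)*(1 - (k*h)**2/6).
kdv = np.sqrt(9.81) * (1 - 0.1 ** 2 / 6)
```

The agreement degrades as $kh$ grows, which is precisely the regime where the full-dispersion model improves on the KdV equation.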
Notice that $c_\\text{ww}(k)$ is the phase speed of $2\\pi\/k$ periodic waves in the linear theory of water waves \\cite{Whitham;book}.\n\nFor relatively shallow water or, equivalently, relatively long waves for which $kh\\ll1$, one can expand the right hand side of \\eqref{def:c0} to obtain\n\\[\nc_{\\rm{\\tiny ww}}(k)=\\sqrt{gh}\\Big(1-\\frac16k^2h^2\\Big)+O(k^4h^4),\n\\]\nwhereby arriving at the celebrated Korteweg--de Vries (KdV) equation:\n\\begin{equation}\\label{eqn:KdV}\nu_t+\\sqrt{gh}\\Big(u_x+\\frac16h^2 u_{xxx}\\Big)+\\frac32\\sqrt{\\frac{g}{h}}uu_x=0.\n\\end{equation}\nTherefore \\eqref{eqn:Whitham0} and \\eqref{def:c0} can be regarded as augmenting \\eqref{eqn:KdV} to include the {\\em full} dispersion in the linear theory of water waves and, thus, improving over \\eqref{eqn:KdV} for short and intermediately long waves. Indeed, numerical studies (see \\cite{MKD;comp}, for instance) reveal that the Whitham equation performs on par with or better than the KdV equation and other shallow water models in a wide range of amplitude and wavelength parameters. \n\nThe KdV equation admits solitary and cnoidal waves but no traveling waves can `peak'. By contrast, the so-called extreme Stokes wave possesses a $120^\\circ$ peaking at the crest. Also no solutions of \\eqref{eqn:KdV} can `break'. That means, the solution remains bounded but its slope becomes unbounded. See \\cite[Section~13.14]{Whitham;book} for more discussion.\nThis is perhaps not surprising because the dispersion\\footnote{The phase speed is $\\sqrt{gh}(1-\\frac16k^2h^2)$, poorly approximating $c_\\text{ww}(k)$ when $kh\\gg1$.} of the KdV equation is inadequate for explaining high frequency phenomena of water waves. Whitham \\cite{Whitham1967,Whitham;book} conjectured breaking and peaking for \\eqref{eqn:Whitham0} and \\eqref{def:c0}. 
Recently, one of the authors \\cite{Hur;breaking} proved breaking, and Ehrnstr\\\"om and Wahl\\'en \\cite{EW;peaking} proved peaking.\nJohnson and one of the authors \\cite{HJ;whitham} proved that a small amplitude and periodic traveling wave of \\eqref{eqn:Whitham0} and \\eqref{def:c0} is modulationally unstable, provided that $kh>1.145\\dots$, comparing with the well-known Benjamin--Feir instability of a Stokes wave.\nBy contrast, all cnoidal waves of \\eqref{eqn:KdV} are modulationally stable.\n\nWhen the effects of surface tension are included, Johnson and one of the authors \\cite{HJ;capillary} proposed to replace \\eqref{def:c0} by \n\\begin{equation}\\label{def:cT0}\n\\reallywidehat{c_\\text{ww}(|\\partial_x|;T)f}(k)\n=\\sqrt{(g+Tk^2)\\frac{\\tanh(kh)}{k}}\\widehat{f}(k),\n\\end{equation}\nwhere $T$ is the surface tension coefficient. Notice that $c_\\text{ww}(k;T)$ is the phase speed in the linear theory of capillary-gravity waves \\cite{Whitham;book}. When $T=0$, \\eqref{def:cT0} becomes \\eqref{def:c0}. Since\n\\[\nc_\\text{ww}(k;T)=\\sqrt{gh}\\Big(1-\\frac12\\Big(\\frac13-\\frac{T}{gh^2}\\Big)k^2h^2\\Big)+O(k^4h^4)\n\\quad\\text{as $kh\\to0$},\n\\]\none arrives at the KdV equation for capillary-gravity waves:\n\\begin{equation}\\label{eqn:KdVT}\nu_t+\\sqrt{gh}\\Big(u_x+\\frac12\\Big(\\frac13-\\frac{T}{gh^2}\\Big)h^2u_{xxx}\\Big)+\\frac32\\sqrt{\\frac{g}{h}}uu_x=0\n\\end{equation}\nin the long wavelength limit unless $T\/gh^2=1\/3$. Notice that \\eqref{eqn:KdV} and \\eqref{eqn:KdVT} behave alike, qualitatively, possibly after a sign change. 
By contrast, whenever $T>0$, $c_\\text{ww}(k;T)\\to\\sqrt{T|k|}$ as $kh\\to\\infty$, so that \\eqref{eqn:Whitham0} and \\eqref{def:cT0} become \n\\begin{equation}\\label{eqn:fKdV1\/2}\nu_t+\\sqrt{T}|\\partial_x|^{1\/2}u_x+\\frac32\\sqrt{\\frac{g}{h}}uu_x=0\n\\end{equation}\nto leading order whereas when $T=0$, $u_t+\\sqrt{g}|\\partial_x|^{-1\/2}u_x+\\frac32\\sqrt{\\frac{g}{h}}uu_x=0$.\n\nIn recent years, the {\\em capillary-gravity Whitham equation} has been a subject of active research \\cite{HJ;capillary,Kalisch;physD,Kalisch;cWhitham,carter;fluids,EJ;WW} (see also \\cite{HP;BW,HP;FDCH,Hur;vWhitham,Pandey;comp,KP;LWP}). Particularly, Remonato and Kalisch \\cite{Kalisch;physD} combined a spectral collocation method and a numerical continuation approach to discover a rich variety of local bifurcation of periodic traveling waves of \\eqref{eqn:Whitham0} and \\eqref{def:cT0}, notably, crossing and connecting solution branches. Here we adopt a robust numerical continuation scheme to corroborate the results\nand produce convincing results for global bifurcation. Our findings support local bifurcation theorems (see \\cite{EJ;WW}, for instance) and help classify {\\em all} periodic traveling waves. \n\nWe employ an efficient numerical method for solving stiff nonlinear PDEs implemented with fourth-order time differencing, to experiment with (nonlinear) orbital stability and instability for a plethora of periodic traveling waves of \\eqref{eqn:Whitham0} and \\eqref{def:cT0}. (Spectral) modulational stability and instability were investigated numerically \\cite{carter;fluids} and also analytically for small amplitude \\cite{HJ;capillary}. To the best of the authors' knowledge, however, nonlinear stability and instability have not been addressed. 
Our novel findings include, among many others, orbital stability for the $k=1$ branch whenever $T>0$ versus instability for the $k\\geq2$, $k\\in\\mathbb{N}$, branches for great wave height when $0<T<1\/3$.\n\nThroughout the remainder of the paper we work with a nondimensionalized equation: taking $g=1$ and $h=1$ and rescaling $u$ to normalize the nonlinearity, \\eqref{eqn:Whitham0} and \\eqref{def:cT0} become\n\\begin{equation}\\label{eqn:Whitham}\nu_t+c_\\text{ww}(|\\partial_x|;T)u_x+(u^2)_x=0.\n\\end{equation}\nA periodic traveling wave of \\eqref{eqn:Whitham} takes the form $u(x,t)=\\phi(z)$, $z=k(x-ct)$, where $c>0$ is the wave speed, $k>0$ the wave number, and $\\phi$ satisfies, by quadrature and Galilean invariance,\n\\begin{equation}\\label{eqn:phi}\n-c\\phi+c_\\text{ww}(k|\\partial_z|;T)\\phi+\\phi^2=0. \n\\end{equation}\nWe assume that $\\phi$ is $2\\pi$ periodic and even in the $z$ variable, so that it is $2\\pi\/k$ periodic in the $x$ variable.\nNotice\n\\begin{equation}\\label{def:cT}\nc_\\text{ww}(k|\\partial_z|;T)\n\\left\\{\\begin{matrix}\\cos \\\\ \\sin \\end{matrix}\\right\\}(nz)\n=c_\\text{ww}(kn;T)\\left\\{\\begin{matrix}\\cos \\\\ \\sin \\end{matrix}\\right\\}(nz),\\quad n\\in\\mathbb{Z}.\n\\end{equation}\n\n\nFor any $T\\geq0$, $k>0$ and $c\\in\\mathbb{R}$, clearly, $\\phi(z)=0$ solves \\eqref{eqn:phi} and \\eqref{def:cT}. Suppose that $T$ and $k$ are fixed. A necessary condition for nontrivial solutions to bifurcate from such a trivial solution at some $c$ is that \n\\[\n\\text{$c_\\text{ww}(k|\\partial_z|;T)\\phi-c\\phi=0$ admits a nontrivial solution},\n\\]\nif and only if $c=c_\\text{ww}(kn;T)$ for some $n\\in\\mathbb{N}$, by symmetry, whence \n\\[\n\\text{$\\cos(nz)\\in\\ker(c_\\text{ww}(k|\\partial_z|;T)-c_\\text{ww}(kn;T))$ in some space of even functions}.\n\\] \n\nWhen $T=0$, $c_\\text{ww}(\\cdot\\,;0)$ monotonically decreases to zero over $(0,\\infty)$, whence for any $k>0$, $c_\\text{ww}(kn_1;T)<c_\\text{ww}(kn_2;T)$ whenever $n_1,n_2\\in\\mathbb{N}$ and $n_1>n_2$. Therefore, for any $k>0$ and any $n\\in\\mathbb{N}$, $\\ker(c_\\text{ww}(k|\\partial_z|;0)-c_\\text{ww}(kn;0))=\\text{span}\\{\\cos(nz)\\}$ in spaces of even functions. 
\n\nWhen $T\\geq1\/3$, $c_\\text{ww}(\\cdot\\,;T)$ monotonically increases over $(0,\\infty)$ and unbounded from above, whereby for any $k>0$ for any $n\\in\\mathbb{N}$, $\\ker(c_\\text{ww}(k|\\partial_z|;T)-c_\\text{ww}(kn;T))=\\text{span}\\{\\cos(nz)\\}$.\n\nWhen $T<1\/3$, to the contrary, $c_\\text{ww}(kn_1;T)=c_\\text{ww}(kn_2;T)$ for some $k>0$ for some $n_1,n_2\\in\\mathbb{N}$ and $n_1\\neq n_2$, provided that $T=T(kn_1,kn_2)$, where\n\\begin{equation}\\label{def:T(k1,k2)}\nT(kn_1,kn_2)=\\frac{1}{kn_1}\\frac{1}{kn_2}\n\\frac{n_1\\tanh(kn_2)-n_2\\tanh(kn_1)}{n_1\\tanh(kn_1)-n_2\\tanh(kn_2)},\n\\end{equation}\nand $\\ker(c_\\text{ww}(k|\\partial_z|;T)-c)=\\text{span}\\{\\cos(n_1z),\\cos(n_2z)\\}$, where $c=c_\\text{ww}(kn_1;T)=c_\\text{ww}(kn_2;T)$; otherwise, $\\ker(c_\\text{ww}(k|\\partial_z|;T)-c_\\text{ww}(kn;T))=\\text{span}\\{\\cos(nz)\\}$. \n\nSuppose that $T\\geq0$, $k>0$, and $T\\neq T(kn,kn')$ for any $n,n'\\in\\mathbb{N}$ and $n\\neq n'$, particularly, either $T=0$ or $T\\geq1\/3$. We assume without loss of generality $n=1$. There exists a one-parameter curve of nontrivial, $2\\pi$ periodic and even solutions of \\eqref{eqn:phi} and \\eqref{def:cT}, denoted by\n\\begin{equation}\\label{def:loc;bifur}\n(\\phi(z;s),c(s)),\\quad |s|\\ll1,\n\\end{equation}\nin some function space (see \\cite{EW;peaking,EJ;WW}, for instance, for details), and \n\\begin{equation}\\label{eqn:|s|<<1}\n\\begin{aligned}\n&\\begin{aligned}\n\\phi(z;s)=s\\cos(z)&+\\frac{s^2}{2}\\left(\\frac{1}{c_\\text{ww}(k;T)-c_\\text{ww}(0;T)}\n-\\frac{1}{c_\\text{ww}(k;T)-c_\\text{ww}(2k;T)}\\cos(2z)\\right)\\\\\n&+\\frac{s^3}{2}\\frac{1}{(c_\\text{ww}(k;T)-c_\\text{ww}(2k;T))(c_\\text{ww}(k;T)-c_\\text{ww}(3k;T))}\\cos(3z)\n+O(s^4), \\end{aligned}\\\\\n&c(s)=c_\\text{ww}(k;T)+s^2\\left(\\frac{1}{c_\\text{ww}(k;T)-c_\\text{ww}(0;T)}\n-\\frac12\\frac{1}{c_\\text{ww}(k;T)-c_\\text{ww}(2k;T)}\\right)+O(s^4)\n\\end{aligned}\n\\end{equation}\nas $|s|\\to0$. 
Moreover, subject to a `bifurcation condition' (see \\cite{EW;peaking,EJ;WW}, for instance, for details), \\eqref{def:loc;bifur} extends to all $s\\in[0,\\infty)$. See \\cite{EW;peaking,EJ;WW} and references therein for a rigorous proof. \n\nWhen $T=0$ and, without loss of generality, $k=1$, $(\\phi(z;s_j),c(s_j))\\to (\\phi(z),c)$ as $j\\to\\infty$ for some $s_j\\to \\infty$ as $j\\to\\infty$ for some $\\phi\\in C^{1\/2}([-\\pi,\\pi])\\bigcap C^\\infty([-\\pi,0)\\bigcup(0,\\pi])$ and $0-1\\quad\\text{for all $z\\in[-\\pi,\\pi]$},\n\\]\nso that the fluid surface would not intersect the impermeable bed, and we stop numerical continuation once we reach a {\\em limiting admissible solution}, for which \n\\begin{equation}\\label{def:stopping}\n\\min_{z\\in[-\\pi,\\pi]}\\phi(z)-\\frac{1}{2\\pi}\\int^\\pi_{-\\pi} \\phi(z)~dz=-1.\n\\end{equation} \nSection~\\ref{sec:result} provides examples.\n\nSuppose, to the contrary, that $T=T(kn_1,kn_2)$ (see \\eqref{def:T(k1,k2)}) for some $k>0$ for some $n_1,n_2\\in\\mathbb{N}$ and $n_1s_0>0$ for some $s_0$. In other words, there cannot exist $2\\pi\/n_1$ periodic and `unimodal' waves, whose profile monotonically decreases from its single crest to the trough over the period. 
See \\cite{EJ;WW}, for instance, for a rigorous proof.\nThe global continuation of \\eqref{eqn:s12} and limiting configurations have not been well understood analytically, though, and here we address numerically.\n\nTurning the attention to the (nonlinear) orbital stability and instability of a periodic traveling wave of \\eqref{eqn:Whitham}, notice that \\eqref{eqn:Whitham} possesses three conservation laws:\n\\begin{equation}\\label{def:EPM}\n\\begin{aligned}\nE(u)=&\\int \\left(\\frac12 uc_\\text{ww}(|\\partial_x|;T)u+\\frac13u^3\\right)~dx \\qquad\\text{(energy)},\\\\\nP(u)=&\\int \\frac12 u^2~dx\\qquad\\text{(momentum)}, \\\\\nM(u)=&\\int u~dx \\qquad\\text{(mass)},\n\\end{aligned}\n\\end{equation} \nand so does the KdV equation with fractional dispersion: \n\\begin{equation}\\label{eqn:fKdV}\nu_t+|\\partial_x|^\\alpha u_x+(u^2)_x=0,\\quad 0<\\alpha\\leq2,\n\\quad\\text{where}\\quad \\widehat{|\\partial_x|^\\alpha f}(k)=|k|^\\alpha\\widehat{f}(k),\n\\end{equation}\nfor which $E(u)=\\int (\\frac12u|\\partial_x|^\\alpha u+\\frac13u^3)~dx$ rather than the first equation of \\eqref{def:EPM}. We pause to remark that \\eqref{eqn:Whitham} becomes \\eqref{eqn:fKdV}, $\\alpha=1\/2$, for high frequency (see \\eqref{eqn:fKdV1\/2}), and \\eqref{eqn:KdV} and \\eqref{eqn:KdVT} compare with $\\alpha=2$. A solitary wave of \\eqref{eqn:fKdV} is a constrained energy minimizer and orbitally stable, provided that $\\alpha>1\/2$, if and only if $P_c>0$. See \\cite{LPS;orbital} and references therein for a rigorous proof. See also \\cite{HJ;orbital} for an analogous result for periodic traveling waves. It seems not unreasonable to expect that the orbital stability and instability of a periodic traveling wave of \\eqref{eqn:Whitham} change likewise at a critical point of $P$ as a function of $c$ although, to the best of the authors' knowledge, there is no rigorous analysis of constrained energy minimization. 
Indeed, numerical evidence \\cite{Kalisch;specVV} supports the conjecture when $T=0$ and $k=1$. Here we take matters further to $T\\geq0$ and $k\\in\\mathbb{N}$. Also we numerically elucidate the instability scenario when $T=0$ and $k=1$.\n\n\\section{Methodology}\\label{sec:numerical}\n\nWe begin by numerically approximating $2\\pi$ periodic and even solutions of \\eqref{eqn:phi} and \\eqref{def:cT} by means of a spectral collocation method \\cite{JBoyd,GO;spec}. See \\cite{Kalisch;specVV}, among others, for nonlinear dispersive equations of the form \\eqref{eqn:Whitham} and \\eqref{eqn:fKdV}. We define the collocation projection via a discrete cosine transform:\n\\begin{equation}\\label{def:phiN}\n\\phi_N(z)=\\sum_{n=0}^{N-1}\\omega(n)\\widehat{\\phi}(n)\\cos(nz), \n\\end{equation}\nwhere the discrete cosine coefficients are\n\\begin{equation}\\label{def:phi(n)}\n\\widehat{\\phi}(n)=\\omega(n)\\sum_{m=1}^N \\phi_N(z_m)\\cos(nz_m)\n\\end{equation}\nand \n\\[\n\\omega(n)=\\begin{cases}\n\\sqrt{1\/N} &\\text{for $n=0$},\\\\\n\\sqrt{2\/N} &\\text{otherwise};\n\\end{cases}\n\\]\nthe collocation points are\n\\[\nz_m=\\pi\\frac{2m-1}{2N},\\quad m=1,2,\\dots,N.\n\\]\nTherefore\n\\[\n\\phi(z)\\approx\\phi_N(z)=\\sum_{m=1}^N\\sum_{n=0}^{N-1}\\omega^2(n)\\cos(nz_m)\\cos(nz)\\phi(z_m).\n\\]\nWe can compute \\eqref{def:phiN} and \\eqref{def:phi(n)} efficiently using a fast Fourier transform (FFT) \\cite{GO;spec}. \nFor $T\\geq0$ and $k>0$, likewise,\n\\begin{align*}\nc_\\text{ww}(k|\\partial_z|;T)\\phi(z)\\approx (c_\\text{ww}(k|\\partial_z|;T)\\phi)_N(z)\n:=\\sum_{\\ell=1}^N\\sum_{n=0}^{N-1}\\omega^2(n)c_\\text{ww}(kn;T)\\cos(nz)\\cos(nz_\\ell)\\phi_N(z_\\ell), \n\\end{align*}\nand we can compute $(c_\\text{ww}(k|\\partial_z|;T)\\phi)_N(z_m)$, $m=1,2,\\dots,N$, via an FFT. \n\nSuppose that $T\\geq0$ and $k>0$ are fixed and we take $N=1024$. 
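The cosine collocation above is the orthonormal DCT-II. A minimal Python sketch follows; for clarity it builds the dense transform matrix rather than an FFT-based DCT (which one would use in practice at $N=1024$), and it assumes the capillary-gravity Whitham phase speed $c_\\text{ww}(\\xi;T)=\\sqrt{(1+T\\xi^2)\\tanh\\xi\/\\xi}$; the function names are ours.

```python
import numpy as np

def cww(xi, T):
    """Capillary-gravity Whitham phase speed (assumed form
    sqrt((1 + T xi^2) tanh(xi)/xi), with cww(0;T) = 1)."""
    xi = np.asarray(xi, dtype=float)
    out = np.ones_like(xi)
    nz = xi != 0
    out[nz] = np.sqrt((1 + T*xi[nz]**2)*np.tanh(xi[nz])/xi[nz])
    return out

def dct_matrix(N):
    """Orthonormal DCT-II matrix C with C[n, m-1] = w(n) cos(n z_m),
    z_m = pi (2m - 1)/(2N); the rows are orthonormal, so C @ C.T = I."""
    n = np.arange(N)
    w = np.where(n == 0, np.sqrt(1.0/N), np.sqrt(2.0/N))
    z = np.pi*(2*np.arange(1, N + 1) - 1)/(2*N)
    return w[:, None]*np.cos(np.outer(n, z)), z

def apply_cww(phi_vals, k, T):
    """(cww(k|dz|;T) phi)_N at the collocation points: transform to
    cosine coefficients, multiply mode n by cww(kn;T), transform back."""
    N = len(phi_vals)
    C, _ = dct_matrix(N)
    return C.T @ (cww(k*np.arange(N), T)*(C @ phi_vals))
```

For example, applying the operator to $\\cos z$ with $k=1$, $T=0$ returns $\\sqrt{\\tanh 1}\\,\\cos z$ at the nodes, since the multiplier acts diagonally on each cosine mode.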
We numerically solve\n\\begin{equation}\\label{eqn:F=0}\n-c\\phi_N(z_m)+(c_\\text{ww}(k|\\partial_z|;T)\\phi)_N(z_m)+\\phi_N(z_m)^2=0,\\quad m=1,2,\\dots,N,\n\\end{equation}\nby means of Newton's method. We parametrically continue the numerical solution over $c$ by means of a pseudo-arclength continuation method \\cite{Doedel} (see \\cite{Kalisch;specVV}, among others, for nonlinear dispersive equations of the form \\eqref{eqn:Whitham} and \\eqref{eqn:fKdV}), for which the (pseudo-)arclength of a solution branch is the continuation parameter, whereas a parameter continuation method would use $c$ as the continuation parameter and vary it sequentially. The pseudo-arclength continuation method can successfully bypass a turning point of $c$, at which a parameter continuation method fails because the Jacobian of \\eqref{eqn:F=0} becomes singular,\nwhence Newton's method diverges. See \\cite{Kalisch;physD} for more discussion. \n\nWe say that Newton's method converges if the residual satisfies\n\\[\n\\sqrt{\\sum_{m=1}^N \\big(-c\\phi_N(z_m)+(c_\\text{ww}(k|\\partial_z|;T)\\phi)_N(z_m)+\\phi_N(z_m)^2\\big)^2}\\leq 10^{-10}\n\\]\n(see \\eqref{eqn:F=0}), and this is achieved provided that an initial guess is sufficiently close to a true solution of \\eqref{eqn:phi} and \\eqref{def:cT}. To this end, we take a small amplitude cosine function and $c\\approx c_\\text{ww}(k;T)$ as an initial guess for the local bifurcation at $\\phi=0$ and $c=c_\\text{ww}(k;T)$, or \\eqref{eqn:|s|<<1} so long as it makes sense. We have also solved \\eqref{eqn:F=0} using a (Jacobian-free) Newton--Krylov method \\cite{Kelley_nsoli} with absolute and relative tolerances of $10^{-10}$. The results match those obtained from Newton's method, corroborating our numerical scheme. \n\nWe have run a pseudo-arclength continuation code (see \\cite{Jennie_KM}, for instance, for the applicability of the code used herein) with a fixed arclength stepsize, to numerically locate and trace solution branches. 
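A minimal Python sketch of the Newton solve of \\eqref{eqn:F=0} follows. For brevity it works on a full uniform periodic grid (for even solutions, the FFT plays the role of the cosine transform), and, instead of pseudo-arclength continuation, it appends an amplitude normalization and solves for $(\\phi,c)$ jointly — a bordered system of the kind continuation codes use internally, which selects the nontrivial branch near the bifurcation. The phase-speed formula, grid size, and function names are our own assumptions, not the authors' code.

```python
import numpy as np

def cww(xi, T):
    # assumed capillary-gravity Whitham phase speed sqrt((1+T xi^2) tanh(xi)/xi)
    xi = np.asarray(xi, dtype=float)
    out = np.ones_like(xi)
    nz = xi != 0
    out[nz] = np.sqrt((1 + T*xi[nz]**2)*np.tanh(xi[nz])/xi[nz])
    return out

def newton_wave(a=0.02, k=1.0, T=0.0, N=64, tol=1e-10, maxit=30):
    """Newton iteration for -c phi + cww(k|dz|;T) phi + phi^2 = 0 on a
    2 pi periodic grid, with the cos(z) coefficient of phi pinned to a,
    solving for (phi, c) jointly; seeded with a*cos(z), c = cww(k;T)."""
    z = 2*np.pi*np.arange(N)/N - np.pi
    mult = cww(k*np.arange(N//2 + 1), T)          # multiplier on rfft modes
    L = lambda f: np.fft.irfft(mult*np.fft.rfft(f), N)
    Lmat = np.column_stack([L(col) for col in np.eye(N)])
    phi, c = a*np.cos(z), cww(np.array([k]), T)[0]
    w = (2.0/N)*np.cos(z)                         # extracts the cos(z) coefficient
    res = np.inf
    for _ in range(maxit):
        F = -c*phi + L(phi) + phi**2              # pointwise residual
        G = w @ phi - a                           # amplitude normalization
        res = np.sqrt(F @ F + G*G)
        if res <= tol:
            break
        J = np.zeros((N + 1, N + 1))              # bordered Jacobian
        J[:N, :N] = -c*np.eye(N) + Lmat + 2.0*np.diag(phi)
        J[:N, N] = -phi                           # dF/dc
        J[N, :N] = w                              # dG/dphi
        step = np.linalg.solve(J, np.append(F, G))
        phi, c = phi - step[:N], c - step[N]
    return z, phi, c, res
```

The bordering row excludes the trivial solution $\\phi=0$ (its normalization residual is $-a\\neq0$) and keeps the Jacobian nonsingular near the bifurcation point, where the unbordered Jacobian is nearly singular.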
We have additionally run AUTO \\cite{AUTO}, a robust numerical continuation and bifurcation software, to corroborate the results. All the results in Section~\\ref{sec:result} have been obtained by employing AUTO with strict tolerances (\\verb|EPSL = 1e-12| and \\verb|EPSU=1e-12| in AUTO's constants file). AUTO has the advantage, among many others, of variable arclength stepsize adaptation (using the option \\verb|IADS=1|) and of detecting branch points (using the option \\verb|ISP=2| for \\textit{all} special points). For the former, provided with minimum and maximum allowed absolute values of the pseudo-arclength stepsize, AUTO adjusts the arclength stepsize to be used during the continuation process.\n\nLet $\\phi$ and $c$ numerically approximate a $2\\pi\/k$ periodic traveling wave of \\eqref{eqn:Whitham}, and we turn to numerically experimenting with its (nonlinear) orbital stability and instability. After a $2048$-point (full) Fourier spectral discretization in $x\\in[-\\pi,\\pi]$, \\eqref{eqn:Whitham} leads to\n\\begin{equation}\\label{eqn:u}\nu_t=\\mathbf{L}u+\\mathbf{N}(u),\\quad\\text{where}\\quad\n\\mathbf{L}=-c_\\text{ww}(|\\partial_x|;T)\\partial_x\\quad\\text{and}\\quad\\mathbf{N}(u)=-(u^2)_x,\n\\end{equation}\nand we numerically solve \\eqref{eqn:u}, subject to\n\\begin{equation}\\label{eqn:IC}\nu(x,0)=\\phi(kx)+10^{-3}\\max_{x\\in[-\\pi,\\pi]}|\\phi(kx)|\\,U(-1,1),\n\\end{equation}\nby means of an integrating factor (IF) method (see \\cite{KassamTrefethen}, for instance, and references therein), where $U(-1,1)$ denotes a random variable uniformly distributed over $(-1,1)$. In other words, at the initial time, we perturb $\\phi$ by uniformly distributed random noise of small amplitude, depending on the amplitude\\footnote{We treat $\\|\\phi\\|_{L^\\infty}$ as the amplitude of $\\phi$ although $\\phi$ is not of mean zero, because the difference is insignificant.} of $\\phi$. 
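The perturbed initial condition \\eqref{eqn:IC} is simple to set up; a sketch (the helper name, relative amplitude default, and fixed seed are ours, for reproducibility):

```python
import numpy as np

def perturbed_ic(phi_vals, rel_amp=1e-3, seed=0):
    """u(x,0) = phi + rel_amp * max|phi| * U(-1,1), with uniformly
    distributed noise drawn independently at each grid point."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, size=len(phi_vals))
    return phi_vals + rel_amp*np.max(np.abs(phi_vals))*noise
```

The noise amplitude thus scales with $\\|\\phi\\|_{L^\\infty}$, so the relative size of the perturbation is the same across solution branches.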
We pause to remark that the well-posedness for the Cauchy problem of \\eqref{eqn:Whitham} can be rigorously established, at least for short times, in the usual manner via a compactness argument. \n\nFollowing the IF method, let $v=e^{-\\mathbf{L}t}u$, so that \\eqref{eqn:u} becomes\n\\begin{equation}\\label{eqn:v}\nv_{t}=e^{-\\mathbf{L}t}\\mathbf{N}(e^{\\mathbf{L}t}v).\n\\end{equation}\nThis ameliorates the stiffness of \\eqref{eqn:u}. Notice that \\eqref{eqn:v} is of diagonal form, rather than matrix form, in the Fourier space. We advance \\eqref{eqn:v} forward in time by means of a fourth-order four-stage Runge--Kutta (RK4) method \\cite{HW1} with a fixed time stepsize $\\Delta t$: \n\\[ v_{n+1}=v_n+\\frac{a+2b+2c+d}{6},\\]\nwhere $v_n=v(t_n)$ and $t_n=n\\Delta t$, $n=0,1,2,\\dots$, \n\\begin{align*}\na&=f(v_n,t_n)\\Delta t,\n&b&=f(v_n+a\/2,t_n+\\Delta t\/2)\\Delta t,\\\\\nc&=f(v_n+b\/2,t_n+\\Delta t\/2)\\Delta t,\n&d&=f(v_n+c,t_n+\\Delta t)\\Delta t\n\\end{align*}\nand $f(v,t)=e^{-\\mathbf{L}t}\\mathbf{N}(e^{\\mathbf{L}t}v)$ (see \\eqref{eqn:v}). \nWe take $\\Delta t=10^{-4}\\times2\\pi\/c$, where $c$ is the wave speed of $\\phi$. While such a value of $\\Delta t$ ensures numerical stability, some computations, depending on $c$, require smaller\\footnote{In most cases, $10^{-4}\\times 2\\pi\/c =O(10^{-4})$ but in Figure~\\ref{fig6}(b), for instance, $c=0.36$, whence $10^{-4}\\times2\\pi\/c \\approx 0.00174533$, whereas $10^{-5}\\times 2\\pi\/c =O(10^{-4})$.} values, for instance, $\\Delta t=10^{-4}\\times \\pi\/c$ or $10^{-5}\\times2\\pi\/c$. At each time step, we remove aliasing errors by applying the so-called $3\/2$-rule so that the Fourier coefficients decay well at high frequencies (see \\cite{JBoyd}, for instance).\n\nWe assess the fidelity of our numerical scheme by monitoring \\eqref{def:EPM} for numerical solutions. 
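The IF-RK4 time stepper above can be sketched compactly in Fourier space. The snippet below is our own minimal illustration, not the authors' code: it assumes the phase speed $c_\\text{ww}(\\xi;T)=\\sqrt{(1+T\\xi^2)\\tanh\\xi\/\\xi}$, and it dealiases by simple $2\/3$-rule truncation rather than the $3\/2$-rule padding used in the text.

```python
import numpy as np

def cww(xi, T):
    # assumed capillary-gravity Whitham phase speed; cww(0;T) = 1
    xi = np.asarray(xi, dtype=float)
    out = np.ones_like(xi)
    nz = xi != 0
    out[nz] = np.sqrt((1 + T*xi[nz]**2)*np.tanh(xi[nz])/xi[nz])
    return out

def whitham_evolve(u0, T=0.0, dt=1e-3, nsteps=1000):
    """Integrating-factor RK4 for u_t = -cww(|dx|;T) u_x - (u^2)_x on
    [-pi, pi): the linear part is integrated exactly via exp(L t) in
    Fourier space, RK4 advances the transformed nonlinearity."""
    N = len(u0)
    xi = np.fft.fftfreq(N, d=1.0/N)              # integer wavenumbers
    Lhat = -1j*xi*cww(np.abs(xi), T)             # Fourier symbol of L
    mask = np.abs(xi) < N/3.0                    # 2/3-rule dealiasing
    def Nhat(uhat):                              # transform of -(u^2)_x
        u = np.fft.ifft(uhat).real
        return -1j*xi*(mask*np.fft.fft(u*u))
    f = lambda vhat, s: np.exp(-Lhat*s)*Nhat(np.exp(Lhat*s)*vhat)
    vhat, t = np.fft.fft(u0), 0.0                # v = exp(-L t) u, v(0) = u(0)
    for _ in range(nsteps):
        a = dt*f(vhat, t)
        b = dt*f(vhat + a/2, t + dt/2)
        c = dt*f(vhat + b/2, t + dt/2)
        d = dt*f(vhat + c, t + dt)
        vhat = vhat + (a + 2*b + 2*c + d)/6
        t += dt
    return np.fft.ifft(np.exp(Lhat*t)*vhat).real # back to u(x, t)
```

Since the symbol of $\\mathbf{L}$ is purely imaginary, the integrating factor $e^{\\mathbf{L}t}$ has modulus one mode by mode, so no overflow occurs for long times; the mean (mass) is conserved exactly because both the symbol and the derivative of the nonlinear term vanish on the zero mode.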
For unperturbed solutions, $E(t)$, $P(t)$ and $M(t)$ have all been conserved to machine precision, whereas for (randomly) perturbed ones, $E(t)$ and $P(t)$ are conserved to the order of $10^{-7}$ and $M(t)$ to machine precision. We have also corroborated our numerical results using two other methods: a higher-order Runge--Kutta method, namely the Runge--Kutta--Fehlberg (RKF) method \\cite{HW1} with time stepsize adaptation, and a two-stage (and, thus, fourth-order) Gauss--Legendre implicit\\footnote{An implicit method, by construction, enjoys wider regions of absolute stability. This allows us to assess all our results, while avoiding numerical instability that might be observed in explicit methods such as the RK4 and RKF methods. However, the IRK4 method requires a fixed point iteration (in the form of Newton's method) at each time step.} Runge--Kutta (IRK4) method \\cite{HW2}.\nFor stable solutions, the results obtained from the RK4 method match those from the RKF and IRK4 methods. For unstable solutions, however, the instability manifests at slightly different times for the same initial conditions. This is somewhat expected because the local truncation error (LTE) of each method and the convergence error of the IRK4 method act as perturbations. Recall that the RK4 and IRK4 methods have an LTE of the order of $(\\Delta t)^{5}$ whereas the RKF method has one of the order of $(\\Delta t)^{6}$.\n\n\\section{Results}\\label{sec:result}\n\nWe begin by taking $T=0$ and $k=1$. Figure~\\ref{fig1} shows the wave height\n\\begin{equation}\\label{def:H}\nH=\\max_{z\\in[-\\pi,\\pi]}\\phi(z)-\\min_{z\\in[-\\pi,\\pi]}\\phi(z)\n\\end{equation}\nand the momentum\n\\begin{equation}\\label{def:P}\nP=\\frac12\\int^\\pi_{-\\pi}\\phi(z)^2~dz\n\\end{equation}\nfrom our numerical continuation of $2\\pi$ periodic and even solutions of \\eqref{eqn:phi} and \\eqref{def:cT}. 
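Monitoring the invariants \\eqref{def:EPM} amounts to spectrally accurate quadrature on the uniform periodic grid (where the trapezoidal rule is exponentially accurate) plus one application of the Fourier multiplier. A minimal sketch, again assuming the phase speed $c_\\text{ww}(\\xi;T)=\\sqrt{(1+T\\xi^2)\\tanh\\xi\/\\xi}$ and with function names of our own choosing:

```python
import numpy as np

def cww(xi, T):
    # assumed capillary-gravity Whitham phase speed; cww(0;T) = 1
    xi = np.asarray(xi, dtype=float)
    out = np.ones_like(xi)
    nz = xi != 0
    out[nz] = np.sqrt((1 + T*xi[nz]**2)*np.tanh(xi[nz])/xi[nz])
    return out

def invariants(u, T=0.0):
    """Energy E, momentum P and mass M for samples of u on a uniform
    grid over [-pi, pi); cww(|dx|;T) u is applied via the FFT and the
    integrals are evaluated by the (spectral) trapezoidal rule."""
    N = len(u)
    dx = 2*np.pi/N
    xi = np.fft.fftfreq(N, d=1.0/N)
    Lu = np.fft.ifft(cww(np.abs(xi), T)*np.fft.fft(u)).real
    E = np.sum(0.5*u*Lu + u**3/3.0)*dx
    P = np.sum(0.5*u**2)*dx
    M = np.sum(u)*dx
    return E, P, M
```

For example, for $u=\\cos x$ and $T=0$ one finds $M=0$, $P=\\pi\/2$ and $E=\\tfrac{\\pi}{2}\\,c_\\text{ww}(1;0)$, since the multiplier acts on the single cosine mode and the cubic term integrates to zero.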
The result agrees, qualitatively, with \\cite[Figure~4]{Kalisch;specVV} and others.\nWe find that $H$ monotonically increases to $\\approx 0.58156621$, highlighted with the red square, for which \\eqref{def:stopping0} holds, whereas $P$ increases and then decreases, making one turning point.\nAlso we find that the crest becomes sharper and the trough flatter as $H$ increases. The limiting solution must possess a cusp at the crest \\cite{EW;peaking}. \n\nThe left column of Figure~\\ref{fig2} shows almost limiting waves. The inset is a close-up near the crest, emphasizing smoothness. The right column shows the profiles of \\eqref{eqn:IC}, namely periodic traveling waves of \\eqref{eqn:Whitham} perturbed by uniformly distributed random noise of small amplitudes at $t=0$ (dash-dotted), and of the solutions of \\eqref{eqn:Whitham} at later instants of time (solid). Wave (a), prior to the turning point of $P$, remains unchanged for $10^3$ time periods, up to a translation in $x$, implying orbital stability, whereas the inset reveals that wave (b), past the turning point, suffers from crest instability. Indeed, our numerical experiments point to a transition from stability to instability at the turning point of $P$. But there is numerical evidence \\cite{SKCK;MI} that waves (a) and (b) are (spectrally) modulationally unstable.\n\nWhen $T=4\/\\pi^2$ and $k=1$, Figure~\\ref{fig3} shows $H$ and $P$ versus $c$. Our numerical findings suggest that $H,P\\to\\infty$ monotonically. Also $\\min_{z\\in[-\\pi,\\pi]}\\phi(z)\\to -\\infty$. The solution branch discontinues once \\eqref{def:stopping} holds, highlighted with the red square, though, because the solution would be physically unrealistic, for the capillary-gravity Whitham equation models water waves in finite depth. Our numerical continuation works well past the limiting admissible solution, nevertheless. We find that the crest becomes wider and flatter, and the trough narrower and more peaked, as $H$ increases. 
But all solutions must be smooth \\cite{EJ;WW}. \n\nThe left panel of Figure~\\ref{fig3a} shows an example wave. The right panel shows the profile perturbed by small random noise at $t=0$ and the solution of \\eqref{eqn:Whitham} after $10^3$ time periods, up to a translation in $x$. Our numerical experiments suggest orbital stability for all wave heights. But there is numerical evidence of modulational instability when $H$ is large \\cite{carter;fluids}. \n\nNumerical evidence \\cite{Kalisch;physD,carter;fluids} points to qualitatively the same results whenever $T\\geq1\/3$.\n\nWe turn the attention to $T=T(1,2)\\approx 0.2396825$ (see \\eqref{def:T(k1,k2)}). There exists a two-parameter sheet of nontrivial, periodic and even solutions of \\eqref{eqn:phi} and \\eqref{def:cT} in the vicinity of $\\phi=0$ and $c=c_\\text{ww}(1;T)=c_\\text{ww}(2;T)\\approx 0.97166609$ \\cite{EJ;WW}. See also Section~\\ref{sec:prelim}. Figure~\\ref{fig4} shows $H$ versus $c$ for the $k=1$ and $k=2$ branches, all the way up to the limiting admissible solutions. There are no\\footnote{We observe a turning point of $P$ for greater wave height, but the solution is inadmissible.} turning points of $P$. The left column of Figure~\\ref{fig5} shows waves in the $k=1$ and $k=2$ branches for small and large $H$. The small height result agrees with \\cite[Figure~6]{Kalisch;physD}. \n\nObserve `bimodal' waves in the $k=1$ branch. Indeed, there cannot exist $2\\pi$ periodic and unimodal waves, whose profile monotonically decreases from a single crest to the trough over the period \\cite{EJ;WW}. See also Section~\\ref{sec:prelim}. For small wave height, the fundamental mode seems dominant, so that there is one crest over the period $2\\pi$, but the fundamental and second modes are resonant, whereby a much smaller wave breaks up the trough into two. See the left panel of Figure~\\ref{fig5}(a). 
As $H$ increases, the effects of the second mode seem more pronounced, so that the wave separating the troughs becomes higher. See the left of Figure~\\ref{fig5}(b). Observe, to the contrary, $\\pi$ periodic and unimodal waves in the $k=2$ branch. See the left of Figure~\\ref{fig5}(c,d). We find that the crests become wider and flatter, and the troughs narrower and more peaked, as $H$ increases in the $k=1$ and $k=2$ branches. See the left of Figure~\\ref{fig5}(b,d). \n\nOur numerical experiments suggest orbital stability for the $k=1$ branch (see the right panels of Figure~\\ref{fig5}(a,b)) versus instability for $k=2$ (see the right of Figure~\\ref{fig5}(c,d)).\n\nWe take matters further to $T=T(1,3)$. See Figures~\\ref{fig6}, \\ref{fig7} and \\ref{fig8}. The results for the $k=1$ and $k=3$ branches are similar to those when $T=T(1,2)$ and $k=1,2$. We pause to remark that in the $k=1$ branch, for small $H$, a smaller wave breaks up the trough into two, and a much smaller wave breaks up the crest into two. See the left panel of Figure~\\ref{fig7}(a). As $H$ increases, the wave separating the crests becomes lower, transforming into one wide and flat crest, whereas the wave separating the troughs becomes higher (see the left of Figure~\\ref{fig7}(b)), thereby resembling those when $T=T(1,2)$ and $k=1$. Observe $\\pi$ periodic and unimodal waves in the $k=2$ branch, orbitally stable for small $H$ versus unstable for large $H$. See Figure~\\ref{fig8}(e,f). \n\nWhen $T=T(2,3)$, to the contrary, we find $\\pi$ and $2\\pi\/3$ periodic unimodal waves in the $k=2$ and $k=3$ branches, respectively, corroborating a local bifurcation theorem (see \\cite{EJ;WW}, for instance).\nSee also Section~\\ref{sec:prelim}. We find that the crests become wider and flatter, and the troughs narrower and more peaked, as $H$ increases. See the left column of Figure~\\ref{fig10}. 
Our numerical experiments (see the right of Figure~\\ref{fig10}) suggest\norbital instability for the $k=2$ branch for large $H$ and for $k=3$ for all $H$.\n\nWhen $T=T(2,5)$, on the other hand, the left column of Figure~\\ref{fig12} shows bimodal waves in the $k=2$ branch. The local bifurcation theorem \\cite{EJ;WW} dictates $\\pi$ periodic and unimodal waves, but we numerically find that such waves do not lie in the $k=2$ branch. For small $H$, for wave (a), for instance, a smaller wave breaks up the trough into two over the half period. As $H$ increases, for wave (b), for instance, the troughs become narrower and more peaked. Our numerical experiments (see the right of Figures~\\ref{fig12} and \\ref{fig13}) suggest orbital stability for the $k=2$ branch for small $H$ versus instability for $k=2$ for large $H$ and for $k=5$ for all $H$. \n\nTo proceed, when $T=T(1,4)$, the $k=1$ branch lies above and to the right of the $k=4$ branch at least for small $H$ (not shown), but as $T$ increases, $c_\\text{ww}(4;T)$ increases more rapidly than $c_\\text{ww}(1;T)$, so that when $T=T(1,4)+0.001$, for instance, the $k=1$ branch crosses the $k=4$ branch. Figure~\\ref{fig14} shows $H$ and $P$ versus $c$ in the $k=1$ and $k=4$ branches for all admissible solutions. The small height result agrees with \\cite[Figure~10(a)]{Kalisch;physD}. We find that the $k=1$ branch turns and connects to the $k=4$ branch, whereas in the $k=4$ branch, $H$ and $P$ monotonically increase to the limiting admissible solution, highlighted by the red square in the insets. \n\nThe left panels of Figures~\\ref{fig15}, \\ref{fig16} and \\ref{fig17} show several profiles along the $k=1$ and $k=4$ branches. In the $k=1$ branch, for small $H$, wave (a), for instance, is $2\\pi$ periodic and unimodal. After the $k=1$ branch crosses the $k=4$ branch, on the other hand, wave (b), for instance, becomes bimodal, resembling those when $T=T(1,4)$ and $k=1$. 
Continuing along the $k=1$ branch, for waves (c) and (d), for instance, high frequency ripples of $k=4$ ride a carrier wave of $k=1$.\nWhen the $k=1$ and $k=4$ branches almost connect, wave (e) in the $k=1$ branch and wave (f) for $k=4$ are almost the same. In the $k=4$ branch, wave (g), for instance, is $\\pi\/2$ periodic and unimodal.\n\nOur numerical experiments (see the right panels of Figures~\\ref{fig15}, \\ref{fig16} and \\ref{fig17}) suggest orbital stability for the $k=1$ branch for all $H$ and for $k=4$ for small $H$, versus instability for $k=4$ for large $H$. In particular, stability and instability do not change at the turning point of $P$. \n\nLast but not least, when $T=T(1,5)+0.0001$, Figure~\\ref{fig18} shows $H$ and $P$ versus $c$ for the $k=1$ and $k=5$ branches. The small height result agrees with \\cite[Figure~10(b)]{Kalisch;physD}. We find that the $k=1$ branch crosses and connects to the $k=5$ branch, as when $T=T(1,4)+0.0001$ and $k=1,4$, but it continues after connecting, all the way up to the limiting admissible solution. See the insets. The left panels of Figures~\\ref{fig19}, \\ref{fig20} and \\ref{fig21} show several profiles along the $k=1$ and $k=5$ branches. The results for the $k=1$ branch up to the connection and for $k=5$ are similar to those when $T=T(1,4)+0.0001$ and $k=1,4$. In the $k=1$ branch, after it connects to the $k=5$ branch at the point (c), the results are similar to those when $T=T(1,5)$ and $k=1$. See waves (d), (e) and (f). \nOur numerical experiments (see the right panels of Figures~\\ref{fig19}, \\ref{fig20} and \\ref{fig21}) suggest orbital stability for the $k=1$ branch for all $H$ and for $k=5$ for small $H$, versus instability for $k=5$ for large $H$. 
\n\nWe emphasize orbital stability for the $k=1$ branch for all $T>0$ and all wave heights throughout our numerical experiments.\n\n\\section{Discussion}\\label{sec:discussion}\n\nHere we employ efficient and highly accurate numerical methods for computing periodic traveling waves of \\eqref{eqn:Whitham} and experimenting with their (nonlinear) orbital stability and instability. Our findings suggest, among many others, stability whenever $T>0$ for the $k=1$ branch versus instability when $0