\section{Introduction}
\label{sec1}

\subsection{Background and motivation}

The study of Boolean matrices \cite{bapat, bapatbook, luce, rao} plays an important role in linear algebra \cite{belohlavek2015below, kim, rao72}, combinatorics \cite{brual}, graph theory \cite{berge} and network theory \cite{ledley, li2015logical}. However, matrices become inadequate when huge volumes of multidimensional data must be stored. This difficulty can be overcome thanks to tensors, which are natural multidimensional generalizations of matrices \cite{kolda, loan}. The notion of tensor used here differs from that of physics and engineering (such as stress tensors) \cite{Nar93}, which are generally referred to as tensor fields in mathematics \cite{de2008tensor}. It is therefore natural to study Boolean tensors and their generalized inverses. Generalized inverses of Boolean tensors arise in many branches of mathematics, including relation theory \cite{plem}, logic, graph theory, lattice theory \cite{birkhoff} and algebraic semigroup theory.

Recently, there has been increasing interest in studying inverses \cite{BraliNT13} and various generalized inverses of tensors based on the Einstein product \cite{bm, wei18, stan, sun}, which has opened new perspectives for solving multilinear systems \cite{kolda, Mao19}. In \cite{wei18, stan}, the authors introduced some basic properties of the range and null space of multidimensional arrays. Further, \cite{stan} discussed an adequate definition of the tensor rank, termed the reshaping rank.
Corresponding representations of the weighted Moore-Penrose inverse were introduced in \cite{BehMM19, we17}, and a few characterizations were investigated in \cite{PanMi19}. The present work focuses on the binary case, i.e., on results concerning Boolean tensors and their generalized inverses via the Einstein product. In many instances the corresponding result for the general case does not follow immediately, even though it is not difficult to conclude.

On the other hand, one of the most successful developments in multilinear algebra is the concept of tensor decomposition \cite{Kolda01, kolda, LalmV00}. This concept gives a clear and convenient way to implement all basic operations efficiently. Recently it has been extended to Boolean tensors \cite{erdos2013discovering, khamis2017Boolean, rukat2018tensormachine}. Fast and scalable distributed algorithms for Boolean tensor decompositions were discussed in \cite{miettinen2011Boolean}, and a few applications of these decompositions to information extraction and clustering were discussed in \cite{erdos2013discovering, metzler2015clustering}. At the same time, Brazell et al. \cite{BraliNT13} discussed the decomposition of tensors arising from an isomorphic group structure under the Einstein product and demonstrated that these are special cases of the canonical polyadic decomposition \cite{carroll1970analysis}. The vast literature on tensor decompositions and their applications in different areas of mathematics, together with the recent works \cite{BraliNT13, sun}, motivates us to study generalized inverses and space decomposition in the framework of Boolean tensors. This study leads us to introduce the rank and the weight of a Boolean tensor, with applications to generalized inverses.

\subsection{Organization of the paper}

The rest of the paper is organized as follows.
In Section 2 we present some definitions, notations, and preliminary results, which are essential in proving the main results. The main results are discussed in Section 3, which has four parts. In the first part, some identities are proved, while generalized inverses of Boolean tensors are discussed in the second part. The third part mainly focuses on weighted Moore-Penrose inverses. Space decomposition and its application to generalized inverses are discussed in the last part. Finally, Section 4 concludes the results along with a few open questions.

\section{Preliminaries}
We first introduce some basic definitions and notations which will be used throughout the article.
\subsection{Definitions and terminology}
For convenience, we briefly explain some of the terminology used from here onwards. The tensor notation and definitions follow \cite{BraliNT13, sun}. Let $\mathbb{R}^{I_1\times\cdots\times I_N}$ be the set of order $N$ and dimension $I_1 \times \cdots \times I_N$ tensors over the real field $\mathbb{R}$; indeed, a matrix is a second order tensor, and a vector is a first order tensor. A tensor $\mc{A} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ is of order $N$, and each entry of $\mc{A}$ is denoted by $a_{i_1...i_N}$. Note that throughout the paper, tensors are represented by calligraphic letters like $\mc{A}$, and the notation $(\mc{A})_{i_1...i_N}= a_{i_1...i_N}$ represents the scalar entries.
The Einstein product (\cite{ein}) $ \mc{A}{*_N}\mc{B} \in \mathbb{R}^{I_1\times\cdots\times I_N \times J_1 \times\cdots\times J_M }$ of tensors $\mc{A} \in \mathbb{R}^{I_1\times\cdots\times I_N \times K_1 \times\cdots\times K_N }$ and $\mc{B} \in \mathbb{R}^{K_1\times\cdots\times K_N \times J_1 \times\cdots\times J_M }$ is defined by the operation ${*_N}$ via
\begin{equation}\label{Eins}
(\mc{A}{*_N}\mc{B})_{i_1...i_Nj_1...j_M}
=\displaystyle\sum_{k_1...k_N}a_{{i_1...i_N}{k_1...k_N}}b_{{k_1...k_N}{j_1...j_M}}.
\end{equation}
Specifically, if $\mc{B} \in \mathbb{R}^{K_1\times\cdots\times K_N}$, then $\mc{A}{*_N}\mc{B} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ and
\begin{equation*}\label{Einsb}
(\mc{A}{*_N}\mc{B})_{i_1...i_N} = \displaystyle\sum_{k_1...k_N}
a_{{i_1...i_N}{k_1...k_N}}b_{{k_1...k_N}}.
\end{equation*}
This product appears in the area of continuum mechanics \cite{ein} and the theory of relativity \cite{lai}. Further, the addition of two tensors $\mc{A}, ~\mc{B}\in \mathbb{R}^{I_1\times\cdots\times I_N \times K_1 \times\cdots\times K_N }$ is defined as
\begin{equation}\label{Eins1}
(\mc{A} + \mc{B})_{i_1...i_N k_1...k_N}
=a_{{i_1...i_N}{k_1...k_N}} + b_{{i_1...i_N}{k_1...k_N}}.
\end{equation}
For a tensor $~\mc{A}=(a_{{i_1}...{i_N}{j_1}...{j_M}})
 \in \mathbb{R}^{I_1\times\cdots\times I_N \times J_1 \times\cdots\times J_M},$ let $\mc{B} =(b_{{i_1}...{i_M}{j_1}...{j_N}}) \in \mathbb{R}^{J_1\times\cdots\times J_M \times I_1 \times\cdots\times I_N}$ be the {\it transpose} of $\mc{A}$, where $b_{i_1\cdots i_Mj_1\cdots j_N} = a_{j_1\cdots j_N i_1\cdots i_M}.$ The tensor $\mc{B}$ is denoted by $\mc{A}^T$.
Also, we denote $\mc{A}^T=\left({a}_{{i_1}...{i_N}{j_1}...{j_M}}^t\right).$
The trace of a tensor $\mc{A}$ with entries $(\mc{A})_{{i_1}...{i_N}{j_1}...{j_N}}$, denoted by $tr(\mc{A})$, is defined as the sum of the diagonal entries, i.e.,
$tr(\mc{A}) = \displaystyle\sum_{i_1 \cdots i_N}a_{{i_1...i_N}{i_1...i_N}}.$ Further, a tensor $\mc{O}$ denotes the {\it zero tensor} if all its entries are zero.
A tensor $\mc{A}\in \mathbb{R}^{I_1\times\cdots\times I_N \times I_1 \times\cdots\times I_N}$ is {\it symmetric} if $\mc{A}=\mc{A}^T,$ and {\it orthogonal} if $\mc{A}{*_N}\mc{A}^T= \mc{A}^T{*_N} \mc{A}=\mc{I}$. Further, a tensor $\mc{A}\in \mathbb{R}^{I_1\times\cdots\times I_N \times I_1 \times\cdots\times I_N}$ is {\it idempotent} if $\mc{A}{*_N} \mc{A}= \mc{A}$. A tensor $\mc{D}$ with entries $(\mc{D})_{{i_1}...{i_N}{j_1}...{j_N}}$ is called a {\it diagonal tensor} if $d_{{i_1}...{i_N}{j_1}...{j_N}} = 0$ for $(i_1,\cdots,i_N) \neq (j_1,\cdots,j_N).$
A few more notations and definitions are discussed below for defining generalized inverses of Boolean tensors.
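The Einstein product and the transpose above can also be realized numerically: the product contracts the trailing $N$ modes of the first array against the leading $N$ modes of the second. The following is a minimal sketch of ours (the helper names \texttt{einstein\_product} and \texttt{transpose\_tensor} are our own, not from the cited works), using \texttt{numpy}:

```python
import numpy as np

def einstein_product(A, B, N):
    # A *_N B: contract the last N modes of A with the first N modes of B,
    # exactly as in the definition of the Einstein product above.
    return np.tensordot(A, B, axes=N)

def transpose_tensor(A, N):
    # Block transpose A^T: move the first N modes of A to the back,
    # so that (A^T)[j..., i...] = A[i..., j...].
    return np.moveaxis(A, list(range(N)), list(range(A.ndim - N, A.ndim)))

# Example: A in R^{2x3x2x3}; the unit tensor acts as an identity.
A = np.arange(36.0).reshape(2, 3, 2, 3)
I = np.einsum('ik,jl->ijkl', np.eye(2), np.eye(3))  # (I)_{ijkl} = delta_ik delta_jl
assert np.allclose(einstein_product(A, I, 2), A)
assert np.allclose(transpose_tensor(transpose_tensor(A, 2), 2), A)
```

Here \texttt{np.tensordot} with an integer argument performs precisely the multi-mode contraction of the Einstein product, and applying the block transpose twice recovers the original tensor.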
We first recall the definition\nof an identity tensor below.\n\n\n\n\n\n\\begin{definition} (Definition 3.13, \\cite{BraliNT13}) \\\\\nA tensor \n with entries \n $ (\\mc{I})_{i_1 \\cdots i_Nj_1\\cdots j_N} = \\prod_{k=1}^{N} \\delta_{i_k j_k}$,\n where\n\\begin{numcases}\n{\\delta_{i_kj_k}=}\n 1, & $i_k = j_k$,\\nonumber\n \\\\\n 0, & $i_k \\neq j_k $.\\nonumber\n\\end{numcases}\n is called a {\\it unit tensor or identity tensor}.\n\\end{definition}\nThe permutation tensor is defined as follows.\n\n\\begin{definition}\\label{perm} \nLet $\\pi$ be a permutation map on $(i_1,i_2,\\cdots, i_N,j_1,j_2,\\cdots , j_N)$ defined by \n$$\\pi:=\\begin{pmatrix}\ni_1&i_2&\\cdots &i_N&j_1&j_2&\\cdots &j_N \\\\\n\\pi(i_1)&\\pi(i_2)&\\cdots &\\pi(i_N)&\\pi(j_1)&\\pi(j_2)&\\cdots&\\pi(j_N) \\\\\n\\end{pmatrix}.\n$$\nA tensor $\\mc{P}$\n with entries \n $ (\\mc{P})_{i_1 \\cdots i_Nj_1\\cdots j_N} = \\prod_{k=1}^{N} \\epsilon_{i_k} \\epsilon_{j_k}$,\n where\n\\begin{numcases}\n{\\epsilon_{i_k}\\epsilon_{j_k} = }\n 1, & $\\pi(i_k) = j_k$,\\nonumber\n \\\\\n 0, & otherwise.\\nonumber\n\\end{numcases}\n is called a {\\it permutation tensor}.\n\\end{definition}\n\n\nNow we recall the block tensor as follows. \n\\begin{definition}{\\cite{sun}}\nFor a tensor $\\mc{A} = (a_{i_1... i_Nj_1...j_M})\n \\in \\mathbb{R}^{I_1\\times\\cdots\\times I_N \\times J_1 \\times\\cdots\\times\n J_M},\\\\\n \\mc{A}_{(i_1...i_N|:)}= (a_{i_1...i_N:...:})\\in \\mathbb{R}^{J_1\\times\\cdots\\times J_M}$ is a\n subblock of $\\mc{A}$. 
$Vec(\\mc{A})$ is obtained by lining up all the subtensors\n in a column, and $t$-th subblock of $Vec(\\mc{A})$ is $\\mc{A}_{(i_1...i_N|:)}$,\n where $$t=i_N + \\displaystyle\\sum_{K=1}^{N-1} \\left[ (i_K - 1) \\displaystyle\\prod_{L=K+1}^{N} I_L \\right].$$\n\\end{definition}\n\\vspace{-0.5cm}\nLet $\\mc{A} = (a_{i_1\\cdots i_N j_1 \\cdots j_M}) \\in\n\\mathbb{R}^{I_1\\times\\cdots\\times I_N \\times J_1 \\times\\cdots\\times\nJ_M}$ and $\\mc{B} = (b_{i_1\\cdots i_N k_1 \\cdots k_M}) \\in\n\\mathbb{R}^{I_1\\times\\cdots\\times I_N \\times K_1 \\times\\cdots\\times\nK_M}$. The {\\it row block tensor} consisting of $\\mc{A}$ and\n$\\mc{B}$ is denoted by\n$[\\mc{A} ~ \\mc{B}] \\in \\mathbb{R}^{\\alpha^N\\times\\beta_1\\times \\cdots \\times \\beta_M},$\nwhere $\\alpha^N = I_1\\times\\cdots\\times I_N, \\beta_i = J_i + K_i, i\n= 1, \\cdots, M$, and is defined by\n\\begin{equation*}\n[\\mc{A} ~ \\mc{B}]_{i_1 \\cdots i_N l_1 \\cdots l_M} =\n\\begin{cases}\na_{i_1 \\cdots i_N l_1 \\cdots l_M}, & i_1 \\cdots i_N \\in [I_1] \\times \\dots \\times [I_N], l_1 \\cdots l_M \\in [J_1] \\times \\cdots \\times [J_M];\n\\\\\nb_{i_1 \\cdots i_N l_1 \\cdots l_M}, & i_1 \\cdots i_N \\in [I_1] \\times \\dots \\times [I_N], l_1 \\cdots l_M \\in \\Gamma_1 \\times \\cdots \\times \\Gamma_M;\n\\\\\n0, & \\textnormal{otherwise}.\n\\end{cases}\n\\end{equation*}\nwhere $\\Gamma_i = \\{ J_i +1, \\cdots, J_i+K_i\\}, i=1,\\cdots, M.$\n\nLet $\\mc{C} = (c_{j_1 \\cdots j_M i_1 \\cdots i_N}) \\in\n\\mathbb{R}^{J_1\\times\\cdots\\times J_M \\times I_1 \\times\\cdots\\times\nI_N}$ and $\\mc{D} = (d_{k_1 \\cdots k_M i_1 \\cdots i_N}) \\in\n\\mathbb{R}^{K_1\\times\\cdots\\times K_M \\times I_1 \\times\\cdots\\times\nI_N}$. 
The {\\it column block tensor} consisting of $\\mc{C}$ and\n$\\mc{D}$ is\n\\begin{equation*}\\label{eq224}\n\\left[%\n\\begin{array}{c}\n \\mc{C} \\\\\n \\mc{D} \\\\\n\\end{array}%\n\\right]= [\\mc{C}^T ~ \\mc{D}^T]^T \\in \\mathbb{R}^{\\beta_1 \\times\n\\cdots \\times \\beta_M\\times\\alpha^N}.\n\\end{equation*}\nFor $\\mc{A}_1 \\in \\mathbb{R}^{I_1\\times\\cdots\\times I_N \\times J_1\n\\times\\cdots\\times J_M}, \\mc{B}_1 \\in\n\\mathbb{R}^{I_1\\times\\cdots\\times I_N \\times K_1 \\times\\cdots\\times\nK_M}, \\mc{A}_2 \\in \\mathbb{R}^{L_1\\times\\cdots\\times L_N \\times J_1\n\\times\\cdots\\times J_M}$ and $ \\mc{B}_2 \\in\n\\mathbb{R}^{L_1\\times\\cdots\\times L_N \\times K_1 \\times\\cdots\\times\nK_M}$, we denote $\\tau_1 = [\\mc{A}_1 ~ \\mc{B}_1]$ and $\\tau_2 =\n[\\mc{A}_2 ~ \\mc{B}_2]$ as the {\\it row block tensors}.\n The {\\it column block tensor} $ \\left[\n\\begin{array}{c}\n {\\tau}_1 \\\\\n {\\tau}_2 \\\\\n\\end{array}\n\\right]\n$ can be written as\n\\begin{equation*}\\label{eq225}\n\\left[\n\\begin{array}{c}\n \\mc{A}_1 ~~ \\mc{B}_1\\\\\n \\mc{A}_2 ~~ \\mc{B}_2\\\\\n\\end{array}\n\\right] \\in \\mathbb{R}^{\\rho_1\\times\\cdots\\times \\rho_N \\times \\beta_1 \\times\\cdots\\times \\beta_M},\n\\end{equation*}\nwhere $\\rho_i = I_i +L_i, i=1,\\cdots,N; \\beta_j = J_j + K_j$ and $j=1,\\cdots , M.$\n\n\n\n\n\n\n\n\n\n\\begin{definition} (Definition 2.1, \\cite{stan}) \\\\\nThe range space and null space of a tensor $\\mc{A}\\in \\mathbb{R}^{{I_1}\\times \\cdots\\times {I_M}\\times {J_1}\\times\\cdots \\times {J_N}}$ are defined as per the following: \n$$\n\\mathfrak{R}(\\mc{A}) = \\left\\{\\mc{A}{*_N}\\mc{X}:~\\mc{X}\\in\\mathbb{R}^{{J_1}\\times\\cdots\\times {J_N}}\\right\\}\\mbox{ and } \\mc{N}(\\mc{A})=\\left\\{\\mc{X}:~\\mc{A}{*_N}\\mc{X}=\\mc{O}\\in\\mathbb{R}^{{I_1}\\times \\cdots \\times {I_M}}\\right\\}.\n$$\n\\end{definition}\nThe relation of range space for tensors is discussed in \\cite{stan} as follows.\n\\begin{lemma}[Lemma 2.2. 
\cite{stan}]\label{range-stan}
Let $\mc{A}\in \mathbb{R}^{{I_1}\times\cdots\times {I_M}\times {J_1}\times\cdots\times {J_N}}$, $\mc{B}\in \mathbb{R}^{{I_1}\times\cdots\times {I_M}\times {K_1}\times\cdots\times {K_L}}.$ Then $\mathfrak{R}(\mc{B})\subseteq\mathfrak{R}(\mc{A})$ if and only if there exists $\mc{U}\in \mathbb{R}^{{J_1}\times\cdots\times {J_N}\times {K_1}\times\cdots\times {K_L}}$ such that
$\mc{B}=\mc{A}{*_N}\mc{U}.$
\end{lemma}

In the next subsection, we discuss the Boolean tensor and some useful definitions.
\subsection{The Boolean tensor}
The binary Boolean algebra $\mathfrak{B}$ consists of the set $\{0,1\}$ equipped with the operations of addition and multiplication defined as follows:
\begin{center}
\begin{tabular}{ c|c c }
$+$ & 0 & 1 \\
\hline
 0 & 0 & 1 \\
1 & 1 & 1 \\
\end{tabular}
\hspace{2cm}
\begin{tabular}{ c|c c }
$\cdot$ & 0 & 1 \\
\hline
 0 & 0 & 0 \\
1 & 0 & 1 \\
\end{tabular}
\end{center}
\begin{definition}
Let $\mc{A}=(a_{{i_1}...{i_M}{j_1}...{j_N}})
 \in \mathbb{R}^{I_1\times\cdots\times I_M \times J_1 \times\cdots\times J_N}.$ If $a_{{i_1}...{i_M}{j_1}...{j_N}}\in\{0,1\},$ then the tensor $\mc{A}$ is called a Boolean tensor.
 \end{definition}
 The addition and product of Boolean tensors are defined as in Eqs. (\ref{Eins1}) and (\ref{Eins}), except that the addition and product of two entries follow the rules of the Boolean algebra.
The order relation for tensors is defined as follows.
\begin{definition}
Let $\mc{A}=(a_{{i_1}...{i_M}{j_1}...{j_N}})
 \in \mathbb{R}^{I_1\times\cdots\times I_M \times J_1 \times\cdots\times J_N} \text{ and }~\mc{B}
 =(b_{{i_1}...{i_M}{j_1}...{j_N}})~~ \in \mathbb{R}^{I_1\times\cdots\times I_M
\times J_1 \times\cdots\times J_N}.
$ Then $\mc{A}\leq \mc{B}$ if and only if $a_{{i_1}...{i_M}{j_1}...{j_N}}\leq b_{{i_1}...{i_M}{j_1}...{j_N}}$ for all $i_s$ and $j_t,$ where $1\leq s\leq M$ and $1\leq t\leq N.$
\end{definition}

We generalize the component-wise complement of a Boolean matrix \cite{fitz} to Boolean tensors, as defined below.

\begin{definition}\label{CompDef}
Let $\mc{A}=(a_{{i_1}...{i_N}{j_1}...{j_M}})
 \in \mathbb{R}^{I_1\times\cdots\times I_N \times J_1 \times\cdots\times J_M}$ be a Boolean tensor. A tensor $\mc{B}=(b_{{i_1}...{i_N}{j_1}...{j_M}})
 \in \mathbb{R}^{I_1\times\cdots\times I_N \times J_1 \times\cdots\times J_M}$ is called the component-wise complement of $\mc{A}$ if
\begin{equation*}
b_{{i_1}...{i_N}{j_1}...{j_M}}=\left\{\begin{array}{cc}
 1, & \mbox{ when } a_{i_1\cdots i_Nj_1\cdots j_M}=0, \\
 0, & \mbox{ when } a_{i_1\cdots i_Nj_1\cdots j_M}=1.
 \end{array}\right.
 \end{equation*}
The tensor $\mc{B}$ and its entries are denoted by $\mc{A}^C$ and $\left(a_{i_1\cdots i_Nj_1\cdots j_M}^c\right),$ respectively.
\end{definition}

 \section{Main Results}
In this section, we prove a few results on tensors with emphasis on the binary case. We divide this section into four parts. In the first part, we discuss some identities on Boolean tensors. Then, after having introduced some necessary ingredients, we study the generalized inverses of Boolean tensors and some equivalences to other generalized inverses in the second part. The existence and uniqueness of weighted Moore-Penrose inverses are discussed in the third part.
The space decomposition and its connection to generalized inverses are presented in the final part.

\subsection{Some identities on Boolean tensors}

By the definition of a Boolean tensor $\mc{A}\in\mathbb{R}^{I_1 \times \cdots \times I_M\times I_1\times \cdots \times I_M},$ we always get $\mc{A}+\mc{A}=\mc{A}.$ The infinite series of the Boolean tensor, $\displaystyle\sum_{k=1}^\infty\mc{A}^k$, is convergent and reduces to a finite series, since there are only finitely many Boolean tensors of the same order. We write $\overline{\mc{A}}$ for this infinite series of the Boolean tensor, i.e., $$\overline{\mc{A}}=\displaystyle\sum_{k=1}^ \infty \mc{A}^k.$$

Note that $\mc{A}\leq\mc{A}+\mc{B}$ for any two Boolean tensors $\mc{A}$ and $\mc{B}$ of suitable order for addition; likewise, $\mc{A}=\mc{A}+\mc{A}\geq \mc{A}+\mc{B}$ for any two Boolean tensors with $\mc{A}\geq \mc{B}$. This is stated in the next result.

\begin{theorem}\label{thm3.11}
Let $\mc{A}\in\mathbb{R}^{I_1\times \cdots \times I_M\times J_1 \times\cdots \times J_N}$ and $\mc{B}\in\mathbb{R}^{I_1 \times \cdots \times I_M\times J_1\times\cdots \times J_N}.$ Then $\mc{A}\geq\mc{B}$ if and only if $\mc{A}+\mc{B}=\mc{A}$.
\end{theorem}

If we consider $\mc{A}\geq \mc{I}$ in the above theorem, then it is easy to verify that
$\mc{I}+\mc{A}+\cdots+\mc{A}^n=\mc{A}^n$ and hence we have the following result as a corollary.
\begin{corollary}\label{cor3.4}
Let $\mc{A}\in\mathbb{R}^{I_1 \times\cdots \times I_N\times I_1 \times\cdots \times I_N}$ and $\overline{\mc{A}}=\sum_{k=1}^\infty \mc{A}^k.$ If $\mc{A}\geq\mc{I},$ then there exists $n$ such that
\begin{enumerate}
 \item[(a)] $\overline{\mc{A}}=\mc{A}^{n};$
 \item[(b)] $\left(\overline{\mc{A}}\right)^2=\overline{\mc{A}};$
 \item[(c)] $\overline{\left(\overline{\mc{A}}\right)}=\overline{\mc{A}}.$
\end{enumerate}
 \end{corollary}

Using the above theorem, we now prove another result on Boolean tensors, as follows.

\begin{theorem}
Let $\mc{A}\in\mathbb{R}^{I_1\times \cdots \times I_N\times I_1 \times \cdots \times I_N}$ and $\mc{B}\in\mathbb{R}^{I_1 \times \cdots \times I_N\times I_1\times \cdots \times I_N},$ with $\mc{A}\geq \mc{I}$ and $\mc{B}\geq \mc{I}.$ Then $$\overline{(\mc{A}+\mc{B})}=\overline{(\overline{\mc{A}}{*_N}\overline{\mc{B}})}=\overline{(\overline{\mc{B}}{*_N}\overline{\mc{A}})}.$$
\end{theorem}

\begin{proof}
Since $\mc{A}\geq \mc{I}$ and $\mc{B}\geq \mc{I},$ we have $\overline{\mc{A}}\geq \mc{I}$ and $\overline{\mc{B}}\geq \mc{I}.$ Also, $\overline{\mc{A}}\geq \mc{A}$ and $\overline{\mc{B}}\geq \mc{B}.$ Combining these results, we get $\overline{\mc{A}}{*_N}\overline{\mc{B}}\geq \mc{A}$ and $\overline{\mc{A}}{*_N}\overline{\mc{B}}\geq \mc{B}.$ Thus $\overline{\mc{A}}{*_N}\overline{\mc{B}}\geq \mc{A}+\mc{B}$ and hence
\begin{equation}\label{eq3.61}
 \overline{\left(\overline{\mc{A}}{*_N}\overline{\mc{B}}\right)}\geq \overline{\mc{A}+\mc{B}}.
\end{equation}
Now $\overline{\mc{A}+\mc{B}}\geq \overline{\mc{A}}$ and $\overline{\mc{A}+\mc{B}}\geq \overline{\mc{B}}.$ By using Corollary \ref{cor3.4} $(b)$, we get $\overline{\mc{A}}{*_N}\overline{\mc{B}}\leq \left(\overline{\mc{A}+\mc{B}}\right)^2=\overline{\mc{A}+\mc{B}}.$ From Corollary \ref{cor3.4} $(c)$, we have
\begin{equation}\label{eq3.362}
 \overline{\left(\overline{\mc{A}}{*_N}\overline{\mc{B}}\right)}\leq \overline{\left(\overline{\mc{A}+\mc{B}}\right)}=\overline{\mc{A}+\mc{B}}.
\end{equation}
From Eqs. (\ref{eq3.61}) and (\ref{eq3.362}), the proof is complete.
\end{proof}

If $\mathfrak{R}(\mc{B}^T)=\mathfrak{R}(\mc{B}^T{*_N}\mc{A}^T),$ then there exists a tensor $\mc{U}$ such that $\mc{B}=\mc{U}{*_M}\mc{A}{*_N}\mc{B}$ and hence we obtain $\mc{B}{*_M}\mc{C}=\mc{U}{*_M}\mc{A}{*_N}\mc{B}{*_M}\mc{C}=\mc{U}{*_M}\mc{A}{*_N}\mc{B}{*_M}\mc{D}=\mc{B}{*_M}\mc{D}.$ This leads to the following result.

\begin{theorem}\label{ltcan}
Let $\mc{A}\in\mathbb{R}^{I_1\times \cdots \times I_M \times J_1\times \cdots \times J_N}$, $\mc{B}\in\mathbb{R}^{J_1\times \cdots \times J_N\times K_1\times \cdots \times K_M} $, \\ $\mc{C}\in\mathbb{R}^{K_1\times \cdots \times K_M\times J_1\times \cdots\times J_N}$ and $\mc{D}\in\mathbb{R}^{K_1\times \cdots \times K_M \times J_1\times \cdots \times J_N}$ be Boolean tensors with\\ $\mc{A}{*_N}\mc{B}{*_M}\mc{C}=\mc{A}{*_N}\mc{B}{*_M}\mc{D}.$ If $\mathfrak{R}(\mc{B}^T)=\mathfrak{R}(\mc{B}^T{*_N}\mc{A}^T),$ then $\mc{B}{*_M}\mc{C}=\mc{B}{*_M}\mc{D}.$
\end{theorem}

In a similar way, we can prove the following corollary.

\begin{corollary}\label{rtcan}
Let $\mc{A}\in\mathbb{R}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}$, $\mc{B}\in\mathbb{R}^{J_1\times \cdots \times J_N\times K_1 \times \cdots \times K_M} $, \\
$\mc{C}\in\mathbb{R}^{K_1\times \cdots \times K_M\times I_1\times \cdots\times I_M}$ and $\mc{D}\in\mathbb{R}^{K_1 \times \cdots \times K_M \times I_1\times \cdots\times I_M}$ be Boolean tensors with $\mc{C}{*_M}\mc{A}{*_N}\mc{B}=\mc{D}{*_M}\mc{A}{*_N}\mc{B}.$ If $\mathfrak{R}(\mc{A})=\mathfrak{R}(\mc{A}{*_N}\mc{B}),$ then $\mc{C}{*_M}\mc{A}=\mc{D}{*_M}\mc{A}.$
\end{corollary}

We now discuss an important result on the transpose of an arbitrary-order Boolean tensor, as follows.
\begin{lemma}\label{lemma1}
Let $\mc{A}\in\mathbb{R}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}$ be any Boolean tensor. Then $\mc{A}\leq\mc{A}{*_N}\mc{A}^T{*_M}\mc{A}.$
 \begin{proof}
 Let $\mc{B} = \mc{A}{*_N}\mc{A}^T{*_M}\mc{A}.$ We need to show that
 \begin{equation*}
 {a}_{i_1\cdots i_M j_1\cdots j_N}\leq {b}_{i_1\cdots i_M j_1\cdots j_N}.
 \end{equation*}
 This inequality is trivial if ${a}_{i_1\cdots i_M j_1\cdots j_N}= 0.$ Let us assume ${a}_{i_1\cdots i_M j_1\cdots j_N}=1.$ Now
\begin{equation*}
{b}_{i_1\cdots i_M j_1\cdots j_N} =\sum_{k_1\cdots k_N}\sum_{l_1\cdots l_M}a_{{i_1\cdots i_M}{k_1\cdots k_N}}a_{{l_1\cdots l_M}{k_1\cdots k_N}}a_{{l_1\cdots l_M}{j_1\cdots j_N}}.
\end{equation*}
Taking $k_s=j_s$ for $1\leq s\leq N$ and $l_t=i_t$ for $1\leq t\leq M,$ we obtain
\begin{equation*}
 {b}_{i_1\cdots i_M j_1\cdots j_N} \geq ({a}_{i_1\cdots i_M j_1\cdots j_N})^3={a}_{i_1\cdots i_M j_1\cdots j_N}=1.
\end{equation*}
Hence the proof is complete.
 \end{proof}
\end{lemma}


\begin{theorem}
Let $\mc{A}\in\mathbb{R}^{I_1\times \cdots \times I_N\times J_1 \times \cdots \times J_N}$ and $\mc{B}\in\mathbb{R}^{I_1 \times \cdots \times I_N\times J_1 \times \cdots \times J_N}.$ Then the equation $\mc{A}{*_N}\mc{X}=\mc{B}$ is solvable if and only if $\mc{X}=\mc{C}$ is a solution, where
$$ c_{i_1\cdots i_N j_1\cdots j_N}=\left\{\begin{array}{cc}
 1 &
\mbox{ if } a_{k_1\cdots k_N i_1\cdots i_N}=0 \mbox{ or } b_{k_1\cdots k_N j_1\cdots j_N}=1 \mbox{ for all } k_s,~1\leq s\leq N,\\
 0 & \mbox{ otherwise.}
\end{array}\right.
$$
\end{theorem}


\begin{proof}
Suppose $\mc{A}{*_N}\mc{X}=\mc{B}$ is solvable, and let $\mc{D}=\mc{A}{*_N}\mc{C}.$ To claim $\mc{D}=\mc{B},$ it is enough to show that $d_{i_1\cdots i_N j_1\cdots j_N}=1$ if and only if $b_{i_1\cdots i_N j_1\cdots j_N}=1.$ Let $d_{i_1\cdots i_N j_1\cdots j_N}=1.$ Then $a_{i_1\cdots i_N p_1\cdots p_N}=1$ and $c_{p_1\cdots p_N j_1\cdots j_N}=1$ for some $p_k,~1\leq k\leq N.$ The condition $c_{p_1\cdots p_N j_1\cdots j_N}=1$ yields $a_{k_1\cdots k_N p_1\cdots p_N}=0$ or $b_{k_1\cdots k_N j_1\cdots j_N}=1$ for every choice of $k_s,~1\leq s\leq N.$ Choosing $(k_1,\cdots,k_N)=(i_1,\cdots,i_N)$ and using $a_{i_1\cdots i_N p_1\cdots p_N}=1,$ we obtain $b_{i_1\cdots i_N j_1\cdots j_N}=1.$ Conversely, let $b_{i_1\cdots i_N j_1\cdots j_N}=1.$ Since $\mc{X}$ is a solution, $a_{i_1\cdots i_N r_1\cdots r_N}=1$ and $x_{r_1\cdots r_N j_1\cdots j_N}=1$ for some $r_k,~1\leq k\leq N.$ Suppose $c_{r_1\cdots r_N j_1\cdots j_N}=0.$ Then $a_{q_1\cdots q_N r_1\cdots r_N}=1$ and $b_{q_1\cdots q_N j_1\cdots j_N}=0$ for some $q_k,~1\leq k\leq N.$ Combining $a_{q_1\cdots q_N r_1\cdots r_N}=1$ and $x_{r_1\cdots r_N j_1\cdots j_N}=1,$ we get $b_{q_1\cdots q_N j_1\cdots j_N}=1,$ which is a contradiction. So $c_{r_1\cdots r_N j_1\cdots j_N}=1$ and hence $d_{i_1\cdots i_N j_1\cdots j_N}=1.$ Thus $\mc{C}$ is a solution whenever the equation is solvable; the converse part is trivial.
\end{proof}


In view of Definition \ref{CompDef}, the following result holds for Boolean tensors.

\begin{preposition}\label{equitc}
Let $\mc{A} \in\mathbb{R}^{I_1\times \cdots \times I_M\times J_1\times \cdots \times J_N}$ be a Boolean tensor. Then
\begin{enumerate}
 \item[(a)] $(\mc{A}^C)^C=\mc{A};$
 \item[(b)] $(\mc{A}^C)^T=(\mc{A}^T)^C = \mc{A}^{CT}.$
 \end{enumerate}
\end{preposition}

 \begin{remark}
In general, $ \mc{B}^C *_N \mc{A}^C \neq (\mc{A}*_N\mc{B})^C \neq \mc{A}^C *_N \mc{B}^C$ for two tensors $\mc{A},~\mc{B} \in\mathbb{R}^{I_1\times\cdots \times I_M\times I_1\times\cdots \times I_M}.$
 \end{remark}

\begin{example}
Consider two Boolean tensors
$~\mc{A}=(a_{ijkl}) \in \mathbb{R}^{2\times 3\times 2 \times 3}$ and $~\mc{B}=(b_{ijkl}) \in \mathbb{R}^{2\times 3\times 2 \times 3}$ such that
\begin{eqnarray*}
a_{ij11} =
 \begin{pmatrix}
 1 & 0 & 0 \\
 0 & 0 & 1
 \end{pmatrix},
a_{ij12} =a_{ij13} =a_{ij21}=a_{ij22}=a_{ij23}=
 \begin{pmatrix}
 0 & 0 & 0\\
 0 & 0 & 1
 \end{pmatrix}, \mbox{ and }
\end{eqnarray*}
\begin{eqnarray*}
b_{ij11} =b_{ij12}=b_{ij13}=b_{ij21}=b_{ij22}=b_{ij23}=
 \begin{pmatrix}
 1 & 0 & 0 \\
 0 & 0 & 0
 \end{pmatrix}.
\end{eqnarray*}
It is easy to verify that $ \mc{B}^C {*_2} \mc{A}^C \neq (\mc{A}{*_2}\mc{B})^C \neq \mc{A}^C {*_2} \mc{B}^C$, where
$~(\mc{A}*_2\mc{B})^C=\mc{X}=(x_{ijkl}) \in \mathbb{R}^{2\times 3\times 2 \times 3},$ $~\mc{A}^C*_2\mc{B}^C=\mc{Y} =(y_{ijkl}) \in \mathbb{R}^{2\times 3\times 2 \times 3}$ and
$~\mc{B}^C*_2\mc{A}^C=\mc{Z} =(z_{ijkl}) \in \mathbb{R}^{2\times 3\times 2 \times 3},$ with
\begin{eqnarray*}
x_{ij11} =x_{ij12} =x_{ij13} =x_{ij21} =x_{ij22} =x_{ij23} =
 \begin{pmatrix}
 0 & 1 & 1 \\
 1 & 1 & 0
 \end{pmatrix},
\end{eqnarray*}
\begin{eqnarray*}
y_{ij11} =y_{ij12} =y_{ij13} =y_{ij21} =y_{ij22} =y_{ij23} =
\begin{pmatrix}
 1 & 1 & 1 \\
 1 & 1 & 0
 \end{pmatrix}, \mbox{ and }
\end{eqnarray*}
\begin{eqnarray*}
z_{ij11} =z_{ij12} =z_{ij13} =z_{ij21} = z_{ij22} =z_{ij23} =
 \begin{pmatrix}
 0 & 1 & 1 \\
 1 & 1 & 1
 \end{pmatrix}.
\end{eqnarray*}
\end{example}


The next result characterizes when the product of a Boolean tensor and its complement vanishes; it is an important tool for the trace results below.

\begin{theorem}
 Let $\mc{A}\in\mathbb{R}^{I_1 \times \cdots \times I_N\times J_1 \times \cdots \times J_N}.$ Then $\mc{A}{*_N}\mc{A}^C=\mc{O}$ if and only if either $\mc{A}=\mc{O}$ or $\mc{A}=\mc{O}^C.$
\end{theorem}
 \begin{proof}
 Since the converse part is trivial, it is enough to show the forward implication. Let $\mc{A}{*_N}\mc{A}^C=\mc{O}.$ Thus
$\displaystyle\sum_{k_1\cdots k_N}a_{i_1\cdots i_Nk_1\cdots k_N}a_{k_1\cdots k_Nj_1\cdots j_N}^c=0$. This implies $ a_{i_1\cdots i_Nk_1\cdots k_N}a_{k_1\cdots k_Nj_1\cdots j_N}^c=0 $ for all indices. Hence, for each fixed $(k_1,\cdots,k_N),$ either $a_{i_1\cdots i_Nk_1\cdots k_N}=0 $ for all $i_s$ or $a_{k_1\cdots k_Nj_1\cdots j_N}^c=0$ for all $j_s,~1\leq s\leq N.$ Suppose $\mc{A}\neq\mc{O},$ say $a_{i_1\cdots i_Nk_1\cdots k_N}=1$ for some indices. Then $a_{k_1\cdots k_Nj_1\cdots j_N}=1$ for all $j_s,$ and in particular $a_{k_1\cdots k_Nk_1'\cdots k_N'}=1$ for every $(k_1',\cdots,k_N').$ Repeating the argument with each $(k_1',\cdots,k_N')$ shows that every entry of $\mc{A}$ equals $1,$ that is, $\mc{A}=\mc{O}^C.$ This completes the proof.
 \end{proof}

Further, when $\mc{A}\in\mathbb{R}^{I_1 \times \cdots \times I_N\times J_1 \times \cdots \times J_N}$ is a symmetric Boolean tensor, one can write
 \begin{eqnarray*}
 tr(\mc{A}{*_N}\mc{A}^C)&=&\sum_{i_1\cdots i_N}\sum_{k_1\cdots k_N}a_{i_1\cdots i_Nk_1\cdots k_N}a_{k_1\cdots k_Ni_1\cdots i_N}^c\\
 &=&\sum_{i_1\cdots i_N}\sum_{k_1\cdots k_N}a_{k_1\cdots k_Ni_1\cdots i_N}^ca_{i_1\cdots i_Nk_1\cdots k_N}\\
 &=&\sum_{k_1\cdots k_N}\sum_{i_1\cdots i_N}a_{k_1\cdots k_Ni_1\cdots i_N}^ca_{i_1\cdots i_Nk_1\cdots k_N}\\
&=&tr(\mc{A}^C{*_N}\mc{A}).
 \end{eqnarray*}

 Hence, the tensors in the trace of the product of a symmetric tensor and its complement can be switched without changing the result.
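The trace computation above can also be checked numerically. The following is an illustrative sketch of ours (the helpers \texttt{bool\_einstein} and \texttt{trace4} are our own, not from the cited works), where the Einstein product is carried out in Boolean arithmetic: entry products become AND, and the contraction sum becomes OR.

```python
import numpy as np

def bool_einstein(A, B, N):
    # Boolean Einstein product A *_N B: AND for the entry products,
    # logical OR over the N contracted modes.
    return np.tensordot(A.astype(int), B.astype(int), axes=N) > 0

def trace4(T):
    # Trace of a 4th-order tensor: sum of the diagonal entries t_{i j i j}.
    return int(np.einsum('ijij->', T.astype(int)))

# A symmetric Boolean tensor of order 4: a_{ijkl} = a_{klij}.
i, j, k, l = np.indices((2, 2, 2, 2))
A = (i + j + k + l) % 2 == 0
Ac = ~A  # component-wise complement A^C

t1 = trace4(bool_einstein(A, Ac, 2))
t2 = trace4(bool_einstein(Ac, A, 2))
print(t1, t2)  # prints: 0 0 -- the two traces agree (and vanish here)
```

For a symmetric tensor each diagonal term pairs an entry with its own complement, so both traces vanish; the two traces can also agree without vanishing, as the non-symmetric example further below illustrates.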
This is stated in the next result.

 \begin{theorem}\label{trresult}
 Let $\mc{A}\in\mathbb{R}^{I_1 \times \cdots \times I_N\times J_1 \times \cdots \times J_N}.$ If $\mc{A}$ is symmetric, then
 $$
 tr(\mc{A}{*_N}\mc{A}^C)=tr(\mc{A}^C{*_N}\mc{A}).
 $$
 \end{theorem}

 \begin{remark}\label{rmk3.10}
 In addition to the result of Theorem \ref{trresult}, one can write $tr(\mc{A}{*_N}\mc{A}^C)=tr(\mc{A}^C{*_N}\mc{A}) = 0$ when $\mc{A}$ is symmetric.
Further, the symmetry condition in Theorem \ref{trresult} is only sufficient but not necessary.
 \end{remark}
 Remark \ref{rmk3.10} can be verified by the following example.

 \begin{example}
 Consider the Boolean tensor
$~\mc{A}=(a_{ijkl}) \in \mathbb{R}^{2\times 2\times 2 \times 2}$ such that
\begin{eqnarray*}
a_{ij11} =a_{ij12} =a_{ij21} =a_{ij22} =
 \begin{pmatrix}
 1 & 0 \\
 1 & 0
 \end{pmatrix}.
\end{eqnarray*}
It is clear that $\mc{A}$ is not symmetric, but $tr(\mc{A}*_2\mc{A}^C) = tr(\mc{A}^C*_2\mc{A}) = 2$, where $~\mc{A}*_2\mc{A}^C=(x_{ijkl}) \in \mathbb{R}^{2\times 2\times 2 \times 2}$ and $~\mc{A}^C*_2\mc{A}=(y_{ijkl}) \in \mathbb{R}^{2\times 2\times 2 \times 2}$ with entries
\begin{eqnarray*}
x_{ij11} =x_{ij12} =x_{ij21} =x_{ij22} =
 \begin{pmatrix}
 1 & 0 \\
 1 & 0
 \end{pmatrix},
\end{eqnarray*}
\begin{eqnarray*}
y_{ij11} =y_{ij12} =y_{ij21} =y_{ij22} =
 \begin{pmatrix}
 0 & 1 \\
0 & 1\n \\end{pmatrix}.\n\\end{eqnarray*}\n \\end{example}\n \n\n \n \n Using the complement of a tensor, we now prove the following result.\n \n \\begin{lemma}\\label{complem}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N},$ $\\mc{B}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times K_1 \\times \\cdots \\times K_L}$ and $\\mc{C}\\in\\mathbb{R}^{K_1 \\times \\cdots \\times K_L\\times I_1 \\times \\cdots \\times I_M}$ be Boolean tensors. Then \n $$\\mc{A}{*_N}\\mc{B}{*_L}\\mc{C}\\leq \\mc{I}^C~~if ~and ~only ~if ~~\\mc{A}^C\\geq (\\mc{B}{*_L}\\mc{C})^T.$$\n \\end{lemma}\n \\begin{proof}\n $\\mc{A}{*_N}\\mc{B}{*_L}\\mc{C}\\leq \\mc{I}^C$ if and only if $ \\sum_{j_1\\cdots j_N}\\sum_{k_1\\cdots k_L}a_{{i_1\\cdots i_M}{j_1\\cdots j_N}}b_{{j_1\\cdots j_N}{k_1\\cdots k_L}}c_{{k_1\\cdots k_L}{i_1\\cdots i_M}}=0$\nfor all $i_r,$ $1\\leq r\\leq M.$ This is equivalent to $a_{{i_1\\cdots i_M}{j_1\\cdots j_N}}b_{{j_1\\cdots j_N}{k_1\\cdots k_L}}c_{{k_1\\cdots k_L}{i_1\\cdots i_M}}=0$ \nfor all $i_r,$ $j_s$ and $k_t,$ where $1\\leq r\\leq M,$ $1\\leq s\\leq N,$ $1\\leq t\\leq L.$ This in turn is true if and only if\n\\begin{eqnarray*}\n\\left(a_{{i_1\\cdots i_M}{j_1\\cdots j_N}}^c\\right)&\\geq& b_{{j_1\\cdots j_N}{k_1\\cdots k_L}}c_{{k_1\\cdots k_L}{i_1\\cdots i_M}}=\\left(c_{{i_1\\cdots i_M}{k_1\\cdots k_L}}^t\\right) \\left(b_{{k_1\\cdots k_L}{j_1\\cdots j_N}}^t\\right)~\\mbox{ for all $k_t$.}\n\\end{eqnarray*}\nSince the Boolean sum is the supremum, this holds if and only if\n\\begin{eqnarray*}\n\\left(a_{{i_1\\cdots i_M}{j_1\\cdots j_N}}^c\\right)&\\geq& \\sum_{k_1\\cdots k_L}\\left(c_{{i_1\\cdots i_M}{k_1\\cdots k_L}}^t\\right) \\left(b_{{k_1\\cdots k_L}{j_1\\cdots j_N}}^t\\right)\\\\\n &=& \\left(\\mc{C}^T{*_L}\\mc{B}^T\\right)_{{i_1\\cdots i_M}{j_1\\cdots j_N}}=\\left(\\left(\\mc{B}{*_L}\\mc{C}\\right)^T\\right)_{{i_1\\cdots i_M}{j_1\\cdots j_N}}.\n \\end{eqnarray*}\n Thus the proof is complete.\n \\end{proof}\n Now we discuss an important result based on the transpose and component-wise complement of an arbitrary-order Boolean 
tensor, as follows.\n\\begin{theorem}\\label{lucethm}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}.$ Then $\\mc{X}{*_M}\\mc{A}\\leq \\mc{B}$ if and only if $\\mc{X}\\leq \\left(\\mc{B}^C{*_N}\\mc{A}^T\\right)^C$, and $\\mc{A}{*_N}\\mc{X}\\leq \\mc{B}$ if and only if $\\mc{X}\\leq \\left(\\mc{A}^T {*_M} \\mc{B}^C\\right)^C$.\n\\end{theorem}\n\\begin{proof}\nLet $\\mc{X}{*_M}\\mc{A}\\leq \\mc{B}.$ This yields \n$\\sum_{k_1\\cdots k_M}x_{i_1\\cdots i_M k_1\\cdots k_M}a_{k_1\\cdots k_M j_1\\cdots j_N }\\leq b_{i_1\\cdots i_M j_1\\cdots j_N }$ for all $i_r,~(1\\leq r\\leq M)$ and $j_s,~(1\\leq s\\leq N).$ This is equivalent to $x_{i_1\\cdots i_M k_1\\cdots k_M}a_{k_1\\cdots k_M j_1\\cdots j_N }\\leq b_{i_1\\cdots i_M j_1\\cdots j_N }$ for all $i_r$ and $j_s$ and $k_t~(1\\leq t\\leq M).$ This in turn is true {\\it if and only if} $x_{i_1\\cdots i_M k_1\\cdots k_M}a_{k_1\\cdots k_M j_1\\cdots j_N } b_{i_1\\cdots i_M j_1\\cdots j_N }^c=0$ for all $j_s$ and $k_t,$ which is equivalent to\\\\ $x_{i_1\\cdots i_M k_1\\cdots k_M}a_{k_1\\cdots k_M j_1\\cdots j_N } \\{b_{j_1\\cdots j_N i_1\\cdots i_M }^t\\}^c=0$ for all $j_s$ and $k_t.$ Summing over all $j_s$ and $k_t,$ we get $\\sum_{k_1\\cdots k_M}\\sum_{j_1\\cdots j_N} x_{i_1\\cdots i_M k_1\\cdots k_M}a_{k_1\\cdots k_M j_1\\cdots j_N } \\{b_{j_1\\cdots j_N i_1\\cdots i_M }^t\\}^c=0.$ This is true if and only if $\\mc{X}{*_M}\\mc{A}{*_N}\\left(\\mc{B}^T\\right)^C\\leq \\mc{I}^C.$ By Proposition \\ref{equitc} $(a)$, this is equivalent to $\\mc{X}{*_M}\\mc{A}{*_N}\\left(\\mc{B}^C\\right)^T\\leq \\mc{I}^C.$ By Lemma \\ref{complem}, this in turn is true if and only if $\\mc{X}^C\\geq \\left(\\mc{A}{*_N}(\\mc{B}^C)^T\\right)^T,$ that is, {\\it if and only if} $ \\mc{X}\\leq \\left(\\mc{B}^C{*_N}\\mc{A}^T\\right)^C.$\n\nThis completes the first part of the theorem. 
In a similar way, we can show the second part of the theorem.\n\\end{proof}\n\\begin{corollary}\nLet $\\mc{E}=\\mc{O}^C,$ where $\\mc{O}$ is the zero tensor. Then the following statements are equivalent:\n\\begin{enumerate}\n \\item[(a)] $\\mc{X}{*_M}\\mc{A}=\\mc{O};$\n \\item[(b)] $\\mc{X}\\leq \\left(\\left(\\mc{A}{*_N}\\mc{E}\\right)^T\\right)^C;$\n \\item[(c)] $\\mc{E}{*_N}\\mc{X}\\leq \\left(\\left(\\mc{A}{*_N}\\mc{E}\\right)^T\\right)^C.$\n\\end{enumerate}\n\\end{corollary}\nThe same result is also true for $\\mc{A}{*_N}\\mc{X}=\\mc{O}.$ The following corollary also follows easily from Theorem \\ref{lucethm}.\n\n\\begin{corollary}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ and $\\mc{X}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times I_1 \\times \\cdots \\times I_M}.$ Then $\\mc{X}{*_M}\\mc{A}=\\mc{B}$ has a solution if and only if $\\mc{B}\\leq \\left(\\mc{B}^C{*_N}\\mc{A}^T\\right)^C{*_M}\\mc{A}.$\n\\end{corollary}\n\n\n\n \n \n \n \n\\subsection{Generalized inverses of Boolean tensors}\nTo generalize the generalized inverses of Boolean matrices \\cite{rao}, we introduce the definitions of $\\{i\\}$-inverses $(i = 1, 2, 3, 4)$ and the Moore-Penrose inverse of Boolean tensors via the Einstein product, as follows. 
\n\n \n \\begin{definition}\\label{defgi}\n For any Boolean tensor $\\mc{A} \\in \\mathbb{R}^{I_1\\times\\cdots\\times I_M \\times J_1 \\times\\cdots\\times J_N},$ consider the following equations in $\\mc{X} \\in\n\\mathbb{R}^{J_1\\times\\cdots\\times J_N \\times I_1 \\times\\cdots\\times\nI_M}:$\n\\vspace{-.4cm}\n\\begin{eqnarray*}\n&&(1)~\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A} = \\mc{A},\\\\\n&&(2)~\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X} = \\mc{X},\\\\\n&&(3)~(\\mc{A}{*_N}\\mc{X})^T = \\mc{A}{*_N}\\mc{X},\\\\\n&&(4)~(\\mc{X}{*_M}\\mc{A})^T = \\mc{X}{*_M}\\mc{A}.\n\\end{eqnarray*}\n\\vspace{-.34cm}\nThen $\\mc{X}$ is called\n\\begin{enumerate}\n\\item[(a)] \na generalized inverse of $\\mc{A}$ if it satisfies $(1)$, and it is denoted by $\\mc{A}^{(1)}.$\n\\item[(b)] a reflexive generalized inverse of $\\mc{A}$ if it satisfies $(1)$ and $(2)$, which is denoted by $\\mc{A}^{(1,2)}.$\n\\item[(c)] a $\\{1,3\\}$ inverse of $\\mc{A}$ if it satisfies $(1)$ and $(3)$, which is denoted by $\\mc{A}^{(1,3)}.$\n\\item[(d)] a $\\{1,4\\}$ inverse of $\\mc{A}$ if it satisfies $(1)$ and $(4)$, which is denoted by $\\mc{A}^{(1,4)}.$\n\\item[(e)] the Moore-Penrose inverse of $\\mc{A}$ if it satisfies all four conditions $[(1)-(4)]$, which is denoted by $\\mc{A}^{\\dagger}.$\n\\end{enumerate}\n\\end{definition}\n\nThe following remark and corollary follow from Definition \\ref{defgi}.\n\n\\begin{remark}\\label{rm11}\n If $\\mc{X}$ is a generalized inverse of a Boolean tensor $\\mc{A} \\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N},$ then $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X}$ is a reflexive generalized inverse of $\\mc{A}.$\n\\end{remark}\n\n\\begin{corollary}\\label{corm1}\nIf $\\mc{X}$ is a generalized inverse of a Boolean tensor $\\mc{A} \\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N},$ then \n\\begin{enumerate}\n \\item[(a)] $\\mc{X}^T$ is a generalized inverse of $\\mc{A}^T;$\n \\item[(b)] 
$(\\mc{X}_1+\\mc{X}_2)$ is a generalized inverse of the Boolean tensor $\\mc{A}$ whenever $\\mc{X}_1$ and $\\mc{X}_2$ are generalized inverses of $\\mc{A}.$\n\\end{enumerate}\n\\end{corollary}\n\n\nThus the existence of a generalized inverse of a Boolean tensor guarantees the existence of a reflexive generalized inverse. In addition, Remark \\ref{rm11} and Corollary \\ref{corm1}(b) ensure that the existence of one generalized inverse implies the existence of finitely many generalized inverses. In view of this fact, we define the maximum generalized inverse of a Boolean tensor, as follows:\n\n\\begin{definition}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}.$ A tensor $\\mc{X}$ is called the maximum generalized inverse of $\\mc{A}$ if $\\mc{G}\\leq \\mc{X}$ for every generalized inverse $\\mc{G}$ of $\\mc{A}.$\n\\end{definition}\n\nNote that the generalized inverse of a Boolean tensor need not be unique, which is explained in the next example.\n\n\\begin{example}\\label{example18}\nConsider a Boolean tensor\n$~\\mc{A}=(a_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ with entries\n\\begin{eqnarray*}\na_{ij11} =a_{ij12} =a_{ij13} =a_{ij21} =a_{ij22} =a_{ij23} =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 1 & 0 & 0\n \\end{pmatrix}.\n\\end{eqnarray*}\nThen it can be easily verified that both tensors\n$~\\mc{X}=(x_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ and $~\\mc{Y}=(y_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ with entries\n\\begin{eqnarray*}\nx_{ij11} =\n \\begin{pmatrix}\n 0 & 1 & 1 \\\\\n 1 & 1 & 1\n \\end{pmatrix},\nx_{ij12} =x_{ij13} =x_{ij21} = x_{ij22} =x_{ij23} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\mbox{ and }\n \\end{eqnarray*}\n\\begin{eqnarray*}\ny_{ij11} =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & 0 & 1\n \\end{pmatrix},\ny_{ij12} =y_{ij13} =y_{ij21} =y_{ij22} =y_{ij23} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n 
\\end{pmatrix},\n\\end{eqnarray*}\nsatisfy the required conditions of Definition \\ref{defgi}.\n\\end{example}\n \n For a Boolean tensor $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_N \\times J_1 \\times \\cdots \\times J_N},$ the number of generalized inverses is finite, and the maximum possible number of generalized inverses is $2^{J_1 \\cdots J_N I_1 \\cdots I_N}.$\n The next result assures uniqueness and is true only for invertible tensors.\n \\begin{lemma}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_N\\times I_1 \\times \\cdots \\times I_N}$ be any Boolean tensor. If $\\mc{A}$ is invertible, then $\\mc{A}^{-1}$ is the only generalized inverse of $\\mc{A}.$\n \\end{lemma}\n Next, we discuss an equivalence between consistency of a system and the generalized inverse. \n \n \\begin{theorem}\\label{eqgen}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ and $\\mc{X}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times I_1 \\times \\cdots \\times I_M}.$ Then the following are equivalent:\n\\begin{enumerate}\n \\item[(a)] $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}.$\n \\item[(b)] $\\mc{X}{*_M}\\mc{Y}$ is a solution of the tensor equation $\\mc{A}{*_N}\\mc{Z}=\\mc{Y}$ whenever $\\mc{Y}\\in\\mathfrak{R}(\\mc{A}).$\n \\item[(c)] $\\mc{A}{*_N}\\mc{X}$ is idempotent and $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{X}).$\n \\item[(d)] $\\mc{X}{*_M}\\mc{A}$ is idempotent and $\\mathfrak{R}(\\mc{A^T})=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{X}^T).$\n\\end{enumerate}\n\\end{theorem}\n\n\n\n\n\n\\begin{proof}\nFirst we will claim $(a)$ {\\it if and only if} $(b).$ Let us assume $(a)$ holds and $\\mc{Y}\\in\\mathfrak{R}(\\mc{A}).$ Then there exists a Boolean tensor $\\mc{Z}\\in \\mathbb{R}^{J_1\\times J_2\\times\\cdots\\times J_N}$ such that $\\mc{A}{*_N}\\mc{Z}=\\mc{Y}.$ Now 
\n$$\\mc{A}{*_N}\\mc{X}{*_M}\\mc{Y}=\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}{*_N}\\mc{Z}=\\mc{A}{*_N}\\mc{Z}=\\mc{Y}.$$ Therefore, $\\mc{X}{*_M}\\mc{Y}$ is a solution of $\\mc{A}{*_N}\\mc{Z}=\\mc{Y}.$ Conversely, assume $(b)$ is true, that is, $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{Y}=\\mc{Y}$ for all $\\mc{Y}\\in\\mathfrak{R}(\\mc{A}).$ Since $\\mc{Y}\\in\\mathfrak{R}(\\mc{A}),$ there exists $\\mc{U}\\in \\mathbb{R}^{J_1\\times\\cdots \\times J_N}$ such that $\\mc{A}{*_N}\\mc{U}=\\mc{Y}.$ Thus $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}{*_N}\\mc{U}=\\mc{A}{*_N}\\mc{U}$ for all $\\mc{U}\\in \\mathbb{R}^{J_1\\times\\cdots\\times J_N}.$ Therefore $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}.$ Next we show the equivalence between $(a)$ and $(c).$ Clearly $(a)$ implies that $\\mc{A}{*_N}\\mc{X}$ is idempotent. Since $\\mc{A}=\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}$ and $\\mc{A}{*_N}\\mc{X}=\\mc{A}{*_N} {\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X}},$ by Lemma \\ref{range-stan} $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{X}).$ Using the same idea, we can easily show the equivalence between $(a)$ and $(d).$ This completes the proof. \n\\end{proof}\n\n\n\n\nSince $\\mc{A}^T{*_M}\\mc{A}{*_N}\\mc{X}_1{*_N}\\mc{A}^T{*_M}\\mc{A}=\\mc{A}^T{*_M}\\mc{A}$, by Theorem \\ref{ltcan} we have $\\mc{A}{*_N}\\mc{X}_1{*_N}\\mc{A}^T{*_M}\\mc{A}=\\mc{A},$ which leads to the following corollary. 
\n\n\\begin{corollary}\nLet $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{A}).$ If $\\mc{X}_1$ and $\\mc{X}_2$ are generalized inverses of $\\mc{A}^T{*_M}\\mc{A}$ and $\\mc{A}{*_N}\\mc{A}^T$ respectively, then $\\mc{X}_1{*_N}\\mc{A}^T$ and $\\mc{A}^T{*_M}\\mc{X}_2$ are generalized inverses of $\\mc{A}.$\n\\end{corollary}\n\n\n\nFurther, suppose the range conditions $\\mathfrak{R}(\\mc{A}^T)\\subseteq \\mathfrak{R}(\\mc{B}^T)$ and $\\mathfrak{R}(\\mc{C})\\subseteq \\mathfrak{R}(\\mc{B})$ hold. Then $\\mc{A}=\\mc{V}{*_M}\\mc{B}$ and $\\mc{C}=\\mc{B}{*_N}\\mc{U}$ for some tensors $\\mc{U}$ and $\\mc{V}.$ Now \n$\\mc{A}{*_N}\\mc{X}{*_M}\\mc{C}= \\mc{V}{*_M}\\mc{B}{*_N}\\mc{X}{*_M}\\mc{B}{*_N}\\mc{U}=\\mc{V}{*_M}\\mc{B}{*_N}\\mc{U},$ which does not rely on $\\mc{X};$ hence the product is invariant under the choice of $\\mc{X}.$ We record this observation in the following corollary.\n\n\n\n\n\n\n\n\n\\begin{corollary}\nLet $\\mc{A},$ $\\mc{B}$ and $\\mc{C}$ be suitable tensors such that $\\mathfrak{R}(\\mc{A}^T)\\subseteq \\mathfrak{R}(\\mc{B}^T)$ and $\\mathfrak{R}(\\mc{C})\\subseteq \\mathfrak{R}(\\mc{B}).$ If the generalized inverse of $\\mc{B}$ exists, then $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{C}$ is invariant to $\\mc{X},$ where $\\mc{X}$ is a generalized inverse of $\\mc{B}.$\n\\end{corollary}\n\n\n\n\nTo state the next result, we define regular and singular tensors: a tensor $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots \\times I_M\\times J_1\\times \\cdots \\times J_N}$ is called \\textit{regular} if the tensor equation $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}$ has a solution, and \\textit{singular} otherwise.\n\n\\begin{theorem}\\label{thm3.43}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N},$ $\\mc{S}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times I_1 \\times \\cdots \\times I_M},$ and $\\mc{T}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times J_1 \\times \\cdots \\times J_N}.$ If $\\mc{S}$ and $\\mc{T}$ 
are invertible, then the following are equivalent:\n\\begin{enumerate}\n \\item[(a)] $\\mc{A}$ is regular.\n \\item[(b)] $\\mc{S}{*_M}\\mc{A}{*_N}\\mc{T}$ is regular.\n \\item[(c)] $\\mc{A}^T$ is regular.\n \\item[(d)] $\\mc{T}{*_N}\\mc{A}^T{*_M}\\mc{S}$ is regular.\n\\end{enumerate}\n\\end{theorem}\n\n\n\nBased on block tensors \\cite{sun} and their properties, we have the following lemma.\n\\begin{lemma}\\label{block}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}.$ Then $\\mc{A}$ is regular if and only if $\\begin{bmatrix}\n\\mc{A} &\\mc{O}\\\\\n\\mc{O} & \\mc{B}\n\\end{bmatrix}$ is regular for all regular tensors $\\mc{B}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}.$\n\\end{lemma}\n\n\\begin{proof}\nLet $\\mc{A}$ and $\\mc{B}$ be regular tensors. Then there exist tensors $\\mc{X}$ and $\\mc{Y}$ such that $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}$ and $\\mc{B}{*_N}\\mc{Y}{*_M}\\mc{B}=\\mc{B}.$ Let $\\mc{Z}=\\begin{bmatrix}\n\\mc{X} & \\mc{O}\\\\\n\\mc{O} & \\mc{Y}\\\\\n\\end{bmatrix}.$ Now \n\\begin{eqnarray*}\n\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}{*_N}\\mc{Z}{*_M}\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}&=&\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}{*_N}\\begin{bmatrix}\n\\mc{X} & \\mc{O}\\\\\n\\mc{O} & \\mc{Y}\\\\\n\\end{bmatrix}{*_M}\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}\\\\\n&=&\n\\begin{bmatrix}\n\\mc{A}{*_N}\\mc{X} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}{*_N}\\mc{Y}\\\\\n\\end{bmatrix}{*_M}\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}\\\\\n&=&\\begin{bmatrix}\n\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}{*_N}\\mc{Y}{*_M}\\mc{B}\\\\\n\\end{bmatrix}=\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}.\n\\end{eqnarray*}\nThus 
$\\begin{bmatrix}\n\\mc{A} & \\mc{O}\\\\\n\\mc{O} & \\mc{B}\\\\\n\\end{bmatrix}$ is regular. The converse part can be proved in a similar way. \n\\end{proof}\n\n We now present another characterization of the generalized inverse of a Boolean tensor, as follows.\n\\begin{theorem}\\label{lemcomp}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}.$ Then \n$$\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}\\leq \\mc{A}~~ if ~and~ only~ if ~~\\mc{X}\\leq \\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}.$$\n\\end{theorem}\n\\begin{proof}\nApplying Theorem \\ref{lucethm} repetitively, we get \n$\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}\\leq \\mc{A}$ if and only if $\\mc{X}{*_M}\\mc{A}\\leq \\left(\\mc{A}^T{*_M}\\mc{A}^C\\right)^C$, which in turn holds if and only if \n\\begin{equation*}\n\\mc{X}\\leq \\left(\\left(\\left(\\mc{A}^T{*_M}\\mc{A}^C\\right)^C\\right)^C{*_N}\\mc{A}^T\\right)^C = \\left(\\mc{A}^T{*_M}\\mc{A}^C{*_N}\\mc{A}^T\\right)^C =\\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}.\n\\end{equation*}\n\\end{proof}\nUsing Theorem \\ref{lemcomp} and the properties of the transpose and component-wise complement of a Boolean tensor, we obtain an important result for finding the maximum generalized inverse of a Boolean tensor. \n\\begin{corollary}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ be regular. 
Then the following hold:\n\\begin{enumerate}\n\\item[(a)] $\\mc{A}=\\mc{A}{*_N}\\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}{*_M}\\mc{A};$\n \\item[(b)] $\\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}$ is the maximum generalized inverse of $\\mc{A};$ \n \\item[(c)] $\\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}{*_M}\\mc{A}{*_N}\\left(\\mc{A}{*_N}\\mc{A}^{CT}{*_M}\\mc{A}\\right)^{CT}$ is the maximum reflexive generalized inverse of $\\mc{A}.$\n\\end{enumerate} \n\\end{corollary}\n\n\n\n\nNext, we discuss some equivalence results between generalized and other inverses. \n\n\n\n\n\n\\begin{theorem}\\label{eqv14}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ be any Boolean tensor. Then the following statements are equivalent:\n\\begin{enumerate}\n\\item[(a)] $\\mc{A}^{(1,4)}$ exists.\n\\item[(b)] $\\mc{A}^{(1)}$ exists and $\\mathfrak{R}(\\mc{A}) = \\mathfrak{R}(\\mc{A}{*_N}\\mc{A}^T).$ \n\\item[(c)] $(\\mc{A}{*_N}\\mc{A}^T)^{(1)}$ exists and $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{A}^T = \\mc{A}^T$ for some tensor $\\mc{X}.$\n\\end{enumerate}\n\n\n\\begin{proof}\nAssume that $(a)$ holds and let $\\mc{A}^{(1,4)}=\\mc{X}.$ Existence of $\\mc{A}^{(1)}$ is trivial. Moreover, $\\mc{A}=\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}{*_N}(\\mc{X}{*_M}\\mc{A})^T=\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{X}^T,$ and hence $\\mathfrak{R}(\\mc{A}) = \\mathfrak{R}(\\mc{A}{*_N}\\mc{A}^T).$ Now we claim $(b)\\Rightarrow (c).$ Let $\\mc{A}^{(1)}$ exist and $\\mathfrak{R}(\\mc{A}) = \\mathfrak{R}(\\mc{A}{*_N}\\mc{A}^T).$ Then there exists a Boolean tensor $\\mc{U}\\in\\mathbb{R}^{I_1\\times\\cdots \\times I_M\\times J_1\\times\\cdots\\times J_N}$ such that $\\mc{A} = \\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{U}.$ This implies $\\mc{A}{*_N}\\mc{A}^{T}=\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{U}{*_N}\\mc{U}^T{*_M}\\mc{A}{*_N}\\mc{A}^T.$ So a generalized inverse of $\\mc{A}{*_N}\\mc{A}^T$ exists. 
If we take $\\mc{X}=\\mc{A}^T{*_M}(\\mc{A}{*_N}\\mc{A}^T)^{(1)},$ then \n\\begin{eqnarray*}\n\\mc{X}{*_M}\\mc{A}{*_N}\\mc{A}^T &=& {\\mc{A}^T}{*_M}(\\mc{A}{*_N}\\mc{A}^T )^{(1)}{*_M}\\mc{A}{*_N} \\mc{A}^T=\\mc{U}^T{*_M}\\mc{A}{*_N}\\mc{A}^T{*_M}(\\mc{A}{*_N}\\mc{A}^T )^{(1)}{*_M}\\mc{A}{*_N}\\mc{A}^T\\\\\n&=&\\mc{U}^T{*_M}\\mc{A}{*_N}\\mc{A}^T=\\mc{A}^T.\n\\end{eqnarray*}\nFinally, we claim $(c)\\Rightarrow (a).$ Let $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{A}^T = \\mc{A}^T$. Then\n\\begin{eqnarray*}\n(\\mc{X}{*_M}\\mc{A})^T &= &\\mc{A}^T{*_M}\\mc{X}^T = \\mc{X}{*_M}\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{X}^T\\\\\n&=& (\\mc{X}{*_M}\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{X}^T)^T = (\\mc{A}^T{*_M}\\mc{X}^T)^T = \\mc{X}{*_M}\\mc{A}, \n\\end{eqnarray*}\nso $\\mc{X}{*_M}\\mc{A}$ is symmetric. Hence $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A} =\\mc{A}{*_N}(\\mc{X}{*_M}\\mc{A})^T=\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{X}^T= \\mc{A},$ where the last equality is the transpose of the hypothesis. Thus $\\mc{X}=\\mc{A}^{(1,4)}.$ Hence the proof is complete.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\nIn a similar way, we can show the following theorem.\n\n\n\n\n\\begin{theorem}\\label{eqv13}\nLet $\\mc{A}$ be any Boolean tensor. Then the following statements are equivalent:\n\\begin{enumerate}\n\\item[(a)] $\\mc{A}^{(1,3)}$ exists.\n\\item[(b)] $\\mc{A}^{(1)}$ exists and $\\mathfrak{R}(\\mc{A}^T) = \\mathfrak{R}(\\mc{A}^T{*_M}\\mc{A}).$ \n\\item[(c)] There exists a Boolean tensor $\\mc{X}$ such that $\\mc{A}^T=\\mc{A}^T{*_M}\\mc{A}{*_N}\\mc{X}.$ \n\\end{enumerate}\n\\end{theorem}\n\n\n\n\nWe now discuss the characterization of the Moore-Penrose inverse of Boolean tensors. By a proof similar to that of Theorem 3.2 in \\cite{sun}, we obtain the uniqueness of the Moore-Penrose inverse of a Boolean tensor in $\\mathbb{R}^{I_1\\times \\cdots \\times I_M\\times J_1\\times\\cdots\\times J_N}$, as follows.\n\n\n\n\n\\begin{lemma}\\label{mpiu}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ be any Boolean tensor. 
If the Moore-Penrose inverse of $\\mc{A}$ exists, then it is unique.\n\\end{lemma}\n In the next lemma, we discuss an estimate involving the Moore-Penrose inverse of a tensor, as follows.\n \\begin{lemma}\\label{lemma2}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ be a Boolean tensor and suppose $\\mc{A}$ admits a Moore-Penrose inverse. Then $\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A}\\leq\\mc{A}.$\n \\begin{proof}\n Let $\\mc{B}= \\mc{A}^T{*_M}\\mc{A}$. Since $\\mc{B}$ is a Boolean tensor of even order and there are finitely many Boolean tensors of the same order, there must exist positive integers $s,t\\in\\mathbb{N}$ such that $\\mc{B}^s$ = $\\mc{B}^{s+t}.$ Without loss of generality, we can assume that $s$ is the smallest positive integer for which $\\mc{B}^s= \\mc{B}^{s+t}$ for some $t\\in \\mathbb{N}.$ Now we will show $s =1.$ Suppose $s\\geq 2$. Let $\\mc{X}$ be the Moore-Penrose inverse of $\\mc{A}$. Since $\\mc{B}= \\mc{A}^T{*_M}\\mc{A}$ and $\\mc{B}^s= \\mc{B}^{s+t},$ we have $\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s-1} = \\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s+t-1}.$ Pre-multiplying both sides by $\\mc{X}^T$ yields $\\mc{A}{{*_N}}\\mc{B}^{s-1} = \\mc{A}{{*_N}}\\mc{B}^{s+t-1},$ which implies $\\mc{A}{*_N}\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s-2} = \\mc{A}{*_N}\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s+t-2}.$ Further, pre-multiplying both sides by $\\mc{X}$ yields $\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s-2} =\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{s+t-2},$ which implies $\\mc{B}^{s-1}= \\mc{B}^{s+t-1}.$\n \nThis contradicts the minimality of $s,$ and hence $s=1.$ Therefore $\\mc{B}=\\mc{B}^{t+1}$ for some $t\\in\\mathbb{N}.$ Again we have $\\mc{B}=\\mc{B}^{t+1}$, which implies $\\mc{A}^T{{*_M}}\\mc{A} = \\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{t}$. 
Pre-multiplying both sides by $\\mc{X}^T$ yields\n$\\mc{A}{*_N}\\mc{X}{{*_M}}\\mc{A} = \\mc{A}{*_N}\\mc{X}{{*_M}}\\mc{A}{{*_N}}\\mc{B}^{t}.$\n Thus \n \\begin{equation}\\label{eqfl}\n \\mc{A} = \\mc{A}{{*_N}}\\mc{B}^{t}=\\mc{A}{*_N}(\\mc{A}^T{*_M}\\mc{A})^t.\n \\end{equation}\nApplying Lemma \\ref{lemma1} to $\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A}$ repetitively and combining Eq. (\\ref{eqfl}), we obtain\n$$\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A}\\leq \\mc{A}{*_N}(\\mc{A}^T{*_M}\\mc{A})^2\\leq\\cdots\\leq \\mc{A}{*_N}(\\mc{A}^T{*_M}\\mc{A})^t=\\mc{A}.$$\n \\end{proof}\n\\end{lemma}\n\n\nUsing Lemmas \\ref{lemma1} and \\ref{lemma2}, one can obtain an interesting result on the invertibility of Boolean tensors, as follows. \n\n\\begin{corollary}\nA Boolean tensor $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_N\\times I_1 \\times \\cdots \\times I_N}$ is invertible if and only if \n $$\\mc{A}{*_N}\\mc{A}^T=\\mc{A}^T{*_N}\\mc{A}=\\mc{I}.$$ \n\\end{corollary}\nFrom Definition \\ref{perm}, we obtain\n\\begin{eqnarray*}\n(\\mc{P}{*_N}\\mc{P}^T)_{i_1 \\cdots i_Nj_1\\cdots j_N}&=&\\sum_{k_1\\cdots k_N}(\\mc{P})_{i_1 \\cdots i_Nk_1\\cdots k_N}(\\mc{P}^T)_{k_1 \\cdots k_Nj_1\\cdots j_N}\\\\\n&=&\\sum_{k_1\\cdots k_N}(\\mc{P})_{i_1 \\cdots i_Nk_1\\cdots k_N}(\\mc{P})_{j_1 \\cdots j_Nk_1\\cdots k_N}\\\\\n&=&(\\mc{P})_{i_1 \\cdots i_N\\pi(j_1)\\cdots \\pi(j_N)}(\\mc{P})_{j_1 \\cdots j_N\\pi(j_1)\\cdots \\pi(j_N)}\\\\\n&=& \\left\\{\\begin{array}{cc}\n 1 & \\mbox{ if } i_s=j_s \\mbox{ for all } 1\\leq s\\leq N. \\\\\n 0 & \\mbox{ otherwise.}\n\\end{array}\\right.\\\\\n&=&(\\mc{I})_{i_1 \\cdots i_Nj_1\\cdots j_N}.\n\\end{eqnarray*}\nIn a similar way, we can also show $\\mc{P}^T{*_N}\\mc{P}=\\mc{I}.$ Therefore, every permutation tensor is orthogonal and invertible. 
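The orthogonality of permutation tensors can also be illustrated numerically. The following Python sketch is an illustration only: Boolean tensors are emulated with 0/1 NumPy arrays, the Boolean Einstein product is realized as an OR of ANDs via `tensordot`, and the mode sizes and the permutation `pi` are arbitrary illustrative choices, not data from the paper.

```python
import numpy as np

def einstein_bool(A, B, N):
    """Boolean Einstein product A *_N B: contract the last N axes of A with
    the first N axes of B over the Boolean semiring (a positive sum means OR-true)."""
    return np.tensordot(A.astype(int), B.astype(int), axes=N) > 0

# Illustrative choices: mode sizes 2x2 and a permutation pi of the
# flattened index pairs (i1, i2).
I1, I2 = 2, 2
pi = [2, 0, 3, 1]                        # a permutation of {0, 1, 2, 3}
P = np.zeros((I1, I2, I1, I2), dtype=bool)
for r in range(I1 * I2):
    i1, i2 = divmod(r, I2)
    k1, k2 = divmod(pi[r], I2)
    P[i1, i2, k1, k2] = True             # P_{i1 i2 k1 k2} = 1 iff (k1,k2) = pi(i1,i2)

PT = P.transpose(2, 3, 0, 1)             # tensor transpose: swap the two index blocks
Iden = np.eye(I1 * I2, dtype=bool).reshape(I1, I2, I1, I2)

assert np.array_equal(einstein_bool(P, PT, 2), Iden)    # P *_2 P^T = I
assert np.array_equal(einstein_bool(PT, P, 2), Iden)    # P^T *_2 P = I
```

Reshaping the two index blocks into matrix rows and columns turns this check into the familiar statement that a permutation matrix is orthogonal, in line with the reshaping viewpoint of \cite{stan}.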
Adopting this result, we now present a characterization of the permutation tensor, as follows.\n\\begin{preposition}\\label{permu}\nA Boolean tensor $\\mc{A}$ has an inverse {\\it if and only if} it is a permutation tensor.\n\\end{preposition}\nThe next result contains five equivalent conditions involving the existence of the Moore-Penrose inverse of a Boolean tensor.\n \\begin{theorem}\\label{thm1}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M \\times J_1\\times \\cdots \\times J_N}$ be any tensor. Then the following statements are equivalent:\n \\begin{enumerate}\n \\item[(i)] The Moore-Penrose inverse of $ \\mc{A}$ exists and is unique.\n \\item[(ii)] $\\mc{A}{{*_N}}\\mc{A}^T{{*_M}}\\mc{A} \\leq \\mc{A}.$\n \\item[(iii)] $\\mc{A}{{*_N}}\\mc{A}^T{{*_M}}\\mc{A} = \\mc{A}.$\n\\item[(iv)] The Moore-Penrose inverse of $\\mc{A}$ exists and equals $\\mc{A}^T$.\n\\item[(v)] There exists a tensor $\\mc{G}$ such that \n $\\mc{G}{{*_M}}\\mc{A}{{*_N}}\\mc{A}^T=\\mc{A}^T$ and $\\mc{A}^T{{*_M}}\\mc{A}{{*_N}}\\mc{G}=\\mc{A}^T$.\n\\end{enumerate}\n \\end{theorem}\n \n \n \n \\begin{proof}\n If $(i)$ holds, then $(ii)$ holds by Lemma \\ref{lemma2}. Also $(ii)\\Rightarrow (iii)$ by Lemma \\ref{lemma1}. The statements $(iii)\\Rightarrow (iv)$ and $(iv)\\Rightarrow (i)$ are trivial by definition. Now we will show the equivalence between $(i)$ and $(v)$. Suppose $(i)$ holds. If we take $\\mc{G}=\\mc{A}^T,$ then $(v)$ holds. Conversely, assume $(v)$ is true. To prove that the Moore-Penrose inverse of $\\mc{A}$ exists, we first show the following results:\n \\begin{itemize}\n \\item $\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A} = \\mc{A}$\\\\\n Since $\\mc{G}{*_M} \\mc{A}{*_N}\\mc{A}^T=\\mc{A}^T,$ taking the transpose gives $ \\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{G}^T = \\mc{A}. $ Pre-multiplying by $\\mc{G}$ and post-multiplying by $\\mc{A}^T$ on both sides, we obtain $\\mc{G}{*_M}\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{G}^T{*_N}\\mc{A}^T=\\mc{G}{*_M}\\mc{A}{*_N}\\mc{A}^T. 
$ \n Thus $\\mc{A}^T{*_M}\\mc{G}^T{*_N}\\mc{A}^T=\\mc{A}^T.$ \n Hence $\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A} = \\mc{A}.$\n \\item $(\\mc{G}{*_M}\\mc{A})^T = \\mc{A}^T{*_M}\\mc{G}^T=\\mc{G}{{*_M}}\\mc{A}{{*_N}}\\mc{A}^T{*_M}\\mc{G}^T=\\mc{G}{*_M}\\mc{A}.$ Therefore $\\mc{G}{*_M}\\mc{A}$ is symmetric.\n \\item $(\\mc{A}{*_N}\\mc{G})^T = \\mc{G}^T{*_N}\\mc{A}^T=\\mc{G}^T{*_N}\\mc{A}^T{*_M}\\mc{A}{*_N}\\mc{G}=\\mc{A}{*_N}\\mc{G}.$ Thus $\\mc{A}{*_N}\\mc{G}$ is symmetric.\n \\end{itemize}\n Now we will show that the tensor $\\mc{X}=\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G}$ is the Moore-Penrose inverse of $\\mc{A}.$ Since \n \\begin{enumerate}\n \\item [$\\bullet$] \n $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A} =\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A}=\\mc{A}.$\n \\item [$\\bullet$] \n $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X} =\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G} =\\mc{X}.$\n \\item [$\\bullet$] \n $(\\mc{A}{*_N}\\mc{X})^T=(\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G})^T=(\\mc{A}{*_N}\\mc{G})^T{*_M}(\\mc{A}{*_N}\\mc{G})^T\n =\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G}=\\mc{A}{*_N}\\mc{X}.$\n \\item [$\\bullet$] \n $(\\mc{X}{*_M}\\mc{A})^T=(\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A})^T=(\\mc{G}{*_M}\\mc{A})^T{*_N}(\\mc{G}{*_M}\\mc{A})^T\n =\\mc{G}{*_M}\\mc{A}{*_N}\\mc{G}{*_M}\\mc{A}=\\mc{X}{*_M}\\mc{A}.$\n \\end{enumerate}\n\n\n Therefore, $\\mc{X}$ is the Moore-Penrose inverse of $\\mc{A},$ and by Lemma \\ref{mpiu} it is unique.\n \\end{proof}\n \n \n The reverse-order law for the Moore-Penrose inverses of tensors yields a class of challenging problems that are fundamental in the theory of generalized inverses. Research on the reverse-order law for tensors has been very active recently \\cite{Mispa18, PanRad18}, but by the above theorem it is trivially true in the case of Boolean tensors. 
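The characterization in Theorem \ref{thm1}, namely that condition $(iii)$ forces the Moore-Penrose inverse to be $\mc{A}^T$, can be checked numerically. The following Python sketch is an illustration only: Boolean tensors are emulated with 0/1 arrays, and the particular tensor (reshaped from a Boolean matrix whose rows are pairwise disjoint, which makes condition $(iii)$ hold) is an assumed example rather than one from the paper.

```python
import numpy as np

def bprod(A, B, N):
    """Boolean Einstein product: contract last N axes of A with first N of B (OR of ANDs)."""
    return np.tensordot(A.astype(int), B.astype(int), axes=N) > 0

def btrans(A, N):
    """Tensor transpose: swap the first N and last N index groups."""
    return np.transpose(A, list(range(N, 2 * N)) + list(range(N)))

# Illustrative tensor: reshaped from a 4x4 Boolean matrix with pairwise
# disjoint rows, which guarantees A *_2 A^T *_2 A = A (condition (iii)).
M = np.array([[1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]], dtype=bool)
A = M.reshape(2, 2, 2, 2)
AT = btrans(A, 2)

assert np.array_equal(bprod(bprod(A, AT, 2), A, 2), A)  # condition (iii) holds

X = AT                                                  # candidate Moore-Penrose inverse
AX, XA = bprod(A, X, 2), bprod(X, A, 2)
assert np.array_equal(bprod(AX, A, 2), A)               # Penrose equation (1)
assert np.array_equal(bprod(XA, X, 2), X)               # Penrose equation (2)
assert np.array_equal(btrans(AX, 2), AX)                # Penrose equation (3)
assert np.array_equal(btrans(XA, 2), XA)                # Penrose equation (4)
```

With this setup one can also observe the reverse-order law of the remark below for pairs of such tensors, since each Moore-Penrose inverse is simply the transpose.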
\n\n \\begin{remark}\n If the Moore-Penrose inverses of $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$, $\\mc{B}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times K_1 \\times \\cdots \\times K_L},$ and $\\mc{A}{*_N}\\mc{B}$ exist, then the reverse-order law for the Moore-Penrose inverse always holds, i.e., \n $$(\\mc{A}{*_N}\\mc{B})^\\dagger=\\mc{B}^\\dagger{*_N}\\mc{A}^\\dagger.$$\n \\end{remark}\n\n\n\n\n\\subsection{Weighted Moore-Penrose inverse}\n\nUtilizing the Einstein product, the weighted Moore-Penrose inverses of even-order and arbitrary-order tensors were introduced very recently in \\cite{BehMM19, we17}. These works motivate us to study the weighted Moore-Penrose inverse for Boolean tensors.\n\n\\begin{definition}\\label{wmpi}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$, $\\mc{M}\\in\\mathbb{R}^{I_1 \\times \\cdots \\times I_M\\times I_1 \\times \\cdots \\times I_M}$ and $\\mc{N}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times J_1 \\times \\cdots \\times J_N}$ be three Boolean tensors. A Boolean tensor $\\mc{Z}\\in\\mathbb{R}^{J_1 \\times \\cdots \\times J_N\\times I_1 \\times \\cdots \\times I_M}$ satisfying\n\\vspace{-.5cm}\n\\begin{eqnarray*}\n&(1)&\\mc{A}{*_N}\\mc{Z}{*_M}\\mc{A} = \\mc{A},\\\\\n&(2)&\\mc{Z}{*_M}\\mc{A}{*_N}\\mc{Z} = \\mc{Z},\\\\\n&(3)&(\\mc{M}{*_M}\\mc{A}{*_N}\\mc{Z})^T = \\mc{M}{*_M}\\mc{A}{*_N}\\mc{Z},\\\\\n&(4)&(\\mc{Z}{*_M}\\mc{A}{*_N}\\mc{N})^T = \\mc{Z}{*_M}\\mc{A}{*_N}\\mc{N},\n\\end{eqnarray*}\nis called a weighted Moore-Penrose inverse of $\\mc{A},$ and it is denoted by $\\mc{A}^{\\dagger}_{\\mc{M},\\mc{N}}.$\n\\end{definition}\nNote that the weighted Moore-Penrose inverse need not be unique in general. 
This can be verified by the following example.\n\\begin{example}\nLet the Boolean tensor\n$~\\mc{A}=(a_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ be defined as in Example \\ref{example18} with $\\mc{N}=\\mc{O} \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ and $\\mc{M}=(m_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ such that \n\\begin{eqnarray*}\nm_{ij11} =m_{ij21}=\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & 0 & 0\n \\end{pmatrix},\nm_{ij12} = m_{ij22} =\n \\begin{pmatrix}\n 1 & 0 & 0\\\\\n 0 & 0 & 1\n \\end{pmatrix},\nm_{ij13} =m_{ij23}=\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 1\n\\end{pmatrix}.\n\\end{eqnarray*}\nThen it can be easily verified that both $~\\mc{X}=(x_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ and $~\\mc{Y}=(y_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ defined in Example \\ref{example18} satisfy all conditions of Definition \\ref{wmpi}. \\end{example}\nThe existence and uniqueness of the weighted Moore-Penrose inverse, along with some equivalent properties, are discussed in the next part of this subsection.\n\n \\begin{theorem}\\label{uwmpi}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots\\times I_M\\times J_1 \\times \\cdots\\times J_N},~~\\mc{M}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times I_1 \\times \\cdots\\times I_M},$\\\\ $\\mc{N}\\in\\mathbb{R}^{J_1 \\times \\cdots\\times J_N\\times J_1 \\times \\cdots\\times J_N}$ be three Boolean tensors with $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{N})$ and $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{M}^T).$ If $\\mc{A}_{\\mc{M},\\mc{N}}^{\\dagger}$ exists, then\n\\begin{enumerate}\n\\item[(a)] $\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T = \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T;$\n\\item[(b)] $\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A} = \\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}; $\n\\item[(c)] $ \\mc{A}_{\\mc{M},\\mc{N}}^{\\dagger} $ is unique.\n\\end{enumerate}\n\\end{theorem}\n \\begin{proof}\n Let $\\mc{X}$ be a 
weighted Moore-Penrose inverse of $\\mc{A}.$ Now \n \\begin{eqnarray*}\n \\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T &=&\\mc{A}{*_N} {\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{X}^T}{*_N}\\mc{A}^T\n =\\mc{A}{*_N} {(\\mc{X}{*_M}\\mc{A}{*_N}\\mc{N})^T}{*_N}\\mc{A}^T\\\\\n &=& {\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}}{*_N}\\mc{N}{*_N}\\mc{A}^T\n = \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T.\n \\end{eqnarray*}\n This completes the proof of part $(a).$ Using similar lines as in part $(a)$ and relation $(3)$ of Definition \\ref{wmpi}, we can prove part $(b).$ Next, we claim the uniqueness of $\\mc{A}^\\dagger_{\\mc{M},\\mc{N}}.$ \\\\\n Suppose there exist two weighted Moore-Penrose inverses (say $\\mc{X}_1$ and $\\mc{X}_2$) of $\\mc{A}.$ Then \n \\begin{eqnarray*}\n \\mc{X}_1{*_M}\\mc{A}{*_N}\\mc{N} &=& \\mc{X}_1{*_M}\\mc{A}{*_N} {\\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{N}}\n = \\mc{X}_1{*_M} {\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T}{*_M}\\mc{X}_2^T\\\\\n &= & {\\mc{X}_1{*_M}\\mc{A}{*_N}\\mc{N}}{*_N}\\mc{A}^T{*_M}\\mc{X}_2^T \n = \\mc{N}^T{*_N} {\\mc{A}^T{*_M}\\mc{X}_1^T{*_N}\\mc{A}^T}{*_M}\\mc{X}_2^T\\\\\n & =& \\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{X}_2^T\n = \\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{N}.\n \\end{eqnarray*}\n Since $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{N}),$ there exists $\\mc{U}$ such that $\\mc{A}{*_N}\\mc{N}{*_N}\\mc{U} = \\mc{A}.$ Thus $\\mc{X}_1{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{U} = \\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{U}.$ Hence $\\mc{X}_1{*_M}\\mc{A} =\\mc{X}_2{*_M}\\mc{A}$. Therefore\n \\begin{equation}\\label{eq4.51}\n \\mc{X}_1=\\mc{X}_1{*_M}\\mc{A}{*_N}\\mc{X}_1=\\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{X}_1.\n \\end{equation}\n Now by using Eq. 
(\\ref{eq4.51}), we get\n \\begin{eqnarray*}\n \\mc{M}{*_M}\\mc{A}{*_N} {\\mc{X}_1} &=& {\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}_2}{*_M}\\mc{A}{*_N}\\mc{X}_1\n = \\mc{X}_2^T{*_N} {\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A}}{*_N}\\mc{X}_1\\\\\n &=& \\mc{X}_2^T{*_N}\\mc{A}^T{*_M} {\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}_1}\n = \\mc{X}_2^T{*_N} {\\mc{A}^T{*_M}\\mc{X}_1^T{*_N}\\mc{A}^T}{*_M}\\mc{M}^T\\\\\n &=& \\mc{X}_2^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T\n = \\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}_2.\n \\end{eqnarray*}\n Again, as $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{M}^T),$ there exists $\\mc{V}^T$ such that $\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{V}^T = \\mc{A}^T,$ which leads to $\\mc{V}{*_M}\\mc{M}{*_M}\\mc{A}=\\mc{A}.$ Thus $ \\mc{V}{*_M}\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}_1 = \\mc{V}{*_M}\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}_2.$ Hence $\\mc{A}{*_N}\\mc{X}_1 =\\mc{A}{*_N}\\mc{X}_2$. Therefore\n \\begin{equation}\\label{eq4.52}\n \\mc{X}_2=\\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{X}_2=\\mc{X}_2{*_M}\\mc{A}{*_N}\\mc{X}_1.\n \\end{equation}\n Combining Eqs. (\\ref{eq4.51}) and (\\ref{eq4.52}), we obtain $\\mc{X}_1=\\mc{X}_2$, and hence the proof is complete. \n \\end{proof}\n Unlike other generalized inverses, the existence of the weighted Moore-Penrose inverse is not trivial. The next theorem discusses the existence of the weighted Moore-Penrose inverse. 
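As an informal aside (not part of the formal development), the four conditions of Definition \ref{wmpi} are purely equational, so in the order-2 (matrix) case they can be checked mechanically. The following Python sketch assumes 0/1 matrices under the Boolean (OR-AND) product; with identity weights, the candidate $\mc{N}^T{*_N}\mc{A}^T{*_M}\mc{M}^T$ of the next theorem reduces to $\mc{A}^T$.

```python
import numpy as np

def bool_mul(X, Y):
    # Boolean (OR-AND) matrix product over {0, 1}
    return ((X @ Y) > 0).astype(int)

def is_weighted_mp_inverse(A, Z, M, N):
    """Check conditions (1)-(4) of the weighted Moore-Penrose inverse
    for 0/1 matrices (the order-2 case of the definition)."""
    AZ, ZA = bool_mul(A, Z), bool_mul(Z, A)
    MAZ, ZAN = bool_mul(M, AZ), bool_mul(ZA, N)
    return (np.array_equal(bool_mul(AZ, A), A)       # (1)
            and np.array_equal(bool_mul(ZA, Z), Z)   # (2)
            and np.array_equal(MAZ.T, MAZ)           # (3)
            and np.array_equal(ZAN.T, ZAN))          # (4)

# With identity weights, Z = A^T works whenever A * A^T * A = A.
A = np.array([[1, 1],
              [0, 0]])
I2 = np.eye(2, dtype=int)
print(is_weighted_mp_inverse(A, A.T, I2, I2))  # True for this A
```

The matrices `A` and `I2` here are chosen for illustration only; for general weights one would pass the intended $\mc{M}$ and $\mc{N}$.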
\n \n \\begin{theorem}\\label{ewmpi}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots\\times I_M\\times J_1\\times \\cdots\\times J_N},~~\\mc{M}\\in\\mathbb{R}^{I_1\\times \\cdots \\times I_M\\times I_1\\times \\cdots\\times I_M},$\\\\ $\\mc{N}\\in\\mathbb{R}^{J_1 \\times \\cdots\\times J_N\\times J_1 \\times \\cdots\\times J_N}$ be three Boolean tensors with $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{N})$ and $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{M}^T).$ If\n $ \\mc{M}\\geq\\mc{I}$ and $\\mc{N}\\geq\\mc{I},$ then $\\mc{A}_{\\mc{M},\\mc{N}}^\\dagger$ exists if and only if any one of the following conditions holds:\n \\begin{enumerate}\n \\item[(a)] $\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A}.$\n \\item[(b)] $\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A}.$\n \\item[(c)] $\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A} = \\mc{A}.$\n \\item[(d)] $\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A} = \\mc{A}.$\n \\end{enumerate}\n In particular, $\\mc{A}_{\\mc{M},\\mc{N}}^\\dagger = \\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T.$\n \\end{theorem}\n \\begin{proof}\nAssume $\\mc{A}_{\\mc{M},\\mc{N}}^\\dagger$ exists and let $\\mc{X}=\\mc{A}_{\\mc{M},\\mc{N}}^\\dagger$. Since there are only finitely many Boolean tensors of a given order, there must exist positive integers $s,t\\in\\mathbb{N}$ such that \n\\begin{equation}\\label{eq4.6}\n (\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^s = (\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s+t}.\n\\end{equation}\nWithout loss of generality, we can assume that $s$ is the smallest positive integer for which Eq. (\\ref{eq4.6}) holds. We now claim that $s=1.$ On the contrary, suppose $s>1.$ Now using Eq. 
(\\ref{eq4.6}) and the properties of the weighted Moore-Penrose inverse, we get \n\\begin{eqnarray}\\label{eq4.7}\n\\nonumber\n {\\mc{X}{*_M} \\mc{A}{*_N}\\mc{N}}{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}&&\\hspace*{-0.7cm}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1}\\\\\n\\nonumber\n&&\\hspace*{-3.5cm}= {\\mc{X}{*_M}\\mc{A}{*_N}\\mc{N}}{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t}.\\\\\n\\nonumber\\textnormal{This yields~~} \\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T&&\\hspace*{-0.7cm}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1} \\\\\n&&\\hspace*{-4.5cm}=\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t}.\n\\end{eqnarray}\nSince $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{N}),$ there exists a tensor $\\mc{U}$ such that $\\mc{A}{*_N}\\mc{N}{*_N}\\mc{U}=\\mc{A}.$ Now, premultiplying Eq. (\\ref{eq4.7}) by $\\mc{U}^T$ and using the properties $\\mc{U}^T{*_N}\\mc{N}^T{*_N}\\mc{A}^T=\\mc{A}^T$ and $\\mc{A}^T {*_M}\\mc{X}^T{*_N}\\mc{A}^T=\\mc{A}^T,$ we get\n\\begin{equation}\\label{eq4.71}\n \\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1} = \\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t}.\n\\end{equation}\n Again, premultiplying $\\mc{X}^T$ to Eq. 
(\\ref{eq4.71}) and using the symmetry of $\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}$, we get\n$\\mc{M}{*_M} {\\mc{A}{*_N}\\mc{X}{*_M}(\\mc{A}}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1}= \\mc{M}{*_M} {\\mc{A}{*_N}\\mc{X}{*_M}(\\mc{A}}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t}.$ This gives \n\\begin{equation}\\label{eq4.8}\n \\mc{M}{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1} = \\mc{M}{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t}.\n\\end{equation} \n Since $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{M}^T),$ there exists a tensor $\\mc{Z}$ such that $\\mc{Z}{*_M}\\mc{M}{*_M}\\mc{A}=\\mc{A}.$ Premultiplying Eq. (\\ref{eq4.8}) by $\\mc{Z}$ yields \n $$(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1} = (\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{s-1+t},$$\n which contradicts the minimality of $s.$ Therefore \n \\begin{equation}\\label{eq4.9}\n \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T = (\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{t+1},~\\mbox{for some }~t\\in\\mathbb{N}.\n \\end{equation}\n Premultiplying Eq. (\\ref{eq4.9}) by $\\mc{X},$ and using $(\\mc{X}{*_M}\\mc{A}{*_N}\\mc{N})^T=\\mc{X}{*_M}\\mc{A}{*_N}\\mc{N},$ we obtain \n$$\n\\mc{N}^T{*_N} {\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T}{*_M}\\mc{M}^T = \\mc{N}^T{*_N} {\\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T}{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^t.$$\nSince $ \\mc{A}^T{*_M}\\mc{X}^T{*_N}\\mc{A}^T=\\mc{A}^T,$ we get\n\\begin{equation}\\label{eqn121}\n \\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T = \\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^t. \n\\end{equation}\nPremultiplying Eq. 
(\\ref{eqn121}) by a tensor $\\mc{U}^T$ and using $\\mathfrak{R}(\\mc{A})=\\mathfrak{R}(\\mc{A}{*_N}\\mc{N}),$ we again obtain $ \\mc{A}^T{*_M}\\mc{M}^T = \\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^t.$ Postmultiplying by $\\mc{Z}^T$ and applying $\\mathfrak{R}(\\mc{A}^T)=\\mathfrak{R}(\\mc{A}^T{*_M}\\mc{M}^T),$ we have \n$$\n \\mc{A}^T = \\mc{A}^T{*_M}\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^{t-1}{*_M} \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T.$$\n\nNow \n\\begin{eqnarray*}\n \\mc{A}^T &=& \\mc{A}^T{*_M} {\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}{*_M}\\mc{M}^T)^{t-1}{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T\\\\\n &=&\\mc{A}^T{*_M}(\\mc{M}^T{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T){*_M} {\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}{*_M}\\mc{M}^T)^{t-2}{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T\\\\\n &=&\\mc{A}^T{*_M}(\\mc{M}^T{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T)^2{*_M} {\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}{*_M}\\mc{M}^T)^{t-3}{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T\\\\\n &=&\\cdots~~~~~~~~~~~~~~\\cdots~~~~~~~~~~~~~\\cdots\\\\\n &=&\\mc{A}^T{*_M}(\\mc{M}^T{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T)^{t-2}{*_M} {\\mc{M}^T{*_M}(\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}{*_M} {\\mc{M}^T){*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}\\\\\n &=&\\mc{A}^T{*_M}(\\mc{M}^T{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T)^{t}=\\mc{A}^T{*_M}\\left[(\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^{t}\\right]^T.\n\\end{eqnarray*}\nThus \n\\begin{equation}\\label{eq4.10}\n \\mc{A}=(\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^{t}{*_M}\\mc{A}.\n\\end{equation}\nAs $\\mc{M}\\geq\\mc{I}$ and $\\mc{N}\\geq\\mc{I}$, by Lemma \\ref{lemma1},\n\\begin{equation}\\label{eq4.11}\n \\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}\\geq \\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A}\\geq \\mc{A}.\n\\end{equation}\nPostmultiplying Eq. (\\ref{eq4.11}) by $\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}$, we obtain \n \\begin{equation}\\label{eq4.12}\n \\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}\\leq(\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^2{*_M}\\mc{A}.\n \\end{equation}\n Combining Eqs. (\\ref{eq4.10}), (\\ref{eq4.11}) and (\\ref{eq4.12}), we have\n \n\n \n\\begin{eqnarray*}\n\\mc{A}&\\leq&\\mc{A}{*_N}\\mc{N}^T {*_N} \\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}\n\\leq (\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^2{*_M}\\mc{A} \\\\\n&\\leq & (\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^3{*_M}\\mc{A}\\leq \\cdots\\leq (\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M})^t{*_M}\\mc{A}=\\mc{A}.\n\\end{eqnarray*}\nTherefore \n\\begin{equation}\\label{eq4.13}\n \\mc{A} = \\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A},\n\\end{equation}\nwhich completes the proof of condition $(b).$ By using Theorem \\ref{uwmpi}, the other conditions hold, since\n\\begin{eqnarray}\\nonumber\n \\mc{A} &=& {\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T}{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A}{*_N}\\mc{N}{*_N} {\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A}}\\\\\\label{eq199} &=& {\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T}{*_M}\\mc{M}^T{*_M}\\mc{A} = \\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A}.\n\\end{eqnarray}\nFurther, we claim that not only do the four conditions hold, but also $\\mc{A}^\\dagger_{\\mc{M},\\mc{N}}=\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T.$ Let $\\mc{X}=\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T.$ From Eq. 
(\\ref{eq199}), $\\mc{A}=\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}$ and\n\\begin{equation*}\n \\mc{X}{*_M}\\mc{A}{*_N}\\mc{X}=\\mc{N}^T{*_N} {\\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T}{*_M}\\mc{M}^T=\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T=\\mc{X}.\n\\end{equation*}\nUsing Theorem \\ref{uwmpi}, we show \n\\begin{eqnarray*}\n\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X} &=&\\mc{M}{*_M} {\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T}{*_M}\\mc{M}^T =\\mc{M}{*_M}\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}^T\\\\\n &=&(\\mc{M}{*_M}\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T)^T\n=(\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X})^T.\n\\end{eqnarray*}\nTherefore, $\\mc{M}{*_M}\\mc{A}{*_N}\\mc{X}$ is symmetric. Similarly, we can show that $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{N}$ is symmetric. So $\\mc{A}_{\\mc{M},\\mc{N}} ^\\dagger=\\mc{N}^T{*_N}\\mc{A}^T{*_M}\\mc{M}^T $. Next, we show the converse part. Let $\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A}.$ Since $\\mc{M}\\geq \\mc{I}$ and $\\mc{N}\\geq\\mc{I}$, by Lemma \\ref{lemma1}, \\begin{equation*}\n \\mc{A}\\leq\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A}\\leq\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{A}\\leq\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A},\n\\end{equation*}\n and hence \n\\begin{equation}\\label{eq4.14}\n \\mc{A} =\\mc{A}{*_N}\\mc{A}^T{*_M}\\mc{A} = \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{A}.\n\\end{equation}\nUsing Eq. 
(\\ref{eq4.14}) and the symmetry of $ \\mc{A}{*_N}\\mc{A}^T$, we obtain\n\\begin{equation}\\label{eq4.15}\n \\mc{A}{*_N}\\mc{A}^T = \\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{A}{*_N}\\mc{A}^T\n=\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T\n=\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T.\n\\end{equation}\nA similar argument yields\n\\begin{eqnarray}\\label{eq4.16}\n\\nonumber\n \\mc{A}^T{*_M}\\mc{A}&=&\\mc{A}^T{*_M}\\mc{M}^T{*_M} {\\mc{A}{*_N}\\mc{N}^T{*_N}\\mc{A}^T}{*_M} \\mc{A}=\\mc{A}^T{*_M}\\mc{M}^T{*_M} {\\mc{A}{*_N}\\mc{N}{*_N}\\mc{A}^T{*_M}\\mc{A}}\\\\&=&\\mc{A}^T{*_M}\\mc{M}{*_M}\\mc{A} = \\mc{A}^T{*_M}\\mc{M}^T{*_M}\\mc{A}.\n\\end{eqnarray}\nUsing Eqs. (\\ref{eq4.14})-(\\ref{eq4.16}), it can be easily verified that $\\mc{X}=\\mc{N}^T{*_N}\\mc{A}^T {*_M}\\mc{M}^T$ satisfies all four conditions of the weighted Moore-Penrose inverse. Similarly, one can start from the other conditions to verify the same. Thus the proof is complete.\n \\end{proof}\n \\begin{remark}\n The equality condition in Theorem \\ref{ewmpi} $(a)$ can be replaced by ${\\bf{`\\geq'}}.$\n \\end{remark}\n\n\n\\subsection{Space Decomposition}\nUsing the theory of the Einstein product, we introduce the definition of the space decomposition for Boolean tensors, which generalizes the matrix space decomposition \\cite{rao}. 
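For intuition, the matrix space decomposition mentioned above can also be explored numerically: a Boolean matrix $A$ is decomposed as $A = F \cdot R$ with matching column and row spaces. The Python sketch below is illustrative only; it reads the range $\mathfrak{R}(\cdot)$ of a Boolean matrix as the set of OR-combinations of its columns (one standard reading), and the factors `F` and `R` are hypothetical examples.

```python
import numpy as np
from itertools import combinations

def bool_mul(X, Y):
    # Boolean (OR-AND) matrix product over {0, 1}
    return ((X @ Y) > 0).astype(int)

def col_span(M):
    # All OR-combinations of columns of M (a reading of the Boolean range)
    cols = [M[:, j] for j in range(M.shape[1])]
    span = {tuple(np.zeros(M.shape[0], dtype=int))}
    for r in range(1, len(cols) + 1):
        for subset in combinations(range(len(cols)), r):
            v = np.zeros(M.shape[0], dtype=int)
            for j in subset:
                v = v | cols[j]
            span.add(tuple(v))
    return span

# Hypothetical factors F and R of a space decomposition A = F * R
F = np.array([[1, 0], [1, 1], [0, 1]])
R = np.array([[1, 1, 0], [0, 1, 1]])
A = bool_mul(F, R)

same_range = col_span(A) == col_span(F)        # range condition
same_corange = col_span(A.T) == col_span(R.T)  # transposed-range condition
```

For this choice of `F` and `R`, both span checks succeed, so `A` is space decomposable in the matrix sense.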
\n\n\n\\begin{definition}\\label{FRD}\n Let $\\mc{F}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times K_1 \\times \\cdots\\times K_L}$ and $\\mc{R}\\in\\mathbb{R}^{K_1 \\times \\cdots\\times K_L\\times J_1 \\times \\cdots\\times J_N}$ be two tensors such that\n \\vspace{-.5cm}\n\\begin{eqnarray*}\n &&(a)~\\mc{A} = \\mc{F}{*_L}\\mc{R};\\\\\n &&(b)~\\mathfrak{R}(\\mc{A}) = \\mathfrak{R}(\\mc{F});\\\\\n &&(c)~\\mathfrak{R}(\\mc{A}^T) = \\mathfrak{R}(\\mc{R}^T).\n\\end{eqnarray*}\nThen the tensor $\\mc{A}$ is called space decomposable, and this decomposition is called a space decomposition of $\\mc{A}$.\n\\end{definition}\n\n\nIn connection with the above Definition \\ref{FRD} and Lemma \\ref{range-stan}, one can conclude the existence of a generalized inverse, as follows. \n\n\n\n\\begin{theorem}\\label{exisgen}\nLet $\\mc{A}=\\mc{F}{*_L}\\mc{R}$ be a space decomposition of $\\mc{A}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times J_1 \\times \\cdots\\times J_N},$ where $\\mc{F}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times K_1 \\times \\cdots\\times K_L}$ and $\\mc{R}\\in\\mathbb{R}^{K_1 \\times \\cdots\\times K_L\\times J_1 \\times \\cdots\\times J_N}$. 
Then $\\mc{A}^{(1)}$ exists.\n\\end{theorem}\n\n\nWe now present one of our essential results, which establishes not only the existence of the reflexive generalized inverse but also of other inverses through this decomposition.\n\n\n\\begin{theorem}\\label{the336}\nLet $\\mc{X}$ be a generalized inverse of the Boolean tensor $\\mc{A}.$ If $\\mc{A}=\\mc{F}{*_L}\\mc{R}$ is a space decomposition of $\\mc{A},$ where $\\mc{F}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times K_1 \\times \\cdots\\times K_L}$ and $\\mc{R}\\in\\mathbb{R}^{K_1 \\times \\cdots\\times K_L\\times J_1 \\times \\cdots\\times J_N},$ then the following hold:\n\\begin{enumerate}\n\\label{eqvspace}\n \\item[(a)] $\\mc{F}^{(1)}$ and $\\mc{R}^{(1)}$ exist.\n \\item[(b)] $\\mc{F}^{(1)}{*_M}\\mc{F}=\\mc{R}{*_N}\\mc{R}^{(1)}.$\n \\item[(c)] $\\mc{F}^{(1)}{*_M}\\mc{A}=\\mc{R}$ and $\\mc{A}{*_N}\\mc{R}^{(1)}=\\mc{F}.$\n \\item[(d)] $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}$ is a generalized inverse of $\\mc{A}.$\n \\item[(e)] $\\mc{R}{*_N}\\mc{X}$ is a reflexive inverse of $\\mc{F}$ and $\\mc{X}{*_M}\\mc{F}$ is a reflexive inverse of $\\mc{R}.$ \n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nSince $\\mc{X}$ is a generalized inverse of $\\mc{A},$ we have $\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A},$ which implies\\\\ \n$\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X}{*_M}\\mc{F}{*_L}\\mc{R}=\\mc{F}{*_L}\\mc{R}=\\mc{I}{*_M}\\mc{F}{*_L}\\mc{R}.$ Further, using Corollary \\ref{rtcan}, we get $\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X}{*_M}\\mc{F}=\\mc{F}.$ Thus $\\mc{R}{*_N}\\mc{X}$ is a generalized inverse of $\\mc{F}.$ Similarly, one can show that $\\mc{X}{*_M}\\mc{F}$ is a generalized inverse of $\\mc{R}$. Hence $(a)$ is proved. Now, using $(a),$ one can prove $(b)$ and $(c).$ To prove $(d),$ we use $(a)$ and obtain 
\n\\begin{equation*}\n \\mc{A}{*_N}\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}{*_M}\\mc{A}= \\mc{A}{*_N}\\mc{X}{*_M}\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X}{*_M}\\mc{A}= \\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X}{*_M}\\mc{A}=\\mc{A}.\n\\end{equation*}\nHence $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}$ is a generalized inverse of $\\mc{A}.$ \nIn a similar manner, one can prove $(e)$ using the facts $\\mc{R}{*_N}\\mc{X}{*_M}\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X}\n=\\mc{R}{*_N}\\mc{X}$ and $\\mc{X}{*_M}\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X}{*_M}\\mc{F}\n=\\mc{X}{*_M}\\mc{F}.$ This completes the proof.\n\\end{proof}\nIn view of the above theorem, one can draw a conclusion, as follows. \n\\begin{remark}\\label{rmk3.43}\n Every generalized inverse of $\\mc{A}$ need not be of the form $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.$ \n \\end{remark}\n We verify Remark \\ref{rmk3.43} with the following example.\n\\begin{example}\\label{example3.45}\nLet\n$~\\mc{A}=(a_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ be a Boolean tensor with\n\\begin{eqnarray*}\na_{ij11} =\n \\begin{pmatrix}\n 1 & 1 & 0 \\\\\n 1 & 0 & 0\n \\end{pmatrix},\na_{ij12}=a_{ij13} =a_{ij21}=a_{ij22}=a_{ij23}=\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix}.\n\\end{eqnarray*}\nConsider the generalized inverse $\\mc{A}^{(1)}=(x_{ijkl}) \\in \\mathbb{R}^{{2\\times3}\\times{2 \\times 3}}$ of $\\mc{A}$ with \n\\begin{eqnarray*}\nx_{ij11} =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & 0 & 0\n \\end{pmatrix},\nx_{ij12} =\n \\begin{pmatrix}\n 0 & 1 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\nx_{ij13} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n\\end{pmatrix},\\\\\n %\n %\n %\nx_{ij21} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 1 & 0 & 0\n \\end{pmatrix},\n x_{ij22} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\nx_{ij23} =\n \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix}.\n\\end{eqnarray*}\n\nIn light of Theorem \\ref{the336} $(e)$, one can 
conclude\n$$\\mc{R}^{(1)}*_2\\mc{F}^{(1)}=\\mc{A}^{(1)}*_2\\mc{F}*_2\\mc{R}*_2\\mc{A}^{(1)}=\\mc{A}^{(1)}*_2\\mc{A}*_2\\mc{A}^{(1)} \\neq \\mc{A}^{(1)}.$$\nTherefore, every generalized inverse of $\\mc{A}$ need not be of the form $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.$ \n\\end{example}\nAt this point, one may be interested to know when the generalized inverse of a Boolean tensor is of the form $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}$. The answer to this question is explained in the following remark. \n\\begin{remark}\\label{rmk3.46}\n If $\\mc{X}$ is a reflexive inverse of a Boolean tensor $\\mc{A}\\in \\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times J_1 \\times \\cdots\\times J_N}$ and $\\mc{A}=\\mc{F}{*_L}\\mc{R}$ is a space decomposition, where $\\mc{F}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times K_1 \\times \\cdots\\times K_L}$ and $\\mc{R}\\in\\mathbb{R}^{K_1 \\times \\cdots\\times K_L\\times J_1 \\times \\cdots\\times J_N},$ then every generalized inverse is of the form $\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.$\n\\end{remark}\nNow, considering Remark \\ref{rmk3.46} and the observation in Example \\ref{example3.45}, one can get the desired result.\n\n\\begin{theorem}\\label{refeqv}\nLet $\\mc{A} \\in \\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times J_1 \\times \\cdots\\times J_N}$ be a Boolean tensor and \n$\\mc{A}=\\mc{F}{*_L}\\mc{R}$ be a space decomposition of $\\mc{A}, $ where $\\mc{F}\\in\\mathbb{R}^{I_1 \\times \\cdots\\times I_M\\times K_1 \\times \\cdots\\times K_L},$ $\\mc{R}\\in\\mathbb{R}^{K_1 \\times \\cdots\\times K_L\\times J_1 \\times \\cdots\\times J_N}.$ Assume that the generalized inverse of either $\\mc{F}$ or $\\mc{R}$ is reflexive. Then $\\mc{X}$ is a reflexive generalized inverse of $\\mc{A}$ if and only if $\\mc{X} = \\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.$\n\\end{theorem}\n\\begin{proof}\nSuppose the generalized inverse of $\\mc{F}$ is reflexive. 
Taking into account Theorem \\ref{eqvspace} $(d)$, we obtain that $\\mc{X} = \\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}$ is a generalized inverse of $\\mc{A}$. Therefore, it is enough to show $\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X}=\\mc{X}.$ Now using Theorem \\ref{eqvspace} $(c)$, we get\n\\begin{eqnarray*}\n\\mc{X}{*_M}\\mc{A}{*_N}\\mc{X} =\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}{*_M} {\\mc{A}{*_N}\\mc{R}^{(1)}}{*_L}\\mc{F}^{(1)}\n=\\mc{R}^{(1)}{*_L} {\\mc{F}^{(1)}{*_M}\\mc{F}{*_L}\\mc{F}^{(1)}} = \\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.\n\\end{eqnarray*}\nConversely, let $\\mc{X}$ be a reflexive inverse of $\\mc{A}.$ Then by Theorem \\ref{eqvspace} $(e)$,\n\\begin{eqnarray*}\n\\mc{X} = \\mc{X}{*_M}\\mc{A}{*_N}\\mc{X} = \\mc{X}{*_M}\\mc{F}{*_L}\\mc{R}{*_N}\\mc{X} = \\mc{R}^{(1,2)}{*_L}\\mc{F}^{(1,2)}=\\mc{R}^{(1)}{*_L}\\mc{F}^{(1)}.\n\\end{eqnarray*}\n\\vspace{-.3cm}\n\\end{proof}\n\n\n\n\n\\begin{remark}\\label{rk2.41}\nIf we drop the condition that the generalized inverse of either $\\mc{F}$ or $\\mc{R}$ is reflexive in Theorem \\ref{refeqv}, then the theorem is not true in general.\n \\end{remark}\n\n\n\nIn favour of Remark \\ref{rk2.41}, we produce an example as follows. \n\n\n\n\n\\begin{example}\nLet $\\mc{A}$ be the Boolean tensor defined in Example \\ref{example3.45} and $\\mc{A}=\\mc{F}=\\mc{R}.$ Since $\\mc{A}{*_2}\\mc{I}{*_2}\\mc{A}=\\mc{A}$ and $\\mc{I}{*_2}\\mc{A}{*_2}\\mc{I}\\neq\\mc{I}$, it follows that $\\mc{I}$ is a generalized inverse for both $\\mc{F}$ and $\\mc{R}$ but not reflexive. In view of Theorem \\ref{refeqv}, \none can conclude that \n$\\mc{R}^{(1)}{*_2}\\mc{F}^{(1)}=\\mc{I}$\nis not a reflexive generalized inverse of $\\mc{A}.$\n\\end{example}\n\nIn \\cite{beasley} and \\cite{song}, the authors have defined the rank of a Boolean matrix through space decomposition. Next, we discuss the rank and weight of Boolean tensors. 
\n\n\n\\begin{definition}\n Let $\\mc{A}\\in\\mathbb{R}^{I_1\\times\\cdots\\times I_M\\times J_1 \\times\\cdots\\times\n J_N}$ be a Boolean tensor. If there exists a least positive integer $r=K_1 \\times\\cdots\\times K_L$ such that Boolean tensors $\\mc{B}\\in\\mathbb{R}^{I_1 \\times\\cdots\\times I_M\\times K_1\\times\\cdots\\times K_L}$ and $\\mc{C}\\in\\mathbb{R}^{K_1 \\times\\cdots\\times K_L\\times J_1\\times\\cdots\\times J_N}$ satisfy $\\mc{A}=\\mc{B}{*_L}\\mc{C}$, then $r$ is called the Boolean rank of $\\mc{A}$, denoted by $r_b(\\mc{A}).$\n\\end{definition}\n\n\\begin{example}\\label{exrank}\nConsider a Boolean tensor\n$~\\mc{A}=(a_{ijkl}) \\in \\mathbb{R}^{{2\\times2}\\times{2 \\times 2}}$ with entries\n\\begin{eqnarray*}\na_{ij11} =\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 0\n \\end{pmatrix},~\na_{ij12} =\n \\begin{pmatrix}\n 0 & 0\\\\\n 0 & 0\n \\end{pmatrix},~\na_{ij21} =\n \\begin{pmatrix}\n 0 & 0\\\\\n 1 & 0\n \\end{pmatrix},~\na_{ij22} =\n \\begin{pmatrix}\n 1 & 0\\\\\n 0 & 0\n \\end{pmatrix}.\n \\end{eqnarray*}\n There exist a least positive integer $r=2$ and two tensors\n $~\\mc{B}=(b_{ijk}) \\in \\mathbb{R}^{{2\\times2 \\times 2}}$ and $~\\mc{C}=(c_{ijk}) \\in \\mathbb{R}^{{2\\times2 \\times 2}}$ with entries\n \\begin{eqnarray*} \n b_{ij1} =\n \\begin{pmatrix}\n 1 & 0\\\\\n 0 & 0\n \\end{pmatrix},\n b_{ij2} =\n \\begin{pmatrix}\n 0 & 0\\\\\n 1 & 0\n \\end{pmatrix},\n c_{ij1} =\n \\begin{pmatrix}\n 1 & 0\\\\\n 0 & 1\n \\end{pmatrix},\n c_{ij2} =\n \\begin{pmatrix}\n 0 & 1\\\\\n 0 & 0\n \\end{pmatrix},\n \\end{eqnarray*}\n such that $\\mc{A}= \\mc{B}*_1\\mc{C}$. However, $r=1$ would give two matrices $B$ and $C$, whose product cannot yield such a tensor. Thus the Boolean rank of the tensor is $2$.\n\\end{example}\n\nOn the other hand, the rank of a Boolean tensor is zero if it is the zero tensor. 
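In the matrix (order-2) case, the Boolean rank just defined can be computed by exhaustive search over all factorizations with a given inner dimension. The following Python sketch is illustrative only and feasible solely for tiny inputs; the function name and search strategy are our own, not part of the cited works.

```python
import numpy as np
from itertools import product

def bool_mul(X, Y):
    # Boolean (OR-AND) matrix product over {0, 1}
    return ((X @ Y) > 0).astype(int)

def boolean_rank(A):
    """Smallest r with A = B * C for 0/1 matrices B (m x r) and C (r x n).
    Exhaustive search: tries all 2^(m*r) * 2^(r*n) candidates per r."""
    m, n = A.shape
    if not A.any():
        return 0
    for r in range(1, min(m, n) + 1):
        for bits_B in product([0, 1], repeat=m * r):
            B = np.array(bits_B).reshape(m, r)
            for bits_C in product([0, 1], repeat=r * n):
                C = np.array(bits_C).reshape(r, n)
                if np.array_equal(bool_mul(B, C), A):
                    return r
    return min(m, n)  # unreachable: B = I, C = A always succeeds
```

For instance, the all-ones 2-by-2 matrix has Boolean rank 1, while the matrix with a single zero in the bottom-right corner has Boolean rank 2.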
Further, we have $\\mc{A}=\\mc{I}_m{*_M}\\mc{A}=\\mc{A}{*_N}\\mc{I}_n$, where $\\mc{A}\\in\\mathbb{R}^{I_1\\times\\cdots\\times I_M\\times J_1 \\times\\cdots\\times\n J_N} $. It is quite apparent that\n$$0\\leq r_b(\\mc{A})\\leq \\min\\{I_1 \\times\\cdots\\times I_M,~J_1\\times\\cdots\\times J_N\\}. $$\n\nTo prove the last result of this paper, we define the weight of a Boolean tensor as follows.\n\n\\begin{definition}\nThe weight of a Boolean tensor $\\mc{A}$ is denoted by $w(\\mc{A})$ and defined as\n$$\nw(\\mc{A})=\\{\\mbox{total number of nonzero elements of } \\mc{A}\\}.\n$$\n\\end{definition}\n\nThe existence of a generalized inverse can be discussed through the Boolean rank, as follows. \n\n\n\\begin{theorem}\\label{rankthm}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots \\times I_M\\times J_1 \\times \\cdots \\times J_N}$ be any tensor with $r_b(\\mc{A})\\leq 1.$ Then $\\mc{A}$ is regular.\n\\end{theorem}\n\n\\begin{proof}\nIt is trivial for $r_b(\\mc{A})=0,$ as a consequence of the fact that the $\\mc{O}$ tensor is always regular.\n Further, consider $r_b(\\mc{A})=1$ and let $\\mc{J}$ be a tensor with no zero elements. Then there exist permutation tensors $\\mc{P}$ and $\\mc{Q}$ such that $\\mc{P}{*_M}\\mc{A}{*_N}\\mc{Q}=\\begin{bmatrix}\n\\mc{J} & \\mc{O}\\\\\n\\mc{O} & \\mc{O}\\\\\n\\end{bmatrix}.$ As $\\mc{J}$ is regular, it follows that $\\mc{P}{*_M}\\mc{A}{*_N}\\mc{Q}$ is regular.\n In view of Lemma \\ref{block} and Proposition \\ref{permu}, one can conclude that $\\mc{A}$ is regular. \n\\end{proof}\nIt is clear that if the weight of a Boolean tensor is $1,$ then the rank is also $1$. In view of this, we obtain the following result. 
\n\\begin{corollary}\\label{weightcoro}\nLet $\\mc{A}\\in\\mathbb{R}^{I_1\\times \\cdots\\times I_M\\times J_1\\times\\cdots\\times J_N}$ be any tensor with $w(\\mc{A})\\leq 1.$ Then $\\mc{A}$ is regular.\n\\end{corollary}\n\n\n\n\\section{Conclusion}\nIn this paper, we have introduced generalized inverses ($\\{i\\}$-inverses $(i = 1, 2, 3, 4)$) along with the Moore-Penrose inverse and the weighted Moore-Penrose inverse for Boolean tensors via the Einstein product, which is a generalization of the generalized inverses of Boolean matrices. In addition to this, we have discussed their existence and uniqueness. This paper also provides some characterizations through the complement and their application to generalized inverses. \nFurther, we explored the space decomposition for Boolean tensors and, at the same time, studied the rank and the weight of a Boolean tensor. \nIn particular, we limited our study to Boolean tensors with $r_b(\\mc{A}) \\leq 1$ and $w(\\mc{A}) \\leq 1$; the remaining cases are left as open problems for future studies.\\\\\n{\\bf Problem:} If the Boolean rank or weight of a tensor $\\mc{A}$ is greater than 1, then under which conditions is the Boolean tensor $\\mc{A}$ regular?\\\\\nAdditionally, it would be interesting to investigate more generalized inverses of Boolean tensors; this work is currently underway.\n\n\n\n\n\n\n\n\\noindent {\\bf{Acknowledgments}}\\\\\nThis research work was supported by Science and Engineering Research Board (SERB), Department of Science and Technology, India, under the Grant No. EEQ\\/2017\\/000747.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\paragraph{Background and Aim} Choreographies are high-level descriptions of communicating \nsystems, inspired by the ``Alice \nand Bob'' notation for security protocols, where the behaviours of participants are defined from a \nglobal viewpoint. 
Over the last two decades, they have become popular and have been applied in \ndifferent contexts, including: the specification of communication protocols \nin web service standards \\cite{wscdl}, business process notations \\cite{bpmn}, and \ntheoretical models of communications \\cite{HYC16}; the synthesis of correct-by-construction \nconcurrent software \\cite{CM13,chor:website,DGGLM17}; the runtime verification of (object-oriented) \nactor systems \\cite{NY17}; and the static verification of concurrent process models \n\\cite{LTY15,CLM17}. A notable application to software engineering is the Testable Architecture \nmethodology \\cite{savara:website}, a development lifecycle that keeps service implementations \naligned with the choreographies specified by designers.\n\nThe promise of choreographies is that they will improve correctness in concurrent programming. \nUnfortunately, this promise remains unfulfilled, because the choreography models explored so far have the unrealistic assumption that communications never fail. The only exception \nis the work in \\cite{APN17} (the state of the art in the topic \nof failures in choreographies so far), which equips choreographies with optional blocks that can be \ncancelled non-deterministically at runtime. 
This is an interesting direction, but it still has \nlimitations that impede its applicability (\\eg, communications are \nsynchronous\/instantaneous; we discuss more in related work) and, just as important, does not allow \nchoreographies to specify how the system should recover from a failure.\n\nThe aim of this work is to develop a choreography model that brings choreographies all the way to \nbeing applicable to settings with realistic communication failures.\nReaching this objective is challenging, because we need to provide enough \nexpressivity to program the local recovery strategies of participants (which may \nbe different) and at the same time retain the global viewpoint on communications offered by \nchoreographies.\nTo this end, we split choreographic communications into their declarations and \nimplementations as shown in the snippet below.\n\\begin{snippet}\n\\cnewframe{k}{\\pid s}{\\pid r}{}{\n\t\\\n\t\\code{sendExpBackoff}(k,\\pid s)\\keyword{;}\\,\n\t\\ \n\t\\code{recvTimeout}(k,\\pid r)\n}\n\\end{snippet}\nHere, we use a choreographic notation---$\\cframe{k}{\\pid s}{\\pid r}{}$---to declare a \ncommunication from a sender process $\\pid s$ to a receiver process $\\pid r$, and \nname it $k$. Then, we \\emph{implement} the communication $k$ by invoking the \nprocedures \\code{sendExpBackoff} and \\code{recvTimeout}, which respectively handle the send and \nreceive part of the communication. Both procedures handle communication failures and may perform \ndifferent retries at sending or receiving in case of failures, but with different policies: the \nfirst procedure uses exponential backoff between send attempts, while the second is based on a \nfixed timeout.\nThus, implementation needs not be symmetric between sender(s) and receiver(s).\n\n\\paragraph{Contribution}\nWe develop Robust Choreographies (RC\\xspace for short), a new choreographic programming model \nwith support for communication failures. 
In RC\\xspace, the programmer declares which communications \nshould take place using an Alice and Bob notation, and then defines how processes will enact these \ncommunications through asynchronous send and receive actions. Code regarding different \ncommunications can be interleaved freely, allowing for the modelling of dependencies between \nthe implementations of different communications.\n\nDifferently from previous work on choreographies, all send and receive actions might fail, \nmodelling that there may be connection problems and\/or timeouts on both ends.\nWe formalise this behaviour by giving an operational semantics for RC\\xspace.\nWhen a process tries to perform an action, it can later check whether this action succeeded (as in \ntypical mainstream APIs for networking), and it is possible to program recovery (\\eg, by retrying \nto perform the action, or by executing alternative code).\nRC\\xspace supports further features that are studied in the setting of choreographies with communication \nfailures for the first time, like name mobility, dynamic creation of processes (networks in RC\\xspace can \ngrow at runtime), and branching. This allows us, for example, to program processes that offload \ncommunication attempts to parallel computations.\n\nRC\\xspace is intended as an implementation model that sits on a lower level than previous work \non choreographies. The abstraction of state-of-the-art choreographic models can be recovered by \nimplementing their language constructs as procedures and thus offer them ``as a library''---or \nDomain-Specific Languages (DSLs), if we see them as macros. 
For example, for previous models that \ntake communication robustness for granted, we can write parametric procedures for robust send and \nreceive actions that attempt to perform the desired action until it succeeds (or follow \nbest-effort strategies).\nWe illustrate this idea by implementing in RC\xspace different language constructs from previous work,\nincluding procedural choreographies \cite{CM17:forte}, \none-to-any\/any-to-one interactions \cite{LNN16,LH17}, and scatter\/gather \cite{LMMNSVY15,CMP18}.\nWe also exemplify how some of these constructs can be extended to allow for failure and specify \ncompensations.\nA pleasant consequence of sharing RC\xspace as the underlying model for these constructs is that we can now \ncombine features from different works. (For example, the calculus with map\/reduce\nin \cite{LNN16} does not support parametric procedures and \cite{CM17:forte} \viceversa.)\n\nThe realistic failure model in RC\xspace allows us to identify more programming mistakes and \nprogram properties than in previous work, in particular some related to robustness. In general, we \nare interested in studying which guarantees we can provide for each communication that has been \n(choreographically) declared in a program, based on an analysis of the implementation that follows it.\nExamples of relevant questions are: Can we check whether a communication will eventually be \nimplemented, even if failures occur? And, will it have the right (type of) payload?\nWe develop a novel typing discipline that can answer questions of this kind, and apply it to the\nstatic verification of at-most-once and exactly-once delivery guarantees for user-defined code.\n\nWe end our development by showing that the foundations given by RC\xspace are applicable in practice. 
\nSpecifically, we define a formal translation (a compiler, if you like) from choreographies in RC\xspace to \na more standard process model, \ie, an asynchronous variant of the $\pi$-calculus equipped with \nstandard I\/O actions that might fail. These asynchronous fallible I\/O actions are the only way \nprocesses may interact: there is no shared memory or agreement primitive. \nWe prove that, if the original choreography is well-typed, the synthesised\ncode is operationally equivalent and enjoys deadlock-freedom.\nFor space reasons, the synthesis procedure and process model are given in \cref{sec:synthesis}.\n\n\section{Failure model}\n\label{sec:failure-model}\n\nWe discuss the failure model that we adopt in this work.\n\nCommunications are asynchronous and may fail.\nA successful send action implies that the sent message is\nnow handed over to the communication stack of the sender, which will attempt to transmit the \nmessage to the receiver.\nIf transmission succeeds, the message reaches the receiver and is stored by the communication stack \nof the receiver in a dedicated memory.\nA successful receive action means that a message has been consumed by the intended receiver, \n\ie, the message has been successfully \emph{delivered}---this \nrequires that transmission was successful.\nA receive action fails if it is executed when there is no message that it can consume.\nThis models that there may be connection problems on the end of the receiver or that a timeout \noccurred on the receive action. We assume that communication and node failures are transient, \nmeaning that failing to interact with a node does not preclude succeeding in later retries. 
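The stages just described---a send handing the message to the sender's stack, transmission, and a receive consuming the message---can be mirrored in a small executable model. The Python sketch below is illustrative only; all names (\code{Channel}, \code{send}, \code{transmit}, \code{receive}) are ours and do not belong to the formalism.

```python
BOT = None  # stands for "no payload in the stack"

class Channel:
    """One communication, tracked as two stack slots (illustrative model)."""
    def __init__(self):
        self.sender_slot = BOT     # payload handed to the sender's stack
        self.receiver_slot = BOT   # payload stored at the receiver's stack
        self.delivered = False

    def send(self, payload, ok=True):
        """Send action: on success, the payload enters the sender's stack."""
        if ok:
            self.sender_slot = payload
        return ok

    def transmit(self, ok=True):
        """Transmission: moves the payload towards the receiver's stack."""
        if ok and self.sender_slot is not BOT:
            self.receiver_slot = self.sender_slot
            return True
        return False

    def receive(self):
        """Receive action: fails (returns BOT) if there is nothing to consume."""
        if self.receiver_slot is BOT:
            return BOT
        self.delivered = True
        msg, self.receiver_slot = self.receiver_slot, BOT
        return msg
```

In this model, a receive executed before transmission fails, matching the timing failures discussed above, and delivery is only possible after a successful transmission.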
\nWe leave persistent failures to future work.\n\nThere are two settings that we consider, depending on the kind of system that the programmer is dealing with.\n\n\begin{setting}[Reliable Transmission]\n\label{fm:reliable}\nSuccessfully executing a send action means that the message has been reliably stored in the \ncommunication medium between sender and receiver. By reliably stored, we mean that the message is \nnot going to be lost until its transmission from the sender's to the receiver's communication stack \neventually takes place.\nThis is the case, for example, of local Inter-Process Communication (IPC) mechanisms, like unnamed \npipes in POSIX systems, shared memory, or file-based communications. It is also the case of \ndistributed systems using reliable message delivery protocols like TCP, under the assumption that \nthere are no connection resets (or similar issues)---in these cases, middleware can be employed to \nre-establish connections.\nCommunication failures happen in this setting, \eg, because send actions may \nfail to hand over messages to the sender's communication stack, and receive actions may fail \nbecause of timing issues (trying to receive messages before they reach the receiver's communication \nstack, or trying to receive something that is never sent).\n\end{setting}\n\n\begin{setting}[Unreliable Transmission]\n\label{fm:unreliable}\nThis setting is lower-level than the previous one. Here, we assume no reliable middleware for \nmessage transport.\nThis is the case, for example, of distributed systems that use protocols with unreliable \nmessage delivery. It is also the case for systems that use protocols which, in theory, guarantee \nmessage delivery (like TCP) but, in practice, messages acknowledged at the protocol level may fail \nto reach the application of the receiver due to connection resets, when no middleware is employed to \ndeal with these issues. 
We assume absence of corruption (\eg, prevented through checksums).\n\nSuccessfully executing a send action in this setting still means that the local communication \nstack of the sender has accepted the task of eventually sending the message, but there is no \nguarantee that the message is actually going to be successfully passed on to the receiver's stack:\ncommunication media can lose messages.\nTherefore, a sender cannot know if a successfully sent message is going to be transmitted\nto the receiver, unless the programmer explicitly implements an acknowledgement mechanism on the \napplication level and such acknowledgement is received by the sender.\n\nNaturally, the first setting allows for stronger guarantees. We will show that our typing can be \nused to guarantee at-most-once message delivery (reception by the receiver's application) in both \nsettings. For the first setting, we will also show that typing can be used to guarantee \nexactly-once message delivery. For the second setting this is unrealistic, so, as typically done in \npractice, we switch to a best-effort strategy. Therefore, we demonstrate that our typing can \nbe used to guarantee best-effort delivery, in the sense that every time an application is \nprogrammed to receive a message, it is guaranteed at least the chance to receive it correctly.\n\n\section{Related Work}\n\label{sec:related}\n\nThe work nearest to ours is \cite{APN17}, where choreographies are used to specify communication \nprotocols that abstract from data---a variant of Multiparty Session Types \cite{HYC16}.\nUnreliability is modelled by allowing parts of a choreography declared in special optional blocks \nto non-deterministically become no-ops. A static analysis guarantees that the network cannot get \nstuck even if none of the optional blocks is executed. We see our work as complementary to\n\cite{APN17}: while our initial motivation is similar, the aim is different. 
Our focus is on \nproviding guarantees on implementations, and consequently choreographies in RC\xspace are concrete \nprograms, in contrast with protocol specifications. There are also several major technical \ndifferences that make our choreography language more expressive. We mention the most \nrelevant ones.\n\nCommunications are synchronous in \cite{APN17}. This means that if \na participant succeeds in sending a message, it knows that the receiver has also succeeded. In RC\xspace, \nwe are interested in systems with asynchronous message passing. This requires \ndefining and analysing send and receive actions separately, since succeeding in one does not \nnecessarily mean succeeding in the other.\nSeparating send and receive actions is also essential to the programming of recovery \nstrategies in RC\xspace, which may be asymmetric for sender and receiver. For example, a sender may \nhave different conditions to check (\eg, a number of retries) than those at the intended receiver \nfor deciding whether an action should be retried, which cannot be captured in \cite{APN17}. \nRecovery strategies cannot be specified at all in the choreographies of \cite{APN17}, which is \nanother key distinction from our work.\nThe modelling of recovery strategies in RC\xspace is also what allows us to develop our type system for \nthe static verification of at-most-once and exactly-once delivery, which is not studied in \n\cite{APN17}.\nThe choreography model in \cite{APN17} does not include features equivalent to our \nprimitives for process spawning, name mobility, or parametric procedures. Parametric procedures are \nparticularly important for RC\xspace: including error-handling code in choreographies makes them \nnecessarily more complicated, and having procedures to modularise programs is useful, as we \nillustrate with our examples. 
Procedures are also key to our implementation\nof language constructs from previous choreography models ``as libraries'' in RC\xspace.\n\nFrom a broader perspective, we think that merging our research direction with that of \n\cite{APN17} (choreographic programs with protocol specifications) would be very interesting \nfuture work, because it may yield a static analysis for RC\xspace to check whether a given recovery \nstrategy guarantees the eventual execution of a high-level protocol (the latter might even \nabstract from failures, leaving their handling to the implementation).\n\nIn \cite{CGY16}, choreographies for protocol specifications are augmented with controlled \nexceptions. These are different from communication failures, because they are controlled by the \nprogrammer and their propagation is ensured through communications that are assumed never to fail \n(thus, they are also different from our compensations for communication failures, where we react to \nunexpected failures and do not make this assumption). This approach has been refined in \n\cite{CVBZE16}, by allowing for more fine-grained propagation of errors (but errors are still \nuser-defined, so similar comments apply).\n\nIn \cite{LNN16}, the authors present a choreography model that considers potential failures of \nnodes (processes in our terminology). This approach is quite different from ours and that in \cite{APN17}, \nsince the idea is that a system has redundant copies of a node type, and a choreography can \nspecify how many nodes of a type are needed to continue operating. No recovery can be programmed, \nand there is no presentation of how the approach can be adopted in realistic process models \n(compilation). Communications among functioning nodes are assumed infallible.\n\nOur work is also related to the research line on Choreographic Programming \n\cite{M13:phd,CM13,DGGLM17}, a paradigm where choreographies are used to define implementations of \ncommunicating systems. 
This is the first work that studies how communication failures can be dealt \nwith in this paradigm. Our primitives for name mobility and parametric procedures are inspired \nby \cite{CM17:forte}, but our methods are different, since we brought them into an asynchronous \nsetting with potentially-failing communications. Also, the fact that interactions are \nimplemented through separate send and receive actions in RC\xspace is new to choreographic programming. \nPrevious work \cite{CM17:ice} explored this distinction to achieve asynchronous communications in \nchoreographies, but these cannot fail and the distinction is used only in the runtime semantics (the \nseparate terms cannot be used programmatically).\n\nPrevious work explored a notion of bisimulation for a process calculus with explicit locations and \nlinks, where both nodes and links may fail at runtime \cite{FH08}. Differently from our \nsetting, communications are synchronous, messages cannot be lost, and failures are permanent.\nExploring a similar notion of behavioural equivalence for RC\xspace and our target process calculus is \nan interesting direction for future work, because it may lead to a substitutability principle for \ngenerated processes with respect to choreographies. For example, we could replace a block of process code \nprojected from a choreography with an equivalent one without having to re-run compilation.\nAnother interesting application could be extending RC\xspace to allow for ``undefined'' processes, whose \nbehaviour is left to be defined in other choreographies, or even (legacy) process code. Previous \nwork showed how to design such extensions for choreographies based on multiparty session \ntypes, obtaining choreographies with ``partial'' terms that refer to externally-defined code \n\cite{MY13}. 
A notion of bisimulation for such partial terms could lead to relaxing the conditions \nfor well-typedness given in \cite{MY13}.\n\nOur scatter construct in \cref{sec:examples} recalls the unreliable \nbroadcast studied in \cite{KGG14} for the setting of process calculi. \nA key difference is that, in \cite{KGG14}, recovery cannot re-attempt failed communications \n(a process exits all current sessions when a failure occurs). Moreover, \ncommunications are synchronous in \cite{KGG14}.\n\nOur formalisation of messages in transit partly recalls that in \cite{FMN07}, which presents an \nagreement protocol that works under the assumption of quasi-reliable communications and node \ncrashes. While we do not consider (permanent) node crashes in this work, the programming of recovery \nstrategies is similar in our setting. This is to be expected, since in a distributed setting a node \nmay suspect that another node crashed by being unable to communicate with it (over some time).\nA detailed study of consensus in RC\xspace (and extensions to node failures) is interesting \nfuture work.\n\nPrevious works on choreographies include choice operators that behave non-de\-term\-in\-ist\-ic\-ally, \eg,\n$C + C'$, read ``run either $C$ or $C'$'' \n\cite{QZCY07,LGMZ08,CHY12} (and their labelled variant, in \n\cite{HYC16}).\nThese operators do not capture the communication failures that we are interested in, for two \nreasons. First, they are programmed explicitly and are thus predictable.\nSecond, their formalisations assume that the propagation of choice \ninformation among processes is reliable (for compilation).\nThus, similar comments to those for the comparison with \cite{CGY16} apply.\nObserve also that the soundness of these models is ensured under the assumption that any two \nprocesses involved in some interactions together perform exactly the same number of (dual) \ncommunication actions. 
This is unrealistic, since sender and receiver can have different policies in \npractice, as we already discussed.\n\nSome previous choreography models, like \cite{CHY12}, include explicit parallel \ncomposition, $C \mid C'$. RC\xspace captures process parallelism using out-of-order \nexecution---a practice shared by other choreography models; see also\nthe paper that introduced it \cite{CM13}. In general, actions performed by distinct processes can \nbe performed in any order. However, $C \mid C'$ in \cite{CHY12} (and other works, like \n\cite{LTY15}) allows processes to have internal threads that share (and compete for) resources, \npossibly leading to races due to internal sharing. This is not allowed in RC\xspace; if you \nlike, this follows Go's slogan ``do not communicate by sharing memory; instead, share memory by \ncommunicating'' \cite{effective-go}.\n\n\section{Choreography Model}\n\label{sec:chor-model}\n\n\paragraph{Syntax}\nAn RC\xspace program is a pair $\langle \mathcal{D},C\rangle$, where $C$ is a choreography and \n$\mathcal{D}$ is a set of (global) procedure definitions following the syntax displayed below.\nWe assume the Barendregt convention and work up to $\alpha$-equivalence, renaming bound names \n(frame identifiers, process references, and procedure parameters) as needed.\n\begin{align*}\n\t\mathcal{D} \Coloneqq {} &\n\t\t\procdef{X}{\vec{P}}{C}, \mathcal{D} \mid \emptyset\n\t\\\n\tP \Coloneqq {} &\n\t\t\cframe{k}{\pid p}{\pid q}{T} \mid \pid p\colon T \mid f\colon T\n\t\t\mid l\n\t\\\n\tC \Coloneqq {} &\n\t\t\cbindin{N}{C} \mid \n\t\tI\keyword{;}\, C\mid \keyword{0} \n\t\\\n\tI \Coloneqq {} &\n\t\clocal{\pid p}{f} \mid\n\t\t\csend{k}{S} \mid\n\t\t\crecv{k}{R} \mid\n\t\tX(\vec{A})\mid\n\t\t\cond{E}{C}{C'} \n\t\\ \mid {} & \keyword{0} \n\t\\\n\tS \Coloneqq {} &\n\t\t\pid p.f \mid \pid p.\pid r \mid \pid p.l\n\t\\\n\tR \Coloneqq {} &\n\t\t\pid q.f \mid 
\\pid q\n\t\\\\\n\tE \\Coloneqq {} &\n\t\t\\pid p.f \\mid \\pid p.\\csent{k} \\mid \\pid q.\\creceived{k} \\mid \\pid q.\\creceivedlbl{k}{l}\n\t\\\\\n\tA \\Coloneqq {} &\n\t\tk \\tteq k' \\mid \\pid p\\tteq \\pid p' \\mid f \\tteq f' \\mid l \\tteq l'\n\t\\\\\n\tN \\Coloneqq {} &\n\t\t\\cstart{\\pid p}{\\pid q}{f} \\mid \n\t\t\\cframe{k}{\\pid p}{\\pid q}{T}\n\\end{align*}\nProcess names ($\\pid p$,$\\pid q$,$\\pid r$,\\dots) identify processes that execute concurrently. Each process has exclusive access to a private memory cell for storing values of a fixed type $T$ from a fixed set $\\mathcal{V}$ of datatypes (\\eg $\\type{Nat}$, $\\type{Char}$, $\\type{Bool}$). Values are manipulated only via functions (terms $f$) specified in a \\emph{guest language} which is intentionally left as a parameter of the model. \nFollowing practices established in previous choreography models \\cite{M13:phd,CM13,DGGLM17,CM17:forte} we assume that evaluation of internal computations is local and terminates. \nThe only further assumptions about the guest language of internal computations \nare that it comes with a typing discipline and that it supports boolean values (or an equivalent mechanism). Typing judgements will have the form $\\vdash f\\colon T \\to S$ and the type of boolean values will be denoted as $\\type{Bool}$.\nBesides values used by internal computations, processes can communicate process names and label selections (terms $\\pid p.\\pid r$, $\\pid p.l$).\nThese payloads are inaccessible to the guest language and hence are assigned (disjoint) types $\\type{PID}$ and $\\type{LBL}$ not in $\\mathcal{V}$. For exposition convenience, we define $\\type{VAL}$ as the super type of all datatypes used by the guest language \\ie we define the subtyping relation $\\subtype$ as the smallest partial order such that $T \\subtype \\type{VAL}$ for any $T \\in \\mathcal{V}$. 
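Spelled out concretely, the relation $\subtype$ on ground types amounts to the following check. This is a Python sketch of our own; the listed members of $\mathcal{V}$ are just the example datatypes mentioned above.

```python
# Example members of the fixed set 𝒱 of guest-language datatypes.
VALUE_TYPES = {"Nat", "Char", "Bool"}

def subtype(t: str, s: str) -> bool:
    """The smallest partial order with T ⊑ VAL for every T in 𝒱.
    PID and LBL are not in 𝒱, so they are unrelated to VAL."""
    return t == s or (t in VALUE_TYPES and s == "VAL")
```

For instance, `subtype("Nat", "VAL")` holds by the defining clause, while `subtype("PID", "VAL")` does not, since $\type{PID}$ is kept disjoint from the guest-language datatypes.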
We assume a constructor $\\mid$ making (disjoint) union types based on $\\type{VAL}$, $\\type{PID}$, and $\\type{LBL}$ (\\eg $\\type{PID} \\mid \\type{LBL}$) and extend the relation $\\subtype$ accordingly ($\\type{PID} \\subtype \\type{PID} \\mid \\type{LBL}$, $\\type{LBL} \\subtype \\type{PID} \\mid \\type{LBL}$, \\etc). Note that we do not require the guest language to come with subtyping or unions.\nWe write $t \\in T$ to express that $t$ inhabits $T$.\n\nChoreography declarations ($N$) introduce new processes and frames in their continuation ($C$).\nTerm $\\cnewframe{k}{\\pid p}{\\pid q}{T}{C}$ declares a communication from $\\pid p$ to $\\pid q$ where $T$ is the payload type and $k$ is the frame identifier to be used by the implementation of the communication.\nTerm $\\cnewproc{\\pid p}{\\pid q}{f}{C}$ declares a new process $\\pid q$ where $\\pid p$ is the process that spawns $\\pid q$ and $f$ is a function used by $\\pid p$ to compute the initial value for the memory cell of $\\pid q$.\nChoreography statements ($I$) can be local computations, communication actions, conditionals, calls; all statements have continuations. Term \n$\\clocal{\\pid p}{f}$ represents an internal computation where process $\\pid p$ evaluates the \nfunction $f$ against its memory cell and updates its content. Send and receive actions in \nthe implementation of $k$ are described by terms of the form $\\csend{k}{S}$ and $\\crecv{k}{R}$ \nwhere subterms $S$ and $R$ depend on the payload type.\nIn value exchanges, terms $\\csend{k}{\\pid p.f}$ and $\\crecv{k}{\\pid q.f'}$ read ``$\\pid p$ applies \n$f$ to the content of its memory cell and attempts to send the result on frame $k$'' and ``$\\pid q$ \nattempts to receive a value on frame $k$ and, if successful, applies $f'$ to its memory cell and \nthe received value'', respectively. 
(We assume functions in sends and receives to accept \nrespectively exactly one argument and exactly two arguments, where the first argument is the \nprocess memory content.)\nIn label selections, terms $\csend{k}{\pid p.l}$ and $\crecv{k}{\pid q}$ read\n``$\pid p$ attempts to send the selection of label $l$ on frame $k$'' and ``$\pid q$ attempts \nto receive a selection on frame $k$''. Selections are meant to propagate information regarding \ninternal choices and as such have no side effects on process memory or network knowledge. (As we \nwill discuss in \cref{sec:synthesis}, this mechanism is crucial for synthesising correct \nimplementations of conditionals.)\nIn process exchanges, terms $\csend{k}{\pid p.\pid r}$ and $\crecv{k}{\pid q}$ read ``$\pid \np$ attempts to send $\pid r$ on frame $k$'' and ``$\pid q$ attempts to receive a process name on frame \n$k$'', respectively. The only side effect of process exchanges is on the network knowledge of the \nreceiver, which may learn a new process reference. This is necessary since networks may grow during \nthe execution of choreographic programs as new processes are spawned. \nIn a conditional term $\cond{E}{C_1}{C_2}$, a process evaluates the guard $E$ and chooses between \nthe possible continuations $C_1$ and $C_2$ accordingly. 
We explain the meaning of each kind of \nguard $E$ in the following.\n\\begin{itemize}\n\t\\item $\\pid p.f$: $\\pid p$ chooses $C_1$ if applying $f$ to its memory content yields \n$\\literal{true}$, and $C_2$ otherwise; \n\t\n\t\\item $\\pid p.\\csent{k}$: $\\pid p$ chooses $C_1$ if its last send attempt \nfor $k$ was successful, and $C_2$ otherwise; \n\t\n\t\\item $\\pid q.\\creceived{k}$: $\\pid q$ chooses $C_1$ if its last receive \nattempt for $k$ was successful, and $C_2$ otherwise;\n\t\n\t\\item $\\pid q.\\creceivedlbl{k}{l}$: $\\pid q$ chooses $C_1$ if it successfully\nreceived the label $l$ on $k$, and $C_2$ otherwise.\n\\end{itemize}\nTerm $X(\\vec{A})$ is a call of procedure $X$ with the set of named arguments $\\vec{A}$; these can \nbe frame identifiers, process names, or (names of) functions in the guest language.\nTerm $\\keyword{0}$ is the \\emph{no-op} statement, also used to represent terminated choreographies.\n\nProcedures are defined by terms $\\procdef{X}{\\vec{P}}{C}$ where $X$ is the procedure name, \n$\\vec{P}$ is a set of parameter declarations, and the program term $C$ is the procedure body. A term \n$f\\colon T\\to S$ in $\\vec{P}$ binds a function (name) in $C$ and specifies its type. A term \n$\\cframe{k}{\\pid p}{\\pid q}{T}$ in $\\vec{P}$ binds the frame identifier $k$ in $C$ and specifies its \ntype, sender and receiver. A term $\\pid p\\colon T$ in $\\vec{P}$ binds the process name $\\pid p$ in \n$C$ and specifies the type of its memory cell. \nA set of procedure definitions $\\mathcal{D}$ is well-formed provided that its procedure definitions \nhave unique names, all free names in their bodies are captured by their parameters, and all calls \nare to procedures in $\\mathcal{D}$.\n\nIn the sequel, we may omit $\\keyword{0}$, empty $\\keyword{else}$ branches, and use basic logical connectors in guards as syntactic sugar. 
\nFor instance, we write $\scond{\neg\pid p.\csent{k}}{C}$ as sugar for\n$\cond{\pid p.\csent{k}}{\keyword{0}}{C}$.\nIn procedure calls, we may omit assignments if formal and actual parameters have the same name and,\n\eg, simply write $X(k)$ instead of $X(k \tteq k)$.\n\n\paragraph{Semantics}\nThe dynamics of RC\xspace is specified by the reduction semantics defined in \cref{fig:chor-semantics}. The \nsemantics is parameterised over global procedures $\mathcal{D}$, and its states (hereafter \n\emph{runtime configurations}) are quadruples $\langle C,\sigma,\phi,G\rangle$. We describe\nthe components of runtime configurations in the following.\n\begin{description}\n\t\item[$C$]\n\tThe first component is the current program term.\n\t\n\t\item[$\sigma$]\n\tThe second component, called \emph{memory configuration}, keeps track of the memory cell of each process in the system, which is\n\taccessible to the guest language for performing internal computation.\n\tFormally, it is a partial map from process names to values, \ie $\sigma(\pid p) = v$ denotes \n\tthat the memory cell of process $\pid p$ stores value $v$.\n\t\t\n\t\item[$\phi$]\n\tThe third component, called (concrete) \emph{frame dictionary}, is a partial map from frame \n\tnames to representations of their states.\n\tMore specifically, $\phi(k) = \phiframe{\pid s}{u}{\pid r}{u'}$ denotes that ``frame $k$ has \n\tstate $\phiframe{\pid s}{u}{\pid r}{u'}$''.\n\tThe processes $\pid s$ and $\pid r$ are, respectively, the (intended) sender and receiver of \n\t$k$. The elements $u$ and $u'$ represent the states of the communication stacks of sender and \n\treceiver for the specific frame, respectively. Formally, $u$ and $u'$ can be a payload (a \n\tvalue, a label, or a process name), $\bot$ (the payload did not enter the stack), or a \n\tpayload flagged as ``removed from the stack'' (denoted by the decoration $\checkmark$, as \n\tin $\removed{v}$). 
Removal from the sender stack happens when the stack attempts to \n\ttransmit the payload to the intended receiver, and \n\tremoval from the receiver stack means a successful delivery to the receiver's application.\n\t\n\t\item[$G$]\n\tThe fourth component, called \emph{connection graph}, is a directed graph with process names as \n\tnodes. A process neighbourhood represents the processes that it knows (and thus can communicate\n\twith).\n\end{description}\nTogether, the last two components form a \emph{network configuration}.\n\n\begin{figure*}[t]\n\begin{infrules}\n\t\infrule[\rname[C]{NP}][rule:c-new-proc]{\n\t\tf(\sigma(\pid p)) \downarrow v\n\t\t\and\n\t\tG' = G\cup \{\pid p \leftrightarrow \pid q\} \cup \{\pid q \rightarrow \pid r \mid G \vdash \pid p \rightarrow \pid r \}\n\t}{\n\t\t\langle\cnewproc{\pid p}{\pid q}{f}{C},\sigma,\phi, G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\n\t\t\langle C,\sigma(\pid q)[v],\phi, G'\rangle\n\t}\n\t\infrule[\rname[C]{NF}][rule:c-new-frame]{\n\t}{\n\t\t\langle\cnewframe{k}{\pid p}{\pid q}{T}{C},\sigma,\phi, G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\n\t\t\langle C,\sigma,\phi(k)[\phiframe{\pid p}{\bot}{\pid q}{\bot}], G\rangle\n\t}\n\n\t\infrule[\rname[C]{Int}][rule:c-int-comp]{\n\t\tf(\sigma(\pid p)) \downarrow v\n\t}{\n\t\t\langle\clocal{\pid p}{f}\keyword{;}\, C ,\sigma,\phi, G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C,\sigma(\pid p)[v],\phi, G\rangle \n\t}\n\t\infrule[\rname[C]{Snd}][rule:c-send]{\n\t\ts(\sigma(\pid p)) \downarrow u \and\n\t\t\phi(k) = \phiframe{\pid p}{\_}{\pid q}{\_} \and\n\t\tG \vdash \pid p \to \pid q\n\t}{\n\t\t\langle\csend{k}{\pid p.s}\keyword{;}\, C ,\sigma,\phi, G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C,\sigma,\phi(k)_2[u], G\rangle \n\t}\n\t\infrule[\rname[C]{SndFail}][rule:c-send-fail]{}{\n\t\t\langle\csend{k}{S}\keyword{;}\, C 
,\\sigma,\\phi, G\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma,\\phi, G\\rangle \n\t}\n\\\\\n\t\\infrule[\\rname[C]{Loss}][rule:c-loss]{\n\t\t\\phi(k) = \\phiframe{\\pid s}{u}{\\pid q}{\\_}\n\t\t\\and\n\t\tG \\vdash \\pid p \\leftrightarrow \\pid q\n\t}{\n\t\t\\langle C,\\sigma,\\phi\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma,\\phi(k)_2[\\removed{u}], G\\rangle \n\t}\n\t\\infrule[\\rname[C]{Comm}][rule:c-comm]{\n\t\t\\phi(k) = \\phiframe{\\pid p}{u}{\\pid q}{\\_}\n\t\t\\and\n\t\tu \\in \\mathcal{U}\n\t\t\\and\n\t\tG \\vdash \\pid p \\leftrightarrow \\pid q\n\t}{\n\t\t\\langle C,\\sigma,\\phi\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma,\\phi(k)[\\phiframe{\\pid p}{\\removed{u}}{\\pid q}{u}], G\\rangle \n\t}\n\t\\infrule[\\rname[C]{RcvV}][rule:c-recv-val]{\n\t\t\\phi(k) = \\phiframe{\\pid p}{\\_}{\\pid q}{u}\n\t\t\\and\n\t\tu \\in \\{v,\\removed{v}\\}\n\t\t\\and\n\t\tf(\\sigma(\\pid q),v) \\downarrow w\n\t}{\n\t\t\\langle \\crecv{k}{\\pid q.f}\\keyword{;}\\, C ,\\sigma,\\phi,G\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma(\\pid q)[w],\\phi(k)_4[\\removed{w}], G\\rangle \n\t}\n\t\\infrule[\\rname[C]{RcvP}][rule:c-recv-pid]{\n\t\t\\phi(k) = \\phiframe{\\pid p}{\\_}{\\pid q}{u}\n\t\t\\and\n\t\tu \\in \\{\\pid r,\\removed{\\pid r}\\}\n\t}{\n\t\t\\langle \\crecv{k}{\\pid q}\\keyword{;}\\, C ,\\sigma,\\phi,G\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma,\\phi(k)_4[\\removed{\\pid r}], G \\cup \\{\\pid q \\to \\pid r\\}\\rangle \n\t}\n\t\\infrule[\\rname[C]{RcvL}][rule:c-recv-lbl]{\n\t\t\\phi(k) = \\phiframe{\\pid p}{\\_}{\\pid q}{u}\n\t\t\\and\n\t\tu \\in \\{l,\\removed{l}\\}\n\t}{\n\t\t\\langle \\crecv{k}{\\pid q}\\keyword{;}\\, C ,\\sigma,\\phi,G\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C,\\sigma,\\phi(k)_4[\\removed{l}], G\\rangle \n\t}\n\t\\infrule[\\rname[C]{RcvFail}][rule:c-recv-fail]{\n \t\t\\phi(k)_4 = 
\bot\n\t}{\n\t\t\langle\crecv{k}{R}\keyword{;}\, C ,\sigma,\phi, G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C,\sigma,\phi, G\rangle \n\t}\n\t\infrule[\rname[C]{IfSnt}][rule:c-if-sent]{\n\t\t\phi(k)_1 = \pid p \and\n\t\t\text{if } \phi(k)_2 \neq \bot\n\t\t\text{ then } j = 1\n\t\t\text{ else } j = 2\n\t}{\n\t\t\langle \cond{\pid p.\csent{k}}{C_1}{C_2}\keyword{;}\, C ,\sigma,\phi,G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C_j\keyword{;}\, C,\sigma,\phi,G\rangle \n\t}\n\t\infrule[\rname[C]{IfRcv}][rule:c-if-recv]{\n\t\t\phi(k)_3 = \pid q \and\n\t\t\text{if } \phi(k)_4 \in \mathcal{U}^\checkmark\n\t\t\text{ then } j = 1\n\t\t\text{ else } j = 2\n\t}{\n\t\t\langle \cond{\pid q.\creceived{k}}{C_1}{C_2}\keyword{;}\, C ,\sigma,\phi,G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C_j\keyword{;}\, C,\sigma,\phi,G\rangle \n\t}\n\t\infrule[\rname[C]{IfLbl}][rule:c-if-lbl]{\n\t\t\t\phi(k)_3 = \pid q \and\n\t\t\text{if } \phi(k)_4 = \removed{l}\n\t\t\text{ then } j = 1\n\t\t\text{ else } j = 2\n\t}{\n\t\t\langle \cond{\pid q.\creceivedlbl{k}{l}}{C_1}{C_2}\keyword{;}\, C ,\sigma,\phi,G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C_j\keyword{;}\, C,\sigma,\phi,G\rangle \n\t}\n\t\infrule[\rname[C]{IfExp}][rule:c-if-exp]{\n\t\t\text{if } f(\sigma(\pid p)) \downarrow \literal{true}\n\t\t\text{ then } j = 1\n\t\t\text{ else } j = 2\n\t}{\n\t\t\langle \cond{\pid p.f}{C_1}{C_2}\keyword{;}\, C ,\sigma,\phi,G\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C_j\keyword{;}\, C,\sigma,\phi,G\rangle \n\t}\n\t\n\t\infrule[\rname[C]{Str}][rule:c-struct]{\n\t\tC_1 \precongr_{\mathcal{D}} C_1'\n\t\t\and\n\t\t\langle C_1' ,\sigma_1,\phi_1,G_1\rangle \n\t\t\reducesto_{\mathcal{D}}\t\t\n\t\t\langle C_2',\sigma_2,\phi_2,G_2\rangle\n\t\t\and\n\t\tC_2' \precongr_{\mathcal{D}} C_2\n\t}{\n\t\t\langle C_1 
,\\sigma_1,\\phi_1,G_1\\rangle \n\t\t\\reducesto_{\\mathcal{D}}\t\t\n\t\t\\langle C_2,\\sigma_2,\\phi_2,G_2\\rangle \n\t}\n\n\t\\infrule[\\rname[C]{Unfold}][rule:c-unfold]{\n\t\tX(\\vec{P}) = C_2 \\in \\mathcal{D}\n\t\t\\qquad\n\t\t\\vec{P} = \\dom(\\vec{A})\n\t}{\n\t\tX(\\vec{A})\\keyword{;}\\, C_1\n\t\t\\precongr_{\\mathcal{D}}\t\n\t\tC_2[\\vec{A}]\\keyword{;}\\, C_1\n\t}\n\t\\infrule[\\rname[C]{Nil}][rule:c-nil]{}{\n\t\t\\keyword{0}\\keyword{;}\\, C \\precongr_{\\mathcal{D}}\tC\n\t}\n\t\\infrule[\\rname[C]{Swap}][rule:c-swap]{\n\t\tC_1\n\t\t\\dotrel{\\congr}\n\t\tC_2\n\t}{\n\t\tC_1\n\t\t\\congr_{\\mathcal{D}}\t\n\t\tC_2\n\t}\n\t\\infrule[\\rname[C]{I-I}][rule:c-i-i]{\n\t\t\\mathrm{pn}(I_1) \\cap \\mathrm{pn}(I_2) = \\emptyset\n\t}{\n\t\tI_1\\keyword{;}\\, I_2\n\t\t\\dotrel{\\congr}\n\t\tI_2\\keyword{;}\\, I_1\n\t}\n\t\\infrule[\\rname[C]{I-N}][rule:c-i-n]{\n\t\t\\mathrm{pn}(I) \\cap \\mathrm{pn}(N) = \\emptyset\t\n\t}{\n\t\tI\\keyword{;}\\, \\cbindin{N}{C}\n\t\t\\dotrel{\\congr}\n\t\t\\cbindin{N}{I \\keyword{;}\\, C}\n\t}\n\t\\infrule[\\rname[C]{N-N}][rule:c-n-n]{\n\t\t\\mathrm{pn}(N_1) \\cap \\mathrm{pn}(N_2) = \\emptyset\t\n\t}{\n\t\t\\cbindin{N_1}{\\cbindin{N_2}{C}}\n\t\t\\dotrel{\\congr}\n\t\t\\cbindin{N_2}{\\cbindin{N_1}{C}}\n\t}\n\t\\infrule[\\rname[C]{I-If}][rule:c-i-if]{\n\t\t\\mathrm{pn}(I) \\cap \\mathrm{pn}(E) = \\emptyset\n\t}{\n\t\t\\cond{E}{I\\keyword{;}\\, C_1}{I\\keyword{;}\\, C_2}\n\t\t\\dotrel{\\congr}\n\t\tI\\keyword{;}\\, \\cond{E}{C_1}{C_2}\n\t}\n\t\\infrule[\\rname[C]{If-I}][rule:c-if-i]{}{\n\t\t\\cond{E}\n\t\t{C_1\\keyword{;}\\, I}\n\t\t{C_2\\keyword{;}\\, I}\n\t\t\\dotrel{\\congr}\n\t\t\\cond{E}{C_1}{C_2}\n\t\t\\keyword{;}\\, I\n\t}\n\t\\infrule[\\rname[C]{N-If}][rule:c-n-if]{\n\t\t\t\\mathrm{pn}(N) \\cap \\mathrm{pn}(E) = \\emptyset\t\t\n\t}{\n\t\t\\cond{E}{\\cbindin{N}{C_1}}{\\cbindin{N}{C_2}}\n\t\t\\dotrel{\\congr}\n\t\t\\cbindin{N}{\\cond{E}{C_1}{C_2}}\n\t}\n\t\\infrule[\\rname[C]{If-If}][rule:c-if-if]{\n\t\t\\mathrm{pn}(E_1) \\cap 
\\mathrm{pn}(E_2) = \\emptyset\n\t}{\n\t\t\\begingroup\\def1.1{1.1}\n\t\t\\begin{array}{r}\n\t\t\t\\cond{E_1}\n\t\t\t\t{\\cond{E_2}{C_{1}^{1}}{C_{1}^{2}}\\\\\\!}\n\t\t\t\t{\\cond{E_2}{C_{2}^{1}}{C_{2}^{2}}}\n\t\t\\end{array}\n\t\t\\endgroup\n\t\t\\dotrel{\\congr}\n\t\t\\begingroup\\def1.1{1.1}\n\t\t\\begin{array}{r}\n\t\t\t\\cond{E_2}\n\t\t\t\t{\\cond{E_1}{C_{1}^{1}}{C_{2}^{1}}\\\\\\!}\n\t\t\t\t{\\cond{E_1}{C_{1}^{2}}{C_{2}^{2}}}\n\t\t\\end{array}\n\t\t\\endgroup\n\t}\n\\end{infrules}\n\t\\caption{Choreographic model, operational semantics}\n\t\\label{fig:chor-semantics}\n\\end{figure*}\n\n\nWe define some convenient notation for the definition of our semantics.\nLet $\\mathcal U$ be the set of all payloads, \\ie, $\\mathcal{U} = \\type{VAL} \\uplus \\type{LBL} \n\\uplus \\type{PID}$. A frame state $\\phiframe{\\pid s}{u}{\\pid r}{u'}$ is thus an element of\n$\\type{PID} \\times \\mathcal{U}{}_\\bot^\\checkmark\\times\\type{PID}\\times\\mathcal{U}{}_\\bot^\\checkmark$,\nwhere: \n$\\mathcal{U}{}_\\bot^\\checkmark = \n\\mathcal{U} \\uplus \\{\\bot\\} \\uplus \\mathcal{U}^\\checkmark$, with $\\mathcal{U}^\\checkmark \n= \\{\\removed{u} \\mid u \\in \\mathcal{U}\\}$ the set of payloads flagged as ``removed'' from the stack.\nAn expression $\\phi(k)_i$ denotes the $i$-th component of the frame state $\\phi(k)$. 
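For readers who prefer executable notation, the frame-state bookkeeping can be mirrored in a small Python sketch (purely illustrative; `BOT`, `Removed`, and the helper names are ours, not part of RC):

```python
# Illustrative model of frame states: a 4-tuple (sender, u, receiver, u')
# where each payload slot holds BOT (undefined, written ⊥ above), a raw
# payload, or a payload flagged as "removed" from the stack (✓u).

BOT = None  # stands for ⊥

class Removed:
    """Wraps a payload u to represent the flagged value ✓u."""
    def __init__(self, payload):
        self.payload = payload
    def __eq__(self, other):
        return isinstance(other, Removed) and self.payload == other.payload

def component(phi, k, i):
    """phi(k)_i: the i-th component (1-based) of the frame state phi(k)."""
    return phi[k][i - 1]

def update(phi, k, i, u):
    """Like phi, but the i-th component of k's state is set to u."""
    state = list(phi[k])
    state[i - 1] = u
    return {**phi, k: tuple(state)}

# A fresh frame k between sender s and receiver r:
phi = {"k": ("s", BOT, "r", BOT)}
phi = update(phi, "k", 2, "v")           # sender stack accepts payload v
phi = update(phi, "k", 2, Removed("v"))  # payload handed over to transport
assert component(phi, "k", 2) == Removed("v")
assert component(phi, "k", 4) is BOT     # receiver side still undefined
```

Here `Removed` plays the role of the ✓ flag and `update` mirrors the component-update notation used in the rules.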
A judgement $G \n\\vdash \\pid p \\to \\pid q$ states that $G$ has the edge $\\pid p \\to \\pid q$.\nUpdates to $\\sigma$ and $\\phi$ are written using a square bracket notation, \nspecifically: $\\sigma(\\pid p)[v]$ is the function defined as $\\sigma$\neverywhere except $\\pid p$, which is mapped to $v$;\n$\\phi(k)[\\phiframe{\\pid s}{u}{\\pid r}{u'}]$ is as $\\phi$, but $k$ is now mapped to\n$\\phiframe{\\pid s}{u}{\\pid r}{u'}$; $\\phi(k)_2[u]$ changes the second element (the sender's \nside) of the frame state for $k$ to $u$; and, likewise, $\\phi(k)_4[u]$ changes the fourth element \n(the receiver's side) of the frame state for $k$ to $u$.\n\n\n\\begin{figure*}[t]\n\\centering\n\\begin{tikzpicture}[auto,yscale=1.4,xscale=3.5,\n\t\tstate\/.style={\n\t\t\trectangle split, \n\t\t\trectangle split horizontal, \n\t\t\trectangle split parts=2,\n\t\t\tdraw,\n\t\t\trounded corners=5pt,\n\t\t\touter sep=2pt,\n\t\t\tinner sep=2pt,\n\t\t\tfont=\\small,\n\t text height=1.3ex,\n\t text depth=.1ex,\n\t text centered,\n\t minimum height=1.8em,\n\t\t},\n\t\ttransition\/.style={\n\t\t\t>=open triangle 60,->,\n\t\t\trounded corners=5pt,\n\t\t},\n\t\ttransition loss\/.style={\n\t\t\ttransition,\n\t\t\tdashed,\n\t\t\tblue\n\t\t},\n\t\ttransition label\/.style={\n\t\t\tpos=.45,\n\t\t\tfont=\\footnotesize,\n\t\t}\n\t]\n\n\t\\newcounter{stc}\n\t\\def\\state(#1) at (#2) <#3|#4>{\n\t\t\\stepcounter{stc}\n\t\t\\node[state,\n\t\t\tlabel={[label distance=-3pt]110:\\color{white!40!black}\\footnotesize(\\alph{stc})}\n\t\t] (#1) at (#2) {\t\t\t\t\n\t\t\t\\code{#3}\n\t\t\t\\nodepart{two}\n\t\t\t\\ensuremath{#4}\n\t\t};\n\t}\n\n\t\\state (nA) at (1,3) <1,2,3,4|>\n\t\\state (nB) at (1,2) < 2,3,4|\\bot,\\bot>\n\t\\state (nC) at (2,2) < 3,4|v,\\bot>\n\t\\state (nD) at (2,3) < 3,4|\\removed{v},v>\n\t\\state (nE) at (3,3) < 4|\\removed{v},\\removed{v}>\n\t\\state (nF) at (1,1) < 2,4|\\bot,\\bot>\n\t\\state (nG) at (0,2) < 3,4|\\bot,\\bot>\n\t\\state (nH) at (0,1) < 4|\\bot,\\bot>\n\t\\state (nI) 
at (2,1) < 4|v,\\bot>\n\t\\state (nJ) at (3,2) < 3,4|\\removed{v},\\bot>\n\t\\state (nK) at (3,1) < 4|\\removed{v},\\bot>\n\t\\state (nL) at (2,0) < 4|\\removed{v},v>\n\t\t\t\t\t\n\t\\draw[transition] (nA) -- \n\t\tnode[transition label] {\\ref{rule:c-new-frame}} \n\t\t(nB);\n\t\\draw[transition] (nB) --\n\t\tnode[transition label] {\\ref{rule:c-send}} \n\t\t(nC);\n\t\\draw[transition] (nC) --\n\t\tnode[transition label,swap] {\\ref{rule:c-comm}} \n\t\t(nD);\n\t\\draw[transition] (nD) --\n\t\tnode[transition label] {\\ref{rule:c-recv-val}} \n\t\t(nE);\n\t\\draw[transition] (nB) -- \n\t\tnode[transition label] {\\ref{rule:c-recv-fail}} \n\t\t(nF);\n\t\\draw[transition] (nB) -- \n\t\tnode[transition label,swap] {\\ref{rule:c-send-fail}} \n\t\t(nG);\n\t\\draw[transition] (nF) -- \n\t\tnode[transition label,swap] {\\ref{rule:c-send-fail}} \n\t\t(nH);\n\t\\draw[transition] (nF) --\n\t\tnode[transition label] {\\ref{rule:c-send}} \n\t\t(nI);\n\t\\draw[transition] (nG) -- \n\t\tnode[transition label] {\\ref{rule:c-recv-fail}} \n\t\t(nH);\n\t\\draw[transition] (nC) -- \n\t\tnode[transition label] {\\ref{rule:c-recv-fail}} \n\t\t(nI);\n\t\\draw[transition loss] (nC) -- \n\t\tnode[transition label] {\\ref{rule:c-loss}} \n\t\t(nJ);\n\t\\draw[transition] (nI) --\n\t\tnode[transition label] {\\ref{rule:c-comm}} \n\t\t(nL);\n\t\\draw[transition loss] (nI) -- \n\t\tnode[transition label] {\\ref{rule:c-loss}} \n\t\t(nK);\n\t\\draw[transition] (nJ) -- \n\t\tnode[transition label] {\\ref{rule:c-recv-fail}} \n\t\t(nK);\n\\end{tikzpicture}\n\\caption{An end-to-end communication and its execution.}\n\\label{fig:com-execution}\n\\end{figure*}\n\n\nFor compactness, the presentation relies on the structural precongruence $\\precongr_\\mathcal{D}$ via the standard mechanism of \\cref{rule:c-struct}; the relation is defined as the smallest relation on choreographies closed under rules in \\cref{fig:chor-semantics} (discussed below) and under syntactic constructors of the language. 
\nHerein, $C \\congr_\\mathcal{D} C'$ is a shorthand \nfor $C \\precongr_\\mathcal{D} C'$ and $C' \\precongr_\\mathcal{D} C$.\nUnnecessary schematic variables are omitted and replaced by the wildcard $\\_$.\n\n\\Cref{rule:c-new-proc} describes the creation of a new process, which inherits the network knowledge \nof its parent---as is common in standard process models, \\eg, the $\\pi$-calculus \\cite{MPW92}.\nThe expression $f(\\sigma(\\pid p)) \\downarrow v$ in the rule premises states that the evaluation of \n$f$ against the content of the memory of $\\pid p$ yields value $v$. \nIn the reactum the memory of $\\pid p$ is initialised to $v$ and the connection graph is updated to \ninclude: the mutual connection between $\\pid p$ and $\\pid q$; and a connection from $\\pid q$ to \neach process in the neighbourhood of $\\pid p$.\n\\Cref{rule:c-new-frame} models the creation of a new frame. In the reactum the frame is given \nstatus $\\phiframe{\\pid p}{\\bot}{\\pid q}{\\bot}$, meaning that neither the sender's nor the \nreceiver's stack contains the frame payload.\n\\Cref{rule:c-send,rule:c-send-fail} describe the execution of a send attempt for frame $k$. In the \nfirst case the sender computes a payload for $k$ and its stack accepts it, whereas in the second \ncase it rejects it (the send action failed). In both cases no information about the attempt is \npropagated to the receiver side. 
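The two outcomes of a send attempt can be sketched as follows (illustrative Python; `stack_accepts` is our stand-in for the stack's accept/reject decision):

```python
# Sketch of rules C-Send / C-SendFail (illustrative; names are ours).
# A frame state is the 4-tuple (sender, u, receiver, u'); a successful
# attempt stores the computed payload in the sender's slot, a failed one
# leaves the state untouched, and the receiver side is unaffected in
# both cases.

def send_attempt(frame, payload, stack_accepts):
    sender, _, receiver, u_recv = frame
    if stack_accepts(payload):                     # C-Send
        return (sender, payload, receiver, u_recv), True
    return frame, False                            # C-SendFail

fresh = ("s", None, "r", None)                     # status s[⊥] -> r[⊥]
after_ok, ok = send_attempt(fresh, "v", lambda _: True)
after_ko, ko = send_attempt(fresh, "v", lambda _: False)
assert ok and after_ok == ("s", "v", "r", None)
assert not ko and after_ko == fresh                # receiver learns nothing
```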
For conciseness, we extend the notation of function evaluation \n($f(\\sigma(\\pid p)) \\downarrow v$) to process names and labels by regarding them as constants---we \nsignal this abuse of the notation by writing $s$ and $u$ in place of $f$ and $v$, respectively.\nObserve that \\cref{rule:c-send} does not check the frame's state, meaning that a sender may \nperform multiple send actions resulting in the transmission of different payloads for the \nsame frame.\n\\Cref{rule:c-comm,rule:c-loss} model frame transmission and its non-deterministic outcome.\nOnly in the first case the payload reaches the receiver (successful transmission), but the sender \nhas no knowledge of this outcome since the effect on its stack is the same \n(both reacti set $\\phi(k)_2$ to $\\removed{u}$).\n\\Cref{rule:c-recv-val,rule:c-recv-pid,rule:c-recv-lbl,rule:c-recv-fail} define the execution of a \nreceive attempt for frame $k$. The first three rules model the delivery of different types of \npayloads (values, processes, and labels) and the fourth models failure due to any payload for $k$ \nnot having reached the receiver end yet. \\Cref{rule:c-unfold} unfolds procedure calls by replacing \nall occurrences of formal arguments in its body ($C_2$) with actual ones as prescribed by the \nsubstitution $\\vec{A}$ (provided its domain of definition coincides with the set $\\vec{P}$) and all \nnames bound in the procedure body $C_2$ with fresh ones as per Barendregt's convention.\n\\Cref{rule:c-i-i,rule:c-i-n,rule:c-n-n,rule:c-i-if,rule:c-if-i,rule:c-n-if,rule:c-if-if} model the dynamic \nrescheduling of non-interfering operations, like communications involving different processes, \nwhere $\\mathrm{pn}(C)$ is the set of process names in $C$. 
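The disjointness side condition driving these swap rules can be sketched as follows (illustrative Python; the statement encoding is ours):

```python
# Sketch of the side condition behind the swap rules: two statements may
# be rescheduled only if the sets of process names they mention are
# disjoint (pn(I1) ∩ pn(I2) = ∅), as in rule C-I-I.

def pn(stmt):
    """Process names occurring in a statement, listed explicitly here."""
    return set(stmt["procs"])

def can_swap(s1, s2):
    return pn(s1).isdisjoint(pn(s2))

send_pq = {"op": "send", "procs": ["p", "q"]}
send_rs = {"op": "send", "procs": ["r", "s"]}
send_qr = {"op": "send", "procs": ["q", "r"]}

assert can_swap(send_pq, send_rs)      # independent communications commute
assert not can_swap(send_pq, send_qr)  # both mention q: order is preserved
```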
We omit the symmetric rules for \nconditionals and for swapping statements into\/out of the continuation of conditionals.\n\n\\paragraph{Failure Model}\nWe can now formalise our two settings described in \\cref{sec:failure-model}.\nSpecifically, in the remainder: whenever we discuss \\cref{fm:reliable} (Reliable Transmission), we \nrefer to RC\\xspace without \\cref{rule:c-loss} (without that rule, messages cannot be lost in \nthe transport phase); whenever we discuss \\cref{fm:unreliable} (Unreliable Transmission), we use \nfull RC\\xspace (\\cref{rule:c-loss} is included).\nWe illustrate the (formal) difference between the two settings with an end-to-end communication \nexample.\nConsider the program term below.\n\\begin{snippet}\n\t\\cnewframe{k}{\\pid s}{\\pid r}{T}{\\\\\n\t\\indent \\csend{k}{\\pid s.f}\\keyword{;}\\, \\\\\n\t\\indent \\crecv{k}{\\pid r.g}\\keyword{;}\\, \\\\\n\t\\indent \\keyword{0}}\n\\end{snippet}\nAssume w.l.o.g.~that $v$ is the payload computed using $f$, that $f$, $g$, $v$, and memory cells of \nsender and receiver have the correct types, and that $\\pid s$ and $\\pid r$ are \nconnected.\nThe transition system in \\cref{fig:com-execution} depicts all possible executions for this program.\nIn \\cref{fm:reliable}, the dashed edges are not included since they require \\cref{rule:c-loss}. 
In \n\\cref{fm:unreliable}, instead, all transitions are possible (even the dashed ones).\nStates represent runtime configurations and edges reductions.\nFor exposition convenience states are assigned letters from (a) to (l) and only a subset of the data forming a runtime configuration is included:\nprogram terms are represented by the list of lines from the program above (left half of each state);\nprocess memory cells are omitted;\nonly stacks of $\\pid s$ and $\\pid r$ for $k$ are included (right half of each state);\nconnection graphs are omitted.\nEdges are labelled with the names of the reduction rules used to derive them; rules about structural precongruence are omitted---the reduction from (b) to (f) is the only one that requires also \\cref{rule:c-struct}.\n\nThe program execution begins in state (a) and every run reaches a configuration with program term \n$\\keyword{0}$, \\ie, every execution eventually terminates---there are configurations like (i) where $\\keyword{0}$ \nmay still admit some reductions but these are derivable only using \\cref{rule:c-comm,rule:c-loss} and eventually reach a configuration that cannot be further reduced.\nWe describe the different situations after executing all terms.\n\\begin{description}\n\\item[(e)] This is the only configuration where $v$ is marked as delivered and there is exactly one path from (a) to (e) that is the only chain of events without failures.\n\n\\item[(h)] This configuration is reached only if both the send and receive fail. There are two paths to (h), one passing through (g) and one through (f).\nIn the former the failure at the receiver side is consequential to the failure at the sender side \nwhereas in the second the two failures are independent. In fact, (f) may also reduce to (i).\n\n\\item[(i)] This configuration is reached only if the send succeeds and the receive fails regardless \nof the relative ordering of such events. 
Although the program has been reduced to $\\keyword{0}$, (i) \nadmits further reductions modelling a transmission attempt by the sender stack.\n\n\\item[(l)] Considerations made for (i) apply to (l) too since it is reachable only from (i) and via a reduction modelling actions performed by the communication stack.\n\\end{description}\nOnly if we assume \\cref{fm:unreliable} (\\ie, we include \\cref{rule:c-loss}) do (j) and (k) become \nreachable. In particular, configuration (k) is reached only if the send succeeds, the receive fails, \nand the frame is lost during transmission. Similarly to (h), the last two events are consequential \nonly along the path through (j). Indeed (i) admits a reduction to a configuration unreachable from \n(j).\nIf we exclude any form of failure instead (\\ie, \\cref{fm:reliable} and the additional assumption that \nsend and receive operations never fail), only configurations on the path to (e) remain \nreachable from (a)---as expected.\n\n\n\\section{Application Examples}\n\\label{sec:examples}\n\nConsider the procedure in the snippet below.\n\\begin{snippet}\n\t\\procdef{\\code{sendWhile}}{\n\t\t\\cframe{k}{\\pid s}{\\pid r}{T_k},\n\t\t\\pid s\\colon T_{\\pid s},\n\t\tf\\colon T_{\\pid s} \\to T_k,\n\t\tc\\colon T_{\\pid s} \\to T_{\\pid s},\n\t\tg\\colon T_{\\pid s} \\to \\type{Bool}\n\t}{\\\\\\indent\n\t\t\\scond{\\clocal{\\pid s}{g} \\land \\neg \\pid s.\\csent{k}}{\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\csend{k}{\\clocal{\\pid s}{f}}\\keyword{;}\\,\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\clocal{\\pid s}{c}\\keyword{;}\\,\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\code{sendWhile}(k,\\pid s,f,c,g)\n\t\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}\n\t}\n\\end{snippet}\nAbove, process $\\pid s$ tries to send a payload computed by $f$ for frame $k$ \nuntil its stack accepts it or the guard $g$ is falsified; $c$ is \n(locally) computed between attempts.\nWe can use \\code{sendWhile} to implement different recovery strategies, as other 
procedures:\n\\begin{itemize}\n\t\\item\n\ta procedure \\code{send} that never gives up sending until successful is implemented as a call \n\tto \\code{sendWhile} with a guard $g$ that is always true;\n\n\t\\item\n\ta procedure \\code{sendN} that gives up after $n$ attempts is implemented as a call to \n\t\\code{sendWhile} where $g$ and $c$ are used to test and increment a counter, respectively;\n\n\t\\item\n\ta procedure \\code{sendT} that gives up after a timeout $t$ is implemented as a call to \n\t\\code{sendWhile} where $g$ tests a timer;\n\n\t\\item\n\tvariations of the above that use exponential backoff (\\eg procedure\n\t\\code{sendExpBackoff} from \\cref{sec:intro}) are implemented by passing a delay computation \n\t(``sleep'') as $c$.\n\\end{itemize}\nProcedure \\code{recvWhile}, the analogue of \\code{sendWhile}, is likewise implemented: one just has \nto replace every frame operation with its dual.\n\n\\paragraph{Procedural Choreographies}\nAssume \\cref{fm:reliable}, and consider the procedure below.\n\\begin{snippet}\n\t\\procdef{\\code{com}}{\n\t\t\\pid s\\colon T_{\\pid s},\n\t\t\\pid r\\colon T_{\\pid r},\n\t\tf\\colon T_{\\pid s} \\to T_k,\n\t\tg\\colon T_{\\pid r}\\times T_{k} \\to T_{\\pid r}\n\t}{\\\\\\indent\n\t\t\\cnewframe{k}{\\pid s}{\\pid r}{T}{\n\t\t\\\\\\indent\n\t\t\\code{send}(k,\\pid s,f)\\keyword{;}\\,\n\t\t\\\\\\indent\n\t\t\\code{recv}(k,\\pid r,g)\n\t\t}\n\t}\n\\end{snippet}\nProcedure \\code{com} implements exactly-once delivery since \\code{send} terminates only if the \npayload is accepted by the sender's stack, which in turn ensures transmission, and finally \n\\code{recv} terminates only after the payload is delivered.\n\nAs a consequence, we can recover the language of Procedural Choreographies (PC) \\cite{CM17:forte}, \nwhich abstracts from communication failures, ``as a library''. 
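The instances of \code{sendWhile} listed above all share one retry skeleton, which can be rendered in Python as follows (illustrative only: `attempt` abstracts the stack's accept/reject decision, and the dictionary-based state is ours):

```python
# Illustrative retry skeleton shared by send, sendN, and sendT: keep
# attempting while the guard g holds and the stack has not yet accepted
# the payload, running c between attempts.

def send_while(attempt, g, c, state):
    while g(state) and not state["sent"]:
        state["sent"] = attempt()
        state = c(state)
    return state

# sendN: give up after n attempts -- g tests and c increments a counter.
def send_n(attempt, n):
    g = lambda s: s["tries"] < n
    def c(s):
        s["tries"] += 1
        return s
    return send_while(attempt, g, c, {"sent": False, "tries": 0})

# A flaky stack that accepts only the third attempt:
outcomes = iter([False, False, True])
result = send_n(lambda: next(outcomes), n=5)
assert result["sent"] and result["tries"] == 3
```

An exponential-backoff variant is obtained by letting `c` additionally sleep for a doubling delay between attempts.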
For more evocative notation we \ndefine ${\\pid s.f \\ttTo \\pid r.g}$, ${\\pid s \n\\ttTo \\pid r[l]}$, and ${\\pid s.\\pid p \\ttTo \\pid r}$ as syntactic sugar for\n$\\code{com}(\\pid s,\\pid r,f,g)$ and its equivalent versions for label and process name \ncommunications, respectively.\nThen, translating PC programs into RC\\xspace is a matter of rewriting a\nfew symbols, \\eg, $\\pid s.f \\ttto \\pid r.f'$ in PC becomes ${\\pid s.f \\ttTo \\pid r.g}$ in RC\\xspace.\n\n\\paragraph{A search engine}\nWe now present a more sophisticated scenario, where a search process $\\pid s$ queries\nproviders $\\pid p_1,\\dots,\\pid p_m$ making a limited number of attempts.\nProcedures allow us to hide the request-response implementation and write\n\\begin{snippet*}\n\t\\code{reqRes}(\\pid s,\\pid p_1,req,\\overline{req},\\overline{resp},resp_1)\\keyword{;}\\,\n\t\\\\\\dots\\keyword{;}\\,\\\\\n\t\\code{reqRes}(\\pid s,\\pid p_m,req,\\overline{req},\\overline{resp},resp_m)\\keyword{;}\\,\n\\end{snippet*}\\looseness=-1\nwhere handling of internal representations and computations is delegated to functions $req$, $\\overline{req}$, $\\overline{resp}$, $resp_i$ written in the guest language.\nSince all queries are independent we can offload them to worker processes spawned by $\\pid s$ as shown in procedure $\\code{reqRes}$ below.\n\\begin{snippet}[\n\t\t\\def\\alignedcomment#1{%\n\t\t\t\\hfill\\quad%\n\t\t\n\t\t\t\\commentline{#1}%\n\t\t\n\t\t}\n\t]\n\t\\procdef{\\code{reqRes}}{\n\t\t\\pid s\\colon T_\\pid{s},\n\t\t\\pid p\\colon T_\\pid{p},\n\t\t\\\\\\indent\\indent\n\t\treq\\colon T_\\pid{s} \\to \\type{Maybe(Str)},\n\t\t\\overline{req}\\colon \\type{Maybe(Str)} \\to T_\\pid{p},\n\t\t\\\\\\indent\\indent\n\t\t\\overline{resp}\\colon T_\\pid{p} \\to \\type{Str},\n\t\tresp\\colon \\type{Maybe(Str)} \\to T_{\\pid s}\n\t}{\n\t\\\\\\indent\n\t\\cnewproc{\\pid s}{\\pid w}{req}{}\n\t\\alignedcomment{start a worker initialised with $req$}\n\t\\\\\\indent\t\n\t\\code{comPID}(\\pid 
s,\\pid p,\\pid w)\\keyword{;}\\,\n\t\\alignedcomment{introduce worker and provider}\n\t\\\\\\indent\t\n\t\\cnewframe{k_1}{\\pid w}{\\pid p}{\\type{Str}}{}\n\t\\alignedcomment{declare a frame for the request}\n\t\\\\\\indent\n\t\\code{sendN}(k_1,\\pid w,req)\\keyword{;}\\,\n\t\\alignedcomment{send the query; $n$ attempts}\n\t\\\\\\indent\n\t\\code{recvT}(k_1,\\pid p,\\overline{req})\\keyword{;}\\,\n\t\\alignedcomment{receive the query but set a timeout}\n\t\\\\\\indent\n\t\\cnewframe{k_2}{\\pid p}{\\pid w}{\\type{Maybe(Str)}}{}\n\t\\alignedcomment{a frame for the response}\n\t\\\\\\indent\t\n\t\\cond{\\pid p.\\creceived{k_1}}{\n\t\\\\\\indent\\indent\n\t\t\\code{comN}(k_2,\\pid p,\\pid w,\\code{some}(resp),id)\\keyword{;}\\,\n\t\t\\alignedcomment{send response}\n\t\\\\\\indent\\mspace{-10.0mu}\n\t}{\n\t\\\\\\indent\\indent\n\t\t\\code{comN}(k_2,\\pid p,\\pid w,\\code{none},id)\\keyword{;}\\,\n\t\t\\alignedcomment{send empty response}\n\t\\\\\\indent\\mspace{-10.0mu}\n\t}\n\t\\\\\\indent\n\t\\code{com}(\\pid w,\\pid s,id,\\overline{resp})\\keyword{;}\\,\n\t\\alignedcomment{relay response}\n\t}\n\\end{snippet}\n\n\\paragraph{Best-effort strategies}\nAssume \\cref{fm:unreliable}.\nIn this setting, procedure \\code{com} is not robust any more, for \npayloads may now be lost during transmission, leaving the receiver looping forever.\n\nThis is a common problem in practice, which is addressed by switching to ``best-effort'' strategies where delivery is possible (to varying degrees) but not certain.\nBelow is a procedure that implements a simple communication protocol with capped retries and acknowledgements to the sender. 
Here, the strategy implemented by \\code{comACK} can be regarded as a simplification of that of TCP; four-phase handshakes or other protocols are implementable in RC\\xspace as well.\n\\begin{snippet}\n\t\\procdef{\\code{comACK}}{\n\t\t\\cframe{k}{\\pid s}{\\pid r}{T_k},\n\t\t\\cframe{k_{ack}}{\\pid r}{\\pid s}{\\type{Unit}},\n\t\t\\pid s\\colon T_{\\pid s},\n\t\t\\pid r\\colon T_{\\pid r},\n\t\tf\\colon T_{\\pid s} \\to T_k,\n\t\tg\\colon T_{\\pid r}\\times T_{k} \\to T_{\\pid r}\n\t}{\n\t\t\\\\\\indent\n\t\t\\code{sendT}(k,\\pid s,f)\\keyword{;}\\,\n\t\t\\\\\\indent\n\t\t\\code{recvT}(k, \\pid r,g)\\keyword{;}\\, \n\t\t\\\\\\indent\n\t\t\\code{sendT}(k_{ack}, \\pid r, \\literal{unit})\\keyword{;}\\,\n\t\t\\\\\\indent\n\t\t\\code{sendUntilACK}(k,k_{ack},\\pid s,f)\\keyword{;}\\,\n\t}\n\t\\\\\n\t\\procdef{\\code{sendUntilACK}}{\n\t\t\\cframe{k}{\\pid s}{\\pid r}{T_k},\n\t\t\\cframe{k_{ack}}{\\pid r}{\\pid s}{\\type{Unit}},\n\t\t\\pid s\\colon T_{\\pid s},\\\\\\indent\\indent\n\t\tf\\colon T_{\\pid s} \\to T_k\n\t}{\n\t\t\\\\\\indent\n\t\t\\scond{\\pid s.\\code{n > 0} \\land \\neg \\pid s.\\creceived{k_{ack}}}{\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\pid s.\\code{n-{}-}\\keyword{;}\\,\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\code{send}(k,\\pid s,f)\\keyword{;}\\,\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\code{recv}(k_{ack},\\pid s,\\code{noop})\\keyword{;}\\,\n\t\t\t\\\\\\indent\\indent\n\t\t\t\\code{sendUntilACK}(k,k_{ack},\\pid s, f)\\keyword{;}\\,\n\t\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}\n\t}\n\\end{snippet}\n\n\\paragraph{Compensations}\nWith \\code{comACK} we can also use RC\\xspace to develop a new variant of PC that does not assume reliable transmission, \\ie, for \\cref{fm:unreliable}. \nIn this setting, a common pattern to deal with failures of best-effort communications is the use of \n\\emph{compensations}. 
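The loop performed by \code{sendUntilACK} can be sketched as follows (illustrative Python; `send` and `recv_ack` stand in for the frame operations):

```python
# Sketch of the retry-until-acknowledgement loop (illustrative): resend
# the frame until an acknowledgement arrives or the retry budget n is
# exhausted.

def send_until_ack(send, recv_ack, n):
    acked = False
    while n > 0 and not acked:
        n -= 1
        send()              # re-attempt the transmission of k
        acked = recv_ack()  # poll the ack frame k_ack
    return acked

# A lossy channel where only the ack for the second transmission arrives:
acks = iter([False, True, False])
assert send_until_ack(lambda: None, lambda: next(acks), n=3) is True
```

With a budget too small to reach the acknowledged attempt, the loop gives up and reports failure, matching the capped-retries behaviour described above.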
Fault compensations can be defined in RC\\xspace (for both \n\\cref{fm:reliable,fm:unreliable}) using conditionals, \\code{comACK} (or variations thereof), and \nsome syntax sugar to improve readability.\nAn expression ${\\pid s.f \\ttTo^{BE} \\pid r.f'}\\{C_{\\pid s}\\}\\{C_{\\pid r}\\}$ is a communication as \nin $\\code{comACK}(\\pid s,\\pid r,f,g)$ where choreographies $C_{\\pid s}$ and $C_{\\pid r}$ are \nexecuted as compensations for faults detected by the sender $\\pid s$ (no ack) or the receiver $\\pid \nr$, respectively.\nAn example of communications with fault compensations is the communication construct defined in \n\\cite{APN17} where communication operations specify default values as compensations; this construct \nis recovered in RC\\xspace using local computations as, \\eg, in ${\\pid s.f \\ttTo^{BE} \\pid r.f'}\\{\\pid \ns.\\literal{foo}\\}\\{\\pid r.\\literal{42}\\}$.\n\n\\paragraph{Any\/Many communications}\nWe can also implement more complex communication primitives, like those in \\cite{LNN16,CMP18}.\nBelow are procedures that iteratively attempt\nat sending some frames until the sender \nstack accepts all or any of them, respectively, using a round-robin strategy.\n\\begin{snippet}\n\t\\procdef{\\code{sendAll}}{\n\t\t\\pid s\\colon T_{\\pid s},\n\t\t\\cframe{k_1}{\\pid s}{\\pid r_1}{T},\n\t\t\\dots,\n\t\t\\cframe{k_n}{\\pid s}{\\pid r_n}{T},\n\t\t\\\\\\indent\\indent\n\t\tf\\colon T_{\\pid s} \\to T\n\t}{\\\\\\indent\n\t\t\\csend{k_n}{\\pid s.f}\\keyword{;}\\, \n\t\t\\\\\\indent\n\t\t\\cond{\\pid s.\\csent{k_n}}{\n\t\t\\\\\\indent\\indent\n\t\t\t\t\\code{sendAll}(\n\t\t\t\t\t\\pid s,\n\t\t\t\t\tk_1,\n\t\t\t\t\t\\dots,\n\t\t\t\t\tk_{n-1},\n\t\t\t\t\tf)\n\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}{\n\t\t\\\\\\indent\\indent\n\t\t\t\\code{sendAll}(\\pid s,\n\t\t\t\tk_1 \\tteq k_2,\n\t\t\t\t\\dots,\n\t\t\t\tk_{n-1} \\tteq k_n,\n\t\t\t\tk_n \\tteq 
k_1,\n\t\t\t\tf)\n\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}\n\t}\n\\end{snippet}\n\\begin{snippet}\n\t\\procdef{\\code{sendAny}}{\n\t\t\\pid s\\colon T_{\\pid s},\n\t\t\\cframe{k_1}{\\pid s}{\\pid r_1}{T},\n\t\t\\dots,\n\t\t\\cframe{k_n}{\\pid s}{\\pid r_n}{T},\n\t\t\\\\\\indent\\indent\n\t\tf\\colon T_{\\pid s} \\to T\n\t}{\\\\\\indent\n\t\t\\csend{k_1}{\\pid s.f}\\keyword{;}\\, \n\t\t\\\\\\indent\n\t\t\\scond{\\pid s.\\neg\\csent{k_1}}{\n\t\t\\\\\\indent\\indent\n\t\t\t\\code{sendAny}(\\pid s,\n\t\t\t\tk_1 \\tteq k_2,\n\t\t\t\t\\dots,\n\t\t\t\tk_{n-1} \\tteq k_n,\n\t\t\t\tk_n \\tteq k_1,\n\t\t\t\tf)\n\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}\n\t}\n\\end{snippet}\nWe omit the dual procedures for receiving all or some frames, which are similarly defined.\nCombining these, it is possible to implement scatter\/gather communication primitives from \n\\cite{LNN16}. For instance, below is an implementation of scatter.\n\\begin{snippet}\n\t\\procdef{\\code{scatterAll}}{\n\t\t\\pid s\\colon T_{\\pid s},\n\t\t\\pid r_1\\colon T_{\\pid r},\n\t\t\\dots,\n\t\t\\pid r_n\\colon T_{\\pid r},\n\t\tf\\colon T_{\\pid s} \\to T,\n\t\tg\\colon T_{\\pid r}\\times T_{k} \\to T_{\\pid r}\n\t}{\\\\\\indent\n\t\t\\cbindin{\\cframe{k_1}{\\pid s}{\\pid r_1}{T}\n\t\t\\dots\n\t\t\\cframe{k_n}{\\pid s}{\\pid r_n}{T}}{\n\t\t\\\\\\indent\n\t\t\\code{sendAll}(\\pid s,k_1,\\dots,k_n,f)\\keyword{;}\\,\n\t\t\\\\\\indent\n\t\t\\code{recv}(k_1,\\pid r_1,g)\\keyword{;}\\,\n\t\t\\dots\n\t\t\\code{recv}(k_n,\\pid r_n,g)\n\t\t}\n\t}\n\\end{snippet}\n\n\\begin{remark}\nFor clarity, we remark that RC\\xspace would require a version of the above procedures (\\eg \n\\code{send<$T_k$,$T_{\\pid s}$,$T_{\\pid r}$>}) for each signature used by the program at hand, since \nwe do not support type variables. Extending RC\\xspace with \nparametric polymorphism for procedures or an erasure step seems straightforward. 
\n\\end{remark}\n\n\\section{Typing, Progress, and Robustness}\n\\label{sec:typing}\n\nIt is easy to write programs in RC\\xspace that get stuck or have inconsistent communication implementations: parties may not be connected, payload types may not be respected, communication attempts may be mismatched. \nTo address these issues we introduce a typing discipline for RC\\xspace that checks that:\n\\begin{enumerate}\n\\item types of processes, functions, and procedures are respected;\n\\item processes that need to communicate are properly connected;\n\\item the delivery of frames is guaranteed to be at-most-once and best-effort;\n\\item there are no unnecessary checks on network actions (to avoid dead branches).\n\\end{enumerate}\nAdditionally, in \\cref{fm:reliable} exactly-once delivery is also checkable.\n\nChoreography programs can be regarded as ``network transformers''. Under this perspective, typing judgements are naturally of the form\n\\[\n\t\\Gamma \\vdash \\langle \\mathcal{D}, C\\rangle \\colon \\mathcal{N} \\to \\mathcal{N}'\n\\]\nand read ``under the environment $\\Gamma$, running $\\langle\\mathcal{D},C\\rangle$ on a network configuration of type $\\mathcal{N}$ yields one of type $\\mathcal{N}'$''. \n\nTyping environments specify labels, frames, procedures, and process names that may be used, as well as their types; they are collections of the following form\n\\[\\Gamma \\Coloneqq\n\t\t\\Gamma, \\pid p\\colon T \\mid \n\t\t\\Gamma, l \\mid\n\t\t\\Gamma, \\cframe{k}{\\pid p}{\\pid q}{T} \\mid \n\t\t\\Gamma, X(\\vec{P})\\colon \\mathcal{N} \\to \\mathcal{N}' \\mid\n\t\t\\varnothing\n\\]\nwhere labels are unique, processes are assigned unique types in $\\mathcal{V}$, and procedures may be assigned multiple types.\nA type of network configurations $\\mathcal{N}$ is a pair $\\hnet{F}{G}$ formed by an abstract frame dictionary $F$ and a connection graph $G$. 
Abstract frame dictionaries specify possible states of frames while abstracting payloads of value type and any information not accessible to a program: the sender and receiver of a frame may only test its status using conditionals, so a sender may only know whether its component is $\\bot$ and the receiver only whether its component is in $\\removed{\\mathcal{U}}$. \nFormally, abstract frames are collections of the form\n\\[\n\tF \\Coloneqq F, \\hframe[k]{U}{U'} \\mid \\emptyset\n\\]\nwhere $U,U' \\subseteq \\mathcal{U}_\\hbot$ for $\\mathcal{U}_\\hbot \\triangleq \\{\\hbot,\\bullet\\} \\uplus \\type{PID} \\uplus \\type{LBL}$.\nThe sender and receiver components of a frame status are abstracted by the functions $\\alpha^s\\colon \\mathcal{U}_\\bot^\\checkmark \\to \\mathcal{U}_\\hbot$ and $\\alpha^r\\colon \\mathcal{U}_\\bot^\\checkmark \\to \\mathcal{U}_\\hbot$, respectively.\nThe first is given by the assignments \n\\[\n\t\\bot \\mapsto \\hbot\n\t\\qquad\n\tv,\\removed{v} \\mapsto \\bullet\n\t\\qquad\n\tl,\\removed{l} \\mapsto l\n\t\\qquad\n\t\\pid p,\\removed{\\pid p} \\mapsto \\pid p\n\\]\nand the second by\n\\[\n\t\\bot,v,l,\\pid p \\mapsto \\hbot\n\t\\qquad\n\t\\removed{v} \\mapsto \\bullet\n\t\\qquad\n\t\\removed{l} \\mapsto l\n\t\\qquad\n\t\\removed{\\pid p} \\mapsto \\pid p\n\\]\nwhere $v \\in \\type{VAL}$, $\\pid p \\in \\type{PID}$, and $l \\in \\type{LBL}$.\nConsider for instance a value exchange $k$: $\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot,\\bullet\\}}$ is inhabited by any frame status of $k$, $\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}}$ by any frame status where the payload is not delivered to the receiver, and $\\hframe[k]{\\{\\bullet\\}}{\\{\\hbot,\\bullet\\}}$ by any frame status where the sender stack accepted the payload.\nA type of network $\\hnet{F}{G}$ is well-formed under $\\Gamma$ (written $\\Gamma \\vdash \\hnet{F}{G}$) if the following conditions are met:\n\\begin{enumerate}[label={\\em(\\alph{*})}]\n\t\\item \n\tif $\\Gamma \\vdash 
\\cframe{k}{\\pid p}{\\pid q}{T}$, then\n\t$G \\vdash \\pid p \\leftrightarrow \\pid q$,\n\t$\\hframe[k]{U}{U'} \\in F$, and payloads in $U$ and $U'$ are of type $T$;\n\t\n\t\\item \n\tif $\\Gamma \\vdash \\pid p\\colon T$, then $\\pid p \\in G$;\n\t\n\t\\item\n\tif $\\hframe[k]{U}{U'} \\in F$, then $\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{T}$ for some $\\pid p$, $\\pid q$, $T$;\n\t\n\t\\item\n\tif $\\hframe[k]{U}{U'} \\in F$ and $\\pid r \\in U \\cup U'$ then $\\pid r \\in G$;\n\t\n\t\\item\n\tif $\\pid p \\in G$, then $\\Gamma \\vdash \\pid p\\colon T$ for some $T$.\n\\end{enumerate}\nHereafter network types are assumed well-formed whenever appearing in a judgement together with an environment.\nJudgements of (concrete) network configurations have form\n\\[\n\t\\Gamma \\vdash {\\phi,G} \\colon \\hnet{F}{G'}\n\\]\nand hold whenever the following conditions are met:\n\\begin{enumerate}[label={\\em(\\alph{*})}]\n\t\\item \n\tif $\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{T}$ and $\\hframe[k]{U}{U'} \\in F$, then\n\t$\\phi(k) = \\phiframe{\\pid p}{u}{\\pid q}{u'}$,\n\t$\\alpha^s(u) \\in U$, $\\alpha^r(u') \\in U'$, and payloads are of type $T$;\n\n\t\\item\n\t$G \\subseteq G'$.\n\\end{enumerate}\n\n\\begin{figure*}\n\\begin{infrules}\n\t\\infrule[\\rname[T]{Weaken}][rule:t-weaken]{\n\t\t\\Gamma_1 \\vdash C \\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma_0,\\Gamma_1 \\vdash C \\colon \n\t\t\\hnet{F_0,F_1}{G_0\\cup G_1}\t\\to\t\\hnet{F_0,F_2}{G_0\\cup G_2}\t\t\n\t}\n\n\t\\infrule[\\rname[T]{Tell}][rule:t-tell]{\n\t\t\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{\\type{PID}} \\and\n\t\t\\Gamma \\vdash \\pid r\\colon T \\and\n\t\t\\Gamma \\vdash C \\colon \\hnet{F_0,\\hframe[k]{U}{\\{\\pid r\\}}}{\\{\\pid q \\to \\pid r\\} \\uplus G_0} \\to \\hnet{F_1}{G_1}\t\t\n\t}{\n\t\t\\Gamma \\vdash C \\colon \n\t\t\\hnet{F_0,\\hframe[k]{U}{\\{\\pid r\\}}}{G_0} \\to\t\\hnet{F_1}{G_1}\t\t\n\t}\n\n\t\\infrule[\\rname[T]{Swap}][rule:t-swap]{\n\t\tC_0 
\\dotrel{\\congr} C_1 \\and\n\t\t\\Gamma \\vdash C_1 \\colon \\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1}\n\t}{\n\t\t\\Gamma \\vdash C_0 \\colon \n\t\t\\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1}\t\t\n\t}\n\n\t\n\t\\infrule[\\rname[T]{Int}][rule:t-local]{\n\t\t\\vdash f\\colon T \\to T \n\t}{\n\t\t\\pid p\\colon T \\vdash \\pid p.f \\colon\n\t\t\\hnet{\\varnothing}{\\varnothing} \\to \\hnet{\\varnothing}{\\varnothing}\n\t}\n\t\n\t\\infrule[\\rname[T]{Nil}][rule:t-nil]{\n\t}{\n\t\t\\varnothing \\vdash \\keyword{0} \\colon \\hnet{\\varnothing}{\\varnothing}\\to\\hnet{\\varnothing}{\\varnothing}\n\t}\n\n\t\\infrule[\\rname[T]{;}][rule:t-conc]{\n\t\t\\Gamma \\vdash\tC_0 \\colon \\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash\tC_1 \\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash C_0\\keyword{;}\\, C_1 \\colon \\hnet{F_0}{G_0}\t\\to \\hnet{F_2}{G_2}\n\t}\n\t\t\n\t\\infrule[\\rname[T]{NP}][rule:t-new-proc]{\n\t\tG_{\\pid q} = \\{\\pid p \\leftrightarrow \\pid q\\} \\cup \\{\\pid q \\rightarrow \\pid r \\mid G_1 \\vdash \\pid p \\rightarrow \\pid r \\}\n\t\t\\and\n\t\t\\vdash f \\colon T \\to T' \\and\n\t\t\\Gamma \\vdash \\pid p \\colon T \\and\n\t\t\\Gamma, \\pid q \\colon T' \\vdash C \\colon \n\t\t\\hnet{F_0}{G_0 \\cup G_{\\pid q} } \\to \n\t\t\\hnet{F_1}{G_1}\n\t}{\n\t\t\\Gamma \\vdash \\cbindin{\\cstart{\\pid p}{\\pid q}{f}}{C} \\colon\n\t\t\\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1\\setminus \\{\\pid q\\}}\n\t}\n\t\n\t\\infrule[\\rname[T]{NF}][rule:t-new-frame]{\n\t\tG_0 \\vdash \\pid p \\leftrightarrow \\pid q\n\t\t\\and\n\t\t\\Gamma, \\cframe{k}{\\pid p}{\\pid q}{T} \\vdash C \\colon \n\t\t\\hnet{F_0, \\hframe[k]{\\{\\hbot\\}}{\\{\\hbot\\}}}{G_0} \\to \n\t\t\\hnet{F_1, \\hframe[k]{\\_}{\\_}}{G_1}\n\t}{\n\t\t\\Gamma \\vdash \\cbindin{\\cframe{k}{\\pid p}{\\pid q}{T}}{C} \\colon\n\t\t\\hnet{F_0}{G_0} \\to\\hnet {F_1}{G_1}\n\t}\n\t\t\n\t\\infrule[\\rname[T]{Call}][rule:t-call]{\n\t\t\\Gamma \\vdash X(P_1,\\dots,P_n)\\colon 
\\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash P_i[\\vec{A}] \\text{ for any } 1 \\leq i \\leq n \n\t}{\n\t\t\\Gamma \\vdash\n\t\tX(\\vec{A}) \\colon \n\t\t\\hnet{F_0[\\vec{A}]}{G_0[\\vec{A}]} \\to\n\t\t\\hnet{F_1[\\vec{A}]}{G_1[\\vec{A}]}\n\t}\n\n\t\\infrule[\\rname[T]{IfExp}][rule:t-if-exp]{\n\t\t\\Gamma \\vdash \\pid p\\colon T \\and\n\t\t\\vdash f\\colon T \\to \\type{Bool}\n\t\t\\and\n\t\t\\Gamma \\vdash C_1 \\colon\n\t\t\t\t\\hnet{F_0}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash C_2 \\colon\n\t\t\t\t\\hnet{F_0}{G_0} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\cond{\\pid p.f}{C_1}{C_2} \\colon\n\t\t\\hnet{F_0}{G_0} \\to \\hnet{F_1 \\Ydown F_2}{G_1\\cap G_2}\n\t}\n\n\t\\infrule[\\rname[T]{IfSnd}][rule:t-if-send]{\n\t\t\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{T} \n\t\t\\and\n\t\t\\Gamma \\vdash C_1 \\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{\\{u\\}}{U}}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash C_2 \\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{\\{\\hbot\\}}{U}}{G_0} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\cond{\\pid p.\\csent{k}}{C_1}{C_2} \\colon\n\t\t\\hnet{F_0,\\hframe[k]{\\{\\hbot,u\\}}{U}}{G_0} \\to \\hnet{F_1 \\Ydown F_2}{G_1\\cap G_2}\n\t}\n\n\t\\infrule[\\rname[T]{IfRcv}][rule:t-if-recv]{\n\t\t\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{T} \n\t\t\\and\n\t\t\\Gamma \\vdash C_1 \\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{U}{\\{u\\}}}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash C_2 \\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{U}{\\{\\hbot\\}}}{G_0} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\cond{\\pid q.\\creceived{k}}{C_1}{C_2} \\colon\n\t\t\\hnet{F_0,\\hframe[k]{U}{\\{\\hbot,u\\}}}{G_0} \\to \\hnet{F_1 \\Ydown F_2}{G_1\\cap G_2}\n\t}\n\n\t\\infrule[\\rname[T]{IfLBL}][rule:t-if-recv-lbl]{\n\t\t\\Gamma \\vdash \\cframe{k}{\\pid p}{\\pid q}{\\type{LBL}} \\and\n\t\t\\Gamma \\vdash l \n\t\t\\and\n\t\t\\Gamma \\vdash C_1 
\\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{U}{\\{l\\}}}{G_0} \\to \\hnet{F_1}{G_1}\n\t\t\\and\n\t\t\\Gamma \\vdash C_2 \\colon\n\t\t\t\t\\hnet{F_0,\\hframe[k]{U}{\\{\\hbot\\}}}{G_0} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\cond{\\pid q.\\creceivedlbl{k}{l}}{C_1}{C_2} \\colon\n\t\t\\hnet{F_0,\\hframe[k]{U}{\\{\\hbot,l\\}}}{G_0} \\to \\hnet{F_1 \\Ydown F_2}{G_1\\cap G_2}\n\t}\n\t\n\t\\infrule[\\rname[T]{RcvV}][rule:t-recv-val]{\n\t\t\\Gamma = \\cframe{k}{\\pid p}{\\pid q}{T}, \\pid q \\colon T'\n\t\t\\and\n\t\t\\vdash f\\colon T \\times T' \\to T'\n\t\t\\and\n\t\tU \\subseteq \\{\\hbot\\}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\crecv{k}{\\pid q.f} \\colon \n\t\t\\hnet{\\hframe[k]{U \\cup \\{\\bullet\\}}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{U \\cup \\{\\bullet\\}}{\\{\\hbot,\\bullet\\}}}{\\varnothing}\n\t}\n\n\t\\infrule[\\rname[T]{RcvP}][rule:t-recv-pid]{\n\t\t\\Gamma = \\cframe{k}{\\pid p}{\\pid q}{\\type{PID}},\\pid r \\colon T\n\t\t\\and\n\t\tU \\subseteq \\{\\hbot\\}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\crecv{k}{\\pid q} \\colon \n\t\t\\hnet{\\hframe[k]{U \\cup \\{\\pid r\\}}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{U \\cup \\{\\pid r\\}}{\\{\\hbot,\\pid r\\}}}{\\varnothing}\n\t}\n\n\t\\infrule[\\rname[T]{RcvL}][rule:t-recv-lbl]{\n\t\t\\Gamma = \\cframe{k}{\\pid p}{\\pid q}{\\type{LBL}}, l\n\t\t\\and\n\t\tU \\subseteq \\{\\hbot\\}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\crecv{k}{\\pid q} \\colon \n\t\t\\hnet{\\hframe[k]{U \\cup \\{l\\}}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{U \\cup \\{l\\}}{\\{\\hbot,l\\}}}{\\varnothing}\n\t}\n\\end{infrules}\n\t\\caption{Typing choreographies, shared rules}\n\t\\label{fig:chor-typing-shared}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{infrules}\t\n\t\\infrule[\\rname[R]{SndV}][rule:r-send-val]{\n\t\t\\Gamma = \\cframe{k}{\\pid p}{\\pid q}{T}, \\pid p \\colon T',\tf\\colon T' \\to T\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.f} \\colon 
\n\t\t\\hnet{\\hframe[k]{\\{\\hbot\\}}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}}}{\\varnothing}\n\t}\n\n\t\\infrule[\\rname[R]{SndP}][rule:r-send-pid]{\n\t\t\\Gamma = \\pid r \\colon T,\t\\cframe{k}{\\pid p}{\\pid q}{\\type{PID}}\n\t\t\\and\n\t\tG = \\{\\pid p \\rightarrow \\pid r\\}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.\\pid r} \\colon \n\t\t\\hnet{\\hframe[k]{\\{\\hbot\\}}{\\{\\hbot\\}}}{G} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,\\pid r\\}}{\\{\\hbot\\}}}{G}\n\t}\n\t\n\t\\infrule[\\rname[R]{SndL}][rule:r-send-lbl]{\n\t\t\\Gamma = l, \\cframe{k}{\\pid p}{\\pid q}{\\type{LBL}}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.l} \\colon \n\t\t\\hnet{\\hframe[k]{\\{\\hbot\\}}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,l\\}}{\\{\\hbot\\}}}{\\varnothing}\n\t}\n\\end{infrules}\n\t\\caption{Typing choreographies, rules for \\cref{fm:reliable}.}\n\t\\label{fig:chor-typing-reliable}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{infrules}\n\t\\infrule[\\rname[U]{SndV}][rule:u-send-val]{\n\t\tU \\subseteq \\{\\hbot,\\bullet\\} \\and\n\t\t\\Gamma = \\cframe{k}{\\pid p}{\\pid q}{T}, \\pid p \\colon T',\tf\\colon T' \\to T\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.f} \\colon \n\t\t\\hnet{\\hframe[k]{U}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}}}{\\varnothing}\n\t}\n\t\n\t\\infrule[\\rname[U]{SndP}][rule:u-send-pid]{\n\t\tU \\subseteq \\{\\hbot,\\pid r\\}\n\t\t\\and\n\t\t\\Gamma = \\pid r \\colon T,\t\\cframe{k}{\\pid p}{\\pid q}{\\type{PID}}\n\t\t\\and\n\t\tG = \\{\\pid p \\rightarrow \\pid r\\}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.\\pid r} \\colon \n\t\t\\hnet{\\hframe[k]{U}{\\{\\hbot\\}}}{G} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,\\pid r\\}}{\\{\\hbot\\}}}{G}\n\t}\n\t\n\t\\infrule[\\rname[U]{SndL}][rule:u-send-lbl]{\n\t\tU \\subseteq \\{\\hbot,l\\}\n\t\t\\and\n\t\t\\Gamma = l, \\cframe{k}{\\pid 
p}{\\pid q}{\\type{LBL}}\n\t}{\n\t\t\\Gamma \\vdash \n\t\t\\csend{k}{\\pid p.l} \\colon \n\t\t\\hnet{\\hframe[k]{U}{\\{\\hbot\\}}}{\\varnothing} \\to \n\t\t\\hnet{\\hframe[k]{\\{\\hbot,l\\}}{\\{\\hbot\\}}}{\\varnothing}\n\t}\n\\end{infrules}\n\t\\caption{Typing choreographies, rules for \\cref{fm:unreliable}.}\n\t\\label{fig:chor-typing-unreliable}\n\\end{figure*}\n\nJudgements of choreography terms have form \n\\[\n\t\\Gamma \\vdash C \\colon \\hnet{F}{G} \\to \\hnet{F'}{G'}\n\\]\nand are derived using rules in \\cref{fig:chor-typing-shared} together with either rules in \\cref{fig:chor-typing-reliable} or \\cref{fig:chor-typing-unreliable} according to whether \\cref{fm:reliable} or \\cref{fm:unreliable} is assumed.\n\n\\Cref{rule:t-recv-val,rule:t-recv-pid,rule:t-recv-lbl} specify that a receive operation takes any network where the sender will eventually produce a payload of the expected type, the receiver knows the sender, and the payload has not been consumed yet, and yields a network where the payload may be consumed.\nIn particular, choreographies where receives cannot be matched to sends (\\eg $\\cnewframe{k}{\\pid p}{\\pid q}{T}{\\crecv{k}{\\pid q.f}}$) or have consecutive receive operations (\\eg $\\crecv{k}{\\pid q.f}\\keyword{;}\\,\\crecv{k}{\\pid q.f}$) are rejected since delivery is either impossible or inconsistencies may arise (\\eg the second operation shadows a successful outcome of the first).\nMultiple typings can be derived for the same receive statement under the same environment, even once weakening is taken into account. 
Likewise for send operations (discussed below).\nFor instance, judgement \n$\\Gamma \\vdash \\crecv{k}{\\pid q.f} \\colon \\hnet{\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}}}{G} \\to \\hnet{\\hframe[k]{\\{\\hbot,\\bullet\\}}{\\{\\hbot,\\bullet\\}}}{G}$\nis derivable if and only if \n$\\Gamma \\vdash \\crecv{k}{\\pid q.f} \\colon \\hnet{\\hframe[k]{\\{\\bullet\\}}{\\{\\hbot\\}}}{G} \\to \\hnet{\\hframe[k]{\\{\\bullet\\}}{\\{\\hbot,\\bullet\\}}}{G}$ is derivable.\n\\Cref{rule:t-recv-pid} cannot update the connection graph since the outcome of a receive operation cannot be statically known; the update can only be done using \\cref{rule:t-tell} once the delivery is certain.\n\\Cref{rule:r-send-val,rule:r-send-pid,rule:r-send-lbl} and \n\\Cref{rule:u-send-val,rule:u-send-pid,rule:u-send-lbl} are intended for typing under the assumptions of \\cref{fm:reliable} and \\cref{fm:unreliable}, respectively.\nRules of the two groups are alike save for the requirements imposed on the sender stack: rules of both groups require that the frame is of the expected type, that the receiver is known, and that it may be so once the statement is executed, but only those from the former require that the stack has yet to accept a payload for the frame. \nThe more stringent set of rules forbids any send operation for a frame with a potentially accepted (hence transmitted) payload (\\eg $\\csend{k}{\\pid p.f}\\keyword{;}\\,\\csend{k}{\\pid p.f}$) since this is a programming error in \\cref{fm:reliable} but not in \\cref{fm:unreliable} where transmission is not guaranteed. 
In fact, when the stack does not guarantee transmission, reliability has to be programmed at the application level, \\eg by resending frames that are not acknowledged.\n\\Cref{rule:t-if-exp,rule:t-if-send,rule:t-if-recv,rule:t-if-recv-lbl} require branches to be live; graphs are intersected to remove connections created only in one branch; frame dictionaries are merged ($\\Ydown$) by pointwise union under the condition that whenever both branches specify a payload they agree on it \\ie whenever $U_1$ and $U_2$ are merged it must hold that the set $(U_1 \\cup U_2) \\cap (\\{\\bullet\\} \\uplus \\type{PID} \\uplus \\type{LBL})$ has at most one element.\n\\Cref{rule:t-call} requires that the substitution $\\vec{A}$ respects types of formal and actual parameters and that the call type is obtained by applying $\\vec{A}$ to the selected procedure type---the discipline admits \\adhoc polymorphism.\n\\Cref{rule:t-swap} allows each step of a derivation to switch to any element in the (finite) equivalence class $[C]_{\\dotrel{\\congr}}$. 
As a direct consequence, $\\dotrel{\\congr}$ implies type equivalence \\ie types are unaffected by instruction scheduling---recursive calls are not unfolded by $\\dotrel{\\congr}$.\nObserve that \\cref{rule:t-swap} is only required for typing choreographies that would otherwise be rejected, \\eg $\\crecv{k}{\\pid q.g}\\keyword{;}\\,\\csend{k}{\\pid p.f}$.\n\n\\begin{lemma}\nIf $\\Gamma \\vdash C\\colon \\mathcal{N} \\to \\mathcal{N'}$ has a derivation where \\ref{rule:t-swap} is used, then either:\n\\begin{itemize}\n\\item $\\Gamma \\vdash C\\colon \\mathcal{N} \\to \\mathcal{N'}$ has a derivation without \\cref{rule:t-swap} or \n\\item no judgement for $C$ has a derivation without \\cref{rule:t-swap}.\n\\end{itemize}\n\\end{lemma}\n\nTyping judgements for procedure definitions are derived using the rule\n\\begin{infrules}\n\t\\infrule[\\rname[T]{Proc}]{\n\t\t\\Gamma \\vdash X(\\vec{P}) \\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}\n\t\t\\\\\n\t\t\\Gamma|_{\\mathcal{D}}, \\vec{P} \\vdash C \\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}\n\t}{\n\t\t\\Gamma \\vdash \\procdef{X}{\\vec{P}}{C}\n\t}\n\\end{infrules}\nwhere $\\Gamma|_{\\mathcal{D}}$ is the restriction of $\\Gamma$ to procedures in $\\mathcal{D}$. 
This restriction guarantees that every label, free process name, and free frame name in the procedure body $C$ is bound by a formal parameter.\nA procedure definition is well-typed under $\\Gamma$ if $\\Gamma \\vdash \\procdef{X}{\\vec{P}}{C}$.\nA set of procedure definitions $\\mathcal{D}$ is well-typed under $\\Gamma$ (written $\\Gamma \\vdash \\mathcal{D}$) provided that all of its elements are well-typed.\n\nA memory configuration $\\sigma$ is well-typed under $\\Gamma$ (written $\\Gamma \\vdash \\sigma$) provided that $\\vdash \\sigma(\\pid p)\\colon T$ whenever $\\Gamma \\vdash \\pid p\\colon T$.\n\n\\begin{definition}[Well-typedness]\n\\label{def:well-typedness}\nFor $\\mathcal D$ a set of procedure definitions and $\\langle C, \\sigma, \\phi,G \\rangle$ a runtime \nconfiguration, $\\langle C, \\sigma, \\phi,G \\rangle$ is \\emph{well-typed} under $\\mathcal{D}$ if there exist $\\Gamma$, $F_1$, $G_1$, $F_2$, and $G_2$ such that \n$\\Gamma \\vdash \\mathcal D$,\n$\\Gamma \\vdash C\\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}$,\n$\\Gamma \\vdash \\sigma$, and\n$\\Gamma \\vdash \\phi,G\\colon \\hnet{F_1}{G_1}$.\n\nA choreographic program $\\langle \\mathcal{D}, C\\rangle$ is \\emph{well-typed} if there exist $\\sigma$, \n$\\phi$ and $G$ s.t.~the runtime configuration $\\langle C,\\sigma,\\phi, G \\rangle$ is well-typed.\n\\end{definition}\n\nFor any configuration there is an environment and a finite set of pairs of network types that subsume any other typing judgement.\n\n\\begin{theorem}[Existence of minimal typing]\n\\label{thm:chor-minimal-type}\nLet $\\langle C, \\sigma, \\phi,G \\rangle$ be \\emph{well-typed} under $\\mathcal{D}$. 
There are $\\Gamma$, $(\\mathcal{N}_0 \\to \\mathcal{N}'_0),\\dots,(\\mathcal{N}_n \\to \\mathcal{N}'_n)$ with the property that\nwhenever $\\Gamma' \\vdash \\langle \\mathcal{D}, C, \\sigma, \\phi, G \\rangle \\colon {\\mathcal{N}} \\to {\\mathcal{N}'}$ there is $i \\leq n$ such that:\n\\[\n\t\\infer{\n\t\t\\Gamma' \\vdash \\langle \\mathcal{D}, C, \\sigma, \\phi, G \\rangle \\colon {\\mathcal{N}} \\to {\\mathcal{N}'}\n\t}{\n\t\t\\infer*{}{\n\t\t\\Gamma \\vdash \\langle \\mathcal{D}, C, \\sigma, \\phi, G \\rangle \\colon {\\mathcal{N}_i} \\to {\\mathcal{N}'_i}\n\t\t}\n\t}\n\t\\text{.}\n\\]\n\\end{theorem}\n\n\\begin{theorem}[Decidability of typing]\n\\label{thm:decidability}\nGiven a set of procedure definitions $\\mathcal D$ and a runtime configuration $\\langle C, \\sigma, \n\\phi,G \\rangle$, it is decidable whether $\\langle C, \\sigma, \\phi,G \\rangle$ is \\emph{well-typed} \nunder $\\mathcal{D}$.\nGiven a program $\\langle \\mathcal{D},C\\rangle$, it is decidable whether $\\langle \n\\mathcal{D},C\\rangle$ is well-typed.\n\\end{theorem}\n\n\\begin{proof}[Sketch]\nTyping for elements of the guest language is known by assumption.\nObserve that building derivations for typing judgements from \\cref{def:well-typedness} is completely mechanical:\nrule selection is deterministic save for \\cref{rule:t-tell,rule:t-swap}, which only introduce a finite \nnumber of cases and can be used a finite number of times (\\ref{rule:t-tell} uses disjoint union and \n\\ref{rule:t-swap} cannot unfold calls). A heuristic is to delay uses of these rules until it is \nnecessary to infer new connections or consider scheduling alternatives in order to proceed. Hence, \nderivations can be built using straightforward non-deterministic case exploration. 
Furthermore, \nrules in \\cref{fig:chor-typing-shared,fig:chor-typing-reliable,fig:chor-typing-unreliable} can be \nused to construct network types.\nThe only nontrivial part is constructing typing environments but only a finite number of cases need \nto be checked since processes, frames, and labels that need to be part of the typing environment \nare inferred from free names in program terms and formal parameters.\nFinally, observe that the domain of definition of memory configurations and concrete frame \ndictionaries is bounded by typing environments and network types, and that actual values in $\\sigma$ \nor $\\phi$ (the only possible source of infinity) are irrelevant provided that they are of the right \ntypes (which is checkable by the assumptions on the guest language).\n\\end{proof}\n\nTyping is preserved by all reductions.\n\\begin{theorem}[Type preservation]\n\t\\label{thm:type-preservation}\n\tIf $\\langle C,\\sigma,\\phi,G \\rangle$ is well-typed under $\\mathcal{D}$ and there is a reduction $\\langle C,\\sigma,\\phi,G\\rangle \\reducesto_{\\mathcal{D}} \\langle C',\\sigma',\\phi',G'\\rangle$, then the reductum $\\langle C',\\sigma',\\phi',G'\\rangle$ is well-typed under $\\mathcal{D}$.\n\\end{theorem}\n\nIn general, RC\\xspace programs may deadlock if \nthe types of functions, memory cells, or procedures are not respected.\nBy contrast, well-typed programs enjoy progress, \\ie, they either terminate or diverge.\n\\begin{theorem}[Progress]\n\t\\label{thm:progress}\n\tIf a runtime configuration $\\langle C,\\sigma,\\phi,G \\rangle$ is well-typed under $\\mathcal{D}$ then either\n\t\t $C \\preceq_{\\mathcal{D}} \\keyword{0}$ or\n\t\tthere are $C'$, $\\sigma'$, $\\phi'$, $G'$ such that $\\langle C,\\sigma,\\phi,G\\rangle \\reducesto_{\\mathcal{D}} \\langle C',\\sigma',\\phi',G'\\rangle$.\n\\end{theorem}\n\nFrames are never delivered (to the receiver application level) more than once.\n\\begin{theorem}[At-most-once 
delivery]\n\\label{thm:chor-at-most-once}\nLet $\\langle C,\\sigma,\\phi,G \\rangle$ be well-typed under $\\mathcal{D}$.\nIf $C\\precongr_{\\mathcal{D}} \\crecv{k}{R}\\keyword{;}\\, C'$, then $\\phi(k)_4 \\notin \\removed{\\mathcal{U}}$.\n\\end{theorem}\n\nIn \\cref{fm:reliable}, typing identifies frames that are guaranteed to be delivered.\n\\begin{theorem}[At-least-once delivery]\n\\label{thm:chor-at-least-once}\nAssume \\cref{fm:reliable} and that\n$\\Gamma \\vdash \\langle \\mathcal{D}, C, \\sigma, \\phi, G \\rangle \\colon \\hnet{F_1}{G_1} \\to \\hnet{F_2}{G_2}$.\nFor $\\hframe[k]{U}{U'} \\in F_2$ such that $\\hbot \\notin U'$, \nif $\\langle C,\\sigma,\\phi,G \\rangle \\reducesto_{\\mathcal{D}}^\\ast \\langle C',\\sigma',\\phi',G' \\rangle$ and $k \\notin \\mathrm{fn}(C')$ then $\\phi'(k)_4 \\in \\removed{\\mathcal{U}}$.\n\\end{theorem}\n\nThere is always an execution where a given frame is delivered.\n\\begin{theorem}[Best-effort delivery]\n\\label{thm:chor-best-effort}\nAssume \\cref{fm:unreliable} and that $\\Gamma \\vdash \\langle \\mathcal{D}, C, \\sigma, \\phi, G \\rangle \\colon \\hnet{F_1}{G_1} \n\\to \\hnet{F_2}{G_2}$.\nFor any $\\hframe[k]{U}{U'} \\in F_2$ such that $\\hbot \\notin U$, \nif there is a sequence of reductions $\\langle C,\\sigma,\\phi,G \\rangle \\reducesto_{\\mathcal{D}}^\\ast \n\\langle \\crecv{k}{R}\\keyword{;}\\, C',\\sigma',\\phi',G' \\rangle$ such that $k \\notin \\mathrm{fn}(C')$, then \nthere exist $C''$, $\\sigma''$, $\\phi''$, $G''$ such that $\\langle \nC,\\sigma,\\phi,G \\rangle \\reducesto_{\\mathcal{D}}^\\ast \\langle \\crecv{k}{R}\\keyword{;}\\, C'',\\sigma'',\\phi'',G'' \\rangle$ and $\\phi''(k)_4 \\in \\mathcal{U}$.\n\\end{theorem}\n\nWe conclude this section by pointing out a limitation of the type system and a possible future extension. 
Consider procedure \\code{sendAnyOfTwo} reported below.\n\\begin{snippet*}\n\t\\procdef{\\code{sendAnyOfTwo}}{\n\t\t\\pid p \\colon T_{\\pid p},\n\t\t\\cframe{k_1}{\\pid p}{\\pid q_1}{T_k},\n\t\t\\cframe{k_2}{\\pid p}{\\pid q_2}{T_k},\n\t\t\t\t\\\\\\indent\\indent\n\t\tf \\colon T_{\\pid p} \\to T_k\n\t}{\\\\\\indent\n\t\t\\csend{k_1}{\\pid p.f}\\keyword{;}\\, \n\t\t\\\\\\indent\n\t\t\\scond{\\pid p.\\neg\\csent{k_1}}{\n\t\t\\\\\\indent\\indent\n\t\t\t\\code{sendAnyOfTwo}(\\pid p \\tteq \\pid p, k_1 \\tteq k_2,k_2 \\tteq k_1, f \\tteq f)\n\t\t\\\\\\indent\\mspace{-10.0mu}\n\t\t}\n\t}\n\\end{snippet*}\nThis procedure alternates attempts on $k_1$ and $k_2$ until exactly one of them succeeds. However, \nthis property is not captured by its type:\n\\begin{multline*}\n\t\\code{sendAnyOfTwo}(\n\t\t\\pid p \\colon T_{\\pid p},\n\t\t\\cframe{k_1}{\\pid p}{\\pid q_1}{T_k},\n\t\t\\cframe{k_2}{\\pid p}{\\pid q_2}{T_k}) \\colon\\\\\n\t\\hnet{\n\t\t\\hframe[k_1]{\\{\\hbot\\}}{\\{\\hbot\\}},\n\t\t\\hframe[k_2]{\\{\\hbot\\}}{\\{\\hbot\\}}\n\t}{G}\n\t\\to\\\\\n\t\\hnet{\n\t\t\\hframe[k_1]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}},\n\t\t\\hframe[k_2]{\\{\\hbot,\\bullet\\}}{\\{\\hbot\\}}\n\t}{G}\n\\end{multline*}\nwhere $G = \\{\\pid p \\to \\pid q_1,\\pid p \\to \\pid q_2\\}$.\nIndeed, the type system is designed to verify single communications, not groups.\n\n\\section{Synthesis of implementations}\n\\label{sec:synthesis}\n\n\nIn this section we present an EndPoint Projection (EPP) procedure which compiles a choreography to a concurrent implementation represented in terms of a process calculus. 
This calculus assumes the same failure model as the choreography model but foregoes global data like globally unique frame identifiers since these are unrealistic in distributed settings.\n\n\\subsection{Process model}\nThe target process model is an extension of Procedural Processes \\cite{CM17:forte} where send and receive operations may fail and exchanged messages are tagged with numeric identifiers. Unlike the frame identifiers used at the choreography level, numeric ones are strictly local: \n\\begin{itemize}\n\\item each process maintains a counter for each known process (its neighbourhood in the choreography model);\n\\item frame declarations increment counters locally \\ie without synchronising with the other party (which may not even have a matching frame declaration);\n\\item frames are assigned the value held by the corresponding counter.\n\\end{itemize}\nNumeric frame identifiers may be regarded as sequence numbers. However, the model does not offer any mechanism for maintaining counters synchronised among connected processes nor can such a mechanism be programmed since these counters are inaccessible. The only way to maintain synchrony is to write programs where frame declarations are carefully matched on each involved party.\n\n\\paragraph{Syntax} A network is a pair $\\langle \\mathcal{B}, N\\rangle$ where $\\mathcal{B}$ is a set of procedure definitions and $N$ is a parallel composition of processes and messages in transit. 
A process is written as $\\actor{\\pid p}{B}{\\sigma_\\pid p}{\\theta_\\pid p}$ where $\\pid p$ is its name, $\\sigma_{\\pid p}$ is its memory cell, and $\\theta_{\\pid p}$ is the memory reserved to the runtime for storing information about:\n\\begin{itemize}\n\t\\item open connections (known process, last frame index), \n\t\\item method requests (labels received), and\n\t\\item frame status (the last send\/receive operation succeeded).\n\\end{itemize}\nFormally, $\\theta$ is a function given as the combination of:\n\\begin{itemize}\n\\item \n$\\theta^{\\mathrm{fc}}\\colon \\type{PID} \\rightharpoonup \\type{FID}$, \n\\item\n$\\theta^{\\mathrm{lb}}\\colon \\type{PID} \\times \\type{FID} \\rightharpoonup \\type{LBL}$, and\n\\item\n$\\theta^{\\mathrm{fs}}\\colon \\type{PID} \\times \\type{FID} \\to \\type{Bool}$.\n\\end{itemize}\nWe will omit superscripts $\\mathrm{fc}$, $\\mathrm{lb}$, and $\\mathrm{fs}$ provided that the intended component is clear from the context.\nThe full syntax of the language for programming in this model is defined by the grammar below.\n\\begin{align*}\n\t\\mathcal{B} \\Coloneqq {} & \n\t\t\\procdef{X}{\\vec{P}}{B}, \\mathcal{B} \\mid \\varnothing\n\t\\\\\n\tN\t\\Coloneqq {} & \n\t\t\\actor{\\pid p}{B}{\\sigma_\\pid p}{\\theta_\\pid p} \\mid\n\t\tN \\mathrel{\\keyword{\\bfseries |}} N' \\mid \n\t\t\\keyword{0}\n\t\\\\\n\tB \\Coloneqq {} & \n\t\t\\astart{\\pid q}{f}{B'} \\keyword{;}\\, B \\mid\n\t\t\\acomdecl{\\pid p}{k} \\keyword{;}\\, B \\mid\n\t\t\\arecv{\\pid p}{k}{}\\keyword{;}\\, B\\mid\n\t\t\\\\ \\mid {} & \n\t\t\\arecv{\\pid p}{k}{R}\\keyword{;}\\, B\\mid\n\t\t\\asend{\\pid q}{k}{S}\\keyword{;}\\, B\\mid\n\t\t\\abranch{\\pid p}{k}{l_i\\keyword{:} B_i}[i \\in I] \\mid\n\t\tX(\\vec{P})\\keyword{;}\\, B \\mid\n\t\t\\keyword{0} \\mid\n\t\t\\\\ \\mid {} &\n\t\t\\cond{E}{B}{B'}\\keyword{;}\\, B''\n\t\\\\\n\tS \\Coloneqq {} & \n\t\tf \\mid \\pid r \\mid l\n\t\\\\\n\tR \\Coloneqq {} & \n\t\tf \\mid (\\pid r)\n\t\\\\\n\tE \\Coloneqq {} 
&\n\t\te \\mid \\adelivered{\\pid p}{k}\n\t\\\\\n\tP \\Coloneqq {} &\n\t\tk \\mid \\pid p\n\\end{align*}\nTerm $\\acomdecl{\\pid p}{k}$ describes a behaviour that creates a new frame for its continuation. Send actions for $k$ are described by terms $\\asend{\\pid q}{k}{f}$, $\\asend{\\pid q}{k}{\\pid r}$, $\\asend{\\pid q}{k}{l}$, and receive ones by $\\arecv{\\pid p}{k}{f}$, $\\arecv{\\pid p}{k}{(\\pid r)}$, $\\arecv{\\pid p}{k}{}$, for values, process names, or label exchanges, respectively. We remark that terms for receiving process names bind them in their continuation.\nTerm $\\abranch{\\pid p}{k}{l_i\\keyword{:} B_i}[i \\in I]$ describes a selection based on a label communicated as frame $k$: if any label $l_i$ has been successfully received, then the process proceeds with the corresponding behaviour $B_i$; otherwise it proceeds with the one labelled with $\\lbl{\\color{keyword} default}$. This label is reserved exclusively for this purpose and cannot be sent. If $I = \\emptyset$, then the term is simply discarded. Guards $\\adelivered{\\pid p}{k}$ state that the last communication action for frame $k$ with $\\pid p$ has been successfully completed. Remaining terms are standard.\n\n\nPrograms are written using frame names which are replaced by numeric identifiers assigned at runtime when frames are created.\nTerms where frame names ($k$) are replaced by numeric identifiers ($n$) are reserved to the runtime. Messages in transit from $\\pid p$ to $\\pid q$ are represented by ``bags'' \\ie terms (of sort $N$) like $\\abag{n}{\\pid p}{\\pid q}{M}$ where the subterm $M$ stands for the payload. 
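As an aid to intuition, the bookkeeping performed by $\theta$ and the local frame-numbering discipline can be sketched in a few lines of Python. This is a minimal illustrative model, not part of the formal development; all names (\code{RuntimeState}, \code{add\_peer}, \code{new\_frame}) are ours.

```python
# Minimal sketch of the per-process runtime store "theta" described above.
# Names (RuntimeState, add_peer, new_frame) are illustrative only; they are
# not part of the process calculus.

class RuntimeState:
    def __init__(self):
        self.fc = {}  # theta^fc : PID -> FID, per-peer frame counter
        self.lb = {}  # theta^lb : (PID, FID) -> LBL, labels received
        self.fs = {}  # theta^fs : (PID, FID) -> bool, frame status

    def add_peer(self, pid):
        # A newly known process starts with frame counter 0.
        self.fc.setdefault(pid, 0)

    def new_frame(self, pid):
        # Frame declaration: increment the local counter for `pid` and set the
        # frame status to False (no successful send/receive yet).
        n = self.fc[pid] + 1
        self.fc[pid] = n
        self.fs[(pid, n)] = False
        return n

# Two connected processes declare frames independently; their numeric
# identifiers agree only because the declarations are matched on both sides.
theta_p, theta_q = RuntimeState(), RuntimeState()
theta_p.add_peer("q")
theta_q.add_peer("p")
k_at_p = theta_p.new_frame("q")  # p's local number for the new frame
k_at_q = theta_q.new_frame("p")  # q's matching declaration
```

Note how, if one side declared an extra frame, the counters would drift apart silently, which is precisely why matched declarations are the only way to maintain synchrony.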
\n\n\\begin{figure*}[t]\n\\begin{infrules}\n\t\\infrule[\\rname[P]{NP}][rule:p-new-proc]{\n\t\tf(\\sigma) \\downarrow v\n\t\t\\and\n\t\t\\theta_{\\pid p}' = \\theta_{\\pid p}[\\pid q \\mapsto 0]\n\t\t\\and\n\t\t\\theta_{\\pid q} = \\{\\pid p \\mapsto 0\\}\n\t}{\n\t\t\\actor{\\pid p}{\\astart{\\pid q}{f}{B'}\\keyword{;}\\, B}{\\sigma}{\\theta_{\\pid p}}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid p}{B'}{\\sigma}{\\theta_{\\pid p}'}\n\t\t\\mathrel{\\keyword{\\bfseries |}}\n\t\t\\actor{\\pid q}{B}{v}{\\theta_{\\pid q}}\n\t}\n\t\\infrule[\\rname[P]{NF}][rule:p-new-frame]{\n\t\tn = \\mathrm{next}(\\theta(\\pid q)) \\and\n\t\t\\theta' = \\theta[\\pid q \\mapsto n, (\\pid q,n) \\mapsto \\bot]\n\t}{\n\t\t\\actor{\\pid p}{\\keyword{new}(\\pid q,k) \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid p}{B[n\/k]}{\\sigma}{\\theta'}\n\t}\n\t\\infrule[\\rname[P]{Int}][rule:p-comp]{\n\t\tf(\\sigma) \\downarrow v\n\t}{\n\t\t\\actor{\\pid p}{\\alocal{\\pid q}{f} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid p}{B}{v}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{Snd}][rule:p-send]{\n\t\ts(\\sigma) \\downarrow u\n\t\t\\and\n\t\t\\theta' = \\theta[(\\pid q,n) \\mapsto \\top]\n\t}{\n\t\t\\actor{\\pid p}{\\asend{\\pid q}{n}{s} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid p}{B}{\\sigma}{\\theta'}\n\t\t\\mathrel{\\keyword{\\bfseries |}}\n\t\t\\abag{n}{\\pid p}{\\pid q}{u}\n\t}\n\t\\infrule[\\rname[P]{SndFail}][rule:p-send-fail]{\n\t}{\n\t\t\\actor{\\pid p}{\\asend{\\pid q}{n}{s} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid p}{B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{Loss}][rule:p-loss]{\n\t}{\n\t\t\\abag{n}{\\pid p}{\\pid q}{u}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\keyword{0}\n\t}\n\t\\infrule[\\rname[P]{RcvV}][rule:p-recv-val]{\n\t\tf(\\sigma,v) \\downarrow w\n\t\t\\and\n\t\t\\theta' = \\theta[(\\pid p,n) 
\\mapsto \\top]\n\t}{\n\t\t\\actor{\\pid q}{\\arecv{\\pid p}{n}{f} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\mathrel{\\keyword{\\bfseries |}}\n\t\t\\abag{n}{\\pid p}{\\pid q}{v}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid q}{B}{w}{\\theta'}\n\t}\n\t\\infrule[\\rname[P]{RcvP}][rule:p-recv-pid]{\n\t\t\\theta' = \\theta[\n\t\t\t(\\pid p,n) \\mapsto \\top,\n\t\t\t\\pid r \\mapsto 0\n\t\t]\n\t}{\n\t\t\\actor{\\pid q}{\\arecv{\\pid p}{n}{(\\pid s)} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\mathrel{\\keyword{\\bfseries |}}\n\t\t\\abag{n}{\\pid p}{\\pid q}{\\pid r}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid q}{B[\\pid r \/ \\pid s]}{\\sigma}{\\theta'}\n\t}\n\t\\infrule[\\rname[P]{RcvL}][rule:p-recv.lbl]{\n\t\t\\theta' = \\theta[(\\pid p,n) \\mapsto \\top,(\\pid p,n) \\mapsto l]\n\t}{\n\t\t\\actor{\\pid q}{\\arecv{\\pid p}{n} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\mathrel{\\keyword{\\bfseries |}}\n\t\t\\abag{n}{\\pid p}{\\pid q}{l}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid q}{B}{\\sigma}{\\theta'}\n\t}\n\t\\infrule[\\rname[P]{RcvFail}][rule:p-rcv-fail]{\n\t}{\n\t\t\\actor{\\pid q}{\\arecv{\\pid p}{n}{R} \\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\n\t\t\\actor{\\pid q}{B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{IfFrame}][rule:p-if-frame]{\n\t\t\\text{if } \\theta(\\pid q,n) = \\top\n\t\t\\text{ then } i = 1\n\t\t\\text{ else } i = 2\n\t}{\n\t\t\\actor{\\pid p}{\\cond{\\adelivered{\\pid q}{n}}{B_1}{B_2}\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\t\t\n\t\t\\actor{\\pid p}{B_i\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{IfExp}][rule:p-if-exp]{\n\t\t\\text{if } f(\\sigma) \\downarrow \\literal{true}\n\t\t\\text{ then } i = 1\n\t\t\\text{ else } i = 2\n\t}{\n\t\t\\actor{\\pid p}{\\cond{f}{B_1}{B_2}\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\t\t\n\t\t\\actor{\\pid p}{B_i\\keyword{;}\\, 
B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{Branch}][rule:p-branch]{\n\t\t\\theta(\\pid p,n) = l_i\t\n\t\t\\and\n\t\t\\lbl{default} \\neq l_i\n\t}{\n\t\t\\actor{\\pid q}{\\abranch{\\pid p}{n}{l_i\\keyword{:} B_i}[i \\in I]\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\t\t\n\t\t\\actor{\\pid q}{B_i\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{BranchFail}][rule:p-branch-fail]{\n\t\t\\theta(\\pid p,n) = \\bot\n\t\t\\and\n\t\t\\lbl{default} = l_i\n\t}{\n\t\t\\actor{\\pid q}{\\abranch{\\pid p}{n}{l_i\\keyword{:} B_i}[i \\in I]\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t\t\\reducesto_{\\mathcal{B}}\t\t\n\t\t\\actor{\\pid q}{B_i\\keyword{;}\\, B}{\\sigma}{\\theta}\n\t}\n\t\\infrule[\\rname[P]{Par}][rule:p-par]\n\t\t{N \\reducesto_{\\mathcal{B}} N'}\n\t\t{N \\mathrel{\\keyword{\\bfseries |}} M \\reducesto_{\\mathcal{B}} N' \\mathrel{\\keyword{\\bfseries |}} M}\t\t\n\t\\infrule[\\rname[P]{Str}][rule:p-str]\n\t\t{N \\precongr_{\\mathcal{B}} M \\qquad M \\reducesto_{\\mathcal{B}} M' \\qquad M' \\precongr_{\\mathcal{B}} N'}\n\t\t{N \\reducesto_{\\mathcal{B}} N'}\n\t\\infrule[\\rname[P]{NilBeh}][rule:p-nil-beh]{}{\n\t\t\\keyword{0}\\keyword{;}\\, B \\precongr_{\\mathcal{B}} B\n\t}\n\t\\infrule[\\rname[P]{NilProc}][rule:p-nil-proc]\n\t\t{}\n\t\t{\\actor{\\pid p}{\\keyword{0}}{\\sigma}{\\theta} \\precongr_{\\mathcal{B}} \\keyword{0}}\n\n\t\\infrule[\\rname[P]{NilRecv}][rule:p-nil-recv]\n\t\t{}\n\t\t{\\abag{n}{\\pid p}{\\pid q}{M} \\mathrel{\\keyword{\\bfseries |}} \\actor{\\pid q}{\\keyword{0}}{\\sigma}{\\theta} \\precongr_{\\mathcal{B}} \\actor{\\pid q}{\\keyword{0}}{\\sigma}{\\theta}}\n\t\\infrule[\\rname[P]{NilNet}][rule:p-nil-net]\n\t\t{}\n\t\t{\\keyword{0} \\mathrel{\\keyword{\\bfseries |}} N \\precongr_{\\mathcal{B}} N}\n\t\\infrule[\\rname[P]{Unfold}][rule:p-unfold]\n\t\t{\\procdef{X}{\\vec{P}}{B'} \\in \\mathcal{B}}\n\t\t{X(\\vec{A})\\keyword{;}\\, B \\precongr_{\\mathcal{B}} B'[\\vec{A}\/\\vec{P}]\\keyword{;}\\, 
B}\n\\end{infrules}\n\t\\caption{Process model, operational semantics}\n\t\\label{fig:proc-semantics}\n\\end{figure*}\n\n\\paragraph{Semantics} \nThe calculus semantics is given by the reduction relation on networks $\\reducesto_{\\mathcal{B}}$ in \\cref{fig:proc-semantics} and is parameterised in the set $\\mathcal{B}$ of procedure definitions. For compactness, the presentation relies on the structural precongruence $\\precongr_\\mathcal{B}$.\nAll rules follow the intuitive description of terms given above, so their detailed description is omitted.\n\n\\begin{figure*}[t]\n\t\\begin{center}\n\t\\begin{autoflow}\t\n\t\t\\newcommand{\\afdbox}[1]{{$\\displaystyle#1$}\\endmath\\autoflow@AND\\math\\displaystyle}\n\n\t\t\\afdbox{\n\t\t\t\\epp{\\keyword{0}}[m][\\pid r] \n\t\t\t\\triangleq \\keyword{0}\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\keyword{0} \\keyword{;}\\, C}[m][\\pid r]\n\t\t\t\\triangleq \n\t\t\t\\keyword{0}\\keyword{;}\\,\\epp{C}[m][\\pid r]\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\cstart{\\pid p}{\\pid q}{f}{C}}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\t\\astart{\\pid q}{f}{\\epp{C}[m][\\pid q]}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } \\pid r = \\pid p\\\\\n\t\t\t\t\\epp{C}[m][\\pid r] & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{X(\\vec{A})\\keyword{;}\\, C}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\tX_{\\pid p}(m(\\vec{A}\\setminus \\pid p\\tteq \\pid r))\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } \n\t\t\t\t\\pid p\\tteq \\pid r \\in \\vec{A} \\text{ for some } \\pid p\\\\\n\t\t\t\t\\epp{C}[m][\\pid r] & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\cnewframe{k}{\\pid p}{\\pid q}{T}{C}}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\t\\acomdecl{\\pid q}{k}\\keyword{;}\\, \\epp{C[k^{T,\\pid r,\\pid q}\/k]}[m[k \\mapsto k]][\\pid r] & \\text{if } \\pid r = \\pid p\\\\\n\t\t\t\t\\acomdecl{\\pid p}{k}\\keyword{;}\\, 
\\epp{C[k^{T,\\pid p,\\pid r}\/k]}[m[k \\mapsto k]][\\pid r] & \\text{if } \\pid r = \\pid q\\\\\n\t\t\t\t\\epp{C}[m][\\pid r] & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\csend{k^{T,\\pid p,\\pid q}}{\\pid p.s}\\keyword{;}\\, C}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\t\\asend{\\pid q}{m(k)}{s}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } \\pid r = \\pid p\\\\\n\t\t\t\t\\epp{C}[m][\\pid r] & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\crecv{k^{T,\\pid p,\\pid q}}{R}\\keyword{;}\\, C}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\t\\arecv{\\pid p}{m(k)}{f}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } R = \\pid r.f\\\\\n\t\t\t\t\\arecv{\\pid p}{m(k)}{(\\pid s)}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } R = \\pid r \\text{ and } T\\eqtype\\type{PID}\\\\\n\t\t\t\t\\arecv{\\pid p}{m(k)}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \\text{if } R = \\pid r \\text{ and } T\\eqtype\\type{LBL}\\\\\n\t\t\t\t\\epp{C}[m][\\pid r] & \\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\t\t\t\t\n\t\t\\afdbox{\n\t\t\t\\epp{\\cond{E}{C_1}{C_2}\\keyword{;}\\, C}[m][\\pid r] \n\t\t\t\\triangleq \n\t\t\t\\begin{cases}\n\t\t\t\t\\cond{e}{\\epp{C_1}[m][\\pid r]}{\\epp{C_2}[m][\\pid r]}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \n\t\t\t\t\t\\text{if } E = \\pid r.e\\\\\n\t\t\t\t\\cond{\\adelivered{\\pid q}{m(k)}}{\\epp{C_1}[m][\\pid r]}{\\epp{C_2}[m][\\pid r]}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \n\t\t\t\t\t\\text{if } E = \\pid r.\\csent{k^{T,\\pid r,\\pid q}}\\\\\n\t\t\t\t\\cond{\\adelivered{\\pid p}{m(k)}}{\\epp{C_1}[m][\\pid r]}{\\epp{C_2}[m][\\pid r]}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \n\t\t\t\t\t\\text{if } E = \\pid r.\\creceived{k^{T,\\pid p,\\pid r}}\\\\\n\t\t\t\t\\abranch{\\pid p}{m(k)}{l\\colon \\epp{C_1}[m][\\pid r],\\lbl{\\color{keyword} default}\\keyword{:} \\epp{C_2}[m][\\pid r]}\\keyword{;}\\, \\epp{C}[m][\\pid r] & \n\t\t\t\t\t\\text{if } E = \\pid 
r.\\creceivedlbl{k^{T,\\pid p,\\pid r}}{l}\\\\\n\t\t\t\t(\\epp{C_1}[m][\\pid r] \\merge \\epp{C_2}[m][\\pid r])\\keyword{;}\\, \\epp{C}[m][\\pid r] & \t\n\t\t\t\t\t\\text{otherwise}\n\t\t\t\\end{cases}\n\t\t}\n\n\t\\end{autoflow}\n\t\\end{center}\n\t\\caption{Behaviour projection.}\n\t\\label{fig:behaviour-projection}\n\\end{figure*}\n\n\\subsection{EndPoint Projection}\n\\label{sec:epp}\n\nRecall that at process level, active frames have numeric identifiers that are locally and independently generated by each process as soon as a frame is declared whereas at the choreography level frames have unique global identifiers. As a consequence, a coherent mapping from the former to the latter is needed in order to project choreographies with free frame names as in the case of running ones. By \\emph{frame mapping} we mean any mapping taking frame names to frame numbers or to themselves---there is no reason for assigning a numeric identifier or a different name to a bound frame name. \nWrite $\\phi|_{\\pid p}$ for the set\n$\n\t\\{k \\mid \\phi(k) = (\\pid q,\\pid r,t,u) \\text{ and } \\pid p \\in \\{\\pid q,\\pid r\\}\\}\n$\nof all frames in $\\phi$ to or from $\\pid p$.\nA frame mapping $m$ is said to be \\emph{compatible with $\\phi$} if for any $\\pid p$ that occurs in $\\phi$, $m$ assigns to frames in $\\phi|_{\\pid p}$ unique and sequential numbers \\ie:\n\\[\n\t\\{m(k) \\mid k \\in \\phi|_{\\pid p}\\} = \\{1,2,\\dots,\\left|\\,\\phi|_{\\pid p}\\,\\right|\\}\\text{.}\n\\]\nAny $\\phi$ admits a compatible mapping under the mild assumption that frame names can be totally ordered.\nConsider the choreography $C = \\csend{k}{\\pid p.f}\\keyword{;}\\, \\csend{k}{\\pid q.f'}$.\nIf $C$ is part of an execution (of a well-typed program) then, $k$ must occur in $\\phi$ and hence the projections of $\\pid p$ and $\\pid q$ must refer to this frame via a numeric identifier (\\cf \\cref{rule:p-new-frame}) and they must agree on it. 
However, since the process model does not offer any mechanism for processes to negotiate an agreement on their internal frame counters, this property must be derived from the choreography level, hence the necessity of $m$. We remark that this situation is limited to free names only: programs in RC\\xspace are projected with $m = id$.\n\n\nGiven a choreography $C$ and a frame mapping $m$, the projected behaviour of process $\\pid p$ in $C$ is defined as $\\epp{C}[m][\\pid p]$ where $\\epp{-}[m][\\pid p]$ is the partial function defined by structural recursion in \\cref{fig:behaviour-projection}---for conciseness, each frame occurring in the choreography $C$ is pre-annotated with its senders and receivers.\nEach case in the definition follows the intuition of projecting, for each choreographic term, the local actions performed by the given process.\nFor instance, $\\csend{k}{\\pid p.f}$ is skipped during the projection of any process but $\\pid p$, in which case the send action $\\asend{\\pid p}{m(k)}{f}$ is produced. Cases for frame reception, procedure calls, and frame and process creation are similar. \nThe case for conditionals is more involved but follows a standard approach (see \\eg \\cite{BCDLDY08,LGMZ08,CHY12,CM16:facs,CM17:forte}). The (partial) merging operator $\\merge$ from \\cite{CHY12} is used to merge the behaviour of a process that does not know (yet) which branch has been chosen by the process evaluating the guard. Intuitively, $B \\merge B'$ is isomorphic to $B$ and $B'$ up to branching, where branches of $B$ or $B'$ with distinct labels are also included. 
One proceeds homomorphically (\\eg $\\csend{k}{\\pid p.f}\\keyword{;}\\, B \\merge \\csend{k}{\\pid p.f}\\keyword{;}\\, B'$ is $\\csend{k}{\\pid p.f}\\keyword{;}\\, (B \\merge B')$) on all terms but branches which are handled defining the merge of\n$\n\t\\abranch{\\pid p}{k}{l_i \\keyword{:} B_i}[i \\in I]\\keyword{;}\\, B \n$ and $\n\t\\abranch{\\pid p}{k}{l_j \\keyword{:} B'_j}[j \\in J]\\keyword{;}\\, B'\n$\nas\n$\\abranch{\\pid p}{k}{l_h \\keyword{:} B''_h}[h \\in H]\\keyword{;}\\, (B \\merge B')$\nwhere $\\{l_h\\colon B''_h\\}_{h \\in H}$ is the union of\n$\\{l_i \\keyword{:} B_i\\}_{i \\in I\\setminus J}$, $\\{l_j \\keyword{:} B'_j\\}_{j \\in J\\setminus I}$, and $\\{l_g \\keyword{:} B_g \\merge B'_g\\}_{g \\in I\\cap J}$.\n\nProjection of procedure definitions follows the approach introduced by Procedural Choreographies \\cite{CM17:forte}. For $\\mathcal{D}$ a set of procedure definitions, its projection is defined as follows:\n\\[\n\t\\epp{\\mathcal{D}} \\triangleq \\bigcup_{\\procdef{X}{\\vec{P}}{C} \\in \\mathcal{D}}\n\t\t\t\\left\\{\n\t\t\t\t\\procdef{X_{\\pid p}}{\\vec{P}\\setminus \\pid p}{\\epp{C}[id][\\pid p]}\n\t\t\t\\,\\middle|\\,\n\t\t\t\t\\pid p: T \\in \\vec{P}\n\t\t\t\\right\\}\n\t\\text{.}\n\\]\nObserve that since a procedure $X$ may be called multiple times on any combination of its arguments (hence assigning to a process different r\\^oles at each call) it is necessary to project the behaviour of each possible process parameter in $\\vec{P}$ as the procedure $X_{\\pid r}$. 
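For intuition, the merging operator just described transcribes into a small recursive function. The following Python sketch is only illustrative: the tuple encoding of behaviours (`("nil",)`, `("act", …)`, `("branch", …)`) is our own, not the paper's formal syntax, and raising an exception models the partiality of $\merge$.

```python
# Illustrative sketch of the partial merging operator on toy behaviour terms.
# Encoding (ours, not the paper's): ("nil",) | ("act", a, cont) |
# ("branch", src, {label: body}, cont), where src identifies the frame/sender.

def merge(b1, b2):
    """Merge two behaviours; raise ValueError where the operator is undefined."""
    if b1 == b2:
        return b1
    if b1[0] == "branch" and b2[0] == "branch" and b1[1] == b2[1]:
        (_, src, bs1, k1), (_, _, bs2, k2) = b1, b2
        # Union of the label sets; branches with a common label are merged.
        labels = dict(bs1)
        for lbl, body in bs2.items():
            labels[lbl] = merge(labels[lbl], body) if lbl in labels else body
        return ("branch", src, labels, merge(k1, k2))
    if b1[0] == "act" and b2[0] == "act" and b1[1] == b2[1]:
        # Homomorphic case: identical head action, merge the continuations.
        return ("act", b1[1], merge(b1[2], b2[2]))
    raise ValueError("merge undefined on these behaviours")
```

Merging two branchings with disjoint labels yields a branching offering both, mirroring the union $\{l_h\colon B''_h\}_{h \in H}$ defined above.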
Here typing is crucial otherwise processes may be called to play r\\^oles for which they lack the necessary connections.\n\nTo designate a network as the projection of a configuration $\\langle C,\\sigma,\\phi,G\\rangle$ it remains only to distribute the information contained in the global state $\\sigma$, $\\phi$, $G$.\nReserved memory for process $\\pid p$ ($\\theta_{\\pid p}$) in the process model is completely determined (up to frame numbering) by $\\phi$ and $G$ from the choreography level as these contain all data regarding processes known to $\\pid p$ and frames exchanged by $\\pid p$. \nSpecifically, $\\epp{\\phi,G}[m][\\pid p]$ is defined as the function $\\theta$ where\n\\begin{align*}\n\t\\theta^{\\mathrm{fc}}(\\pid q) \\triangleq{}&\n\t\\begin{cases}\n\t\t\\left|\\phi|_{\\pid p}\\right|\n\t\t & \\text{if } G \\vdash \\pid p \\to \\pid q \\\\\n\t\t\\bot & \\text{otherwise}\n\t\\end{cases}\n\t\\\\\n\t\\theta^{\\mathrm{ln}}(\\pid q,n) \\triangleq{}&\n\t\\begin{cases}\n\t\tl & \\text{if } \\exists k \\in m^{-1}(n) \\text{ s.t.~}\\phi(k) = \\phiframe{\\pid p}{\\_}{\\pid q}{\\removed{l}} \\\\\n\t\t\\bot & \\text{otherwise}\n\t\\end{cases}\n\t\\\\\n\t\\theta^{\\mathrm{fn}}(\\pid q,n) \\triangleq{}&\n\t\\begin{cases}\n\t\t\\top & \n\t\t\t\\mspace{-10mu}\\array{l}\n\t\t\t\\text{if } \\exists k \\in m^{-1}(n) \\text{ s.t. either } \n\t\t\t\\phi(k) = \\phiframe{\\pid p}{u}{\\pid q}{\\_} \\\\\n\t\t\t\\text{and } u \\neq \\bot \\text{ or } \n\t\t\t\\phi(k) = \\phiframe{\\pid q}{\\_}{\\pid p}{u'} \\text{ and } u' \\in \\removed{\\mathcal{U}}\n\t\t\t\\endarray\\mspace{-15mu}\\\\\n\t\t\\bot & \\text{otherwise}\n\t\\end{cases}\n\\end{align*}\nThe only information of $\\phi$ and $G$ that cannot be reconstructed from the distributed state of the processes in a network is that of frames in transit. 
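As an aside, the compatibility condition on frame mappings stated earlier in this subsection transcribes directly into an executable check. In this Python sketch the dictionary encoding of $\phi$ (frame name to a (sender, receiver, type, payload) tuple) is an assumption of the illustration, not the paper's formal representation:

```python
# Illustrative check of the compatibility condition between a frame mapping m
# and a state phi. Representation (ours): phi maps frame names to tuples
# (sender, receiver, type, payload); m maps frame names to positive integers.

def frames_of(phi, p):
    """phi|_p : the frames in phi sent to or from process p."""
    return {k for k, (q, r, _t, _u) in phi.items() if p in (q, r)}

def is_compatible(m, phi):
    """m is compatible with phi iff, for every process p occurring in phi,
    { m(k) : k in phi|_p } = { 1, ..., |phi|_p| }."""
    procs = {p for (q, r, _t, _u) in phi.values() for p in (q, r)}
    return all(
        {m[k] for k in frames_of(phi, p)}
        == set(range(1, len(frames_of(phi, p)) + 1))
        for p in procs
    )
```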
To this end, a term $\\abag{m(k)}{\\pid p}{\\pid q}{u}$ is added to the network for each $\\phi(k) = \\phiframe{\\pid p}{\\removed{u}}{\\pid q}{\\bot}$.\nThe projection $\\epp{C,\\sigma,\\phi,G}[m]$ of $\\langle C,\\sigma,\\phi,G\\rangle$ \nis defined as the network:\n\\[\n\t\\prod_{\\phi(k) = \\phiframe{\\pid p}{\\removed{u}}{\\pid q}{\\bot}}\n\t\\abag{m(k)}{\\pid p}{\\pid q}{u}\n\t\\mathrel{\\keyword{\\bfseries |}}\n\t\\prod_{\\pid p \\in \\mathrm{pn}(C)} \\actor{\\pid p}{\\epp{C}[m][\\pid p]}{\\sigma(\\pid p)}{\\epp{\\phi,G}[m][\\pid p]}\n\\]\nwhere $m$ is any mapping compatible with $\\phi$.\nObserve that mappings are all equivalent up to $\\alpha$-conversion and that if $\\epp{C,\\sigma,\\phi,G}[m]$ is defined for some $\\sigma$, $\\phi$, and $G$ then $\\epp{C,\\sigma',\\phi',G'}[m']$ is defined for any $\\sigma'$, $\\phi'$, and $G'$. We say that $C$ is projectable whenever $\\epp{C,\\sigma,\\phi,G}[m]$ is defined for some $m$, $\\sigma$, $\\phi$, and $G$.\n\nThere is an operational correspondence between choreographies and their projections---up to the ``pruning'' relation $\\pruning$ (\\cite{CHY12,CM13}) that eliminates ``dead branches'' due to the merging operator $\\merge$ when they are no longer needed to follow the originating choreography. \n\n\\begin{theorem}[EPP]\n\t\\label{thm:epp}\n\tLet $\\langle \\mathcal{D},C\\rangle$ be a projectable program.\n\tFor any $\\langle C,\\sigma,\\phi,G \\rangle$ well-typed under $\\mathcal{D}$:\n\t\\begin{description}\n\t\t\\item[Compl.] 
\n\t\t\tIf $\\langle C,\\sigma,\\phi,G \\rangle \\reducesto_{\\mathcal{D}} \\langle C',\\sigma',\\phi',G' \\rangle$, \n\t\t\tthen there are mappings $m \\subseteq m'$ such that $|\\dom(m')\\setminus\\dom(m)| \\leq 1$ and\n\t\t\t$\\epp{C,\\sigma,\\phi,G}[m] \\reducesto_{\\epp{\\mathcal{D}}} N$\n\t\t\tfor some $N \\pruning \\epp{C',\\sigma',\\phi',G'}[m']$.\n\t\t\\item[Sound.]\n\t\t\tIf $\\epp{C,\\sigma,\\phi,G}[m] \\reducesto_{\\epp{\\mathcal{D}}} N$, \n\t\t\tthen there are $N'$,$C'$,$\\sigma'$,$\\phi'$,$G'$, and $m'$ s.t.~$N \\reducesto_{\\epp{\\mathcal{D}}}^\\ast N'$,\n\t\t\t$\\langle C,\\sigma,\\phi,G \\rangle \\reducesto_{\\mathcal{D}} \\langle C',\\sigma',\\phi',G' \\rangle$, \n\t\t\t$\\epp{C',\\sigma',\\phi',G'}[m'] \\pruning N'$,\n\t\t\t$|\\dom(m')\\setminus\\dom(m)| \\leq 1$, and\n\t\t\t$m \\subseteq m'$.\n\t\\end{description}\n\\end{theorem}\n\nIt follows from the operational correspondence in \\cref{thm:epp} that projected networks exhibit all relevant properties ensured by the typing disciplines introduced in \\cref{sec:typing}, namely: progress (\\cref{thm:progress}), at-most-once delivery (\\cref{thm:chor-at-least-once}), best-effort delivery (\\cref{thm:chor-best-effort}), and at-least-once delivery (\\cref{thm:chor-at-least-once}).\n\n\\section{Conclusions}\n\\label{sec:concl}\n\nProgramming methodologies based on structured communications, like choreographies, have been \ninvestigated for a long time now \\cite{Aetal16,Hetal16}. This is the first paper that investigates \nhow this research line can be applied to the programming of robust distributed systems in the \nsetting of communication failures, bringing us one step closer to improved reliability in concurrent \ncomputing.\n\nWe believe that the results achieved in this paper unlock a very promising research direction. 
A \nnatural continuation is to consider different failure models that take into account node failures \nand explore adversary models (\\eg to include message loss, duplication, or forging).\nNode failures and adversaries are crucial for reasoning about agreement problems in distributed systems. These problems are as challenging as they are common in real-world distributed programming. Results in this direction may advance the development of correct-by-construction agreement protocols and their implementations.\n\nAnother interesting direction is to explore quantitative properties of programs in RC\\xspace. To this end, we plan to develop quantitative semantics for the RC\\xspace model. For instance, in a probabilistic setting, failures are characterised by probability distributions and properties like progress, at-most-once, and exactly-once delivery are formalised as almost-certain properties (their complement event has null measure). Then it is possible to reason about reliability assumptions on communication links, \\eg to understand how a certain failure probability impacts our program. Another interesting property is the expected number of retransmissions. Estimates of this value allow us to optimise failure-recovery strategies. Likewise, stochastic or timed semantics will enable models with explicit timeouts.\n\nThe typing disciplines introduced in this work ensure that well-typed distributed programs have at-most-once or exactly-once delivery guarantees. As pointed out at the end of \\cref{sec:typing}, these guarantees are limited to single communications, but our approach can be reasonably extended to communication groups. 
This extension has immediate applications, \\eg to the static verification of replication protocols where an update is deemed successful only if it is accepted by enough replicas.\n\n\\section{Introduction}\n\n\nIn this paper we study the following type of (markovian) backward\nstochastic differential equations with infinite horizon (that we\nshall call \\textit{ergodic} BSDEs or EBSDEs for short):\n\\begin{equation}\\label{EBSDE*}\nY^x_t=Y^x_T +\\int_t^T\\left[\\psi(X^x_\\sigma,Z^x_\\sigma)-\n\\lambda\\right]d\\sigma-\\int_t^T Z^x_\\sigma dW_\\sigma, \\quad 0\\le\nt\\le T <\\infty.\n\\end{equation}\nIn equation (\\ref{EBSDE*}) $X^x$ is the solution of a forward\nstochastic differential equation with\n values in a Banach space $E$\nstarting at $x$ and $(W_t)_{t\\geq 0}$ is a cylindrical Wiener\nprocess in a Hilbert space $\\Xi$.\n\n\nOur aim is to find a triple\n $(Y,Z,\\lambda)$, where $Y,Z$ are\n adapted processes taking values in\n $\\mathbb{R}$ and $\\Xi^*$ respectively, and $\\lambda$\n is a real number. $\\psi:E\\times \\Xi^*\\to \\mathbb R$ is\n a given function.\nWe stress the fact that $\\lambda$ is part of the unknowns of\nequation (\\ref{EBSDE*}), and this is the reason why the above is a\nnew class of BSDEs.\n\n\n$ $\n\n\nIt is by now well known that BSDEs provide an efficient\nalternative tool to study optimal control problems, see, e.g.\n\\cite{peng93}, \\cite{ElKaMaz} or, in an infinite dimensional\nframework, \\cite{FuTe1}, \\cite{masiero}. But to the best of our\nknowledge, there exists no work in which BSDE techniques are\napplied to\n optimal control problems with \\emph{ergodic} cost functionals, that is,\n functionals depending only on the asymptotic behavior of the state\n (see e.g. 
the cost defined in formula\n (\\ref{ergodic-cost*}) below).\n\n\n$ $\n\n\n\\noindent The purpose of the present paper is to show that\nbackward\n stochastic differential equations, in particular\n the class of EBSDEs mentioned above, are a very useful tool in the\n treatment of ergodic control problems as\n well, especially in an infinite dimensional framework.\n\n\n $ $\n\n\n \\noindent There is a fairly large amount of literature dealing, by\n analytic techniques, with optimal ergodic control problems\n for finite dimensional stochastic state\n equations.\n We just mention the basic papers by Bensoussan and Frehse\n \\cite{BeFr} and by Arisawa and Lions \\cite{ArLi} where the\nproblem is treated through the study of the corresponding\nHamilton-Jacobi-Bellman (HJB) equation (solutions are understood\n in a classical sense and in a viscosity sense,\nrespectively).\n\n\nConcerning the infinite dimensional case, it is known that both\nclassical and viscosity notions of solutions are not so suitable\nconcepts. Maslowski and Goldys in \\cite{GoMa} employ a mild\nformulation of the Hamilton-Jacobi-Bellman equation in a\n Hilbertian framework (see \\cite{C} and references therein for the\ncorresponding mild formulations in the standard cases). In\n\\cite{GoMa} the authors prove, by a fixed point argument that\nexploits the smoothing properties of the Ornstein-Uhlenbeck\nsemigroup corresponding to the state equation, existence and\nuniqueness of the solution of the stationary HJB equation for\ndiscounted infinite horizon costs. Then they pass to the limit, as\nthe discount goes to zero, to obtain a mild solution of the HJB\nequation for the ergodic problem (see also \\cite{duncans}). Such\ntechniques need to assume, besides natural conditions on the\ndissipativity of the state equation, also non-degeneracy of the\nnoise and a limitation on the lipschitz constant (with respect to\nthe gradient variable) of the hamiltonian function. 
This last\ncondition carries a bound on the size of the control domain (see\n\\cite{FuTe-ell} for similar conditions in the infinite horizon\ncase).\n\n\n$ $\n\n\n\n\nThe introduction of EBSDEs allows us to treat Banach valued state\nequations with a general monotone nonlinear term and possibly\ndegenerate noise. Non-degeneracy is replaced by a structure\ncondition, as usually happens in the BSDE approach, see, for\ninstance, \\cite{ElKaMaz}, \\cite{FuTe1}. Moreover the use of\n$L^{\\infty}$ estimates specific to infinite horizon backward\nstochastic differential equations (see \\cite{bh}, \\cite{royer},\n\\cite{HuTe}) allows us to eliminate conditions on the lipschitz\nconstant of the hamiltonian. On the other hand, we will only\nconsider bounded cost functionals.\n\n\n$ $\n\n\n To be more precise, we consider a forward equation\n$$dX_t^x=(AX_t^x+F(X_t^x))dt+G dW_t,\\qquad X_0=x$$\nwhere $X$ has values in a Banach space $E$, $F$ maps $E$ to $E $\nand $A$ generates a strongly continuous semigroup of contractions.\nAppropriate dissipativity assumptions on $A+F$ ensure the\nexponential decay of the difference between the trajectories\nstarting from different points $x,x'\\in E$.\n\n\nThen we introduce the class of strictly monotonic backward\nstochastic differential equations\n\\begin{equation}\\label{bsderoyer*}\n{Y}^{x,\\alpha}_t={Y}^{x,\\alpha}_T +\\int_t^T(\\psi(X^{x}_\\sigma,\nZ^{x,\\alpha}_\\sigma)-\\alpha Y^{x,\\alpha}_\\sigma)d\\sigma-\\int_t^T\nZ^{x,\\alpha}_\\sigma dW_\\sigma, \\quad 0\\le t\\le T <\\infty,\n\\end{equation}\nfor all $\\alpha>0$ (see \\cite{bh}, \\cite{royer} or \\cite{HuTe}),\nwhere $\\psi: E\\times\\Xi^*\\rightarrow \\mathbb{R}$ is bounded in the\nfirst variable and Lipschitz in the second. 
By estimates based on\na Girsanov argument introduced in \\cite{bh} we obtain uniform\nestimates on $\\alpha{Y}^{x,\\alpha}$ and\n${Y}^{x,\\alpha}-{Y}^{x',\\alpha}$ that allow us to prove that,\nroughly speaking, $({Y}^{x,\\alpha}-{Y}^{0,\\alpha}_0,\n{Z}^{x,\\alpha}, \\alpha {Y}^{0,\\alpha}_0)$ converges to a solution\n$(Y^x,Z^x,\\lambda)$ of the EBSDE (\\ref{EBSDE*}), for all $x\\in E$.\nWe also show that $\\lambda$ is unique under very general\nconditions. On the contrary, in general we cannot expect\nuniqueness of the solution to (\\ref{EBSDE*}), at least in the\nnon-markovian case. On the other hand, in the markovian case we show\nthat we can find a solution of (\\ref{EBSDE*}) with\n$Y^x_t=v(X^x_t)$ and $Z^x_t=\\zeta(X^x_t)$ where $v$ is Lipschitz\nand $v(0)=0$. Moreover $(v, \\zeta)$ are unique at least in a\nspecial case where $\\psi$ is the Hamiltonian of a control problem\nand the processes $X^x$ are recurrent (see Section \\ref{sec-uniq}\nwhere we adapt an argument from \\cite{GoMa}).\n\n\n$ $\n\n\nIf we further assume differentiability of $F$ and\n $\\psi$ (in the Gateaux sense) then $v$ is differentiable;\n moreover $\\zeta =\\nabla v G$,\n and finally $(v,\\lambda)$ gives a mild solution of the HJB equation\n \\begin{equation}\n\\mathcal{L}v(x)\n+\\psi\\left( x,\\nabla v(x) G\\right) = \\lambda, \\quad x\\in E, \\label{hjb*}%\n\\end{equation}\nwhere the linear operator $\\mathcal{L}$ is formally defined by\n\\[\n\\mathcal{L}f\\left( x\\right) =\\frac{1}{2}Trace\\left(\nGG^{\\ast}\\nabla ^{2}f\\left( x\\right) \\right) +\\langle Ax,\\nabla\nf\\left( x\\right) \\rangle_{E,E^{\\ast}}+\\langle F\\left( x\\right)\n,\\nabla f\\left( x\\right) \\rangle_{E,E^{\\ast}}.\n\\]\nMoreover if the Kolmogorov semigroup satisfies the smoothing\nproperty in Definition \\ref{strongly-feller} and $F$ is genuinely\ndissipative (see Definition \\ref{gen-diss}) then $v$ is bounded.\n\n\n\n$ $\n\n\n\nThe above results are then applied to a control problem with
cost\n\\begin{equation}\\label{ergodic-cost*}\nJ(x,u)=\\limsup_{T\\rightarrow\\infty}\\frac{1}{T}\\, \\mathbb E\\int_0^T\nL(X_s^x,u_s)ds,\n\\end{equation}\n where $u$ is an adapted process (an admissible control)\nwith values in a separable metric space $U$, and the state\nequation is a Banach valued evolution equation of the form\n$$dX_t^x=(AX_t^x+F(X_t^x))\\, dt+G(dW_t+R(u_t)\\,dt),$$\nwhere $R: U \\rightarrow \\Xi$ is bounded. It is clear that the\nabove functional depends only on the asymptotic behavior of the\ntrajectories of $X^x$. After an appropriate formulation\n we prove that, setting $\\psi(x,z)= \\inf_{u\\in U} [L(x,u)+ zR(u)]$ in\n (\\ref{EBSDE*}), $\\lambda$ is optimal, that is\n $$\\lambda=\\inf_{u}J(x,u)$$\n where the infimum is over all admissible controls.\n Moreover $Z$ allows us to construct an optimal feedback in the\n sense that $$\\lambda=J(x,u) \\hbox{ if and only if } L(X_t^x,u_t)+Z_t\nR(u_t)=\\psi(X_t^x,Z_t).$$\n\n\n\nFinally, see Section \\ref{section-heat-eq}, we show that our\nassumptions allow us to treat ergodic optimal control problems for\na stochastic heat equation with polynomial nonlinearity and\nspace-time white noise. We notice that the Banach space setting is\nessential in order to treat nonlinear terms with superlinear\ngrowth in the state equation.\n\n\n\n\n$ $\n\n\n\n\n\n\n\nThe paper is organized as follows.\nAfter a section on notation, we introduce the forward SDE; in section 4 we\nstudy the ergodic BSDEs; in section 5 we show in\naddition the differentiability of the\nsolution assuming that the coefficient is Gateaux differentiable.\nIn section 6 we study the ergodic Hamilton-Jacobi-Bellman\nequation and we apply our result to optimal\nergodic control in section 7. 
Section 8 is devoted to\nshowing the uniqueness of the Markovian solution, and the last section\ncontains an application to the ergodic control of a nonlinear stochastic\nheat equation.\n\n\n\\section{Notation}\n Let $E,F$ be Banach spaces, $H$ a Hilbert space, all\nassumed to be defined over the real field and to be separable. The\nnorms and the scalar product will be denoted $|\\,\\cdot\\,|$,\n$\\langle\\,\\cdot\\,,\\,\\cdot\\,\\rangle$, with subscripts if needed.\nDuality between the dual space $E^*$ and\n $E$ is denoted $\\langle\\,\\cdot\\,,\\,\\cdot\\,\\rangle_{E^*,E}$.\n$L(E,F)$ is the space of linear bounded operators $E\\to F$, with\nthe operator norm.\nThe domain of a linear (unbounded) operator $A$ is denoted $D(A)$.\n\nGiven a bounded function\n$ \\phi: E\\rightarrow \\mathbb{R}$ we denote\n$\\Vert\\phi\\Vert_0=\\sup_{x\\in E}|\\phi(x)|$. If, in addition,\n$\\phi$ is also Lipschitz continuous then\n$\\Vert\\phi\\Vert_{\\hbox{lip}}=\\Vert\\phi\\Vert_0+\n\\sup_{x,x'\\in E,\\,x\\ne x'}|\\phi(x)-\\phi(x')||x-x'|^{-1}$.\n\n\nWe say that a function $F:E\\to F$ belongs to\nthe class ${\\cal G}^1(E,F)$ if it is continuous, has a Gateaux\ndifferential $\\nabla F(x)\\in L(E,F)$ at any point $x \\in E$, and\nfor every $k\\in E$ the mapping $x\\to \\nabla F(x) k$ is continuous\nfrom $E$ to $F$ (i.e. $x\\to \\nabla F(x) $ is continuous from $E$\nto $L(E,F)$ if the latter space is endowed with the strong operator\ntopology). 
In connection with stochastic equations,\nthe space ${\\cal G}^1$ has been introduced in \\cite{FuTe1},\nto which we refer the reader for further properties.\n\n\n Given a probability space $\\left(\n\\Omega,\\mathcal{F},\\mathbb{P}\\right) $ with a filtration\n$({\\cal F}_t)_{t\\ge 0}$ we consider the following classes of\nstochastic processes with values in a real separable Banach space\n$K$.\n\n\\begin{enumerate}\n\\item\n$L^p_{\\mathcal{P}}(\\Omega,C([0,T],K))$, $p\\in [1,\\infty)$,\n$T>0$, is the space\nof predictable processes $Y$ with continuous paths\non $[0,T]$\nsuch that\n$$\n|Y|_{L^p_{\\mathcal{P}}(\\Omega,C([0,T],K))}^p\n= \\mathbb E\\, \\sup_{t\\in [0,T]}|Y_t|_K^p<\\infty.\n$$\n\\item\n$L^p_{\\mathcal{P}}(\\Omega,L^2([0,T];K))$, $p\\in [1,\\infty)$,\n$T>0$, is the space\nof predictable processes $Y$ on $[0,T]$ such that\n$$\n|Y|^p_{L^p_{\\mathcal{P}}(\\Omega,L^2([0,T];K))}=\n\\mathbb E\\,\\left( \\int_{0}^{T}|Y_t|_K^2\\,dt\\right)^{p\/2}<\\infty.\n$$\n\n\n\n\\item\n$L_{\\cal P, {\\rm loc}}^2(\\Omega;L^2(0,\\infty;K))$\nis the space\nof predictable processes $Y$ on $[0,\\infty)$ that belong\nto the space $L^2_{\\mathcal{P}}(\\Omega,L^2([0,T];K))$\nfor every $T>0$.\n\\end{enumerate}\n\n\n\n\\section{The forward equation}\nIn a complete probability space $\\left(\n\\Omega,\\mathcal{F},\\mathbb{P}\\right) ,$ we consider the following\nstochastic differential equation with values in a Banach\nspace $E$:\n\\begin{equation}\n\\left\\{\n\\begin{array}[c]{l} dX_t =AX_t d t+F(X_t) dt +GdW_t ,\\text{ \\ \\ \\\n} t\\geq 0, \\\\\nX_0 =x\\in \\, E.\n\\end{array}\n\\right. \\label{sde}\n\\end{equation}\nWe assume that $E$ is continuously and densely embedded in a\nHilbert space $H$, and that both spaces are real separable.\n\n\n We will work under the following\ngeneral assumptions:\n\n\n\\begin{hypothesis}\n\\label{general_hyp_forward}\n\\begin{enumerate}\n \\item The operator $A$ is the generator of a strongly\n continuous semigroup of contractions in\n$E$. 
We assume that the semigroup\n $\\{e^{tA},\\, t\\geq0\\}$\n of bounded linear operators on $E$ generated by $A$\n admits an extension to a strongly\ncontinuous semigroup of bounded linear operators on $H$ that we\ndenote by $\\{S(t),\\, t\\geq 0\\}$.\n\n\n\n\\item $W$ is a cylindrical Wiener process in another real separable\nHilbert space $\\Xi$. Moreover by ${\\cal F}_{t}$ we denote the\n$\\sigma$-algebra generated by $\\{W_s,\\; s\\in [0,t]\\}$ and\n by the sets of ${\\cal F}$ with $\\mathbb P$-measure zero.\n\n\n\n\\item $F:E\\to E$ is continuous and has polynomial growth (that is\nthere exist $c>0, k\\ge 0$ such that $|F(x)|\\leq c (1+|x|^k)$,\n$x\\in E$). Moreover there exists $\\eta>0$\n such that $A+F+\\eta I$ is dissipative.\n\n\n\\item $G$ is a bounded linear operator from $\\Xi$ to $H$. The\nbounded linear, positive and symmetric operators on $H$ defined\nby the formula\n\\[\nQ_{t}h=\\int_{0}^{t}S(s)GG^{\\ast}S^*(s)h\\,ds,\\qquad t\\geq 0,\\; h\\in\nH,\n\\]\nare assumed to be of trace class in $H$. Consequently we can\ndefine the stochastic convolution\n$$\nW^{A}_t =\\int_{0}^{t}S(t-s) GdW_s,\\quad t\\geq 0,\n$$\nas a family of $H$-valued stochastic integrals. We assume that the\nprocess $\\{W^{A}_t,\\, t \\geq 0\\}$ admits an $E$-continuous\nversion.\n\\end{enumerate}\n\\end{hypothesis}\n\n\n\n\nWe recall that, for every $x\\in E$, with $x\\neq 0$, the\nsubdifferential of the norm at $x$, $\\partial\\left( |x| \\right) $,\nis the set of functionals $x^{\\ast}\\in E^{\\ast}$ such that\n$\\left\\langle x^{\\ast },x\\right\\rangle _{E^{\\ast},E}=| x| $ and $|\nx^{\\ast}|_{E^{\\ast}}=1$. If $x=0$ then $\\partial\\left( | x|\\right)\n$ is the set of functionals $x^{\\ast}\\in E^{\\ast}$ such that\n$|x^{\\ast}|_{E^{\\ast}}\\leq 1$. 
The dissipativity assumption on\n$A+F$ can be explicitly stated as follows: for $x,x'\\in\nD(A)\\subset E$ there exists $x^{\\ast } \\in\\partial\\left( \\left|\nx-x' \\right| \\right) $ such that\n$$\\left\\langle\nx^{\\ast} ,A( x-x' ) +F\\left( x \\right) -F\\left( x' \\right)\n\\right\\rangle _{E^{\\ast},E}\n \\leq-\\eta\\left|\nx-x' \\right|.\n$$\n\n\nWe can state the following theorem, see e.g. \\cite{DP1}, theorem\n7.13 and \\cite{DP2}, theorem 5.5.13.\n\\begin{theorem}\n\\label{teo2 forward}Assume that Hypothesis\n\\ref{general_hyp_forward} holds true. Then for every $x\\in E$\nequation (\\ref{sde}) admits a unique mild solution, that is an\nadapted $E$-valued process with continuous paths satisfying\n$\\mathbb{P}$-a.s.\n\\[\nX_{t}=e^{ t A}x+\\int_{0}^{t}e^{ (t-s ) A}F\\left( X_{s}\\right)\nds+\\int_{0}^{t}e^{(t-s ) A}GdW_{s},\\text{ \\ \\ \\ }t\\geq 0 .\n\\]\n\\end{theorem}\n\n\nWe denote the solution by $X^x $, $x\\in E$.\n\n\n Now we want to investigate the dependence of\nthe solution on the initial datum.\n\n\n\\begin{proposition}\n\\label{prop lip X} Under Hypothesis \\ref{general_hyp_forward} it\nholds:\n\\[\n\\left| X_t^{x_1} -X_t^{x_2} \\right| \\leq e^{-\\eta t }\\left|\nx_{1}-x_{2}\\right| ,\\text{ }t\\ge 0, \\;\\; x_{1},x_{2}\\in E.\n\\]\n\n\n\\end{proposition}\n\n\n\\begin{proof}\nLet $X_{1}\\left( t\\right) =X^{x_1}_{t} $ and $X_{2}\\left(\nt\\right) =X^{x_2}_{t} $, $x_{1},x_{2}\\in E$. For $i=1,2$ we set\n$X_{i}^{n}\\left( t\\right) =J_n X_{i}\\left( t\\right) $, where\n$J_n\n =n\\left( nI-A\\right) ^{-1}$. 
Since $X_{i}^{n}\\left( t\\right)\n\\in {D}\\left( A\\right) $ for every $t\\geq 0 $, and\n\\[\nX_{i}^{n}\\left( t\\right) =e^{t A}J_n x_{i}+\\int\n_{0}^{t}e^{\\left( t-s\\right) A}J_n F\\left( X_{i}\\left( s\\right)\n\\right) ds+\\int_{0}^{t}e^{\\left( t-s\\right) A}J_n GdW_{s},\n\\]\nwe get\n\\[\n\\frac{d}{dt}\\left( X_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left(\nt\\right) \\right) =A\\left( X_{1}^{n}\\left( t\\right) -X_{2}\n^{n}\\left( t\\right) \\right) +J_n \\left[ F\\left( X_{1}\\left(\nt\\right) \\right) -F\\left( X_{2}\\left( t\\right) \\right) \\right]\n.\n\\]\nSo, by proposition II.8.5 in \\cite{S}\\ also $\\left|\nX_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left( t\\right) \\right| $\nadmits the left and right derivatives with respect to $t$ and\nthere exists $x_{n}^{\\ast }\\left( t\\right) \\in\\partial\\left(\n\\left| X_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left( t\\right) \\right|\n\\right) $ such that the left derivative of $\\left|\nX_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left( t\\right) \\right| $\nsatisfies the following\n\\[\n\\frac{d^{-}}{dt}\\left| X_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left(\nt\\right) \\right| =\\left\\langle x_{n}^{\\ast}\\left( t\\right)\n,\\frac{d}{dt}\\left( X_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left(\nt\\right) \\right) \\right\\rangle _{E^{\\ast},E}.\n\\]\nSo we have\n$$ \\begin{array}{ll}\\displaystyle \\frac{d^{-}}{dt}\\left| X_{1}^{n}\\left(\nt\\right) -X_{2}^{n}\\left( t\\right) \\right| & =\\left\\langle\nx_{n}^{\\ast}\\left( t\\right) ,A\\left( X_{1}^{n}\\left( t\\right)\n-X_{2}^{n}\\left( t\\right) \\right) +F\\left( X_{1} ^{n}\\left(\nt\\right) \\right) -F\\left( X_{2}^{n}\\left( t\\right)\n\\right) \\right\\rangle _{E^{\\ast},E}\\\\\n& \\quad +\\left\\langle x_{n}^{\\ast}\\left( t\\right) ,J_n F\\left(\nX_{1}\\left( t\\right) \\right) -F\\left( X_{1}\n^{n}\\left( t\\right) \\right) \\right\\rangle _{E^{\\ast},E}\\\\\n& \\quad -\\left\\langle x_{n}^{\\ast}\\left( t\\right) ,J_n F\\left(\nX_{2}\\left( t\\right) 
\\right) -F\\left( X_{2}\n^{n}\\left( t\\right) \\right) \\right\\rangle _{E^{\\ast},E}\\\\\n& \\leq-\\eta\\left|\n X_{1}^{n}\\left( t\\right) -X_{2}^{n}\\left(\nt\\right) \\right| +\\left| \\delta_{1}^{n}\\left( t\\right)\n-\\delta_{2}^{n}\\left( t\\right) \\right| ,\n\\end{array}$$\nwhere for $i=1,2$ we have set $\\delta_{i}^{n}\\left( t\\right) =J_n\nF\\left( X_{i}\\left( t\\right) \\right) -F\\left( X_{i}^{n}\\left(\nt\\right) \\right) $.\n\n\nMultiplying the above by $e^{\\eta t}$ we get\n$$\\frac{d^{-}}{dt}\\left( e^{\\eta t}\\left| X_{1}^{n}\\left( t\\right)\n-X_{2}^{n}\\left( t\\right) \\right|\\right)\\leq e^{\\eta t} \\left|\n\\delta_{1}^{n}\\left( t\\right) -\\delta_{2}^{n}\\left( t\\right)\n\\right|.$$ We note that $\\delta_{i} ^{n}\\left( t\\right) $ tends\nto $0$\\ uniformly in $t\\in \\left[0,T\\right] $ for arbitrary\n$T>0$. Indeed,\n\\[\n\\delta_{i}^{n}\\left( t\\right) =nR\\left( n,A\\right) \\left[\nF\\left( X_{i}\\left( t\\right) \\right) -F\\left( X_{i}^{n}\\left(\nt\\right) \\right) \\right] +\\left( nR\\left( n,A\\right) -I\\right)\nF\\left( X_{i}\\left( t\\right) \\right) ,\n\\]\nand the convergence to $0$ follows by a classical argument, see\ne.g. the proof of theorem 7.10 in \\cite{DP1}, since\n$X_{i}^{n}\\left( t\\right) $ tends to $X_{i}\\left( t\\right) $\nuniformly in $t\\in\\left[ 0,T\\right] $ and the maps $t\\mapsto\nX_{i}\\left( t\\right) $ and $t\\mapsto F\\left( X_{i}\\left(\nt\\right) \\right) $ are continuous with respect to $t$.\n\n\n Thus letting $n\\rightarrow\\infty$ we can conclude\n\\[\n\\left| X_{1}\\left( t\\right) -X_{2}\\left( t\\right) \\right| \\leq\ne^{-\\eta t } \\left| x_{1}-x_{2}\\right| .\n\\]\nand the claim is proved. 
\\end{proof}\n\n\n$ $\n\n\n\\noindent We will also need the following assumptions.\n\\begin{hypothesis}\n\\label{hyp_W_A F(W_A)} We have $\\sup_{t\\geq 0}\\,\n\\mathbb E\\,|W^A_t|^2<\\infty.$\n\\end{hypothesis}\n\\begin{hypothesis}\n\\label{hyp-convol-determ} $e^{tA}G\\,(\\Xi)\\subset E$ for all $t>0$\nand $\\displaystyle \\int_0^{+\\infty} |e^{tA} G|_{L(\\Xi,E)} dt <\n\\infty$.\n\\end{hypothesis}\n\n\nWe recall that for an arbitrary gaussian random variable $Y$ with\nvalues in the Banach space $E$, the inequality\n$$\n\\mathbb E \\,\\phi (|Y|-\\mathbb E\\,|Y|)\\le \\mathbb E \\,\\phi (2\\sqrt{\\mathbb E\\,|Y|^2}\\,\\gamma)\n$$\nholds for any convex nonnegative continuous function $\\phi$\non $\\mathbb{R}$ and\nfor $\\gamma$ a real standard gaussian random variable, see e.g.\n\\cite{kw-woy}, Example 3.1.2. Upon taking $\\phi(x)=|x|^p$, it\nfollows that for every $p\\ge 2$ there exists $c_p>0$ such that $\\mathbb E\n\\,|Y|^p\\le c_p(\\mathbb E \\,|Y|^2)^{p\/2}$. By the gaussian character of\n$W^A_t$ and the polynomial growth condition on $F$ stated in\nHypothesis \\ref{general_hyp_forward}, point 3, we see that\nHypothesis \\ref{hyp_W_A F(W_A)} entails that for every $p\\ge 2$\n\\begin{equation}\\label{stimegaussunif}\n\\sup_{t\\geq 0} \\mathbb E\\left[ |W^A_t|^p+ |F(W^A_t)|^p \\right] <\\infty.\n\\end{equation}\n\n\n\\begin{proposition}\\label{prop-X-L^p}\nUnder Hypothesis \\ref{general_hyp_forward} it holds, for arbitrary\n$T>0$ and arbitrary $p\\geq 1$,\n\\begin{equation}\\label{prop-X-L^p-1}\n\\mathbb E\\sup_{t\\in [0,T]} |X_t^x|^p \\leq C_{p,T}(1+|x|^p),\\qquad x\\in\nE.\n\\end{equation}\n If, in addition, Hypothesis\n\\ref{hyp_W_A F(W_A)} holds then, for a suitable constant $C$,\n\\begin{equation}\\label{prop-X-L^p-2}\\sup_{t\\geq 0} \\mathbb E |X_t^x| \\leq C(1+|x|)\n,\\qquad x\\in E.\\end{equation} Moreover if, in addition, Hypothesis\n\\ref{hyp-convol-determ} holds, $\\gamma$ is a bounded, adapted,\n$\\Xi$-valued process and $X^{x,\\gamma}$ is the mild solution 
of\nequation\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\ndX^{x,\\gamma}_t =AX^{x,\\gamma}_{t} dt+F(\nX^{x,\\gamma}_{t} ) dt+GdW_{t}+G\\gamma_{t}\\,dt ,\\quad t\\geq 0, \\\\\nX^{x,\\gamma}_{0} =x\\in E.\n\\end{array} \\right. \\label{sde-gamma}\n\\end{equation}\nthen it is still true that\n\\begin{equation}\\label{rel-estimate-Xgamma}\n \\sup_{t\\geq 0} \\mathbb E |X^{x,\\gamma}_t| \\leq C_{\\gamma}(1+|x|),\\qquad x\\in E,\n\\end{equation}\nfor a suitable constant $C_{\\gamma}$ depending only on\n a uniform bound for $\\gamma$.\n\\end{proposition}\n\\begin{proof} We let $Z_t=X^x_t-W^A_t$,\n$Z^n_t=J_n Z_t $, then\n$$\\frac{d}{dt } Z^n_t =\nAZ^n_t +J_nF(X^x_t) = AZ^n_t +\\left[F(Z^n_t+J_n W^A_t) - F(J_n\nW^A_t)\\right]+F( W^A_t)+\\delta^n_t\n$$ where $$\\delta^n_t= J_n F(X^x_t)-F(J_n X^x_t)\n+F(J_n W^A_t)-F( W^A_t).$$ Proceeding as in the proof of\nProposition \\ref{prop lip X} observing that, for all $t>0$,\n $\\displaystyle \\int_0^{t}|\\delta^n_s| ds \\rightarrow 0$ as $n\\rightarrow\\infty$, we get:\n$$|Z_t|\\leq e^{-\\eta t}|x|+\\int_0^{t} e^{-\\eta (t-s)}\n|F(W^A_s)|ds,\\;\\;\\; \\mathbb{P}-\\hbox{a.s.}$$ and\n(\\ref{prop-X-L^p-2}) follows from (\\ref{stimegaussunif}).\n\n\nIn the case in which $X^x$ is replaced by $X^{x, \\gamma}$ the\nproof is exactly the same just replacing $W^A_t$ by\n$W^{A,\\gamma}_t=W^A_t+\\int_0^t e^{(t-s)A}G\\gamma_s ds$.\n\n\n\nFinally to prove (\\ref{prop-X-L^p-1}) we notice that (see the\ndiscussion in \\cite{masiero}) the process $W^A$ is a Gaussian\nrandom variable with values in $C([0,T],E)$. Therefore by the\npolynomial growth of $F$ we get\n$$ \\mathbb E\\sup_{t\\in [0,T]} \\left[|W^A_t|^p + |F(W^A_t)|^p\\right]\\leq\nC_{p,T}(1+|x|^p),$$ and the claim follows as above.\n\\end{proof}\n\n\n$ $\n\n\n Finally the following result is proved exactly as\nTheorem 6.3.3. 
in \\cite{DP2}.\n\\begin{theorem}\\label{ergodicity}\nAssume that Hypotheses \\ref{general_hyp_forward} and\n \\ref{hyp_W_A F(W_A)} hold. Then equation (\\ref{sde}) has a unique\n invariant measure in $E$ that we will denote by $\\mu$. Moreover\n $\\mu$ is strongly\nmixing (that is, for all $x\\in E$, the law of $X_t^x$ converges\nweakly to $ \\mu$ as\n $t\\rightarrow \\infty$).\n Finally\nthere exists a constant $C>0$ such that for any bounded Lipschitz\nfunction $\\phi: E\\rightarrow \\mathbb{R}$,\n$$\\left|\\mathbb{E}\\phi(X^x_t)-\\int_E \\phi\\, d\\mu \\right|\\leq C(1+|x|)\ne^{-\\eta t \/2} \\Vert\\phi\\Vert_{\\hbox{\\em lip}}.$$\n\\end{theorem}\n\n\n\n\\section{Ergodic BSDEs (EBSDEs)}\n\n\nThis section is devoted to the following type of BSDEs with\ninfinite horizon\n\\begin{equation}\\label{EBSDE}\nY^x_t=Y^x_T +\\int_t^T\\left[\\psi(X^x_\\sigma,Z^x_\\sigma)-\n\\lambda\\right]d\\sigma-\\int_t^T Z^x_\\sigma\\, dW_\\sigma, \\quad 0\\le\nt\\le T <\\infty,\n\\end{equation}\nwhere $\\lambda$ is a real number and is part of the unknowns of\nthe problem; the equation is required to hold for every $t$ and\n$T$ as indicated. On the function $\\psi: E\\times \\Xi^*\n\\rightarrow {\\mathbb R}$ we assume the following:\n\n\n\\begin{hypothesis}\\label{hypothesisroyer} $ $ There exist\n$K_x, K_z>0$ such that\n$$ |\\psi(x,z)\n-\\psi(x',z')|\\le K_x|x-x'|+ K_z |z-z'|, \\qquad\n x,x'\\in E,\\;\nz,z'\\in\\Xi^*.\n$$\nMoreover $\\psi(\\,\\cdot\\,,0)$ is bounded. 
We denote $\\sup_{x\\in E\n}|\\psi(x,0)|$ by $M$.\n\\end{hypothesis}\nWe start by considering an infinite horizon equation with strictly\nmonotonic drift, namely, for $\\alpha>0$, the equation\n\\begin{equation}\\label{bsderoyer}\n{Y}^{x,\\alpha}_t={Y}^{x,\\alpha}_T +\\int_t^T(\\psi(X^{x}_\\sigma,\nZ^{x,\\alpha}_\\sigma)-\\alpha Y^{x,\\alpha}_\\sigma)d\\sigma-\\int_t^T\nZ^{x,\\alpha}_\\sigma dW_\\sigma, \\quad 0\\le t\\le T <\\infty.\n\\end{equation}\n\n\n\n\nThe existence and uniqueness of the solution to (\\ref{bsderoyer})\nunder Hypothesis \\ref{hypothesisroyer} were first studied by Briand\nand Hu in \\cite{bh} and then generalized by Royer in \\cite{royer}.\n They have established the following result when $W$ is a finite dimensional\n Wiener process, but the extension to the case in which $W$ is a\n Hilbert-valued Wiener process is immediate (see also \\cite{HuTe}).\n\n\n\\begin{lemma}\\label{lemmaroyer} Let us suppose that Hypotheses\n\\ref{general_hyp_forward} and\n\\ref{hypothesisroyer} hold.\n Then\n there exists a unique solution $(Y^{x,\\alpha},Z^{x,\\alpha})$\n to BSDE (\\ref{bsderoyer})\nsuch that $Y^{x,\\alpha}$ is a bounded continuous process, and\n$Z^{x,\\alpha}$ belongs to $L_{\\cal P, {\\rm\nloc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$.\n\n\nMoreover $|Y^{x,\\alpha}_t|\\leq {M}\/{\\alpha}$, $\\mathbb{P}$-a.s.\nfor all $t\\geq 0$.\n\\end{lemma}\nWe define $$v^{\\alpha}(x)=Y^{\\alpha,x}_0.\n$$\nWe notice that by the above $|v^{\\alpha}(x)|\\leq {M}\/{\\alpha}$ for\nall $x\\in E$. Moreover by the uniqueness of the solution of\nequation (\\ref{bsderoyer}) it follows that\n$Y^{\\alpha,x}_t=v^{\\alpha}(X^x_t)$.\n\n\n\nTo establish Lipschitz continuity of $ v^{\\alpha}$ (uniformly in\n$\\alpha$) we use a Girsanov argument due to P. Briand and Y. Hu,\nsee \\cite{bh}. 
Here and in the following we use an\ninfinite-dimensional version of the Girsanov formula that can be\nfound for instance in \\cite{DP1}.\n\\begin{lemma}\\label{lemma-lip-v} Under Hypotheses \\ref{general_hyp_forward}\nand \\ref{hypothesisroyer} the following\nholds for any $\\alpha>0$:\n$$|v^{\\alpha}(x) - v^{\\alpha}(x')| \\leq \\frac{K_x}{\\eta} |x-x'|,\n\\qquad x,x'\\in E. $$\n\\end{lemma}\n\\begin{proof} We briefly report the argument for the reader's convenience.\n\n\nWe set $\\tilde{Y}=Y^{\\alpha,x}-Y^{\\alpha,x'}$,\n$\\tilde{Z}=Z^{\\alpha,x}-Z^{\\alpha,x'},$\n$$\\beta_t=\\begin{cases}\n\\frac{\\displaystyle \\psi(X^{x'}_t,Z^{\\alpha,\nx'}_t)-\\psi(X^{x'}_t,Z^{\\alpha,x}_t)} {\\displaystyle\n|Z^{\\alpha,x}_t - Z^{\\alpha,x'}_t|_{\\Xi^*}^2}\\left( Z^{\\alpha,x}_t\n- Z^{\\alpha,x'}_t\\right)^*,& \\hbox{ if } Z^{\\alpha,x}_t \\neq Z^{\\alpha,x'}_t \\\\\n0, & \\hbox{ elsewhere, }\n \\end{cases}\n$$\n$$f_t=\\psi(X^{x}_t,\nZ^{x,\\alpha}_t)-\\psi(X^{x'}_t, Z^{x,\\alpha}_t). $$ By\nHypothesis \\ref{hypothesisroyer}, $\\beta$ is a bounded\n$\\Xi$-valued, adapted process; thus there exists a probability\n$\\tilde{\\mathbb{P}}$ under which $\\tilde{W}_{t}=\\int_0^{t} \\beta_s\nds + W_{t}$ is a cylindrical $\\Xi$-valued Wiener process for\n${t}\\in [0,T]$. 
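Explicitly, since $\\beta$ is bounded, Novikov's condition is\nsatisfied and $\\tilde{\\mathbb{P}}$ may be taken as the measure whose\ndensity with respect to $\\mathbb{P}$ on $\\mathcal{F}_T$ is the\nstochastic exponential (see e.g. \\cite{DP1})\n$$\n\\frac{d\\tilde{\\mathbb{P}}}{d\\mathbb{P}}\\Big|_{\\mathcal{F}_T}\n=\\exp\\left( -\\int_0^T \\beta_s\\, dW_s-\\frac{1}{2}\\int_0^T\n|\\beta_s|_{\\Xi}^2\\, ds\\right).\n$$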
Then $(\\tilde{Y},\\tilde{Z})$ verifies, for all\n$0\\le t\\le T <\\infty$,\n\\begin{equation}\\label{bsderoyer-girsanov}\n\\tilde{Y}_t=\\tilde{Y}_T -\\alpha \\int_t^T \\tilde{Y}_\\sigma d\\sigma\n+\\int_t^T f_{\\sigma}d\\sigma- \\int_t^T \\tilde{Z}_\\sigma\nd\\tilde{W}_\\sigma.\n\\end{equation}\nComputing $d (e^{-\\alpha t}\\tilde{Y}_t)$, integrating over\n$[0,T]$, estimating the absolute value and finally taking the\nconditional expectation\n $\\tilde{\\mathbb{E}}^{\\mathcal{F}_t}$ with respect to\n$\\tilde{\\mathbb{P}}$ and $\\mathcal{F}_t$ we get:\n$$ |\\tilde{Y}_t| \\leq e^{-\\alpha(T-t)} \\tilde{\\mathbb{E}}^{\\mathcal{F}_t}\n| \\tilde{Y}_T |+\n \\tilde{\\mathbb{E}}^{\\mathcal{F}_t}\n \\int_{t}^T e^{-\\alpha(s-t)} |f_s| ds. $$\nNow we recall that $ \\tilde{Y}$ is bounded and that $|f_t|\\leq\nK_x |X^{x}_t-X^{x'}_t|\\leq K_x e^{-\\eta t}|x-x'|$ by Proposition\n\\ref{prop lip X}. Thus letting $T\\rightarrow \\infty$ we get $\n|\\tilde{Y}_t| \\leq K_x (\\eta+\\alpha)^{-1}e^{-\\eta t} |x-x'| $\nand the claim follows setting $t=0$.\n\\end{proof}\n\n\n$ $\n\n\n\\noindent By the above Lemma if we set\n$$\\overline{v}^{\\alpha}(x)= {v}^{\\alpha}(x)- {v}^{\\alpha}(0),$$\nthen $ | \\overline{v}^{\\alpha}(x)|\\leq K_x \\eta^{-1}|x|$ for\nall $x\\in E$ and all $\\alpha>0$. 
Moreover by Lemma\n\\ref{lemmaroyer} $\\alpha |{v}^{\\alpha}(0)|\\leq M$.\n\n\n \\noindent Thus by a diagonal procedure we can construct a\n sequence $\\alpha_n\\searrow 0$ such that for all $x$ in a\ncountable dense subset $D\\subset E$\n \\begin{equation}\\label{def-of-lambda}\n {\\overline{v}}^{\\alpha_n}(x)\\rightarrow \\overline{v}(x),\\qquad\n\\alpha_n v^{\\alpha_n}(0)\\rightarrow \\overline{\\lambda},\n \\end{equation}\nfor a suitable function $ \\overline{v}: D \\rightarrow\n\\mathbb{R}$ and for a suitable real number $\\overline{\\lambda}$.\n\n\n Moreover, by Lemma \\ref{lemma-lip-v}, $ | \\overline{v}^{\\alpha}(x)-\n\\overline{v}^{\\alpha}(x')|\\leq K_x \\eta^{-1}|x-x'|$ for all\n$x,x'\\in E$ and all $\\alpha>0$. So $\\overline{v}$ can be extended\nto a Lipschitz function defined on the whole $E$ (with Lipschitz\nconstant $K_x \\eta^{-1} $) and\n\\begin{equation}\\label{def-of-v} {\\overline{v}}^{\\alpha_n}(x)\\rightarrow\n\\overline{v}(x),\\qquad x\\in E.\\end{equation}\n\n\n\\begin{theorem} \\label{main-EBSDE} Assume Hypotheses\n\\ref{general_hyp_forward} and\n\\ref{hypothesisroyer} hold. Moreover let $\\bar \\lambda$ be the\nreal number in (\\ref{def-of-lambda}) and define $\\bar Y^x_t= \\bar\nv(X^x_t)$ (where $\\overline{v}$ is the Lipschitz function with\n$\\overline{v}(0)=0$ defined in (\\ref{def-of-v})). Then there\nexists a process $\\overline{Z}^{x}\\in L_{\\cal P, {\\rm\nloc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$\n such that $\\mathbb{P}$-a.s. the EBSDE\n (\\ref{EBSDE}) is satisfied by\n $(\\bar Y^x,\\bar Z^x, \\bar \\lambda)$ for all $0\\leq t\\leq T$.\n\n\nMoreover there exists a measurable function $\\overline{\\zeta}:\nE\\rightarrow \\Xi^*$ such that\n$\\overline{Z}^{x}_t=\\overline{\\zeta}(X^x_t)$.\n\\end{theorem}\n\n\n\\begin{proof} Let $\\overline{Y}^{x,\\alpha}_t={Y}^{x,\\alpha}_t-v^{\\alpha}(0)=\n\\overline{v}^{\\alpha}({X}^{x}_t)$. 
Clearly we have,\n$\\mathbb{P}$-a.s.,\n\\begin{equation}\\label{equation-proof-main-1}\n \\overline{Y}^{x,\\alpha}_t=\\overline{Y}^{x,\\alpha}_T +\\int_t^T(\\psi(X^{x}_\\sigma,\nZ^{x,\\alpha}_\\sigma)-\\alpha \\overline{Y}^{x,\\alpha}_\\sigma-\\alpha\n{v}^{\\alpha}(0))d\\sigma -\\int_t^T Z^{x,\\alpha}_\\sigma dW_\\sigma,\n\\quad 0\\le t\\le T <\\infty.\n\\end{equation}\nSince $|\\bar v^{\\alpha}(x)|\\leq K_x|x|\/\\eta $, inequality\n(\\ref{prop-X-L^p-1}) ensures that\n$\\mathbb{E}\\sup_{t\\in[0,T]}\\left[\\sup_{\\alpha>0}\n|\\overline{Y}^{x,\\alpha}_t|^2\\right]< +\\infty$ for any $T>0$.\nThus, if we define $\\overline{Y}^x=\\overline{v}(X^x)$, then by\nthe dominated convergence theorem\n$$\\mathbb{E} \\int_0^T |\\overline{Y}^{x,\\alpha_n}_t -\\overline{Y}^{x}_t|^2 dt\n \\rightarrow 0\\quad \\hbox{and}\\quad\n\\mathbb{E} |\\overline{Y}^{x,\\alpha_n}_T-\\overline{Y}^{x}_T|^2\n\\rightarrow 0\n$$\nas $n\\rightarrow \\infty$ (where $\\alpha_n \\searrow 0$ is a\nsequence for which (\\ref{def-of-lambda}) and (\\ref{def-of-v})\nhold).\n\n\n\nWe claim now that there exists $\\overline{Z}^{x}\\in L_{\\cal P,\n{\\rm loc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$ such that\n $$\\mathbb{E} \\int_0^T |{Z}^{x,\\alpha_n}_t -\\overline{Z}^{x}_t|_{\\Xi^*}^2 dt\n \\rightarrow 0$$\nfor all $T>0$. Let $\\tilde{Y}={\\bar Y}^{x,\\alpha_n}-{\\bar Y}^{x,\\alpha_m}$,\n$\\tilde{Z}={Z}^{x,\\alpha_n}-{Z}^{x,\\alpha_m}$. Applying It\\^o's\nrule to $\\tilde{Y}^2$ we get by standard computations\n$$\\tilde{Y}^2_0+\\mathbb{E}\\int_0^T |\\tilde{Z}_t|_{\\Xi^*}^2 dt\n=\\mathbb{E}{\\tilde Y}^2_T + 2\\mathbb{E}\\int_0^T \\tilde \\psi_t\n\\tilde Y_t dt -2 \\mathbb{E}\\int_0^T \\left[\\alpha_n\n{Y}^{x,\\alpha_n}_t - \\alpha_m {Y}^{x,\\alpha_m}_t\\right] \\tilde\nY_t\\,dt\n$$\nwhere $\\tilde\n\\psi_t=\\psi(X^x_t,Z^{x,\\alpha_n}_t)-\\psi(X^x_t,Z^{x,\\alpha_m}_t)$.\nWe notice that $|\\tilde\\psi_t| \\leq K_z|\\tilde Z _t|$ and\n$\\alpha_n |{Y}^{x,\\alpha_n}_t|\\leq M$. 
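Indeed, by Young's inequality the terms on the right-hand side can be\nestimated as\n$$\n2|\\tilde \\psi_t \\tilde Y_t|\\le 2K_z|\\tilde Z_t|_{\\Xi^*}|\\tilde Y_t|\\le\n\\frac{1}{2}|\\tilde Z_t|_{\\Xi^*}^2+2K_z^2|\\tilde Y_t|^2,\\qquad\n\\left| \\alpha_n {Y}^{x,\\alpha_n}_t - \\alpha_m {Y}^{x,\\alpha_m}_t\\right|\\le 2M,\n$$\nso that the quadratic term in $\\tilde Z$ can be absorbed into the\nleft-hand side.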
Thus\n$$\n\\mathbb{E}\\int_0^T |\\tilde{Z}_t|_{\\Xi^*}^2 dt \\leq c\\left[\n\\mathbb{E} (\\tilde Y^x_T)^2 +\\mathbb{E}\\int_0^T (\\tilde{Y}^x_t)^2\ndt +\\mathbb{E}\\int_0^T |\\tilde{Y}^x_t| dt \\right].$$ It follows\nthat the sequence $\\{{Z}^{x,\\alpha_m}\\}$ is Cauchy in\n$L^2(\\Omega;L^2(0,T;\\Xi^*))$ for all $T>0$ and our claim is\nproved.\n\n\nNow we can pass to the limit as $n\\rightarrow \\infty$ in equation\n(\\ref{equation-proof-main-1}) to obtain\n\\begin{equation}\\label{equation-proof-main-2}\n \\overline{Y}^{x}_t=\\overline{Y}^{x}_T +\\int_t^T(\\psi(X^{x}_\\sigma,\n\\overline{Z}^{x}_\\sigma)-\\overline{\\lambda })d\\sigma-\\int_t^T\n\\overline{Z}^{x}_\\sigma dW_\\sigma, \\quad 0\\le t\\le T <\\infty.\n\\end{equation}\nWe notice that the above equation also ensures continuity of the\ntrajectories of $\\overline{Y}^{x}$. It remains now to prove that we can\nfind a measurable function $\\bar \\zeta:E\\rightarrow \\Xi^*$ such\nthat\n $\\overline{Z}^{x}_t=\\bar \\zeta (X^x_t)$, $\\mathbb{P}$-a.s. 
for almost every $t\\geq 0$.\n\n\nBy a general argument, see for instance \\cite{Fu}, we know that\nfor all $\\alpha>0$ there exists $\\zeta^{\\alpha}:E\\rightarrow\n\\Xi^*$ such that\n ${Z}^{x,\\alpha}_t=\\zeta^{\\alpha} (X^x_t)$, $\\mathbb{P}$-a.s.\n for almost every $t\\geq 0$.\n\n\nTo construct $\\bar \\zeta$ we need some more regularity of the processes\n${Z}^{x,\\alpha}$ with respect to $x$.\n\n\nIf we compute $d ({Y}^{x,\\alpha}_t-{Y}^{x',\\alpha}_t)^2$ we get by\nthe Lipschitz character of $\\psi$:\n$$ \\begin{array} {l}\n\\displaystyle \\mathbb{E}\\int_0^T\n|Z^{x,\\alpha}_t-Z^{x',\\alpha}_t|_{\\Xi^*}^2 dt \\leq \\mathbb{E}\n(v^{\\alpha}(X^x_T)- v^{\\alpha}(X^{x'}_T))^2\n \\\\\n\\quad + \\displaystyle \\mathbb{E}\\int_0^T\n\\left(K_x|X^x_s-X^{x'}_s|\n+K_z|Z^{x,\\alpha}_s-Z^{x',\\alpha}_s|\\right)\n\\left|v^{\\alpha}(X^x_s)- v^{\\alpha}(X^{x'}_s)\\right| ds.\n \\end{array}$$\nBy the Lipschitz continuity of $v^{\\alpha}$ (uniform in $\\alpha$),\nthat of $\\psi$, and Proposition \\ref{prop lip X} we immediately\nget:\n\\begin{equation}\\label{lip-of-Z}\n \\mathbb{E}\\int_0^T |Z^{x,\\alpha}_t-Z^{x',\\alpha}_t|_{\\Xi^*}^2 dt \\leq c |x-x'|^2,\n\\end{equation}\nfor a suitable constant $c$ (that may depend on $T$).\n\n\nNow we fix an arbitrary $T>0$ and, by a diagonal procedure (using\nseparability of $E$), we construct a subsequence\n$(\\alpha_n')\\subset (\\alpha_n)$ such that $\\alpha_n' \\searrow 0$\nand\n$$\\mathbb{E}\\int_0^T |Z^{x,\\alpha_n'}_t-Z^{x,\\alpha_m'}_t|_{\\Xi^*}^2 dt \\leq 2^{-n}\n$$ for all $m\\geq n$ and for all $x\\in E$.\nConsequently $Z^{x,\\alpha_n'}_t\\rightarrow \\overline{Z}^x_t$,\n$\\mathbb{P}$-a.s. for a.e. $t\\in [0,T]$. Then we set:\n$$\\bar \\zeta(x)=\\left\\{\\begin{array}{ll} \\lim_n \\zeta^{\\alpha_n'}(x),\n& \\hbox{ if the limit exists in }\\Xi^*,\\\\\n0, & \\hbox{ elsewhere.}\\end{array}\\right.$$ Since\n$Z^{x,\\alpha_n'}_t= \\zeta^{\\alpha_n'}(X^x_t)\\rightarrow\n\\overline{Z}^{x}_t$ $\\mathbb{P}$-a.s. for a.e. 
$t\\in [0,T]$ we\nimmediately get that, for all $x\\in E$, the process $X^x_t$\nbelongs $\\mathbb{P}$-a.s. for a.e. $t\\in [0,T]$ to the set where\n$\\lim_n \\zeta^{\\alpha_n'}$ exists and consequently\n $\\overline{Z}^{x}_t=\\bar \\zeta(X^x_t)$.\n\\end{proof}\n\\begin{remark}\\begin{em} We notice that the solution we\nhave constructed above has the following ``linear growth''\nproperty with respect to $X$: there exists $c>0$ such that,\n$\\mathbb{P}$-a.s.,\n\\begin{equation}\\label{growt-of-Y}\n|\\overline{Y}^x_t|\\leq c |X^x_t| \\hbox{ for all $t\\geq 0$}.\n\\end{equation}\n\\end{em}\n\\end{remark}\nIf we require a similar growth condition then we immediately obtain\nuniqueness of $\\lambda$.\n\\begin{theorem}\\label{th-uniq-lambda} Assume that,\nin addition to Hypotheses \\ref{general_hyp_forward}, \\ref{hyp_W_A\nF(W_A)} and \\ref{hypothesisroyer}, Hypothesis\n\\ref{hyp-convol-determ} holds as well. Moreover suppose that, for\nsome $x\\in E$, the triple $(Y',Z',\\lambda')$ verifies\n$\\mathbb{P}$-a.s. equation\n (\\ref{EBSDE}) for all $0\\leq t\\leq T$,\nwhere\n $Y'$ is a progressively measurable continuous process, $Z'$ is a process\n in $L_{\\cal P, {\\rm loc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$ and\n $\\lambda'\\in \\mathbb{R}$.\n Finally assume that there exists $c_x>0$ (that may depend\n on $x$) such that\n$\\mathbb{P}$-a.s.\n$$\n |Y'_t|\\leq c_x (|X^x_t|+1) , \\hbox{ for all $t\\geq 0$}.\n$$ Then $\\lambda'=\\bar \\lambda$.\n\\end{theorem}\n\\begin{proof}\nLet $\\tilde \\lambda=\\lambda'-\\bar \\lambda$, $\\tilde\nY=Y'-\\overline{Y}^x$, $\\tilde Z=Z'-\\overline{Z}^x$. 
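In detail, subtracting the equation (\\ref{EBSDE}) satisfied by\n$(\\overline{Y}^x,\\overline{Z}^x,\\bar \\lambda)$ from the one satisfied by\n$(Y',Z',\\lambda')$ we obtain, for all $0\\le t\\le T$,\n$$\n\\tilde Y_t=\\tilde Y_T+\\int_t^T\\left[ \\psi(X^x_\\sigma,Z'_\\sigma)\n-\\psi(X^x_\\sigma,\\overline{Z}^{x}_\\sigma)-\\tilde \\lambda\\right] d\\sigma\n-\\int_t^T \\tilde Z_\\sigma\\, dW_\\sigma.\n$$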
By easy\ncomputations:\n$$\\tilde \\lambda=T^{-1}\\left[\\tilde Y_T-\\tilde Y_0\\right]+T^{-1}\\int_0^T \\tilde Z_t \\gamma_t dt\n-T^{-1}\\int_0^T \\tilde Z_t dW_t$$ where\n$$\\gamma_t:=\\begin{cases} \\frac{\\displaystyle \\psi(X^{x}_t,Z'_t)-\\psi(X^{x}_t,\\overline{Z}^{x}_t)}{\\displaystyle |Z'_t - \\overline{Z}^{x}_t|_{\\Xi^*}^2}\\left(Z'_t - \\overline{Z}^{x}_t \\right)^*,& \\hbox{ if } Z'_t \\neq \\overline{Z}^{x}_t, \\\\\n0, & \\hbox{ elsewhere, }\n \\end{cases}\n$$\nis a bounded $\\Xi$-valued progressively measurable process. By the\nGirsanov Theorem there exists a probability measure\n$\\mathbb{P}_{\\gamma}$ under which $W^{\\gamma}_t=-\\int_0^t \\gamma_s\nds+W_t$, $t\\in [0,T]$, is a cylindrical Wiener process in $\\Xi$.\nThus computing the expectation with respect to $\\mathbb{P}_{\\gamma}$\nwe get\n$$\\tilde \\lambda=T^{-1}\\mathbb{E}^{\\mathbb{P}_{\\gamma}}\n\\left[\\tilde Y_T-\\tilde Y_0\\right].$$ Consequently, taking into\naccount (\\ref{growt-of-Y}) and the assumed bound on $Y'$,\n \\begin{equation}\\label{eq-proof-uniq-lambda}\n|\\tilde \\lambda|\\leq c T^{-1}\\mathbb{E}^{\\mathbb{P}_{\\gamma}}\n(|X^x_T|+1)+ c T^{-1}(|x|+1).\n \\end{equation}\nWith respect to $W^{\\gamma}$, $X^x$ is the mild solution of\n$$\n\\left\\{\n\\begin{array}{l}\ndX^{x,\\gamma}_t =AX^{x,\\gamma}_{t} dt+F( X^{x,\\gamma}_{t} )\ndt+GdW^{\\gamma}_{t}+G\\gamma_{t}\\,dt ,\n\\quad t\\geq 0 \\\\\nX^{x,\\gamma}_{0} =x\\in E,\n\\end{array} \\right.\n$$\nand by (\\ref{rel-estimate-Xgamma}) we get\n$\\sup_{T>0}\\mathbb{E}^{\\mathbb{P}_{\\gamma}}|X^x_T|<\\infty$. So if\nwe let $T\\rightarrow\\infty$ in (\\ref{eq-proof-uniq-lambda}) we\nconclude that $\\tilde\\lambda=0$.\n\\end{proof}\n\n\n\\begin{remark} \\em\nThe solution to EBSDE (\\ref{EBSDE}) is, in general, not unique.\nIt is evident that the equation is invariant with respect to\naddition of a constant to $Y$, but we can also construct an\narbitrary number of solutions that do not differ only by a\nconstant (even if we require them to be bounded). 
These solutions, however, are not Markovian.\n\n\n\nIndeed, consider the equation\n\\begin{equation}\\label{eq:nouniqueness}\n-dY_t=[\\psi(Z_t)-\\lambda]dt-Z_tdW_t,\n\\end{equation}\nwhere $W$ is a standard Brownian motion and\n$\\psi:\\mathbb{R}\\rightarrow \\mathbb{R}$ is differentiable, bounded\nand has a bounded derivative.\n\n\nOne solution is $Y=0$, $Z=0$, $\\lambda=\\psi(0)$ (without loss of\ngenerality we can suppose that $\\psi(0)=0$).\n\n\nLet now $\\phi:\\mathbb{R}\\rightarrow \\mathbb{R}$ be an arbitrary\nbounded differentiable function with bounded derivative. The\nfollowing BSDE on $[t,T]$ admits a solution:\n$$\\left\\{\\begin{array}{rcl}\n-dY_s^{x,t}&=&\\psi(Z_s^{x,t})ds-Z_s^{x,t}dW_s,\\\\\nY_T^{x,t}&=&\\phi(x+W_T-W_t).\n\\end{array}\\right.$$\nIf we define $u(t,x)=Y_t^{x,t}$ then both $u$ and $\\nabla u$ are\nbounded. Moreover if $\\tilde{Y}_t=Y_t^{0,0}=u(t,W_t),\\\n\\tilde{Z}_t=Z_t^{0,0}=\\nabla u(t,W_t)$ then\n$$\\left\\{\\begin{array}{rcl}\n-d\\tilde{Y}_t&=&\\psi(\\tilde{Z}_t)dt-\\tilde{Z}_tdW_t,\\quad\nt\\in [0,T],\\\\\n\\tilde{Y}_T&=&\\phi(W_T).\n\\end{array}\\right.$$\n Then it is enough to extend with\n$\\tilde{Y}_t=\\tilde{Y}_T,\\ \\tilde{Z}_t=0$ for $t>T$ to construct a\nbounded solution to (\\ref{eq:nouniqueness}).\n\\end{remark}\n\\begin{remark}\\em The existence result in Theorem \\ref{main-EBSDE}\ncan be easily extended to the case of $\\psi$ only satisfying\nthe conditions\n$$ |\\psi(x,z)\n-\\psi(x',z)|\\le K_x|x-x'|,\\quad |\\psi(x,0)|\\le M,\n\\quad |\\psi(x,z)|\n\\le K_z(1+|z|).\n$$\nIndeed we can construct a sequence $\\{\\psi^n : n\\in \\mathbb{N}\\}$\nof functions Lipschitz in $x$ and $z$ such that for all $x,x'\\in\nE$, $z \\in \\Xi^*$, $n\\in \\mathbb{N}$,\n$$ |\\psi^n(x,z)\n-\\psi^n(x',z)|\\le K'_x|x-x'|;\\quad |\\psi^n(x,0)|\\leq M';\\quad\n\\lim_{n\\rightarrow \\infty}|\\psi^n(x,z) -\\psi(x,z)|=0.\n$$\nThis can be done by projecting $z$ onto the subspaces generated by\na basis in $\\Xi^*$ and then regularizing by 
the standard\nmollification techniques, see \\cite{FuTeBE}.\nWe know that if $(\\bar Y^{x,n}, \\bar Z^{x,n},\\lambda_n)$ is the\nsolution of the EBSDE (\\ref{EBSDE}) with $\\psi$ replaced by\n$\\psi^n$ then $\\bar Y^{x,n}_t=\\bar v^n(X^x_t)$ with\n$$ |\\bar v^n(x)\n-\\bar v^n(x')|\\le \\dfrac{K'_x}{\\eta}|x-x'|;\\quad \\bar v^n(0)=0\n;\\quad |\\lambda_n|\\leq M'.\n$$\nThus we can assume (considering, if needed, a subsequence) that\n$\\bar v^n(x) \\rightarrow \\bar v(x)$ and $\\lambda_n \\rightarrow\n\\lambda$.\nThe rest of the proof is identical to the one of Theorem\n\\ref{main-EBSDE}.\n\\end{remark}\n\n\n\\section{Differentiability}\n\n\n\nWe are now interested in the differentiability of the\nsolution to the EBSDE (\\ref{EBSDE}) with respect to $x$.\n\n\\begin{theorem}\\label{th-diff} Assume that Hypotheses\n\\ref{general_hyp_forward} and\n\\ref{hypothesisroyer} hold. Moreover assume that $F$ is of class\n${\\cal G}^1(E,E)$ with $\\nabla F$ bounded on bounded sets of $E$.\nFinally assume that $\\psi$ is of class ${\\cal G}^1(E\\times\n\\Xi^*,E)$. Then the function $\\overline{v}$ defined in\n(\\ref{def-of-v}) is of class ${\\cal G}^1(E,\\mathbb{R})$.\n\\end{theorem}\n\\begin{proof} In \\cite{masiero} it is proved that for arbitrary $T>0$ the map\n$x\\rightarrow X^x$ is of class $\\mathcal{G}^1$ from $E$ to\n$L^p_{\\mathcal{P}}(\\Omega,C([0,T],E))$. 
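For the reader's convenience we recall that, at least formally (see\n\\cite{masiero} for the rigorous mild formulation), the directional\nderivative $\\xi_t=\\nabla X^x_t h$, $h\\in E$, solves the first\nvariation equation\n$$\n\\frac{d}{dt}\\,\\xi_t=A\\xi_t+\\nabla F(X^x_t)\\xi_t,\\qquad \\xi_0=h,\n$$\nthe noise term disappearing since it is additive.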
Moreover Proposition\n\\ref{prop lip X} ensures that for all $h\\in E$,\n\\begin{equation}\\label{proof-diff-estim-nabla-X}\n |\\nabla X^x_t h|\\leq e^{-\\eta t}|h|,\\quad \\hbox{$\\mathbb{P}$-a.s.,\n for all $t\\in [0,T]$}.\n\\end{equation}\nUnder the previous conditions one can proceed exactly as\nin Theorem 3.1 of \\cite{HuTe} to\nprove that for all $\\alpha >0$ the map $v^{\\alpha}$ is of class\n$\\mathcal{G}^1$.\n\n\n$ $\n\n\nThen we consider again\nequation (\\ref{bsderoyer}):\n$$\n{Y}^{x,\\alpha}_t ={Y}^{x,\\alpha}_T\n+\\int_t ^T(\\psi(X^{x}_\\sigma, Z^{x,\\alpha}_\\sigma)-\\alpha\nY^{x,\\alpha}_\\sigma)d\\sigma-\\int_t ^T Z^{x,\\alpha}_\\sigma\ndW_\\sigma, \\quad 0\\le t \\le T <\\infty,\n$$\nwe recall that ${Y}^{x,\\alpha}_T={v}^{\\alpha}(X^{x}_T)$,\n and apply again \\cite{masiero} (see Proposition 4.2 there) and \\cite{FuTe1}\n (see Proposition 5.2 there) to obtain that for all $\\alpha >0 $\n the map $x\\rightarrow Y^{x,\\alpha}$ is of class $\\mathcal{G}^1$\n from $E$ to $L^2_{\\mathcal{P}}(\\Omega,C([0,T],\\mathbb{R}))$ and the map\n$x\\rightarrow Z^{x,\\alpha}$ is of class $\\mathcal{G}^1$ from $E$\nto $L^2_{\\mathcal{P}}(\\Omega,L^2([0,T],\\Xi^*))$. Moreover for all\n$h\\in E$ it holds (for all $t>0$ since $T$ was arbitrary)\n$$\n-d\\nabla Y^{\\alpha,x}_th=[\\nabla_x\\psi(X^x_t,Z_t^{\\alpha,x})\n\\nabla X_t^xh+\\nabla_z\\psi(X^x_t,Z_t^{\\alpha,x})\\nabla\nZ_t^{\\alpha,x}h-\\alpha\\nabla Y^{\\alpha,x}_th]dt\n-\\nabla Z^{\\alpha,x}h dW_t.\n$$\nWe also know that $|Y^{\\alpha,x}_t|\\le {M}\/{\\alpha}$. 
Now we set\n$$U^{\\alpha,x}_t=e^{\\eta t}\\nabla Y^{\\alpha,x}_t h,\n\\quad V^{\\alpha,x}_t=e^{\\eta t}\\nabla Z^{\\alpha,x}_t h.$$ Then\n$(U^{\\alpha,x},V^{\\alpha,x})$ satisfies the following BSDE:\n\\begin{eqnarray*}\n-dU^{\\alpha,x}_t&=&[e^{\\eta t}\\nabla_x\\psi(X^x_t,Z_t^{\\alpha,x})\n\\nabla X_t^x h-(\\alpha+\\eta)U^{\\alpha,x}_t +\\nabla_z\n\\psi(X^x_t,Z_t^{\\alpha,x}) V^{\\alpha,x}_t]dt-V^{\\alpha,x}_tdW_t.\n\\end{eqnarray*}\nBy (\\ref{proof-diff-estim-nabla-X}) and the usual Girsanov\nargument (recall that $\\nabla_x \\psi$ and $\\nabla_z \\psi$ are\nbounded),\n$$|U^{\\alpha,x}_t|\\le \\frac{c}{\\alpha+\\eta},\\;\n\\forall t\\geq 0,\\; \\hbox{$\\mathbb P-$a.s. $\\qquad$ i.e. } \\qquad\n|\\nabla Y_t^{x,\\alpha}|\\le e^{-\\eta t}\\frac{c}{\\alpha+\\eta}.$$\nMoreover, consider the limit equation, with unknown\n$(U^{x},V^{x})$,\n\\begin{equation}\\label{eq:limit}\n-dU^x_t=[e^{\\eta t}\\nabla_x\\psi(X^x_t,\\bar Z_t^{x}) \\nabla\nX_t^x h-\\eta U^x_t+\\nabla_z\\psi(X^x_t,\\bar Z_t^{x}) V^x_t]dt-V^x_tdW_t,\n\\end{equation}\nwhich, since $|e^{\\eta t}\\nabla_x\\psi \\nabla X_t^x h|$ is bounded,\nhas a unique solution such that $U^x$ is bounded and $V^x$\nbelongs to $L_{\\cal P, {\\rm loc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$\n(see \\cite{bh} and \\cite{royer}).\n\n\nWe know that for a suitable sequence $\\alpha_n \\searrow 0$,\n$$\\bar v^{\\alpha_n}(x)= Y^{x,\\alpha_n}_0-Y^{0,\\alpha_n}_0\\rightarrow \\bar{Y}^x_0,$$\nand we claim now that\n$$ \\nabla \\bar v^{\\alpha_n}(x)=\\nabla Y_0^{x,\\alpha_n}=U_0^{x,\\alpha_n}\n\\rightarrow U_0^x.$$ To prove this we introduce the finite horizon\nequations: for $t\\in [0,N]$,\n$$\\begin{cases}\n& -dU_t^{x,\\alpha,N}=[e^{\\eta t}\\nabla_x\\psi(X^x_t,Z_t^{x,\\alpha})\n \\nabla X_t^x h-(\\alpha+\\eta)U_t^{x,\\alpha,N}\n +\\nabla_z \\psi (X^x_t,Z_t^{x,\\alpha}) V_t^{x,\\alpha,N}]dt\\\\\n& \\qquad\\qquad\\qquad - V^{x,\\alpha,N}_tdW_t,\\\\\n& U_N^{x,\\alpha,N}=0.\n\\end{cases}$$\n$$\\begin{cases}& -dU_t^{x,N}=[e^{\\eta t}\\nabla_x\\psi(X^x_t,\\bar 
Z_t^{x})\n\\nabla X_t^x h-\\eta U_t^{x,N}\n+\\nabla_z \\psi (X^x_t,\\bar Z^{x}_t) V_t^{x,N}]dt-V^{x,N}_tdW_t,\\\\\n& U_N^{x,N}=0.\n\\end{cases}$$\nSince $\\displaystyle \\mathbb E\\int_0^N |Z^{x,\\alpha_n}_s-\\bar Z^{x}_s|^2\nds\\rightarrow 0$ it is easy to verify that, for all fixed $N>0$,\n$U_0^{x,\\alpha_n,N}\\rightarrow U_0^{x,N}$.\n\n\nOn the other side a standard application of the Girsanov Lemma gives\n(see \\cite{HuTe})\n$$|U_0^{x,\\alpha_n,N}-U_0^{x,\\alpha_n}|\\le \\frac{c}{\\alpha_n+\\eta}e^{-\\eta N}, \\qquad |U_0^{x,N}-U_0^{x}|\\le \\frac{c}{\\eta}e^{-\\eta N},$$\nfor a suitable constant $c$.\n\n\nThus a standard argument implies $U_0^{x,\\alpha_n}\\rightarrow\nU_0^{x}$. An identical argument also ensures continuity of\n$U_0^{x}$ with respect to $x$ (also taking into account\n(\\ref{lip-of-Z})). The proof is therefore completed.\n\\end{proof}\n\n\n$ $\n\n\nAs usual in the theory of Markovian BSDEs, the differentiability\nproperty allows one to identify the process $\\bar Z^x$ as a function\nof the process $X^x$. To deal with our Banach space setting we\nneed to make the following extra assumption:\n\n\\begin{hypothesis}\\label{Hyp-masiero}\nThere exists a Banach space $\\Xi_0$, densely and continuously\nembedded in $\\Xi$, such that $G\\, (\\Xi_0) \\subset E$ and $G\n:\\Xi_0 \\rightarrow E$ is continuous.\n\\end{hypothesis}\n\nWe note that this condition is satisfied in most applications. In\nparticular it is trivially true in the special case $E=H$ just by\ntaking $\\Xi_0=\\Xi$, since $G$ is assumed to be a linear bounded\noperator from $\\Xi$ to $H$. The following is proved in\n\\cite[Theorem 3.17]{masiero}:\n\n\n\\begin{theorem}\n \\label{theorem-identif-Z}\nAssume that Hypotheses \\ref{general_hyp_forward},\n \\ref{hypothesisroyer} and \\ref{Hyp-masiero} hold.\nMoreover assume that $F$ is of class ${\\cal G}^1(E,E)$ with\n$\\nabla F$ bounded on bounded subsets of $E$ and $\\psi$ is of\nclass ${\\cal G}^1(E\\times \\Xi^*,E)$. 
Then $\\bar Z^x_t=\\nabla \\bar\nv(X^x_t)G$, $\\mathbb{P}$-a.s. for a.e. $t\\geq 0$.\n\\end{theorem}\n\\begin{remark} \\label{precision}\\begin{em}\nWe notice that $\\nabla \\bar v(x)G\\xi$ is only defined for $\\xi\\in\n\\Xi_0$ in general, and the conclusion of\nTheorem \\ref{theorem-identif-Z} should be stated more precisely\nas follows: for $\\xi\\in\n\\Xi_0$ the equality $\\bar Z^x_t\\xi=\\nabla \\bar v(X^x_t)G\\xi$\nholds $\\mathbb{P}$-a.s. for almost every $t\\geq 0$. However,\nsince $\\bar Z^x$\nis a process with values in $\\Xi^*$, and more specifically\na process in $\nL^2_{\\mathcal{P}}(\\Omega,L^2([0,T],\\Xi^*))$, it follows that\n$\\mathbb P$-a.s. and\nfor almost every\n$t$ the\noperator $\\xi \\rightarrow \\nabla \\bar v(X^x_t)G\\xi$ can be\nextended to a bounded linear operator defined on the whole $\\Xi$.\nEquivalently,\nfor almost every\n$t$ and for almost all $x\\in E$ (with respect to the law of $X_t$)\nthe linear\noperator $\\xi \\rightarrow \\nabla \\bar v(x)G\\xi$ can be\nextended to a bounded linear operator defined on the whole $\\Xi$\n(see also Remark 3.18 in \\cite{masiero}).\n\\end{em}\n\\end{remark}\n\\begin{remark}\\label{boundedpsibar}\n\\begin{em} The above representation together with the fact that\n$\\bar v$ is Lipschitz with Lipschitz constant $K_x\\eta^{-1}$\nimmediately implies that, if $F$ is of class ${\\cal G}^1(E,E)$ and\n$\\psi$ is of class ${\\cal G}^1(E\\times \\Xi^*,E)$, then $|\\bar\n{Z}^x_t|_{\\Xi_0^*}\\leq K_x\\eta^{-1} |G|_{L(\\Xi_0,E)}$ for all $x\\in\nE$, $\\mathbb{P}$-a.s. for almost every $t\\geq 0$. Consequently we\ncan construct $\\bar \\zeta$ in Theorem \\ref{main-EBSDE}\n in such a way that it is bounded in the\n$\\Xi_0^*$ norm by $K_x\\eta^{-1} |G|_{L(\\Xi_0,E)}$.\n\n\nOnce this is proved we can extend the result to the case in which\n$\\psi$ is no longer differentiable but only Lipschitz, namely\nwe can prove that even in this case the process $\\bar\n{Z}^x$ is bounded. 
Indeed we can\nconsider a sequence $\\{\\psi_n : n\\in \\mathbb{N}\\}$ of functions of\nclass ${\\cal G}^1(E\\times \\Xi^*,E)$ such that for all $x,x'\\in E$,\n$z,z'\\in \\Xi^*$, $n\\in \\mathbb{N}$,\n$$ |\\psi_n(x,z)\n-\\psi_n(x',z')|\\le K_x|x-x'|+ K_z |z-z'|;\\quad \\lim_{n\\rightarrow\n\\infty}|\\psi_n(x,z) -\\psi(x,z)|=0.\n$$\nWe know that if $(\\bar Y^{x,n}, \\bar Z^{x,n},\\lambda_n)$ is the\nsolution of the EBSDE (\\ref{EBSDE}) with $\\psi$ replaced by\n$\\psi_n$ then $|\\bar {Z}^{x,n}_t|_{\\Xi_0^*}\\leq K_x\\eta^{-1}\n|G|_{L(\\Xi_0,E)}$. Then, as we did above, we can show (by proving that the\ncorresponding equations with monotonic generator converge\nuniformly in $\\alpha$) that $\\mathbb{E}\\int_0^T|\\bar {Z}^{x,n}_t\n-\\bar {Z}^{x}_t|_{\\Xi_0^*}^2dt\\rightarrow 0$ and the claim\nfollows.\n\n\nWe also notice that by the same argument we have $ |\\bar\n\\zeta^{\\alpha}(x)|_{\\Xi_0^*}\\leq K_x\\eta^{-1} |G|_{L(\\Xi_0,E)}$,\n$\\forall \\alpha>0$.\n\\end{em}\n\\end{remark}\nNow we introduce the Kolmogorov semigroup corresponding to $X$:\n for measurable and bounded $\\phi:\nE\\rightarrow \\mathbb{R}$ we define\n\\begin{equation}\\label{def-of-p}\nP_t[\\phi](x)=\\mathbb{E}\\, \\phi(X^x_t),\\qquad t\\ge 0,\\, x\\in E.\n\\end{equation}\n\\begin{definition}\\label{strongly-feller}\nThe semigroup $(P_t)_{t\\geq 0}$ is called strongly Feller if for\nall $t>0$ there exists $k_t$ such that for all measurable and\nbounded $\\phi: E\\rightarrow \\mathbb{R}$,\n$$|\nP_t[\\phi](x)- P_t[\\phi](x')|\\leq k_t \\Vert\\phi\\Vert_0 |x-x'|,\n\\qquad x,x'\\in E,\n$$\nwhere $\\Vert\\phi\\Vert_0=\\sup_{x\\in E}|\\phi(x)|$.\n\\end{definition}\n\nThis terminology is somewhat different from the classical one\n(namely, that $P_t$ maps measurable bounded functions into\ncontinuous ones,\n for all\n$t>0$), but it will be convenient for us.\n\n\n\n\\begin{definition}\\label{gen-diss} We say that $F$\nis genuinely dissipative if there exist $\\epsilon>0$ and $c>0$\nsuch that, for all 
$x,x'\\in E$, there exists $z^*\\in \\partial\n|x-x'|$ such that $\\langle F(x)-F(x'), z^*\\rangle_{E^*,E}\\leq -c\n|x-x'|^{1+\\epsilon}$.\n\\end{definition}\n\n\n\n\\begin{lemma}\\label{lemma-SF-dissip}\nAssume that Hypotheses \\ref{general_hyp_forward} and \\ref{hyp_W_A\nF(W_A)} hold.\nIf the Kolmogorov\nsemigroup $(P_t)$ is strongly Feller then for all bounded\nmeasurable $\\phi: E\\rightarrow\\mathbb{R}$,\n$$\\left|P_t[\\phi](x)-\\int_E \\phi(x)\\mu (dx)\\right|\n\\leq c e^{-\\eta (t\/4)}(1+|x|)\\Vert\\phi\\Vert_0.$$\nIf in addition $F$ is genuinely dissipative then\n$$\\left|P_t[\\phi](x)-\\int_E \\phi(x)\\mu (dx)\\right|\n\\leq c e^{-\\eta (t\/4)}\\Vert\\phi\\Vert_0.$$\n\\end{lemma}\n\\begin{proof} We fix $\\epsilon >0$. For $t>2$ we have,\nby Theorem \\ref{ergodicity},\n$$\n\\begin{array}{r}\\displaystyle\\left|P_t[\\phi](x)-\\int_E \\phi(x)\\mu (dx)\\right|=\n\\left|P_{t-1}[P_1[\\phi]](x)-\\int_E P_{1}[\\phi](x)\\mu (dx)\\right|\n\\leq C(1+|x|)\ne^{-\\eta t \/4} \\Vert P_{1}[\\phi]\\Vert_{\\hbox{lip}}\\\\\n\\displaystyle \\leq C(1+|x|) e^{-\\eta t \/4} k_{1}\\Vert\\phi\\Vert_0,\n\\end{array}$$\nand the first claim follows since $\\left|P_t[\\phi](x)-\\int_E\n\\phi(x)\\mu (dx)\\right|\\leq 2 \\Vert\\phi\\Vert_0$.\n\n\nIf now $F$ is genuinely dissipative then in \\cite{DP2}, Theorem\n6.4.1 it is shown that\n$$\\left|\\mathbb{E}\\phi(X^x_t)-\\int_E \\phi\\, d\\mu \\right|\\leq\nC e^{-\\eta t \/2} \\Vert\\phi\\Vert_{\\hbox{lip}}$$ and the second\nclaim follows by the same argument.\n\\end{proof}\n\nWe are now able to state and prove two corollaries\nof Theorems \\ref{th-diff} and \\ref{theorem-identif-Z}.\n\n\n\\begin{corollary}\\label{characterization of lambda}\nAssume that Hypotheses \\ref{general_hyp_forward}, \\ref{hyp_W_A\nF(W_A)}, \\ref{hypothesisroyer} and \\ref{Hyp-masiero} hold.\nMoreover assume that $F$ is of class $\\mathcal{G}^1$ with $\\nabla\nF$ bounded on bounded subsets of $E$, and that $\\psi$ is bounded\non each set $E\\times B$, where $B$ is any ball of $\\Xi_0^*$.\nFinally 
assume that the Kolmogorov semigroup $(P_t)$ is strongly\nFeller.\n\n\n\nThen the following holds:\n$$\\lambda=\\int_E \\psi(x,\\bar \\zeta(x))\\mu (dx),$$\nwhere $\\mu$ is the unique invariant measure of $X$.\n\\end{corollary}\n\\begin{proof} First notice that $\\overline{\\psi}:=\n\\psi(\\,\\cdot\\, , \\bar\\zeta(\\,\\cdot\\,))$ is bounded, by\nRemark \\ref{boundedpsibar}.\n Then\n$$T^{-1}\\mathbb{E}[\\bar Y ^x_0-\\bar Y ^x_T]=\nT^{-1}\\mathbb E \\int_0^T\\left (\\psi(X^x_t,\\bar \\zeta( X^x_t))- \\int_E\n\\overline{\\psi}\\, d\\mu \\right)dt+ \\left(\\int_E \\overline{\\psi}\\, d\\mu\n-\\lambda\\right).$$ We know that $T^{-1}\\mathbb{E}[\\bar Y ^x_0-\\bar\nY ^x_T]\\rightarrow 0$, by the argument\nin Theorem \\ref{th-uniq-lambda}.\nMoreover by the first conclusion of Lemma\n\\ref{lemma-SF-dissip}\n$$ T^{-1}\\mathbb E \\int_0^T\\left (\\psi(X^x_t,\\bar \\zeta( X^x_t))-\n\\int_E \\overline{\\psi}\\, d\\mu \\right)dt \\rightarrow 0,$$ and the claim\nfollows. \\end{proof}\n\n\\begin{corollary}\\label{boundedness of v}\nIn addition to the assumptions of Corollary \\ref{characterization\nof lambda} suppose that $F$ is genuinely dissipative. Then $\\bar\nv$ is bounded.\n\\end{corollary}\n\\begin{proof}\nLet $(Y^{x,\\alpha},Z^{x,\\alpha})$ be the solution of\n(\\ref{bsderoyer}). We know that $Y^{x,\\alpha}_t=v^{\\alpha}(X^x_t)$\nand $Z^{x,\\alpha}_t= \\zeta^{\\alpha}(X^x_t)$ with $v^{\\alpha}$\nLipschitz uniformly with respect to $\\alpha$ and $\\zeta^{\\alpha}$\nbounded in $\\Xi^*$ uniformly with respect to $\\alpha$. 
Let\n$\\psi^{\\alpha}=\\psi(\\,\\cdot\\,,\\bar \\zeta^{\\alpha}(\\,\\cdot\\,))$.\nUnder the present assumptions we conclude that also the maps\n$\\psi^{\\alpha}$ as well are bounded in $\\Xi^*$ uniformly with\nrespect to $\\alpha$.\n\n\nComputing $d (e^{-\\alpha t} \\bar Y^{x\\alpha}_t)$ we obtain,\n$$Y^{x,\\alpha}_0=\\mathbb{E} e^{-\\alpha T} Y^{x,\\alpha}_T+\n\\mathbb{E} \\int_0^T e^{-\\alpha t} \\psi^{\\alpha} (X^x_t)dt,$$ and\nfor $T\\rightarrow\\infty$,\n$$Y^{x,\\alpha}_0=\n\\mathbb{E} \\int_0^\\infty e^{-\\alpha t} \\psi^{\\alpha} (X^x_t)dt.$$\nSubtracting to both sides $\\alpha^{-1}\\int_E\n\\psi^{\\alpha}(x)\\mu(dx)$ we obtain\n$$\\left|Y^{x,\\alpha}_0-\\alpha^{-1}\\int_E \\psi^{\\alpha}(x)\\mu(dx)\\right|=\n\\left| \\int_0^\\infty e^{-\\alpha t} \\left[P_t[\\psi^{\\alpha}]\n(x)-\\int_E \\psi^{\\alpha}(x)\\mu(dx)\\right]dt\\right|\\leq 4c\n\\eta^{-1} \\Vert \\psi^\\alpha\\Vert_0 $$ where the last inequality\ncomes from the second conclusion of Lemma \\ref{lemma-SF-dissip}.\n\n\nThus $\\left|Y^{x,\\alpha}_0-Y^{0,\\alpha}_0\\right| \\leq 8 c\n\\eta^{-1} \\Vert \\psi^\\alpha\\Vert_0 $ and the claim follows since\nby construction $Y^{x,\\alpha}_0-Y^{0,\\alpha}_0 \\rightarrow \\bar v\n(x)$.\n\\end{proof}\n\\section{Ergodic Hamilton-Jacobi-Bellman equations}\nWe briefly show here that if $\\bar Y_0^x=\\bar v(x)$ is of class\n${\\cal G}^1$ then the couple $(v,\\lambda)$ is a mild solution of\nthe following ``ergodic'' Hamilton-Jacobi-Bellman equation:\n\\begin{equation}\n\\mathcal{L}v(x)\n+\\psi\\left( x,\\nabla v(x) G\\right) = \\lambda, \\quad x\\in E, \\label{hjb}%\n\\end{equation}\nWhere linear operator $\\mathcal{L}$ is formally defined by\n\\[\n\\mathcal{L}f\\left( x\\right) =\\frac{1}{2}Trace\\left(\nGG^{\\ast}\\nabla ^{2}f\\left( x\\right) \\right) +\\langle Ax,\\nabla\nf\\left( x\\right) \\rangle_{E,E^{\\ast}}+\\langle F\\left( x\\right)\n,\\nabla f\\left( x\\right) \\rangle_{E,E^{\\ast}},\n\\]\nWe notice that we can define the transition semigroup\n 
$(P_t)_{t\\geq 0}$ corresponding to $X$ by the formula (\\ref{def-of-p})\nfor all measurable functions $\\phi:E\\to\\mathbb{R}$ having\npolynomial growth, and we notice that $\\mathcal{L}$ is the formal\ngenerator of $(P_t)_{t\\geq 0}$.\n\n\n Since we are dealing with an elliptic equation it is natural to consider\n$(v,\\lambda)$ as a mild solution of equation (\\ref{hjb}) if and\nonly if, for arbitrary $T>0$, $v(x)$ coincides with the mild\n solution $u(t,x)$ of the corresponding parabolic equation\n having $v$ as a terminal condition:\n\\begin{equation}\\left\\{\n\\begin{array}{l}\n \\dfrac{\\partial u(t,x)}{\\partial t}+\\mathcal{L}u\\left( t,x\\right)\n+\\psi\\left( x,\\nabla u\\left( t,x\\right) G\\right)\n -\\lambda=0, \\quad t\\in [0,T],\\; x\\in E, \\\\ \\\\\nu(T,x)=v(x), \\quad x\\in E.\n \\end{array}\\right. \\label{hjb-parab}\n\\end{equation}\nThus we are led to the following definition (see also\n\\cite{FuTe-ell}):\n\\begin{definition}\n\\label{defsolmildkolmo} A pair $(v,\\lambda)$ ($v: E\\rightarrow\n\\mathbb{R}$ and $\\lambda\\in \\mathbb{R}$) is a mild solution of the\nHamilton-Jacobi-Bellman equation (\\ref{hjb}) if the following are\nsatisfied:\n\n\n\\begin{enumerate}\n\\item $v\\in\\mathcal{G}^{1}\\left( E,\\mathbb R \\right) $;\n\n\n\\item there exists $C>0$ such that $\\left| \\nabla v\\left(\nx\\right)\nh\\right| \\leq C\\left| h\\right| _{E}\\left( 1+\\left| x\\right| _{E}%\n^{k}\\right) $ for every $x,h\\in E$ and some positive integer\n$k$;\n\n\n\\item for $0\\le t\\le T$ and $x\\in E$,\n\\begin{equation}\nv(x)=P_{T-t}\\left[ v\\right] \\left( x\\right)\n+\\int_{t}^{T}\\left(P_{s -t }\\left[ \\psi(\\cdot,\\nabla v\\left(\n\\cdot\\right) G)\\right] \\left( x\\right) -\\lambda \\right) \\,ds.\n\\label{mild sol hjb}\n\\end{equation}\n\n\n\\end{enumerate}\n\\end{definition}\n\n\n\n\nIn the right-hand side of (\\ref{mild sol hjb}) we notice the\noccurrence of the term $\\nabla v\\left( \\cdot\\right) G$, which is\nnot well defined as a function 
$E\\to\\Xi^*$, since $G$ is not\nrequired to map $\\Xi$ into $E$.\nThe situation is similar to Remark \\ref{precision}.\nIn general,\n for $x \\in E$, $\\nabla \\bar\nv(x)G\\xi$ is only defined for $\\xi\\in \\Xi_0$.\nIn (\\ref{mild sol hjb}) it is implicitly required\nthat, $\\mathbb P$-a.s. and\nfor almost every\n$t$, the\noperator $\\xi \\rightarrow \\nabla \\bar v(X^x_t)G\\xi$ can be\nextended to a bounded linear operator defined on the whole $\\Xi$.\nNoting that\n$$\nP_{t }\\left[ \\psi(\\cdot,\\nabla v\\left( \\cdot\\right) G)\\right]\n\\left( x\\right) = \\mathbb E \\, \\psi(X^x_{t},\\nabla v\\left( X^x_{t}\\right)\nG)\n$$\nthe equation (\\ref{mild sol hjb}) is now meaningful.\n\nUsing the results for the parabolic case, see \\cite{masiero}, we\nget existence of the mild solution of equation (\\ref{hjb})\nwhenever we have proved that the function\n$\\bar v$ in Theorem \\ref{main-EBSDE} is differentiable.\n\n\n\\begin{theorem}\\label{th-EHJB}\nAssume that Hypotheses \\ref{general_hyp_forward},\n\\ref{hypothesisroyer} and \\ref{Hyp-masiero} hold.\nMoreover assume that $F$ is of class ${\\cal G}^1(E,E)$ with\n$\\nabla F$ bounded on bounded subsets of $E$ and $\\psi$ is of\nclass ${\\cal G}^1(E\\times \\Xi^*,E)$.\n\n\nThen $(\\bar v, \\bar\\lambda)$ is a mild solution of the\nHamilton-Jacobi-Bellman equation (\\ref{hjb}).\n\nConversely, if $(v,\\lambda)$ is a mild solution of\n (\\ref{hjb}) then, setting $ Y^x_t=\nv(X^x_t)$ and ${Z}^{x}_t=\n\\nabla v( X^x_t) G$,\nthe triple\n $( Y^x, Z^x, \\lambda)$ is a solution of\n the EBSDE\n (\\ref{EBSDE}).\n\n\\end{theorem}\n\n\n\n\\section{Optimal ergodic control}\n\\label{optcontr}\n\nAssume that Hypothesis \\ref{general_hyp_forward} holds and let\n$X^x$ denote the solution to equation (\\ref{sde}).\n Let $U$ be a separable\n metric space. We define a control $u$ as an\n$({\\cal F}_t)$-progressively measurable $U$-valued process. The cost\n corresponding to a given control\nis defined in the following way. 
We assume that the functions\n$R:U\\rightarrow \\Xi^*$ and $L:E\\times U \\rightarrow \\mathbb R$ are\nmeasurable and satisfy, for some constant $c>0$,\n\\begin{equation}\\label{condcosto}\n|R(u)|\\leq c,\\quad |L(x,u)|\\leq c, \\quad |L(x,u)-L(x',u)|\\leq\nc\\,|x-x'|,\\qquad u\\in U,\\,x,x'\\in E.\n\\end{equation}\nGiven an arbitrary control $u$ and $T>0$, we introduce the\nGirsanov density\n$$ \\rho_T^u=\\exp\\left(\\int_0^T R(u_s)dW_s\n-\\frac{1}{2}\\int_0^T |R(u_s)|_{\\Xi^*}^2 ds\\right)$$ and the\nprobability $\\mathbb P_T^u=\\rho_T^u\\mathbb P$ on ${\\cal F}_T$. The\nergodic cost corresponding to $u$ and the starting point $x\\in E$\nis\n\\begin{equation}\\label{def-ergodic-cost}\n J(x,u)=\\limsup_{T\\rightarrow\\infty}\\frac{1}{T} \\mathbb\nE^{u,T}\\int_0^T L(X_s^x,u_s)ds,\n\\end{equation}\nwhere $\\mathbb E^{u,T}$ denotes expectation with respect to\n$\\mathbb P_T^u$. We notice that $W_t^u=W_t-\\int_0^t R(u_s)ds$ is a\nWiener process on $[0,T]$ under $\\mathbb P_T^u$ and that\n$$dX_t^x=(AX_t^x+F(X_t^x))dt+G(dW_t^u+R(u_t)dt),\n\\quad t\\in [0,T]$$ and this justifies our formulation of the\ncontrol problem. Our purpose is to minimize the cost over all\ncontrols.\n\n\n To this purpose we first define the Hamiltonian in the\nusual way\n\\begin{equation}\\label{defhamiton}\n\\psi(x,z)=\\inf_{u\\in U}\\{L(x,u)+z R(u)\\},\\qquad x\\in E,\\,z\\in\n\\Xi^*,\n\\end{equation}\nand we remark that if, for all $ x,z$, the infimum is attained\nin (\\ref{defhamiton}) then there exists a measurable function\n$\\gamma:E\\times \\Xi^*\\rightarrow U$ such that\n$$\\psi(x,z)=L(x,\\gamma(x,z))+z R(\\gamma(x,z)).$$\nThis follows from an application of Theorem 4 of \\cite{McS-War}.\n\nWe notice that under the present assumptions $\\psi$ is a\nLipschitz function and $\\psi(\\cdot,0)$ is bounded (here the fact\nthat $R$ depends only on $u$ is used). 
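As a simple illustration of the Hamiltonian (\\ref{defhamiton}), consider the following scalar toy case (our own example, not part of the general setting above): take $\\Xi=\\Xi^*=\\mathbb{R}$, $U=[-\\delta,\\delta]$ for some $\\delta>0$, $R(u)=u$, and a cost $L(x,u)=L_0(x)$ that does not depend on the control. Then\n$$\\psi(x,z)=L_0(x)+\\inf_{|u|\\le \\delta} zu=L_0(x)-\\delta |z|,$$\nand the infimum is attained at $\\gamma(x,z)=-\\delta\\, {\\rm sign}(z)$ (with an arbitrary choice of $u\\in U$ when $z=0$); this selection is measurable but not continuous, which is why only measurability of $\\gamma$ is claimed above. Note that in this case $\\psi$ is Lipschitz in $z$ with constant $\\delta$ and $\\psi(\\cdot,0)=L_0$ is bounded, consistently with the previous remark.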
So if we assume Hypotheses\n\\ref{general_hyp_forward} and \\ref{hyp_W_A F(W_A)} then in Theorem\n\\ref{main-EBSDE} we have constructed, for every $x\\in E$, a triple\n\\begin{equation}\\label{richiamoebsde}\n(\\bar Y^x,\\bar Z^x, \\bar \\lambda)= (\\bar v (X^x),\\bar \\zeta(X^x),\n\\bar \\lambda)\n\\end{equation} solution to\n the EBSDE\n (\\ref{EBSDE}).\n\n\n\n\n\n\n\n\\begin{theorem}\\label{Th-main-control}\nAssume that Hypotheses \\ref{general_hyp_forward}, \\ref{hyp_W_A\nF(W_A)} and \\ref{hyp-convol-determ} hold, and that\n(\\ref{condcosto}) holds as well.\n\n\nMoreover suppose that, for some $x\\in E$, a triple $(Y,Z,\\lambda)$\nverifies $\\mathbb{P}$-a.s. equation\n (\\ref{EBSDE}) for all $0\\leq t\\leq T$,\nwhere\n $Y$ is a progressively measurable continuous process, $Z$ is a process\n in $L_{\\cal P, {\\rm loc}}^2(\\Omega;L^2(0,\\infty;\\Xi^*))$ and\n $\\lambda\\in \\mathbb{R}$.\n Finally assume that there exists $c_x>0$ (that may depend\n on $x$) such that\n$\\mathbb{P}$-a.s.\n$$\n |Y_t|\\leq c_x (|X^x_t|+1) , \\hbox{ for all $t\\geq 0$}.\n$$\n\n\n\nThen the following holds:\n\\begin{enumerate}\n \\item[(i)] For arbitrary control\n $u$ we have $J(x,u)\\ge \\lambda=\\bar\\lambda,$\nand the equality holds if and only if $L(X_t^x,u_t)+Z_t\nR(u_t)=\\psi(X_t^x,Z_t)$, $\\mathbb P$-a.s. for almost every $t$.\n\n\n\\item[(ii)] If the infimum is attained in (\\ref{defhamiton}) then\nthe control $\\bar u_t=\\gamma(X_t^x,Z_t)$ verifies $J(x,\\bar u)=\n\\bar\\lambda.$\n\\end{enumerate}\n\n\nIn particular, for the solution (\\ref{richiamoebsde}) mentioned\nabove, we have:\n\\begin{enumerate}\n \\item[(iii)] For arbitrary control\n $u$ we have $J(x,u)=\\bar\\lambda$ if and only if\n$L(X_t^x,u_t)+\\bar\\zeta (X_t^x) R(u_t)=\\psi(X_t^x,\\bar \\zeta\n(X_t^x))$, $\\mathbb P$-a.s. for almost every $t$. 
\\item[(iv)] If the\ninfimum is attained in (\\ref{defhamiton}) then the control $\\bar\nu_t=\\gamma(X_t^x,\\bar\\zeta (X_t^x))$ verifies $J(x,\\bar u)=\n\\bar\\lambda.$\n\\end{enumerate}\n\n\n\\end{theorem}\n\n\n\\begin{remark}\\em\n\\begin{enumerate}\n \\item\nThe equality $\\lambda=\\bar\\lambda$ clearly follows from Theorem\n\\ref{th-uniq-lambda}. \\item Points $(iii)$ and $(iv)$ are\nimmediate consequences of $(i)$ and $(ii)$. \\item The conclusion\nof point $(iv)$ is that there exists an optimal control in\nfeedback form, with the optimal feedback given by the function\n$x\\mapsto \\gamma(x,\\bar\\zeta (x))$. \\item Under the conditions of\nTheorem \\ref{th-EHJB}, the pair $(\\bar v, \\bar \\lambda)$ occurring\nin (\\ref{richiamoebsde}) is a mild solution of the\nHamilton-Jacobi-Bellman equation (\\ref{hjb}). \\item It follows\nfrom the proof below that if $\\limsup$ is changed into $\\liminf$\nin the definition (\\ref{def-ergodic-cost}) of the cost, then the\nsame conclusions hold, with the obvious modifications, and the\noptimal value is given by $\\bar\\lambda$ in both cases.\n\\end{enumerate}\n\\end{remark}\n\n\n\\begin{proof}\n As $(Y,{Z}, \\bar\\lambda)$ is a solution of the\nergodic BSDE, we have\n\\begin{eqnarray*}\n-d{Y}_t&=&[\\psi(X_t^x,{Z}_t)-\\bar\\lambda]dt-{Z}_tdW_t\\\\\n&=&[\\psi(X_t^x,{Z}_t)- \\bar\\lambda]dt-{Z}_tdW_t^u-{Z}_t R(u_t)dt,\n\\end{eqnarray*}\nfrom which we deduce that\n\\begin{eqnarray*}\n\\bar\\lambda&=&\\frac{1}{T}\\mathbb E^{u,T}[Y_T-Y_0]\n+\\mathbb E^{u,T}\\frac{1}{T}\\int_0^T[\\psi(X_t^x,{Z}_t)-{Z}_t R(u_t)-L(X_t^x,u_t)]dt\\\\\n& &+\\frac{1}{T}\\mathbb E^{u,T}\\int_0^T L(X_t^x,u_t)dt.\n\\end{eqnarray*}\n\n\nThus\n$$\\frac{1}{T}\\mathbb E^{u,T}\\int_0^T L(X_t^x,u_t)dt\\ge\n \\frac{1}{T}\\mathbb E^{u,T}[Y_0-Y_T]+\\bar\\lambda.$$\nBut by (\\ref{rel-estimate-Xgamma}) we have\n$$|\\mathbb E^{u,T} Y_T|\\le c\\mathbb E^{u,T}(|X_T^x|+1)\\le c(1+|x|).$$\nConsequently $T^{-1}\\mathbb E^{u,T}[Y_0-Y_T]\\rightarrow 0,$ 
and\n$$\\limsup_{T\\rightarrow\\infty } \\frac{1}{T}\\mathbb E^{u,T}\\int_0^T L(X_t^x,u_t)dt\n\\ge \\bar\\lambda.$$\n\n\nSimilarly, if $L(X_t^x,u_t)+ Z_t R(u_t)=\\psi(X_t^x,Z_t)$,\n$$\\frac{1}{T}\\mathbb E^{u,T}\\int_0^T L(X_t^x,u_t)dt=\n\\frac{1}{T}\\mathbb E^{u,T}[Y_0-Y_T]+\\bar\\lambda,$$ and the claim\nholds.\n\\end{proof}\n\n\n\n\n\\section{Uniqueness}\\label{sec-uniq}\nWe wish now to adapt the argument in \\cite{GoMa} in order to\nobtain uniqueness of Markovian solutions to the EBSDE. This will\nbe done by a control-theoretic interpretation that requires that the\nMarkov process related to the state equation with continuous\nfeedback enjoys recurrence properties. In this section we assume\n\\begin{equation}\\label{addizionali}\nE=H \\qquad\\hbox{ and }\\qquad F \\hbox{ is bounded.}\n\\end{equation}\n\n\\noindent We recall here a result due to \\cite{seid} on recurrence\nof solutions to SDEs.\n\\begin{theorem}\\label{th-rec-seidler}\nConsider\n\\begin{equation}\\label{eq:u}\nd{X}_t=(A{X}_t+g({X}_t))dt+GdW_t,\n\\end{equation}\nwhere $g: H \\rightarrow H$ is bounded and weakly continuous (that\nis, $x\\rightarrow\\langle\\xi,g(x)\\rangle$ is continuous for all $\\xi\\in H$).\nLet\n$$Q_t=\\int_0^t e^{sA}GG^*e^{sA^*}ds$$\nand assume the following:\n\\begin{enumerate}\n \\item $\\sup_{t\\ge 0} \\hbox{Trace}\\,(Q_t)<\\infty$;\n\\item $Q_t$ is injective for $t>0$; \\item $ e^{t A}(H)\\subset\n(Q_t)^{1\/2}(H)$ for $t>0$; \\item $\\int_0^t\n|Q_s^{-1\/2}e^{sA}|ds<\\infty$ for $t>0$; \\item there exists\n$\\beta>0$ such that $\\int_0^t s^{-\\beta}\\,\n\\hbox{Trace}\\,(S(s)S(s)^*)\\, ds<\\infty$\n for $t>0$.\n\\end{enumerate}\nThen, for all $T>0$, equation (\\ref{eq:u}) admits a martingale\nsolution on $[0,T]$, unique in law. 
The associated transition\nprobabilities $P(t,x,T,\\cdot)$ on $H$ ($0\\le t\\le T, x\\in H$)\nidentify a recurrent Markov process on $[0,\\infty)$.\n\\end{theorem}\n\n\nConsider now the ergodic control problem with state equation:\n$$d{X}^{x,u}_t=(A{X}^{x,u}_t+F({X}^{x,u}_t)+GR(u_t))dt+GdW_t, \\ X_0^{x,u}=x,$$\nand cost\n$$\\limsup_{T\\to\\infty}\\frac{1}{T}\\,\n\\mathbb E\\int_0^T L(X_s,u_s)ds$$ where $R:U\\rightarrow\n\\Xi$ is continuous and bounded.\n\n\nWe restrict ourselves to the class of controls given by continuous\nfeedbacks, i.e. given arbitrary\n continuous $u: H\\rightarrow U$ (called feedback) we define the\n corresponding trajectory as the solution of\n$$d{X}^{x,u}_t=(A{X}^{x,u}_t+F({X}^{x,u}_t))dt+G(R(u(X_t^{x,u}))dt+dW_t),\n\\ X_0^{x,u}=x.$$\nWe notice that for all $T>0$ there exists a weak solution $X^{x,u}$ of\nthis equation, and it is unique in law.\n\n\n\n$ $\n\n\n We set as usual\n$$\\psi(x,z)=\\inf_{u\\in U}\\{L(x,u)+zR(u)\\},$$\nand assume that $\\psi$ is continuous and there exists a continuous\n$\\gamma:H\\times\\Xi\\rightarrow U$ such that\n$$\\psi(x,z)=L(x,\\gamma(x,z))+zR(\\gamma(x,z)).$$\n\n\n\\begin{theorem}\\label{th-uniqueness}\nSuppose (\\ref{addizionali})\nand suppose that the assumptions of Theorem\n\\ref{th-rec-seidler} hold.\nLet $(v,\\zeta,\\lambda)$ with $v:H\\rightarrow \\mathbb{R}$ continuous,\n$\\zeta:H\\rightarrow \\Xi^*$ continuous, and $\\lambda$ a real number\nsatisfy the following conditions:\n\n\n\\begin{enumerate}\n \\item $|v(x)|\\le c|x|$;\n\\item\nfor an arbitrary filtered probability space with\na Wiener process\n$(\\hat{\\Omega},\\hat{\\mathcal{F}},\n\\{\\hat{\\mathcal{F}}_t\\}_{t>0},\\hat{\\mathbb{P}},\\{\n\\hat{W}_t\\}_{t>0})$ and\nfor any solution of\n$$d\\hat{X}_t=(A\\hat{X}_t+F(\\hat{X}_t))dt+Gd\\hat{W}_t,\\qquad t\\in [0,T],$$\nsetting $Y_t=v(\\hat{X}_t),\\\nZ_t=\\zeta(\\hat{X}_t)$, we have\n$$-dY_t=[\\psi(\\hat{X}_t,Z_t)-\\lambda]dt-Z_td\\hat{W}_t\\quad t\\in 
[0,T].$$\n\\end{enumerate}\nLet\n$$\\tau_r^T=\\inf \\{s\\in [0,T]:|X_s^{x,u}|< r\\},$$\nwith the convention $\\tau_r^T=T$ if the indicated set\nis empty,\nand\n$$J(x,u)=\\limsup_{r\\rightarrow 0}\\limsup_{T\\rightarrow \\infty}\n\\mathbb E\\int_0^{\\tau_r^T} [L(X_s^{x,u},u(X_s^{x,u}))-\\lambda]ds.$$\nThen\n$$v(x)=\\inf_u J(x,u),$$\nwhere the infimum (that is a minimum) is taken over all continuous\nfeedbacks $u$.\n\\end{theorem}\n\\begin{proof} Let $u:H\\to U$ be continuous.\nWe notice that $X^{x,u}$ solves on $[0,T]$:\n$$dX_t^{x,u}=(AX_t^{x,u}+F(X_t^{x,u}))dt+Gd\\tilde{W}_t^u,\\ t\\in [0,T],$$\nwhere $\\tilde{W}_t=\\int_0^t R(u(X_r^{x,u}))dr+W_t$ is a Wiener\nprocess on $[0,T]$ under a suitable probability $\\hat{\\mathbb{P}}^{u,T}$.\n\n\nTherefore $Y_t=v(X_t^{x,u})$, $Z_t=\\zeta(X_t^{x,u})$ satisfy:\n$$\n-dY_t=[\\psi(X_t^{x,u},Z_t)-\\lambda]dt-Z_t\nR(u(X_t^{x,u}))dt-Z_tdW_t.$$ Integrating in $[0,\\tau_r^T]$ we get\n$$v(x)=\\mathbb E(v(X_{\\tau_r^T}^{x,u}))+\\mathbb E\\int_0^{\\tau_r^T}\n[\\psi(X_s^{x,u},Z_s)-\\lambda-Z_s R(u(X_s^{x,u}))]ds.$$ Thus,\n\\begin{equation}\\label{eq:x}\nv(x)\\le\\mathbb E(v(X_{\\tau_r^T}^{x,u}))+\\mathbb E\\int_0^{\\tau_r^T}\n[L(X_s^{x,u},u(X_s^{x,u}))-\\lambda]ds.\n\\end{equation}\nNow\n\\begin{eqnarray*}\n|\\mathbb E(v(X_{\\tau_r^T}^{x,u}))|\\le c\\mathbb\nE|X^{x,u}_{\\tau_r^T}|&\\le& c r+ (\\mathbb\nE(|X_T^{x,u}|^2))^{1\/2}(\\mathbb P(\\tau_r^T=T))^{1\/2}\\\\ &\\le & c r+\nc (\\mathbb P(\\tau_r^T=T))^{1\/2}.\n\\end{eqnarray*}\nNotice that $\\mathbb P(\\tau_r^T=T)=\\tilde{\\mathbb P}(\\inf_{t\\in\n[0,T]}|\\tilde{X}_t|\\geq r),$ where $\\tilde{X}$ is the Markov\nprocess on the whole $[0,+\\infty)$ corresponding to the equation\n(\\ref{eq:u}) with $g=F(\\cdot)+GR(u(\\cdot))$.\n\n\n$ $\n\n\n\\noindent Since $\\tilde{X}$ is recurrent, for all $ r>0$ it holds\n$\\tilde{\\mathbb P}(\\inf_{t\\in [0,T]}|\\tilde{X}_t|>r)\\rightarrow 0$\nas $T\\rightarrow \\infty.$ Thus\n$$\\limsup_{r\\rightarrow 0}\\limsup_{T\\rightarrow 
\\infty}|\n\\mathbb E(v(X_{\\tau_r^T}^{x,u}))|=0.$$\nHence,\n$$v(x)\\le \\limsup_{r\\rightarrow 0}\\limsup_{T\\rightarrow \\infty}\n\\mathbb E\\int_0^{\\tau_r^T}\n[L(X_s^{x,u},u(X_s^{x,u}))-\\lambda]ds.$$ The proof is completed\nby noticing that if $u$ is chosen as ${u}(x)=\\gamma(x,\\zeta(x))$\nthen the above\ninequality becomes an equality.\n\\end{proof}\n\nThis result combines with Theorems \\ref{th-uniq-lambda}\nand \\ref{th-EHJB}\nto give the following\n\n\\begin{corollary}\\label{HJB-uniqueness}\nSuppose that all the assumptions of\nTheorems \\ref{th-uniq-lambda}, \\ref{th-EHJB} and \\ref{th-uniqueness} hold.\nThen $(\\bar v, \\bar\\lambda)$ is the unique mild solution of the\nHamilton-Jacobi-Bellman equation (\\ref{hjb}) satisfying\n$|\\bar v (x)|\\le c|x|$.\n\\end{corollary}\n\n\n\\section{Application to ergodic control of a semilinear heat equation}\n\\label{section-heat-eq}\n\nIn this section we show how our results can be applied to perform\nthe synthesis of the ergodic optimal control when the state\nequation is a semilinear heat equation with additive noise. More\nprecisely, we treat a stochastic heat equation in space dimension\none, with a dissipative nonlinear term and with control and noise\nacting on a subinterval. 
We consider homogeneous Dirichlet\nboundary conditions.\n\n\n\\noindent In $\\left( \\Omega,\\mathcal{F},\\mathbb{P}\\right) $ with\na filtration $\\left( \\mathcal{F}_{t}\\right) _{t\\geq0}$ satisfying\nthe usual conditions, we consider, for $t \\in\\left[ 0,T\\right] $\nand $\\xi\\in\\left[ 0,1\\right] $, the following equation\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l} d_{t }X^{u}\\left( t ,\\xi\\right) =\\left[\n\\frac{\\partial^{2}}{\\partial \\xi^{2}}X^{u}\\left( t ,\\xi\\right)\n+f\\left( \\xi,X^{u}\\left( t ,\\xi\\right) \\right)\n+\\chi_{[a,b]}(\\xi) u\\left( t ,\\xi\\right) \\right] dt\n+\\chi_{[a,b]}(\\xi) \\dot{W}\\left(\nt ,\\xi\\right) dt ,\\\\\nX^{u}\\left( t ,0\\right) =X^{u}\\left( t ,1\\right) =0,\\\\\nX^{u}\\left( 0,\\xi\\right) =x_{0}\\left( \\xi\\right) ,\n\\end{array}\n\\right. \\label{heat equation}\n\\end{equation}\nwhere $\\chi_{[a,b]}$ is the indicator function of $[a,b]$ with\n$0\\leq a\\leq b\\leq 1$; $\\dot{W}\\left( t ,\\xi\\right) $ is a\nspace-time white noise on $\\left[ 0,T\\right] \\times\\left[\n0,1\\right] $.\n\n\n\\noindent We introduce the cost functional\n\\begin{equation}\nJ\\left( x,u\\right) = \\limsup_{T\\rightarrow\\infty}\\dfrac{1}{T}\n\\mathbb{E}\\int_{0}^{T}\\int_{0}^{1}l\\left( \\xi ,X^{u}_s\\left(\n\\xi\\right) ,u_s(\\xi)\\right) \\mu\\left( d\\xi\\right) \\, ds,\n \\label{heat costo diri}\n\\end{equation}\nwhere $\\mu$ is a finite Borel measure on $\\left[ 0,1\\right] $.\nAn admissible control $u\\left( t ,\\xi\\right) $ is a predictable\nprocess such that for all $t \\geq 0$, and\n $\\mathbb{P}$-a.s.\n $u\\left( t ,\\cdot\\right)\n\\in U:=\\{v\\in C\\left( \\left[ 0,1\\right] \\right) :\\left\\vert\nv\\left( \\xi\\right) \\right\\vert \\leq\\delta\\}$. We denote by\n$\\mathcal{U}$ the set of such admissible controls. We wish to\nminimize the cost over $\\mathcal{U}$, adopting the formulation of\nSection \\ref{optcontr}, i.e. by a change of probability in the\nform of (\\ref{def-ergodic-cost}). 
The cost introduced in\n(\\ref{heat costo diri}) is well defined on the space of continuous\nfunctions on the interval $\\left[ 0,1\\right] $, but for an\narbitrary $\\mu$\\ it is not well defined on the Hilbert space of\nsquare integrable functions.\n\n\nWe suppose the following:\n\n\n\\begin{hypothesis}\n\\label{heatipotesi}\n\\begin{enumerate}\n\\item $f:\\left[ 0,1\\right] \\times\\mathbb{R} \\to\\mathbb{R}$ is\ncontinuous and for every\n $\\xi\\in\\left[0,1\\right] $, $ f(\\xi,\\,\\cdot\\,)$ is decreasing.\nMoreover there exist $C>0$ and $m>0$ such that for every\n$\\xi\\in\\left[0,1\\right] ,$ $x\\in\\mathbb{R}$,\n$$ |f\\left(\n\\xi,x\\right)|\\leq C(1+|x|)^m, \\qquad f\\left( 0,x\\right)= f\\left(\n1,x\\right)=0.\n$$\n\n\n\\item $l:\\left[ 0,1\\right] \\times\\mathbb{R} \\times\n[-\\delta,\\delta]\\rightarrow\\mathbb{R}$ is continuous and bounded,\nand $l(\\xi,\\cdot,u)$ is Lipschitz continuous uniformly with\nrespect to $\\xi \\in\\left[ 0,1\\right]$, $u\\in [-\\delta,\\delta]$.\n\n\n\n\n\\item $x_{0}\\in C\\left( \\left[ 0,1\\right] \\right) $,\n$x_{0}(0)=x_{0}(1)=0$.\n\\end{enumerate}\n\\end{hypothesis}\n\n\n\\noindent To rewrite the problem in an abstract way we set\n $H=\\Xi=L^{2}\\left( 0,1 \\right) $\n and $E=C_0\\left(\\left[ 0,1\\right] \\right)\n =\\{y\\in C\\left(\\left[ 0,1\\right] \\right)\\,:\\, y(0)=y(1)=0\\} $.\n We define an operator $A$ in $E$\\ by\n\\[\nD\\left( A\\right) =\\{y\\in C^{2}\\left( \\left[ 0,1\\right]\n\\right)\\,:\\, y,y''\\in C_{0}\\left( \\left[ 0,1\\right] \\right)\\}\n,\\text{ \\ \\ \\ \\ }\\left( Ay\\right) \\left( \\xi\\right)\n=\\frac{\\partial^{2}}{\\partial\\xi^{2}}y\\left( \\xi\\right) \\text{\nfor }y\\in D\\left( A\\right).\n\\]\nWe notice that $A$ is the generator of a $C_0$ semigroup in $E$,\nadmitting an extension to $H$, and $\\left| e^{tA}\\right|\n_{L\\left( E,E\\right) }\\leq e^{- t}$; see, for instance, Theorem\n11.3.1 in \\cite{DP2}. 
As a consequence, $A+ F+I$ is\n dissipative in $E$.\n\nWe set, for $x\\in E$, $\\xi \\in [0,1]$, $z\\in \\Xi$, $u\\in U$,\n\\begin{equation}\nF\\left( x\\right) \\left( \\xi\\right) =f\\left( \\xi,x\\left(\n\\xi\\right) \\right) ,\\ \\ \\left( Gz\\right) \\left( \\xi\\right)\n =\\chi_{[a,b]}\\left( \\xi\\right) z\\left(\n\\xi\\right) ,\\ \\ L\\left( x,u\\right)\n=\\displaystyle\\int_{0}^{1}l\\left( \\xi,x\\left( \\xi\\right) ,u\\left(\n\\xi\\right) \\right) \\mu\\left( d\\xi\\right) ,\n\\label{heatnotazioni}\n\\end{equation}\nand let $R$ denote the canonical imbedding of $C( \\left[\n0,1\\right])$ in\n $L^2( 0,1)$.\n\n\n\\noindent Finally $\\left\\{ W_{t },t \\geq0\\right\\} $ is a\ncylindrical Wiener process in $H$ with respect to the filtration\n$\\left( \\mathcal{F}_{t }\\right) _{t \\geq0}$.\n\n\n$ $\n\n\n\\noindent It is easy to verify that Hypotheses\n\\ref{general_hyp_forward} and \\ref{hyp_W_A F(W_A)} are satisfied\n(for the proof of point $4$ in Hypothesis\n\\ref{general_hyp_forward} and of Hypothesis \\ref{hyp_W_A F(W_A)}\nsee again \\cite{DP2} Theorem 11.3.1.).\n\n\n\\noindent Moreover, see for instance \\cite{C}, for some $C>0$,\n\\[\n\\left| e^{tA}\\right| _{L\\left( H,E\\right) }\\leq Ct^{-1\/4}, \\qquad\n t\\in(0,1] ,\n\\]\nthus Hypothesis \\ref{hyp-convol-determ} holds.\n\n\n\\noindent Also Hypothesis \\ref{Hyp-masiero} is satisfied by taking\n$\\Xi _{0}=\\left\\lbrace f\\in C_0\\left( \\left[ 0,1\\right]\n\\right):f(a)=f(b)=0 \\right\\rbrace $.\n\n\n$ $ \\noindent Clearly the controlled heat equation (\\ref{heat\nequation}) can now be written in an abstract way in the Banach space\n$E$ as\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}\ndX_{t }^{x_0,u}=\\left[ AX_{t }^{x_0,u}+F\\left( X_{t\n}^{x_0,u}\\right) \\right] dt +GRu_{t }dt +GdW_{t }\\text{\\ \\ \\ }t\n\\in\\left[\n0,T\\right] \\\\\nX^{x_0,u}_0=x_{0},\n\\end{array}\n\\right. 
\\label{heat eq abstract}\n\\end{equation}\nand the results of the previous sections can be applied to the\nergodic cost (\\ref{heat costo diri}) (reformulated by a change of\nprobability in the form of (\\ref{def-ergodic-cost})).\n\n\n\\noindent In particular if we define,\nfor all $x\\in C_0([0,1])$, $z\\in L^2(0,1)$, $u\\in U$\n(identifying $L^2(0,1)$ with its dual)\n$$\\psi(x,z)=\\inf _{u\\in U}\\left\\{\\int_0^1 l (\\xi,x(\\xi),u(\\xi))\n \\mu (d\\xi)+ \\int_a^b z(\\xi) u(\\xi) d\\xi\\right\\}$$\nthen there exist $\\overline v: E \\rightarrow \\mathbb{R}$\nLipschitz continuous and with $\\overline v(0)=0$, $\\overline \\zeta : E\n\\rightarrow \\Xi^*$ measurable and $\\overline \\lambda \\in\n\\mathbb{R}$ such that if $X^{x_0}=X^{x_0,0}$ is the solution of\nequation (\\ref{heat eq abstract}) then $(\\overline v(X^{x_0}),\n\\overline \\zeta(X^{x_0}),\\overline \\lambda)$ is a solution of the\nEBSDE (\\ref{EBSDE}) and the characterization of the optimal\nergodic control stated in Theorem \\ref{Th-main-control} holds (and\n $\\overline \\lambda$ is unique in the sense of Theorem\n\\ref{th-uniq-lambda}).\n\n $ $\n\n\n\\noindent Moreover if $ f$ is of class $C^1(\\mathbb{R})$\n (consequently $F$ will be of class ${\\cal G}^1(E,E)$) and $\\psi$\n is of class ${\\cal G}^1(E\\times \\Xi^*,E)$ then by Theorem \\ref{th-diff}\n $ \\overline v$ is of class ${\\cal G}^1(E,\\mathbb{R})$ and, by Theorem \\ref{th-EHJB},\nit is a mild solution of the ergodic HJB equation (\\ref{hjb}) and it holds\n $\\overline \\zeta=\\nabla \\overline v G$.\n\n\n$ $\n\n\n\\noindent Let us then consider the particular case in which $[a,b]\n=[0,1]$, $f(\\xi,x)=f(x)$ is of class $C^1$ with derivative having polynomial\ngrowth, and satisfies $f(0)=0$,\n$[f(x+h)-f(x)]h\\leq - c |h|^{2+\\epsilon}$\nfor suitable $c,\\epsilon\n>0$ and all $x,h\\in \\mathbb{R}$ (for instance, $f(x)=-x^3$).\nIn that case the Kolmogorov semigroup corresponding to the process\n$X^{x_0}$ is strongly Feller, see\n \\cite{C} and 
\\cite{masiero2}, and it is easy to verify that\n$F$ is genuinely dissipative (see Definition \\ref{gen-diss}).\nMoreover we can choose $\\Xi_0=C_0([0,1])$ and it turns out that\n$\\psi$\n is bounded\non each set $E\\times B$, where $B$ is any ball of $\\Xi_0^*$. Thus\nthe claims of Corollaries \\ref{characterization of lambda} and\n\\ref{boundedness of v} hold true, and in particular $\\overline v$\nis bounded.\n\n\n$ $\n\n\n\n\\noindent Finally if we assume that $\\mu$ is Lebesgue measure and\n$f$ is bounded and Lipschitz we can choose\n$E=\\Xi=\\Xi_0=H=L^2(0,1)$. Then the assumptions of Theorem\n\\ref{th-rec-seidler} are satisfied and we can apply Theorem\n\\ref{th-uniqueness} to characterize the function $\\overline v$. In\nparticular if $f$ is of class $C^1(\\mathbb{R})$ and $\\psi$ is of\nclass ${\\cal G}^1(H\\times \\Xi^*,H)$ then $\\overline v$ is the\nunique mild solution of the ergodic HJB equation (\\ref{hjb}).\n\n\n\\section{Introduction}\n\nAutonomous trucks are expected to fundamentally transform the freight transportation industry, and the enabling technology is progressing rapidly.\nMorgan Stanley estimates the potential savings from automation at \\$168 billion annually for the US alone \\cite{Greene2013-AutonomousFreightVehicles}.\nAdditionally, autonomous transportation may improve on-road safety, and reduce emissions and traffic congestion \\cite{ShortMurray2016-IdentifyingAutonomousVehicle,SlowikSharpe2018-AutomationLongHaul}.\n\nSAE International defines different levels of driving automation, ranging from L0 to L5, corresponding to no-driving automation to full-driving automation \\cite{SAEInternational2018-TaxonomyDefinitionsTerms}.\nThe current focus is on L4 technology (high automation), which aims at delivering automated trucks that can drive without any need for human intervention in specific domains, e.g., on highways.\nThe automotive industry is actively involved in making L4 
vehicles a reality.\nDaimler Trucks, one of the leading heavy-duty truck manufacturers in North America, is working with both Torc Robotics and Waymo, and will be testing the latest generation of L4 trucks in the Southwest in early 2021 \\cite{Engadget2020-WaymoDaimlerTeam}.\nIn 2020, truck and engine maker Navistar announced a strategic partnership with technology company TuSimple to develop L4 trucks, to go into production by 2024 \\cite{TransportTopics2020-NavistarTusimplePartner}.\nOther companies developing self-driving vehicles include Argo AI, Aurora, Cruise, Embark, Ford, Kodiak, Lyft, Motional, Nuro, and Volvo Cars \\cite{FleetOwner-TusimpleAutonomousTruck}.\n\nA study by Viscelli \\cite{Viscelli-Driverless?AutonomousTrucks} describes different scenarios for the adoption of autonomous trucks by the industry.\nThe most likely scenario, according to some of the major players, is the \\emph{transfer hub business model} \\cite{Viscelli-Driverless?AutonomousTrucks,RolandBerger2018-ShiftingGearAutomation,ShahandashtEtAl2019-AutonomousVehiclesFreight}.\nAn Autonomous Transfer Hub Network (ATHN) makes use of autonomous truck ports, or \\emph{transfer hubs}, to hand off trailers between human-driven trucks and driverless autonomous trucks.\nAutonomous trucks then carry out the transportation between the hubs, while conventional trucks serve the first and last miles.\nFigure~\\ref{fig:autonomous_example} presents an example of an autonomous network with transfer hubs.\nOrders are split into a first-mile leg, an autonomous leg, and a last-mile leg, each of which is served by a different vehicle.\nA human-driven truck picks up the cargo at the customer location, and drops it off at the nearest transfer hub.\nA driverless autonomous truck moves the trailer to the transfer hub closest to the destination, and another human-driven truck performs the last 
leg.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[scale=0.6]{images\/autonomous_example_ryder.pdf}\n\t\\caption{An Example of an Autonomous Transfer Hub Network.}\n\t\\label{fig:autonomous_example}\n\\end{figure}\n\nATHNs apply automation where it counts: monotonous highway driving is automated, while more complex local driving and customer contact are left to humans.\nEspecially for long-haul transportation, the benefit of automation is expected to be high.\nGlobal consultancy firm Roland Berger \\cite{RolandBerger2018-ShiftingGearAutomation} estimates that operational cost savings may be between 22\\% and 40\\% in the transfer hub model, based on the cost difference between driverless trucks and conventional trucks.\nThese estimates are based on single trips, and it is not clear that they can be realized in practice: in particular, they do not take into account the empty miles traveled by autonomous trucks to pick up their next orders.\n\nThis paper proposes a Constraint Programming (CP) model to schedule the ATHN operations for a given set of orders.\nThe resulting schedule details the autonomous operations and the first\/last-mile operations at each of the hubs, and specifies the movements of every load, vehicle, and driver.\n\\emph{The CP model is then used to provide, for the first time, a detailed quantitative study of the benefits of ATHNs by considering a real case study where actual operations are modeled and optimized with high fidelity.}\nIt examines whether the savings predicted by \\cite{RolandBerger2018-ShiftingGearAutomation} materialize when the network effects, e.g., empty miles for relocation, are taken into account.\nIt is found that it is computationally feasible to solve this large-scale optimization problem with more than 100,000 variables and 100,000 constraints, and that the benefits of ATHNs may indeed be realized in practice.\n\nThe remainder of this paper is organized as follows.\nSection~\\ref{sec:problem} defines the 
problem of scheduling freight operations on an ATHN, and Section~\\ref{sec:formulation} formulates a CP model to solve this problem.\nThe case study is presented in Section~\\ref{sec:casestudy}.\nThe final section of the paper summarizes the findings and provides the conclusions.\n\n\n\\section{Problem Statement}\n\\label{sec:problem}\n\nThis section defines the problem of scheduling freight operations on an ATHN to perform a given set of orders with a given set of vehicles, with the objective of minimizing the cost of driving empty.\nThe problem is defined on a directed graph $G=(V,A)$, with vertices $V$ and arcs $A$.\nThe vertices represent locations, and are partitioned into hub locations $V_H$ and customer locations $V_C$.\nArcs between the transfer hubs correspond to autonomous transportation, and the other arcs represent human-driven legs.\nEvery arc $(i,j) \\in A$ is associated with a non-negative travel time $\\tau_{ij}$ and a cost $c_{ij}$.\nFor convenience, define $\\tau_{ii} = 0$ and $c_{ii} = 0$ for all $i \\in V$.\n\nThe set of customer orders is given by $R$.\nOrder $r\\in R$ is supposed to be picked up at time $p(r)$ at the origin $o(r) \\in V_C$, and to be transported through the ATHN to the destination $d(r) \\in V_C$.\nBased on order $r\\in R$, three tasks are defined: the first-mile task $t_r^f$, the autonomous task $t_r^a$, and the last-mile task $t_r^l$.\nThe first-mile task consists of loading the trailer at the customer location, moving the freight to the closest transfer hub, and unloading the trailer.\nSimilarly, the autonomous task and the last-mile task consist of loading, driving (between the hubs and to the destination, respectively), and unloading.\n\nLet $T$ be the set of all tasks generated by the orders.\nEvery task $t\\in T$ corresponds to a single leg, and is defined by an origin $o(t) \\in V$, a destination $d(t) \\in V$, and a pickup time $p(t)$.\nThe duration of a task equals $\\tau_{o(t),d(t)} + 2S$, where $S \\ge 0$ is the fixed 
time for loading or unloading a trailer.\nThe pickup time $p(t_r^f)$ of the first-mile task is equal to the order pickup time $p(r)$, while subsequent pickup times are based on the time the freight is supposed to be available.\nThat is, $p(t_r^a) = p(t_r^f) + \\tau_{o(t_r^f),d(t_r^f)} + 2S$, and $p(t_r^l) = p(t_r^a) + \\tau_{o(t_r^a),d(t_r^a)} + 2S$.\n\nTo create a feasible schedule, every task must be given a starting time, and be assigned to one of the available trucks.\nIt is assumed that an appointment flexibility of $\\Delta \\ge 0$ minutes is permitted, which means that task $t\\in T$ may start anywhere in the interval $[p(t)-\\Delta, p(t)+\\Delta]$.\nThe set of trucks $K$ is partitioned into autonomous trucks $K_A$, and regular trucks $K_h$ at every hub $h \\in V_H$.\nTasks can only be assigned to the corresponding set of trucks, and tasks performed by the same vehicle must not overlap in time.\nIf $t\\in T$ and $t'\\in T$ are subsequent tasks for a single truck, and $d(t) \\neq o(t')$, then an empty relocation is necessary, which takes $\\tau_{d(t) o(t')}$ time units and has cost $c_{d(t) o(t')}$.\nThe objective is to assign the tasks such that the total relocation cost is minimized.\n\nNote that the problem described above can be decomposed and solved independently for the autonomous network and for the operations at each of the hubs.\nThis is possible because different trucks are used for each part of the ATHN, and because each task is given an independent pickup time, based on the expected time the freight is available.\nOne potential problem is that using the appointment flexibility in one part of the network may lead to an infeasibility in another part of the network, but the case study shows that this is not an issue in practice: The first and last-mile schedules are not very constrained, and shifting the schedule to accommodate flexibility in the autonomous network is straightforward.\nAlternatively, one may first optimize the autonomous network, and 
update the first and last-mile pickup times accordingly.\n\n\n\\section{Mathematical Formulation}\n\\label{sec:formulation}\n\nThis section presents a CP model to schedule orders on the ATHN.\nWithout loss of generality, the set of tasks and the set of trucks represent a single part of the network that can be optimized independently.\nThat is, either the autonomous operations, or the first\/last-mile operations at one of the hubs are considered.\n\n\\newsavebox{\\modelbox}\n\\begin{lrbox}{\\modelbox}\n\\begin{varwidth}{1.15\\textwidth}\n\\begin{lstlisting}\nrange Trucks = ...;\nrange Tasks = ...;\nrange Sites = ...;\nrange Horizon = ...;\nrange Types = Sites union { shipType }; \nint or[Tasks] = ...; \nint de[Tasks] = ...; \nint pickupTime[Tasks] = ...;\nint loadTime = ...;\nint flexibility = ...;\nint travelTime[Types,Types] = ...;\nint travelCost[Types,Types] = ...; \n\ndvar interval task[t in Tasks] in Horizon\n size travelTime[or[t],de[t]] + 2*loadTime;\ndvar interval ttask[k in Trucks,t in Tasks] optional in Horizon\n size travelTime[or[t],de[t]] + 2*loadTime;\ndvar interval load[Trucks,Tasks] optional in Horizon size loadTime;\ndvar interval ship[k in Trucks,t in Tasks] optional in Horizon\n size travelTime[or[t],de[t]];\ndvar interval unload[Trucks,Tasks] optional in Horizon size loadTime;\ndvar sequence truckSeq[k in Trucks]\n in append(all(t in Tasks)load[k,t],all(t in Tasks)ship[k,t],all(t in Tasks)unload[k,t])\n types append(all(t in Tasks)or[t],all(t in Tasks)shipType,all(t in Tasks)de[t]);\ndvar int emptyMilesCost[Trucks,Tasks];\ndvar int truckEmptyMilesCost[Trucks];\n\nminimize sum(k in Trucks) truckEmptyMilesCost[k];\n\nconstraints {\n\n forall(t in Tasks) {\n startOf(task[t]) >= pickupTime[t] - flexibility;\n startOf(task[t]) <= pickupTime[t] + flexibility;\n }\n forall(k in Trucks,t in Tasks) {\n span(ttask[k,t],[load[k,t],ship[k,t],unload[k,t]]);\n startOf(ship[k,t]) == endOf(load[k,t]);\n startOf(unload[k,t]) == endOf(ship[k,t]);\n }\n forall(t in Tasks)\n alternative(task[t],all(k in Trucks) ttask[k,t]);\n\t\n forall(k in Trucks,t in Tasks)\n emptyMilesCost[k,t] == travelCost[de[t],typeOfNext(truckSeq[k],ttask[k,t],de[t],de[t])];\n\t\n forall(k in Trucks)\n truckEmptyMilesCost[k] == sum(t in Tasks) emptyMilesCost[k,t];\n\t\n forall(k in Trucks)\n noOverlap(truckSeq[k],travelTime);\n\n}\n\\end{lstlisting}\n\\end{varwidth}\n\\end{lrbox}\n\n\\begin{figure}[!t]\n\\makebox[\\textwidth][c]{%\n\\fbox{\\begin{minipage}{1.13\\textwidth}\n\t\\usebox{\\modelbox}\n\\end{minipage}}\n}\n\\caption{Formulation for Scheduling Freight Operations on an ATHN.}\n\\label{fig:formulation}\n\\end{figure}\n\nThe model is depicted in Figure \\ref{fig:formulation} using OPL\nsyntax \\cite{VanHentenryck1999-OplOptimizationProgramming}. The data of the model is given in lines 1--12. It consists of\na number of ranges (lines 1--5), information about the tasks (lines\n6--8) that include their origins, destinations, and pickup times, the\ntime to load\/unload a trailer (line 9), the flexibility around the\npickup times (line 10), and the matrices of travel times and travel\ncosts. These matrices are defined between the sites but also\ninclude a dummy location {\\tt shipType} for reasons that will become\nclear shortly.\n\nThe main decision variables are the interval variables {\\tt task[t]}\nthat specify the start and end times of task {\\tt t} when processed\nby the autonomous network, and the optional interval variables {\\tt\n\tttask[k,t]} that are present if task {\\tt t} is transported by\ntruck {\\tt k}. These optional variables consist of three subtasks that\nare captured by the interval variables {\\tt load[k,t]} for loading,\n{\\tt ship[k,t]} for transportation, and {\\tt unload[k,t]} for\nunloading. The other key decision variables are the sequence variables\n{\\tt truckSeq[k]} associated with every truck: these variables\nrepresent the sequence of tasks performed by every truck. 
They\ncontain the loading, shipping, and unloading interval variables\nassociated with the trucks, and their types. The type of a loading\ninterval variable is the origin of the task, the type of an unloading\ninterval variable is the destination of the task, and the type of the\nshipping interval variable is the specific type {\\tt shipType} that is\nused to represent the fact that there is no transition cost and\ntransition time between the load and shipping subtasks, and the\nshipping and unloading subtasks. The model also contains two\nauxiliary decision variables to capture the empty mile cost between a\ntask and its successor, and the empty mile cost of the truck\nsequence.\n\nThe objective function (line 28) minimizes the total cost of empty\nmiles. The constraints in lines 32--34 specify the potential start\ntimes of the tasks, and are defined in terms of the pickup times and\nthe flexibility parameter. The {\\sc span} constraints (line 37) link\nthe task variables and their subtasks, while the constraints in lines\n38--39 link the subtasks together. The {\\sc alternative} constraints\non line 42 specify that each task is processed by a single truck. The\nempty mile costs between a task and its subsequent task (if it\nexists) are computed by the constraints in line 45: they use the {\\sc\n\ttypeOfNext} expression on the sequence variables. The total empty\nmile cost for a truck is computed in line 48. The {\\sc noOverlap}\nconstraints in line 51 impose the disjunctive constraints between the\ntasks and the transition times. 
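To make the objective concrete, the following is a small standalone sketch (in Python, not the paper's OPL model): it evaluates the empty-relocation objective of Section~2, $\sum c_{d(t),o(t')}$ over consecutive tasks of each truck, for a fixed assignment of tasks to trucks. The locations A, B, C, the tasks, and all costs are invented for illustration.

```python
# Sketch only: computes the empty-mile objective for a *given* schedule,
# i.e., the quantity the CP model minimizes. All data below is made up.

# relocation cost between locations (c[d(t), o(t')] in the paper)
cost = {
    ("A", "B"): 120, ("B", "A"): 120,
    ("B", "C"): 60,  ("C", "B"): 60,
    ("A", "C"): 150, ("C", "A"): 150,
}
for loc in "ABC":
    cost[(loc, loc)] = 0  # c_ii = 0, as assumed in the paper

# tasks: (origin, destination); pickup times omitted for brevity
tasks = {"t1": ("A", "B"), "t2": ("B", "C"), "t3": ("A", "C")}

def relocation_cost(schedule):
    """Sum cost[d(t), o(t')] over consecutive tasks t, t' of each truck."""
    total = 0
    for ordered_tasks in schedule.values():
        for t, t_next in zip(ordered_tasks, ordered_tasks[1:]):
            total += cost[(tasks[t][1], tasks[t_next][0])]
    return total

# chaining t1 -> t2 needs no relocation (t1 ends at B, t2 starts at B)
print(relocation_cost({"k1": ["t1", "t2"], "k2": ["t3"]}))  # 0
# chaining t1 -> t3 forces an empty move B -> A
print(relocation_cost({"k1": ["t1", "t3"], "k2": ["t2"]}))  # 120
```

Chaining tasks whose destination matches the next origin is exactly what the model rewards: the second schedule pays for the empty move from B to A, while the first pays nothing. The CP model searches over all assignments and sequences (subject to the time windows and `noOverlap` constraints) for the one minimizing this quantity.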
\n\n\n\\section{Case Study}\n\\label{sec:casestudy}\n\nTo quantify the impact of autonomous trucking on a real transportation network, a case study is presented for the dedicated transportation business of Ryder System, Inc., commonly referred to as \\emph{Ryder}.\nRyder is one of the largest transportation and logistics companies in North America, and provides fleet management, supply chain, and dedicated transportation services to over 50,000 customers.\nIts dedicated business, \\emph{Ryder Dedicated Transportation Solutions}, offers supply-chain solutions in which Ryder provides both drivers and trucks, and handles all other aspects of managing the fleet.\nRyder's order data is used to design an ATHN, and to create a detailed plan for how it would operate.\nThis allows for a realistic evaluation of the benefits of autonomous trucking.\n\n\n\\subsection{Data Description}\n\\label{sec:inputdata}\n\nRyder prepared a representative dataset for its dedicated transportation business in the Southeast of the US, reducing the scope to orders that were strong candidates for automation.\nThe dataset consists of trips that start in the first week of October 2019, and stay completely within the following states: AL, FL, GA, MS, NC, SC, and TN.\nIt contains 11,264 rows, which corresponds to 2,090 orders, formatted as in Table~\\ref{tab:order_data}.\nEvery order has a unique \\emph{OrderNumber}, and every row corresponds to a stop for a particular order.\nStops have a unique identifier \\emph{StopNumber}, and the \\emph{Stop} column indicates the sequence within the order.\nThe columns \\emph{StopArrivalDate} and \\emph{StopDepartureDate} indicate the scheduled arrival and departure times, and \\emph{City} and \\emph{ZipCode} identify the location of the stop.\nThe \\emph{Status} column gives a code for the status of the vehicle on arrival, and the \\emph{Event} column indicates what happens at the 
stop.\n\n\\begin{adjustbox}{center,float={table}[!t]}\n\t\\centering\n\t\\scriptsize\n\t\\begin{threeparttable}\n\t\t\\caption{An Example of the Order Data.}\n\t\t\\label{tab:order_data}%\n\t\t\\begin{tabular}{rrrrrrrrr}\n\t\t\t\\multicolumn{1}{l}{StopNum} & \\multicolumn{1}{l}{OrderNum} & \\multicolumn{1}{l}{StopArrivalDate} & \\multicolumn{1}{l}{StopDepartureDate} & \\multicolumn{1}{l}{Stop} & \\multicolumn{1}{l}{City} & ZipCode & \\multicolumn{1}{l}{Status} & Event \\\\\n\t\t\t\\toprule\n\t\t\t68315760 & 7366366 & 2-10-2019 09:01 & 2-10-2019 09:02 & 1 & Atlanta & 30303 & LD & HPL \\\\\n\t\t\t68315761 & 7366366 & 2-10-2019 16:29 & 2-10-2019 18:33 & 2 & Tennessee & 37774 & LD & LUL \\\\\n\t\t\t68315762 & 7366366 & 3-10-2019 11:00 & 3-10-2019 11:30 & 3 & Atlanta & 30303 & MT & DMT \\\\\n\t\t\n\t\t\n\t\t\t\\dots & \\dots & \\dots & \\dots & \\dots & \\dots & \\dots & \\dots & \\dots\\\\\n\t\t\t46798427 & 5207334 & 7-10-2019 02:35 & 7-10-2019 02:50 & 1 & Alpharetta & 30009 & NaN & LLD \\\\\n\t\t\t46798428 & 5207334 & 7-10-2019 08:10 & 7-10-2019 08:49 & 2 & Macon & 31201 & LD & LUL \\\\\n\t\t\t46798429 & 5207334 & 7-10-2019 15:16 & 7-10-2019 15:45 & 3 & Alpharetta & 30009 & LD & LUL \\\\\n\t\t\\end{tabular}%\n\t\\end{threeparttable}\n\\end{adjustbox}%\n\nThe example data in Table~\\ref{tab:order_data} displays two orders.\nThe first order is a trip from Atlanta to Tennessee and back.\nBased on the status code, the truck arrived in Tennessee loaded (LD) and returned to Atlanta empty (MT).\nThe event codes show that a preloaded trailer was hooked in Atlanta (HPL), followed by a live unload (LUL) in Tennessee, after which the truck dropped the empty trailer (DMT) in Atlanta.\nThe exact codes are not important for the purpose of this paper.\nWhat is important is the ability to derive the parts of the trip when the truck is moving freight, and the parts when the truck is driving empty.\nIf a vehicle returns to the starting location after only making deliveries, it is 
assumed that the return trip is empty, and the data is corrected if needed.\n\nRoad system data was obtained from OpenStreetMap \\cite{OpenStreetMap2020}, and route distance and mileage were calculated with the GraphHopper library.\nThe provided orders are long-haul trips, with an average trip length of 431 miles.\nMost of this distance is driven on highways: 65\\% of the distance is driven on interstates and US highways, and this number goes up further to 87\\% if state highways are included.\nSignificant highway usage is typical for long-haul transportation, and indicates that a significant part of each trip can potentially be automated.\n\n\n\\subsection{ATHN Design}\n\nThe design of ATHNs requires deciding the locations of the transfer hubs, which are the gateways to the autonomous parts of the network.\nA natural choice is to locate the hubs in areas where many trucks currently enter or exit the highway system.\nHistorical order data is used to identify these common highway access points.\nFor a given order, the truck is routed through the existing road network, and the highway segments, and their access points, can easily be identified.\nThe transfer hubs are then placed in areas with many access points.\nThe case study considers two different sets of hubs: a \\emph{small network} with 17 transfer hubs in the areas where Ryder trucks most frequently access the highway system, and a \\emph{large network} that includes 13 additional hubs in locations with fewer highway access points.\nThe small and the large network are visualized in Figure~\\ref{fig:designsmall} and Figure~\\ref{fig:designlarge}, respectively.\nThe exact hub locations are masked, but the figures are accurate within a 50-mile range.\nThe large network extends further northeast into North Carolina, and further south into Florida.\nIt also makes the network denser in the center of the region.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\subfloat[Small Network (17 
hubs).\\label{fig:designsmall}]{%\n\t\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth, trim=28cm 3cm 29cm 15cm, clip]{images\/obfuscated_design_small.png}\n\t}\n\t\\hfill\n\t\\subfloat[Large Network (30 hubs).\\label{fig:designlarge}]{%\n\t\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth, trim=28cm 3cm 29cm 15cm, clip]{images\/obfuscated_design_large.png}\n\t}\n\t\\caption{ATHN Network Designs for the Southeast.}\n\t\\label{fig:designs}\n\\end{figure}\n\n\n\\subsection{Order Selection}\n\nThe case study focuses on scheduling the 494 most \\emph{challenging orders}.\nThese orders consist of a single delivery, followed by an empty return trip.\nBecause they require empty travel for 50\\% of the trip, these challenging orders are an excellent target for cost savings.\nThey also make up 24\\% of the dataset, and account for 53\\% of the empty mileage.\n\nFor a given ATHN and given arc costs $c_{ij}$ for all $(i,j) \\in A$, it is first determined which of these orders may benefit from the autonomous network, and which are better served by human-driven trucks.\nFor example, a direct trip with a human-driven truck may be preferred for trips that are short, or for trips between locations that are far from any hub.\nTwo options are compared for every order: The first option is to serve the order with a conventional truck, which necessitates an empty return leg.\nThe second option uses the autonomous network, which amounts to driving to the nearest hub with a human-driven truck (first mile), shipping the load over the autonomous network to the hub closest to the destination, and then serving the last mile with another human-driven truck.\nNo empty returns are added in this case, as every truck is immediately available for the next task.\nThe two options are compared in terms of arc costs, and the cheapest one is selected.\nDifferent costs are used for autonomous and non-autonomous arcs, as will be explained in the next section.\nThe CP model only 
considers the orders that may benefit from the ATHN, while the other orders are scheduled separately.\n\n\n\\subsection{Parameters and Settings}\n\nThe following scenario is defined as the \\emph{base case} for the upcoming experiments.\nThe base case uses the small network, presented in Figure~\\ref{fig:designsmall}.\nFor conventional trucks, the cost $c_{ij}$ for driving arc $(i,j)\\in A$ is equal to the road distance.\nFor autonomous trucks, this cost is reduced by the percentage $\\alpha$.\nThe value of $\\alpha$ is a parameter, and the case study uses values ranging from $\\alpha=25\\%$ to $\\alpha=40\\%$, with $\\alpha = 25\\%$ for the base case.\nThis results in a conservative estimate of the benefits of autonomous trucking, which is predicted to be 29\\% to 45\\% cheaper per mile \\cite{EngholmEtAl2020-CostAnalysisDriverless}.\n\nThe time for loading or unloading is estimated at $S = 30$ minutes, and the appointment time flexibility is set to one hour ($\\Delta=60$).\nThe number of autonomous trucks is set to $\\lvert K_A \\rvert=50$.\nThe CP model is used to schedule the orders for each independent part of the network: the autonomous operations, and the first\/last-mile operations at each of the hubs.\nEach model is solved with the CPLEX CP Optimizer version 12.8 \\cite{LaborieEtAl2018-IbmIlogCp}.\n\n\n\\subsection{Base Case Results}\n\nOut of the 494 challenging orders, 437 (88\\%) are found to potentially benefit from the autonomous network, and the CP model is used to schedule these orders.\nScheduling the autonomous part of the ATHN is the most challenging: the model has close to 110,000 decision variables and more than 110,000 constraints.\nThe model is given an hour of CPU time and returns the best found solution, which is visualized in Figure~\\ref{fig:basecase_schedule}.\nThis figure shows both the autonomous tasks and the relocation tasks for the first week of October.\n\n\\begin{figure}[p]\n\t\\centering\n\t\\includegraphics[trim=20 50 10 
40,clip,width=\\linewidth]{images\/small_25_gantt.pdf}\n \\caption{Autonomous Truck Schedule for the Base Case.}\n\t\\label{fig:basecase_schedule}\n\\end{figure}\n\n\\begin{figure}[p]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{images\/obfuscated_vehicle_20.png}\n\t\\caption{Single Autonomous Truck Route in the Base Case (blue is loaded, red is empty).}\n\t\\label{fig:singleroute}\n\\end{figure}\n\nFigure~\\ref{fig:basecase_schedule} shows that the transportation tasks are close together, and only a relatively small amount of relocation is necessary.\nIt is interesting to see that only a small number of autonomous trucks are driving during the weekend.\nThis is because the appointment times are still based on the current agreements with the customers, and having drivers work during the weekends is typically avoided.\nFor autonomous trucks, this would not be a problem, which again underlines that making full use of autonomous transportation requires adapting the business model and current practices.\nIn Section~\\ref{sec:flexibility}, the importance of time flexibility is considered in more detail.\n\nThe routes that are driven by the autonomous trucks consist of serving autonomous legs of different orders, with relocations in between.\nFigure~\\ref{fig:singleroute} shows a representative single truck route from the schedule.\nBlue arrows indicate that freight is being moved, and red arrows indicate that the truck is driving empty.\nThe truck starts on the west coast of Florida, where it picks up a load that has been delivered to the hub by a first\/last-mile truck driver.\nThe freight is then transported to the east coast, where it is unloaded so that a regular truck with driver can complete the last mile.\nThe autonomous truck immediately starts serving the autonomous leg of the next order, returning to the Tampa area.\nAfter that, it serves legs going north.\nThe first time a relocation is needed is when the truck makes a delivery near 
Valdosta, close to the Georgia and Florida border.\nNo freight is immediately available, and the vehicle drives empty to the next location to pick up a load there.\nThe routes are clearly complex, which emphasizes the power of optimization: It is unlikely that this solution can be found by manual planners, but it is possible with optimization techniques.\n\nCompared to the autonomous trucks, the total distance driven by regular trucks is relatively short.\nThe optimization model was used to schedule the first and last-mile operations at selected hubs, and it was found that the amount of work is often insufficient to keep drivers occupied throughout the week.\nTo prevent driver idle time, it may be beneficial to outsource these legs, or to consolidate them with other operations in the area.\nIn terms of mileage, the percentage of empty miles at the hubs is typically under 25\\%, which is used as an estimate for the first\/last-mile efficiency in the remainder.\n\nTable~\\ref{tab:basecase_costs} quantifies the impact of autonomous trucking on the operating costs for the 437 selected orders.\nThe \\emph{Mileage} column indicates the total miles driven for both the current network and for the ATHN.\nThe numbers are separated based on whether the distance was driven while loaded or empty, and percentages are shown in the \\emph{\\% of total} column.\nThe \\emph{Cost without autonomous trucks} converts the miles into dollars, using \\$2 per mile as an approximation for the cost of human-driven trucks.\nRecall that driving autonomously is assumed to be $\\alpha=25\\%$ cheaper, which is reflected by the \\emph{Cost adjustment} column.\nThe \\emph{Cost} column presents the cost when autonomous trucks are available, and is obtained by multiplying the cost without autonomous trucks by the cost adjustment factor.\n\n\\begin{adjustbox}{center,float={table}[!t]}\n\t\\centering\n\t\\footnotesize\n\t\\begin{threeparttable}\n\t\t\\caption{Cost Table for the Base Case (437 
orders).}\n\t\t\\label{tab:basecase_costs}%\n\t\t\\begin{tabular}{ccrrrrrrr}\n\t\t\t\\toprule\n\t\t\t&\t\t &\t\t &\t\t &\t\t\t & Cost without & & \\\\\n\t\t\t& & & \\quad Mileage & \\quad \\% of total & \\quad auton. trucks & \\quad Cost adj. & \\quad\\quad\\quad Cost \\\\\n\t\t\t\\midrule\n\t\t\t\\multirow{3}[0]{*}{Current network} & & \\multicolumn{1}{l}{Loaded} & 96,669 & 50\\% & \\$ 193,338 & 1.00 & \\$ 193,338 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Empty} & 96,698 & 50\\% & \\$ 193,396 & 1.00 & \\$ 193,396 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Total} & 193,367 & 100\\% & \\$ 386,734 & 1.00 & \\$ 386,734 \\\\\n\t\t\t\\midrule\n\t\t\t\\multirow{9}[0]{*}{\\parbox{2.5 cm}{Autonomous transfer\\\\hub network}} & \\multirow{3}[0]{*}{Autonomous} & \\multicolumn{1}{l}{Loaded} & 91,618 & 67\\% & \\$ 183,235 & 0.75 & \\$ 137,426 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Empty} & 44,217 & 33\\% & \\$ 88,433 & 0.75 & \\$ 66,325 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Total} & 135,834 & 100\\% & \\$ 271,668 & 0.75 & \\$ 203,751 \\\\\n\t\t\t\\cmidrule{2-8}\n\t\t\t& \\multirow{3}[0]{*}{First\/last mile} & \\multicolumn{1}{l}{Loaded} & 29,286 & 75\\% & \\$ 58,573 & 1.00 & \\$ 58,573 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Empty\\tnote{*}} & 9,762 & 25\\% & \\$ 19,524 & 1.00 & \\$ 19,524 \\\\\n\t\t\t& & \\multicolumn{1}{l}{Total} & 39,049 & 100\\% & \\$ 78,097 & 1.00 & \\$ 78,097 \\\\\n\t\t\t\\cmidrule{2-8}\n\t\t\t& Total & & 174,883 & & \\$ 349,766 & & \\$ 281,848 \\\\\n\t\t\t\\cmidrule{2-8}\n\t\t\t& Savings & & 18,484 & & \\$ 36,969 & & \\$ 104,886 \\\\\n\t\t\t& Savings (\\%) & & 10\\% & & 10\\% & & 27\\% \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}%\n\t\t\\begin{tablenotes}\n\t\t\t\\item[*] estimated\n\t\t\\end{tablenotes}\n\t\\end{threeparttable}\n\\end{adjustbox}%\n\nCompared to the current network, the ATHN allows for significant savings for the selected orders: Table~\\ref{tab:basecase_costs} shows that the total cost goes down by 27\\%.\nAt a cost of \\$2 per mile, this corresponds to 
\\$104,886 per week, or \\$5.5M per year.\nThe \\emph{Mileage} column shows that almost 80\\% of the mileage in the ATHN can be automated, which partly explains the large savings.\nInterestingly, the total mileage for the ATHN is actually \\emph{less} than the total mileage for the direct trips in the current network.\nIn the transfer hub network, there is no need to return empty after a delivery, and there is no need to limit working hours or to return to a domicile at the end of the day.\nAs a result, only 33\\% of the automated distance is driven empty, compared to 50\\% for the current system.\nThis means that even if autonomous trucks were as expensive as trucks with drivers, costs would still go down by 10\\% due to the additional flexibility that automation brings.\n\n\n\\subsection{Impact of the Size of the Network}\n\nA larger autonomous network results in shorter first\/last-mile trips, and may have a larger area of coverage.\nTo evaluate the impact of the size of the network, the calculations for the base case are repeated using the large network (Figure~\\ref{fig:designlarge}) with 30 hubs, instead of the small network (Figure~\\ref{fig:designsmall}) with 17 hubs.\nFor the large network, 468 of the 494 orders (95\\%) may benefit from the autonomous network, compared to only 88\\% for the base case.\nThis immediately implies that there is more potential for savings.\nIt also means a higher utilization of the autonomous trucks, as more legs are served by the same 50 vehicles.\n\nTable~\\ref{tab:large_25_costs} shows that the relative cost savings for the large network (29\\%) are similar to those for the small network (27\\%).\nThis means that the average benefit of automation is similar for both designs, \\emph{for the orders that are automated}.\nHowever, the large network allows more trips to benefit from automation, which is why the cost savings of \\$ 116,582 are 11\\% higher than the savings for the small 
network (\\$ 104,886).\nThe average benefit of automation is similar for the two designs due to two effects that cancel out.\nFirst, the same autonomous trucks have to serve more orders on the large network.\nThis increases the utilization of the vehicles, but also increases the percentage of empty miles from 33\\% to 35\\%.\nThe reason for this increase is that there is less time available to wait around at a hub for the next order, as the trucks are needed to perform other orders in the meantime.\nOn the other hand, the first and last-mile trips are shorter due to the additional hubs, which saves costs.\n\n\\begin{adjustbox}{center,float={table}[!t]}\n\t\\centering\n\t\\footnotesize\n\t\\begin{threeparttable}\n\t\t\\caption{Cost Table for the Large Network (468 orders).}\n\t\t\\label{tab:large_25_costs}%\n\t\t\\begin{tabular}{ccrrrrrr}\n\t \\toprule\n\t\t&\t\t &\t\t &\t\t &\t\t\t & Cost without & & \\\\\n\t\t& & & \\quad Mileage & \\quad \\% of total & \\quad auton. trucks & \\quad Cost adj. 
& \\quad\\quad\\quad Cost \\\\\n\t\t\\midrule\n\t\t\\multirow{3}[0]{*}{Current network} & & \\multicolumn{1}{l}{Loaded} & 101,213 & 50\\% & \\$ 202,425 & 1.00 & \\$ 202,425 \\\\\n\t\t& & \\multicolumn{1}{l}{Empty} & 96,698 & 50\\% & \\$ 193,396 & 1.00 & \\$ 193,396 \\\\\n\t\t& & \\multicolumn{1}{l}{Total} & 202,476 & 100\\% & \\$ 404,953 & 1.00 & \\$ 404,953 \\\\\n\t\t\\midrule\n\t\t\\multirow{8}[0]{*}{\\parbox{2.5 cm}{Autonomous transfer\\\\hub network}} & \\multirow{3}[0]{*}{Autonomous} & \\multicolumn{1}{l}{Loaded} & 97,326 & 65\\% & \\$ 194,653 & 0.75 & \\$ 145,990 \\\\\n\t\t& & \\multicolumn{1}{l}{Empty} & 53,247 & 35\\% & \\$ 106,493 & 0.75 & \\$ 79,870 \\\\\n\t\t& & \\multicolumn{1}{l}{Total} & 150,573 & 100\\% & \\$ 301,146 & 0.75 & \\$ 225,860 \\\\\n\t\t\\cmidrule{2-8}\n\t\t& \\multirow{3}[0]{*}{First\/last mile} & \\multicolumn{1}{l}{Loaded} & 23,442 & 75\\% & \\$ 46,883 & 1.00 & \\$ 46,883 \\\\\n\t\t& & \\multicolumn{1}{l}{Empty \\tnote{*}} & 7,814 & 25\\% & \\$ 15,628 & 1.00 & \\$ 15,628 \\\\\n\t\t& & \\multicolumn{1}{l}{Total} & 31,256 & 100\\% & \\$ 62,511 & 1.00 & \\$ 62,511 \\\\\n\t\t\\cmidrule{2-8}\n\t\t& Total & & 181,829 & & \\$ 363,657 & & \\$ 288,371 \\\\\n\t\t\\cmidrule{2-8}\n\t\t& Savings & & 20,648 & & \\$ 41,296 & & \\$ 116,582 \\\\\n\t\t& Savings (\\%) & & 10\\% & & 10\\% & & 29\\% \\\\\n\t\t\\bottomrule\n\t\t\\end{tabular}%\n\t\t\\begin{tablenotes}\n\t\t\t\\item[*] estimated\n\t\t\\end{tablenotes}\n\t\\end{threeparttable}\n\\end{adjustbox}%\n\n\n\\subsection{Impact of the Cost of Autonomous Trucking}\n\nFor the base case, it was assumed that autonomous trucks are $\\alpha=25\\%$ cheaper per mile than trucks with a driver.\nHowever, this number is still far from certain, and higher cost reductions have also been reported in the literature.\nTo investigate the impact of the cost of autonomous trucking, Table~\\ref{tab:overview_costs} presents results for $\\alpha$ ranging from $25\\%$ to $40\\%$, for both the small and the large 
network.\nThe column \emph{Autom.\n\torders} gives the number of orders that may benefit from automation and are therefore considered in the ATHN.\nThe relative cost savings (\emph{Rel.\n\tsavings}) give the cost reduction compared to serving these orders with conventional trucks.\nThe \emph{Cost savings} column gives the absolute cost savings in dollars.\nThe final column compares the absolute savings to the savings obtained for the baseline (small network, $\alpha=25\%$).\n\nTable~\ref{tab:overview_costs} shows that, as autonomous trucking gets cheaper, and as more hubs are added to the network, the savings compared to the current system go up.\nAdditionally, more orders start using the ATHN, which increases the absolute savings further.\nEven though the autonomous trucks only perform the transportation between the hubs, the relative cost savings for the complete system often exceed the mileage cost reduction for autonomous trucks ($\alpha$).\nThis again shows that the benefit of autonomous trucks is not only the lower cost per mile, but also the additional flexibility.\nCompared to the base case, cheaper autonomous transportation results in significantly larger savings.\nAs in the previous section, increasing the size of the network does not strongly impact the average cost benefit per order, but does increase the total number of orders that can be automated, which leads to larger total savings.\nIn the best case (large network, $\alpha=40\%$), the total benefit of the ATHN is \$ 161,762 per week for the challenging orders, which corresponds to \$8.4M savings per year.\n\n\begin{table}[!t]\n\t\centering\n\t\footnotesize\n\t\caption{Overview of Cost Savings under Different Assumptions.}\n\t\label{tab:overview_costs}%\n\t\begin{tabular}{crrrrr}\n\t\t\toprule\n\t\t& & & & & Additional savings \\\n\t\t\multicolumn{1}{l}{Network} & $\alpha$ & \quad Autom. orders & \quad Rel. savings & \quad Cost savings & \quad comp. 
to base case \\\n\t\t\midrule\n\t\t\multirow{4}[0]{*}{Small} & 25\% & 437 & 27\% & \$ 104,886 & +0\% \\\n\t\t& 30\% & 439 & 32\% & \$ 122,396 & +17\% \\\n\t\t& 35\% & 443 & 35\% & \$ 135,572 & +29\% \\\n\t\t& 40\% & 443 & 38\% & \$ 148,984 & +42\% \\\n\t\t\midrule\n\t\t\multirow{4}[0]{*}{Large} & 25\% & 468 & 29\% & \$ 116,582 & +11\% \\\n\t\t& 30\% & 469 & 33\% & \$ 131,798 & +26\% \\\n\t\t& 35\% & 472 & 37\% & \$ 149,586 & +43\% \\\n\t\t& 40\% & 472 & 40\% & \$ 161,762 & +54\% \\\n\t\t\bottomrule\n\t\end{tabular}%\n\end{table}%\n\n\subsection{Impact of Appointment Flexibility}\n\label{sec:flexibility}\n\nFor the base case, the appointment flexibility was assumed to be $\Delta = 60$ minutes.\nA deviation from a previously agreed appointment must be negotiated with the customer, but if there are significant benefits in terms of efficiency, this may be worth the effort.\nTo determine the impact of appointment flexibility, Table~\ref{tab:time_flexibility} presents results for the small network, in which the value of $\Delta$ is varied around the base case ($\Delta=60$).\nThe model is given four hours of CPU time for each setting.\nThe columns are similar to the previous table, and show the number of automated orders, the relative and absolute savings, and the additional savings compared to the base case.\nNote that the appointment flexibility does not affect the number of orders that may benefit from automation, which is 437 for all four experiments.\n\nTable~\ref{tab:time_flexibility} reveals that, if the appointment flexibility is already limited to one hour, limiting it further to 30 minutes to increase the service level is relatively inexpensive: the cost savings would only go down by 0.8\%.\nIncreasing the flexibility by 30 minutes, on the other hand, goes a long way.\nUsing $\Delta=90$ instead of $\Delta=60$ results in 5\% additional savings.\nThis indicates that the impact of appointment flexibility can 
be substantial.\nAlso note that the additional benefit is almost half of that obtained by extending the network (+11\%).\n\n\begin{table}[!t]\n\t\centering\n\t\footnotesize\n\t\caption{Overview of Cost Savings under Different Values of $\Delta$.}\n\t\label{tab:time_flexibility}%\n\t\begin{tabular}{crrrrr}\n\t\t\toprule\n\t\t& & & & & Additional savings \\\n\t\t\multicolumn{1}{l}{Network} & $\Delta$ & Autom. orders & Rel. savings & Cost savings & comp. to base case \\\n\t\t\midrule\n\t\t\multirow{4}[0]{*}{Small} & 30 & 437 & 29\% & \$ 110,887 & -0.8\% \\\n\t\t& 60 & 437 & 29\% & \$ 111,802 & 0.0\% \\\n\t\t& 90 & 437 & 30\% & \$ 117,354 & 5.0\% \\\n\t\t& 120 & 437 & 30\% & \$ 116,865 & 4.5\% \\\n\t\t\bottomrule\n\t\end{tabular}%\n\end{table}%\n\nIt is surprising to see that increasing $\Delta$ from 90 to 120 actually leads to a schedule that is less efficient, even though more flexibility is available.\nThis is due to the CP model not finding the optimal solution.\nFinding the best schedule is a very challenging task, and increasing the flexibility increases the search space, which makes this task even more challenging.\nFurther investigation is needed to determine the actual additional savings that can be realized for $\Delta=120$.\nThe results do suggest that no schedule could easily be found that was significantly better than the schedule for $\Delta=90$, which hints that the advantage of additional flexibility is leveling off after $\Delta=90$.\n\n\n\section{Conclusion}\n\label{sec:conclusion}\n\nAutonomous freight transportation is expected to completely transform the industry, and the technology is advancing rapidly, with different players developing and testing highly automated (L4) trucks.\nA crucial factor for the adoption of autonomous trucks is their return on investment, which is still uncertain.\nThis study contributes to the discussion by quantifying the benefits of the ATHN model, which is one of the 
most likely future scenarios.\nThe benefits are estimated based on a real transportation network, taking into account the detailed operations of the autonomous network.\n\nA CP model was presented to schedule the orders on an ATHN, and the model was used to conduct a case study on a real transportation network.\nIt was found that solving this large-scale optimization problem with CP is computationally feasible.\nFurthermore, an ATHN may lead to substantial cost savings.\nFor some of the most challenging orders in the Southeast (orders that make a single delivery and return empty), operational cost may be reduced by 27\% to 40\%, which could save an estimated \$5.5M to \$8.4M per year on these orders.\nThis shows that the cost savings estimated by \cite{RolandBerger2018-ShiftingGearAutomation} (22\%-40\%) may indeed be realized.\nThe savings are mainly attributed to a reduction in labor cost, but the increased flexibility of autonomous trucks also plays a significant role: Even if autonomous trucks had the same cost per mile as human-driven trucks, cost savings would still be possible.\n\nIt was also explored how different assumptions impact the ATHN.\nIncreasing the size of the autonomous network mainly increases the number of orders that can benefit from automation, while the average benefit per automated order remains similar.\nThe impact of the cost per mile for autonomous trucking was also studied.\nAs autonomous trucks become cheaper, it is cost-efficient to automate more orders, and existing trips become cheaper as well.\nIt was found that, due to the additional flexibility, the system benefit of automation often exceeds the benefit of the lower cost per mile.\nFinally, it was analyzed how appointment flexibility impacts the efficiency, and it was found that allowing deviations that are just 30 minutes larger can go a long way.\n\nThis paper quantified the impact of autonomous trucking on a real transportation network, and substantial benefits were 
found in terms of labor costs and flexibility.\nThese results strengthen the business case for autonomous trucking, and major opportunities may arise in the coming years.\nTo seize these opportunities, transport operators will have to update their business models, and use optimization technology to operate the more complex systems.\nDeveloping more detailed models and solution methods to support this transition is an interesting direction for future research.\n\n\n\subsection*{Acknowledgements}\n\nThis research was funded through a gift from Ryder. Special thanks to the Ryder team for their invaluable support, expertise, and insights.\n\n\clearpage\n\n\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\label{sec:1}\n\nElectromagnetic waves propagating in disordered media are progressively scrambled by refractive index fluctuations and,\nthanks to interference, result in mesoscopic phenomena, such as speckle correlations and weak\nlocalization~\cite{Akkermans2011, Sheng2010}. Polarization is an essential characteristic of electromagnetic waves that,\nconsidering the ubiquity of scattering processes in science, prompted the development of research in statistical\noptics~\cite{Goodman2015, Brosseau1998} and impacted many applications, from optical imaging in biological\ntissues~\cite{Tuchin2006} to material spectroscopy (e.g.\@\xspace, rough surfaces)~\cite{Maradudin2007}, and radiation transport\nin turbulent atmospheres~\cite{Andrews2005, Shirai2003}. 
Although the topic has experienced numerous developments and\noutcomes in the past decades, recent studies have revealed that much remains to be explored and understood about the\nrelation between the microscopic structure of scattering media and the polarization properties of the scattered field.\nIn particular, it was found that important information about the morphology of a disordered medium is contained in\nthe three-dimensional (3D) polarized speckles produced in the near-field above its surface~\cite{Apostol2003,\nCarminati2010, Parigi2016} and in the spontaneous emission properties of a light source in the bulk~\cite{Caze2010,Sapienza2011a}.\nSimilarly, the light scattered by random ensembles of large spheres was shown to exhibit unusual polarization features\ndue to the interplay between the various multipolar scatterer resonances~\cite{Schmidt2015}.\n\nThe fact that light transport is affected by the microscopic structural properties of disordered media is well known.\nStructural correlations, coming from the finite scatterer size or from the specific morphology of porous\nmaterials~\cite{Torquato2005, RojasOchoa2004, Garcia2007}, typically translate into an anisotropic phase function,\n$p(\cos \theta)$, which describes the angular response of a single scattering event with the scattering angle $\theta$. The\naverage cosine of the phase function, known as the anisotropic scattering factor, $g=\left\langle \cos \theta \right\rangle$ (with\n$-1 \leq g \leq 1$), then leads to the standard definition of the transport mean free path (the average distance after\nwhich the direction of light propagation is completely randomized) as $\ell^*=\ell\/(1-g)$, where $\ell$ is the\nscattering mean free path (the average distance between two scattering events). 
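These definitions lend themselves to a quick numerical check. The sketch below is an illustration, not code from this article: it assumes the Henyey-Greenstein phase function, a standard model of anisotropic single scattering, with an arbitrary anisotropy value and mean free path, and recovers $g=\left\langle \cos \theta \right\rangle$ and $\ell^*=\ell\/(1-g)$ by direct integration.

```python
import numpy as np

# Illustrative sketch (assumptions: Henyey-Greenstein phase function, g = 0.7,
# arbitrary units for the mean free path). Recovers g = <cos theta> by direct
# integration and evaluates the transport mean free path l* = l / (1 - g).
def hg_phase(mu, g):
    # normalized so that (1/2) * integral of p(mu) over [-1, 1] equals 1
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

g_model = 0.7
n = 200000
dmu = 2.0 / n
mu = -1.0 + (np.arange(n) + 0.5) * dmu      # midpoint grid for cos(theta)
p = hg_phase(mu, g_model)

norm = 0.5 * np.sum(p) * dmu                # normalization, should be 1
g_num = 0.5 * np.sum(mu * p) * dmu          # anisotropy factor <cos theta>

ell = 1.0                                   # scattering mean free path
ell_star = ell / (1.0 - g_num)              # transport mean free path
```

For $g=0.7$, the direction of propagation is randomized only after roughly $3.3\,\ell$, illustrating how forward single scattering stretches the transport mean free path.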
Single scattering anisotropy naturally\naffects how the polarization diffuses in disordered media, one of the most notable findings being that circularly\npolarized light propagates over longer distances than linearly polarized light in disordered media exhibiting\nforward single scattering ($g>0$) ---the so-called ``circular polarization memory effect''~\cite{MacKintosh1989a,\nXu2005a, Gorodnichev2007}.\n\nRecent observations in mesoscopic optics also motivate deeper investigations of polarized light transport in correlated\ndisordered media. Indeed, numerical simulations revealed that uncorrelated ensembles of point scatterers cannot exhibit\n3D Anderson localization due to the vector nature of light~\cite{Skipetrov2014, Bellando2014}. By contrast, it was found that the\ninterplay between short-range structural correlations and scatterer resonances could yield the opening of a 3D photonic\ngap in disordered systems~\cite{Edagawa2008, Liew2011} and promote localization phenomena at its\nedges~\cite{Imagawa2010}. To date, the respective roles of polarization and structural correlations in mesoscopic optical\nphenomena remain largely to be clarified.\n\nTheoretically describing the propagation of polarized light in disordered media exhibiting structural correlations is a\ndifficult task. A first approach consists in using the vector radiative transfer equation~\cite{Chandrasekhar1960,\nPapanicolaou1975, Mishchenko2006}, in which electromagnetic waves are described via the Stokes parameters and the\nscattering and absorption processes are related via energy conservation arguments. The various incident polarizations\n(linear, circular) and the single scattering anisotropy are explicitly implemented, thereby allowing for the\ninvestigation of a wide range of problems~\cite{Amic1997, Gorodnichev2014}. 
A second approach relies on a transfer\nmatrix formalism based on a scattering sequence picture, where each scattering event (possibly anisotropic) yields a partial\nredistribution of the light polarization along various directions~\\cite{Akkermans1988, Xu2005, Rojas-Ochoa2004a}. The\napproach is phenomenological, yet very intuitive, making it possible to gain important physical insight into mesoscopic\nphenomena such as coherent backscattering~\\cite{Akkermans1988}.\n\nThe most \\textit{ab-initio} approach to wave propagation and mesoscopic phenomena in disordered systems is the so-called\nmultiple scattering theory, which directly stems from Maxwell's equations and relies on perturbative expansions on the\nscattering potential~\\cite{Sheng2010, Akkermans2011}. The formalism is often used to investigate mesoscopic\nphenomena, such as short and long-range (field and intensity) correlations or coherent backscattering, in a large\nvariety of complex (linear or nonlinear) media, including disordered dielectrics and atomic clouds. Unfortunately, it\nalso rapidly gains in complexity when the vector nature of light is considered. In fact, multiple scattering theory for\npolarized light has so far been restricted to uncorrelated disordered media only~\\cite{Stephen1986, MacKintosh1988,\nOzrin1992, VanTiggelen1996, VanTiggelen1999, Muller2002, Vynck2014}.\n\nIn this article, we present a model based on multiple scattering theory that describes how the diffusion of polarized\nlight is affected by short-range structural correlations, thereby generalizing previous models limited to uncorrelated\ndisorder. We do not aim at developing a complete theory for polarization-related mesoscopic phenomena in correlated\ndisordered media but at showing that, by a series of well-controlled approximations, important steps towards this\nobjective can be made. 
Starting from the (exact) Dyson and the Bethe-Salpeter equations for the average field and the\nfield correlation function, we derive a radiative transfer equation for the polarization-resolved specific\nintensity in the limit of short-range structural correlations and weak scattering. To analyze the impact of short-range\nstructural correlations on the diffusion of polarization, we then apply a $P_1$ approximation and decompose the\npolarization-resolved energy density into ``polarization eigenmodes'', as was done previously for uncorrelated\ndisordered media~\\cite{Ozrin1992, Muller2002, Vynck2014}. An interesting outcome of this decomposition is the\nobservation that each polarization eigenmode is affected independently and differently by short-range structural\ncorrelations. More precisely, each mode is characterized by a specific transport mean free path, and thus a specific\nattenuation length (describing the depolarization process) for its intensity. The transport mean free path of each\neigenmode depends non-trivially on the anisotropy factor $g$, and differently from the $(1-g)^{-1}$ rescaling\nwell known for the diffusion of scalar waves.\n\nThe paper is organized as follows. The radiative transfer equation for polarized light is derived\n\\textit{ab-initio} in Sect.~\\ref{sec:2}. The diffusion limit and the eigenmode decomposition are applied in\nSect.~\\ref{sec:3}. In Sect.~\\ref{sec:4}, we discuss the model and the results deduced from it, paying special attention to \nthe consistency of the approximations that have been made. Our conclusions are given in Sect.~\\ref{sec:5}. 
\nTechnical details about the average Green's function, the range of validity of the short-range structural correlation approximation, \nand the particular case of uncorrelated disorder, are presented in Appendices~\ref{sec:A1}--\ref{sec:A3}, respectively.\n\n\section{Radiative transfer for polarized light}\label{sec:2}\n\n\subsection{Spatial field correlation}\n\nWe consider a disordered medium described by a real dielectric function of the form\n$\epsilon(\bm{r})=1+\delta\epsilon(\bm{r})$, where $\delta\epsilon(\bm{r})$ is the fluctuating part with the\nstatistical properties\n\begin{equation}\label{eq:disorder}\n \left\langle \delta\epsilon(\bm{r}) \right\rangle = 0,\n \qquad \left\langle \delta\epsilon(\bm{r}) \delta\epsilon(\bm{r}') \right\rangle = u f(\bm{r}-\bm{r}')\n\end{equation}\nwhere $\left\langle\ldots\right\rangle$ indicates ensemble averaging. The function\n$f(\bm{r}-\bm{r}')$ describes the structural correlation of the medium and $u$ is an amplitude whose expression will be\nderived below. We assume that the medium is statistically isotropic and invariant under translation. Considering a\nmonochromatic wave with free-space wavenumber $k_0=\omega\/c=2\pi\/\lambda$, $\omega$ being the frequency,\n$\lambda$ the wavelength and $c$ the speed of light in vacuum, the electric field $\textbf{E}$ satisfies the vector \npropagation equation\n\begin{equation}\n \nabla \times \nabla \times \bm{E}(\bm{r})-k_0^2 \epsilon(\bm{r}) \bm{E}(\bm{r})=i \mu_0 \omega \bm{j}(\bm{r}),\n\end{equation}\nwhere the current density $\bm{j}(\bm{r})$ describes a source distribution in the disordered medium. 
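A disorder realization with the statistics of Eq.~(\ref{eq:disorder}) can be generated numerically by filtering white noise in Fourier space. The sketch below is a 1D analogue with an assumed Gaussian correlation $f(x)=\exp(-x^2\/2a^2)$; the correlation length, box size and sample count are illustrative choices, not values from this article.

```python
import numpy as np

# Illustrative 1D sketch: build a fluctuating permittivity delta_eps with zero
# mean and short-range Gaussian correlation f(x) = exp(-x^2 / (2 a^2)) by
# spectral filtering of white noise. All parameters are assumptions.
rng = np.random.default_rng(42)
N, L, a = 2**20, 8000.0, 1.0           # samples, box size, correlation length
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

# Spectrum of the target correlation (Fourier transform of a Gaussian):
spectrum = a * np.sqrt(2.0 * np.pi) * np.exp(-(a * k) ** 2 / 2.0)

white = rng.standard_normal(N)
delta_eps = np.fft.ifft(np.fft.fft(white) * np.sqrt(spectrum / dx)).real

# Circular autocorrelation estimate; C[0] plays the role of the variance and
# C[r] / C[0] should follow f at lag r * dx.
C = np.fft.ifft(np.abs(np.fft.fft(delta_eps)) ** 2).real / N
ratio = C[int(round(a / dx))] / C[0]   # expected close to exp(-1/2)
```

With this normalization the variance $C[0]$ is close to 1, so the amplitude $u$ of Eq.~(\ref{eq:disorder}) can be set independently by rescaling $\delta\epsilon$.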
\nIntroducing the dyadic Green's function $G_{ik}$, the $i$th component of the electric field reads\n\begin{equation}\label{eq:Efield-green}\nE_i(\bm{r})=i \mu_0 \omega \int G_{ik}(\bm{r},\bm{r}') j_k(\bm{r}') d\bm{r}',\n\end{equation} \nwhere implicit summation of repeated indices is assumed.\nThe spatial correlation function of the electric field $\left\langle E_i(\bm{r}) E_j^\star(\bm{r}') \right\rangle$ obeys the Bethe-Salpeter equation\n\begin{multline}\label{eq:BS_field}\n \left\langle E_i(\bm{r}) E_j^\star(\bm{r}') \right\rangle = \left\langle E_i(\bm{r}) \right\rangle \left\langle E_j^\star (\bm{r}') \right\rangle\n\\\n + k_0^4 \int \left\langle G_{im}(\bm{r}-\bm{r}_1) \right\rangle \left\langle G_{jn}^\star(\bm{r}'-\bm{r}_1') \right\rangle\n\\\n \times \Gamma_{mnrs} (\bm{r}_1,\bm{r}_1',\bm{r}_2,\bm{r}_2') \left\langle E_r (\bm{r}_2) E_s^\star (\bm{r}_2') \right\rangle d\bm{r}_1 d\bm{r}_1' d\bm{r}_2 d\bm{r}_2'\n\end{multline}\nthat can be derived from diagrammatic calculations~\cite{Akkermans2011,Sheng2010}. In this expression\nthe superscript $\star$ denotes complex conjugation, and\n$\Gamma_{mnrs}$ is the four-point irreducible vertex that describes all possible scattering sequences\nbetween four points. In Eq.~(\ref{eq:BS_field}), the first term on the right-hand side corresponds to the ballistic \nintensity, which is attenuated due to scattering at the scale of the scattering mean free path $\ell$, and\nthe second term describes the multiple-scattering process. 
Note that at this level, Eq.~(\ref{eq:BS_field}) \nis an exact closed-form equation.\n\nIt is also interesting to remark that the field correlation function $\left\langle E_i(\bm{r}) E_j^\star(\bm{r}') \right\rangle$ is\none of the key quantities in statistical optics (where it is usually called the cross-spectral density matrix), since it\nencompasses the polarization and coherence properties of fluctuating fields in the frequency domain~\cite{Goodman2015, Brosseau1998}. \nThe study of light fluctuations in 3D multiple scattering media has stimulated a revisiting of the concepts of degree of polarization \nand coherence~\cite{Setala2002, Dennis2007, Refregier2014, Gil2014,Dogariu2015}, initially defined for 2D paraxial fields.\n\nTo proceed further, we assume weak disorder, such that the scattering mean free path $\ell$\n is much larger than the wavelength ($k_0\ell \gg 1$). In this regime, only the two classes of diagrams\nfor which the field and its complex conjugate follow the same trajectories (the so-called ladder and most-crossed\ndiagrams) contribute to the average intensity. The ladder diagrams are the root of radiative transport theory, which describes\nthe transport of intensity as an incoherent process. The most-crossed diagrams are responsible for weak localization and\ncoherent backscattering. 
In the ladder approximation and assuming independent scattering, the four-point irreducible\nvertex reduces to\n\\begin{multline}\n \\Gamma_{mnrs}(\\bm{r}_1,\\bm{r}_1',\\bm{r}_2,\\bm{r}_2')\n\\\\\n \\begin{split}\n & = \\left\\langle \\delta\\epsilon(\\bm{r}_1) \\delta\\epsilon(\\bm{r}_1') \\right\\rangle \\delta(\\bm{r}_1-\\bm{r}_2) \\delta(\\bm{r}_1'-\\bm{r}_2') \\delta_{mr} \\delta_{ns}\n \\\\\n & = u f(\\bm{r}_1-\\bm{r}_1') \\delta(\\bm{r}_1-\\bm{r}_2) \\delta(\\bm{r}_1'-\\bm{r}_2') \\delta_{mr} \\delta_{ns},\n \\end{split}\n\\end{multline}\nyielding\n\\begin{multline}\\label{eq:BS_field2}\n \\left\\langle E_i(\\bm{r}) E_j^\\star(\\bm{r}') \\right\\rangle = \\left\\langle E_i(\\bm{r}) \\right\\rangle \\left\\langle E_j^\\star (\\bm{r}') \\right\\rangle\n\\\\\n + uk_0^4 \\int \\left\\langle G_{im}(\\bm{r}-\\bm{r}_1) \\right\\rangle \\left\\langle G_{jn}^\\star(\\bm{r}'-\\bm{r}_1') \\right\\rangle\n\\\\\n \\times f(\\bm{r}_1-\\bm{r}_1') \\left\\langle E_m (\\bm{r}_1) E_n^\\star (\\bm{r}_1') \\right\\rangle d\\bm{r}_1 d\\bm{r}_1'.\n\\end{multline}\nWe consider the source to be a point electric dipole located at $\\bm{r}_0$, such that\n\\begin{equation}\n j_k(\\bm{r})=-i\\omega p_k \\delta(\\bm{r}-\\bm{r}_0),\n\\end{equation}\nwhere $p_k$ is the dipole moment along direction $k$. 
\nEquation~(\\ref{eq:Efield-green}) simplifies into $E_i(\\bm{r})= \\mu_0 \\omega^2 \\, G_{ik}(\\bm{r}-\\bm{r}_0) p_k$ and the\nBethe-Salpeter equation (\\ref{eq:BS_field2}) can be rewritten in terms of the dyadic Green's function in the form\n\\begin{multline}\\label{eq:BS_green}\n \\left\\langle G_{ik}(\\bm{r}-\\bm{r}_0) G_{jl}^\\star(\\bm{r}'-\\bm{r}_0) \\right\\rangle = \\left\\langle G_{ik}(\\bm{r}-\\bm{r}_0) \\right\\rangle \\left\\langle G_{jl}^\\star (\\bm{r}'-\\bm{r}_0) \\right\\rangle\n\\\\\n + uk_0^4 \\int \\left\\langle G_{im}(\\bm{r}-\\bm{r}_1) \\right\\rangle \\left\\langle G_{jn}^\\star(\\bm{r}'-\\bm{r}_1') \\right\\rangle f(\\bm{r}_1-\\bm{r}_1')\n\\\\\n \\times \\left\\langle G_{mk} (\\bm{r}_1-\\bm{r_0}) G_{nl}^\\star (\\bm{r}_1'-\\bm{r}_0) \\right\\rangle d\\bm{r}_1 d\\bm{r}_1'.\n\\end{multline}\nUsing the change of variables $\\bm{r}-\\bm{r}_0=\\bm{R}+\\bm{X}\/2$ and $\\bm{r}'-\\bm{r}_0=\\bm{R}-\\bm{X}\/2$, and \ntransforming Eq.~(\\ref{eq:BS_green}) into reciprocal space, with $\\bm{K}$ and $\\bm{q}$ the reciprocal variables of\n$\\bm{R}$ and $\\bm{X}$ respectively, we finally obtain\n\\begin{widetext}\n \\begin{multline}\\label{eq:BS_green_fourier}\n \\left\\langle G_{ik}\\left(\\bm{q}+\\frac{\\bm{K}}{2}\\right) G_{jl}^\\star\\left(\\bm{q}-\\frac{\\bm{K}}{2}\\right) \\right\\rangle\n = \\left\\langle G_{ik}\\left(\\bm{q}+\\frac{\\bm{K}}{2}\\right) \\right\\rangle \\left\\langle G_{jl}^\\star\\left(\\bm{q}-\\frac{\\bm{K}}{2}\\right) \\right\\rangle\n + uk_0^4 \\left\\langle G_{im}\\left(\\bm{q}+\\frac{\\bm{K}}{2}\\right) \\right\\rangle \\left\\langle G_{jn}^\\star\\left(\\bm{q}-\\frac{\\bm{K}}{2}\\right) \\right\\rangle\n \\\\\n \\times \\int f(\\bm{q}-\\bm{q}') \\left\\langle G_{mk} \\left(\\bm{q}'+\\frac{\\bm{K}}{2}\\right) G_{nl}^\\star\n \\left(\\bm{q}'-\\frac{\\bm{K}}{2}\\right) \\right\\rangle \\frac{d\\bm{q}'}{8\\pi^3}.\n \\end{multline}\n\\end{widetext}\nA direct resolution of Eq.~(\\ref{eq:BS_green_fourier}) is possible for 
$f(\bm{q}-\bm{q}')=1$,\nand this approach was used in Ref.~\cite{Vynck2014} to study the coherence and polarization properties of light in\nan uncorrelated disordered medium. In the case of a medium with structural correlations, a direct resolution is out of reach\nand we need to follow a different strategy.\n\n\subsection{From field correlation to radiative transfer}\n\nIn this section we derive a radiative transfer equation for polarized light.\nWe proceed by evaluating the average Green's tensor $\left\langle \bm{G} \right\rangle$, which obeys the Dyson\nequation~\cite{Akkermans2011}. In its most general form, it reads~\cite{Tai1993}\n\begin{equation}\label{eq:averageG}\n \left\langle \bm{G} (\bm{q}) \right\rangle = \left[ k_0^2 \bm{I} - q^2 \bm{P}(\hat{\bm{q}}) - \bm{\Sigma}(\bm{q}) \right]^{-1},\n\end{equation}\nwith $\bm{I}$ the unit tensor, $\bm{P}(\hat{\bm{q}})=\bm{I}-\hat{\bm{q}} \otimes \hat{\bm{q}}$ the transverse\nprojection operator, $\hat{\bm{q}}=\bm{q}\/q$ and $q=|\bm{q}|$. $\bm{\Sigma}(\bm{q})$ is the self-energy, which contains\nthe sum over all multiple scattering events that cannot be factorized in the averaging process. As shown in\nAppendix~\ref{sec:A1}, for arbitrary structural correlations, $\bm{\Sigma}(\bm{q})$ is non-scalar. \nThe problem can be simplified by assuming short-range structural correlations,\nin which case $\bm{\Sigma}(\bm{q})=\Sigma(\bm{q})\bm{I}$. The average Green's tensor can then be written as\n\begin{equation}\label{eq:Green-mulet}\n \left\langle \bm{G} (\bm{q}) \right\rangle = \left\langle G(\bm{q}) \right\rangle \left( \bm{I} - \frac{\bm{q} \otimes \bm{q}}{k_0^2-\Sigma(\bm{q})} \right),\n\end{equation}\nwith $\left\langle G(\bm{q}) \right\rangle =[k_0^2-q^2-\Sigma(\bm{q})]^{-1}$ the scalar Green's function. 
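The factorization in Eq.~(\ref{eq:Green-mulet}) follows from a Sherman-Morrison-type inversion of Eq.~(\ref{eq:averageG}) when the self-energy is scalar. A quick numerical sanity check, with arbitrary illustrative values for $k_0$, $\Sigma$ and $\bm{q}$, is:

```python
import numpy as np

# Sanity check of the factorized average Green's tensor for a scalar
# self-energy. k0, Sigma and q are arbitrary illustrative values.
k0 = 2.0 * np.pi
Sigma = 0.3 + 0.05j                       # scalar self-energy (assumed value)
q = np.array([1.0, -2.0, 0.5]) * k0 / 3.0
qn = np.linalg.norm(q)
qhat = q / qn

I = np.eye(3)
P = I - np.outer(qhat, qhat)              # transverse projection operator

# Direct inversion of the Dyson equation with Sigma(q) = Sigma * I
G_direct = np.linalg.inv(k0**2 * I - qn**2 * P - Sigma * I)

# Factorized form: <G>(q) * (I - (q x q) / (k0^2 - Sigma))
G_scalar = 1.0 / (k0**2 - qn**2 - Sigma)
G_factor = G_scalar * (I - np.outer(q, q) / (k0**2 - Sigma))

assert np.allclose(G_direct, G_factor)
```

The check passes for any $\bm{q}$ and any scalar $\Sigma$, confirming that the two expressions are algebraically identical.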
In a dilute medium,\nthe scattering events are assumed to take place at distances large compared to the wavelength (near-field interactions between \nscatterers can be neglected). In this case, the average Green's tensor $\left\langle \bm{G} (\bm{q}) \right\rangle$ can be reduced to its transverse\ncomponent~\cite{Arnoldus2003}, yielding\n\begin{equation}\n \left\langle \bm{G} (\bm{q}) \right\rangle \simeq \left\langle G(\bm{q}) \right\rangle \bm{P}(\hat{\bm{q}}).\n\end{equation}\nAfter some simple algebra, the first term on the right-hand side of Eq.~(\ref{eq:BS_green_fourier}) can be written as\n\begin{multline}\n \left\langle G_{ik}\left(\bm{q}+\frac{\bm{K}}{2}\right) \right\rangle \left\langle G_{jl}^\star\left(\bm{q}-\frac{\bm{K}}{2}\right) \right\rangle\n\\\n = M_{ik} M'_{jl}\frac{\left\langle G(\bm{q}+\bm{K}\/2) \right\rangle - \left\langle G^\star(\bm{q}-\bm{K}\/2) \right\rangle}\n {2 \bm{q} \cdot \bm{K} + \Sigma(\bm{q}+\bm{K}\/2) - \Sigma^\star(\bm{q}-\bm{K}\/2)},\n\end{multline}\nwhere we have defined the polarization factors $M_{ik}=\delta_{ik} - (q_i + K_i\/2) (q_k + K_k\/2)\/|\bm{q}+\bm{K}\/2|^2$ and\n$M'_{jl}=\delta_{jl} - (q_j - K_j\/2) (q_l - K_l\/2)\/|\bm{q}-\bm{K}\/2|^2$. In a dilute medium, we can assume that $|\bm{K}| \ll\n|\bm{q}|$. This means that there are two different space scales in the correlation function of Green's tensor: a short scale associated with $\bm{q}$ and corresponding to the dependence on direction of the specific intensity that we will introduce in Eq.~(\ref{eq:specific_intensity}), and a large scale associated with $\bm{K}$ and corresponding to the dependence of the specific intensity on position. 
This leads to\n\begin{multline}\label{eq:Green1}\n \left\langle G_{ik}\left(\bm{q}+\frac{\bm{K}}{2}\right) \right\rangle \left\langle G_{jl}^\star\left(\bm{q}-\frac{\bm{K}}{2}\right) \right\rangle\n\\\n = (\delta_{ik} - \hat{q}_i \hat{q}_k) (\delta_{jl} - \hat{q}_j \hat{q}_l)\n \frac{\left\langle G(\bm{q}) \right\rangle - \left\langle G^\star(\bm{q}) \right\rangle}{2 \bm{q} \cdot \bm{K} + 2i \operatorname{Im}[\Sigma(\bm{q})]}.\n\end{multline}\nThe self-energy $\Sigma(\bm{q})$ renormalizes the propagation constant in the medium by defining a complex effective\npermittivity $\epsilon_\text{eff}=1-\Sigma(\bm{q})\/k_0^2$. The real part of $\Sigma$ yields a change in the phase velocity,\nand the imaginary part an attenuation of the field amplitude due to scattering. Hence, we can write\n\begin{equation}\label{eq:avGreen}\n \left\langle G(\bm{q}) \right\rangle = \frac{1}{k_0^2 \operatorname{Re}[\epsilon_\text{eff}] - q^2 + i k_0^2 \operatorname{Im}[\epsilon_\text{eff}]}.\n\end{equation}\nSince $\operatorname{Im}[\epsilon_\text{eff}] \ll \operatorname{Re}[\epsilon_\text{eff}]$ in a dilute medium, we can rewrite\nEq.~(\ref{eq:avGreen}) using the identity\n\begin{equation}\n \lim_{\varepsilon \rightarrow 0} \frac{1}{x-x_0+i \varepsilon} = \operatorname{PV}\left[ \frac{1}{x-x_0} \right] - i \pi \delta (x-x_0),\n\end{equation}\nwhere $\operatorname{PV}$ stands for principal value.\nDefining $q_e=k_0\sqrt{\operatorname{Re}[\epsilon_\text{eff}]}$ as an effective wavenumber, Eq.~(\ref{eq:Green1}) becomes\n\begin{multline}\label{eq:Green2}\n \left\langle G_{ik}\left(\bm{q}+\frac{\bm{K}}{2}\right) \right\rangle \left\langle G_{jl}^\star\left(\bm{q}-\frac{\bm{K}}{2}\right) \right\rangle\n\\\n = (\delta_{ik} - \hat{q}_i \hat{q}_k) (\delta_{jl} - \hat{q}_j \hat{q}_l) \frac{\pi \delta(q_e^2-q^2)}{ i \bm{q} \cdot \bm{K} - 
\\operatorname{Im}[\\Sigma(\\bm{q})]}.\n\\end{multline}\nIn order to derive a radiative transfer equation, we then introduce the quantity $L_{ijkl}$ by the relation\n\\begin{multline}\\label{eq:specific_intensity}\n \\left\\langle G_{ik}\\left(\\bm{q}+\\frac{\\bm{K}}{2}\\right) G_{jl}^\\star\\left(\\bm{q}-\\frac{\\bm{K}}{2}\\right) \\right\\rangle\n\\\\\n = \\frac{4\\pi^2}{q_e} \\delta(q_e^2 - q^2) L_{ijkl}(\\bm{K},q_e \\hat{\\bm{q}}).\n\\end{multline}\nHere, we assume that the correlation function of Green's tensor propagates on shell, {\\it i.e.} with a wavevector $q=q_e$. The impact of the on-shell approximation, which is the key step to solve the Bethe-Salpeter equation in the presence of structural correlations, will be discussed in Sec.~\\ref{sec:4}. From Eqs.~(\\ref{eq:Green2}) and (\\ref{eq:specific_intensity}), we can rewrite the Bethe-Salpeter equation (\\ref{eq:BS_green_fourier}) in the form\n\\begin{multline}\n \\frac{4\\pi^2}{q_e} \\delta(q_e^2 - q^2) L_{ijkl}(\\bm{K},q_e \\hat{\\bm{q}})\n\\\\\n = \\frac{\\pi \\delta(q_e^2 - q^2)}{ i \\bm{q} \\cdot \\bm{K} - \\operatorname{Im}[\\Sigma(\\bm{q})]}\n \\left[\\vphantom{\\int} (\\delta_{ik} - \\hat{q}_i \\hat{q}_k) (\\delta_{jl} - \\hat{q}_j \\hat{q}_l)\\right.\n\\\\\n + u k_0^4 (\\delta_{im} - \\hat{q}_i \\hat{q}_m) (\\delta_{jn} - \\hat{q}_j \\hat{q}_n)\n\\\\\n \\left. 
\times \frac{4\pi^2}{q_e} \int f(\bm{q}-\bm{q}') \delta(q_e^2 - q'^2) L_{mnkl}(\bm{K},q_e \hat{\bm{q}}')\n \frac{d\bm{q}'}{8\pi^3} \right].\n\end{multline}\nIntegrating both sides of the equation over $q$, performing the integral on the right-hand side over $q'$, and using the relation $\int_0^\infty f(\bm{r}) \delta(r^2-r_0^2) r^2 dr=r_0f(\bm{r}=r_0 \hat{\bm{r}})\/2$, we obtain\n\begin{multline}\label{eq:preRTE}\n L_{ijkl}(\bm{K},q_e \hat{\bm{q}})\n\\\n =\frac{q_e}{4\pi}\frac{1}{ i q_e \hat{\bm{q}} \cdot \bm{K} - \operatorname{Im}[\Sigma(q_e \hat{\bm{q}})]}\n \left[\vphantom{\int} (\delta_{ik} - \hat{q}_i \hat{q}_k) (\delta_{jl} - \hat{q}_j \hat{q}_l) \right.\n\\\n + \frac{u k_0^4}{4\pi} (\delta_{im} - \hat{q}_i \hat{q}_m) (\delta_{jn} - \hat{q}_j \hat{q}_n)\n\\\n \left.\times \int f(q_e (\hat{\bm{q}}-\hat{\bm{q}}')) L_{mnkl}(\bm{K}, q_e \hat{\bm{q}}') d\hat{\bm{q}}' \right].\n\end{multline}\nThe quantity $L_{ijkl}(\bm{K},q_e \hat{\bm{q}})$ is proportional to the specific intensity introduced in radiative transfer theory~\cite{Chandrasekhar1960}, and has the meaning of a local and directional radiative flux. In fact, Eq.~(\ref{eq:preRTE}) can be cast in the form of a radiative transfer equation, as we will now show.\n\nSince the disordered medium is statistically isotropic and translationally invariant, the correlation function $f$ only depends on $|\hat{\bm{q}}-\hat{\bm{q}}'|$, or equivalently on $\hat{\bm{q}} \cdot \hat{\bm{q}}'$. 
It is directly related to the classical phase function $p(\\hat{\\bm{q}}\\cdot\\hat{\\bm{q}}')$ of radiative transfer theory as\n\\begin{equation}\\label{eq:correlation-phase}\n f(q_e |\\hat{\\bm{q}}-\\hat{\\bm{q}}'|) = A \\, p(\\hat{\\bm{q}}\\cdot\\hat{\\bm{q}}'),\n\\end{equation}\nwhere $A$ is a constant whose value is determined by energy conservation, and $\\int p(\\hat{\\bm{q}}\\cdot\\hat{\\bm{q}}') d\\hat{\\bm{q}} = 4\\pi$.\nTo order $(k_0\\ell)^{-1}$ and for short-range structural correlations, one has $\\operatorname{Im}[\\Sigma(q_e \\hat{\\bm{q}})] =\n-q_e\/\\ell$ and $u=6\\pi\/(k_0^4\\ell)$ (these results are derived in Appendix~\\ref{sec:A1}). This allows us to rewrite Eq.~(\\ref{eq:preRTE})\nin its final form\n\\begin{widetext}\n \\begin{equation}\\label{eq:RTE}\n \\left[ i \\hat{\\bm{q}} \\cdot \\bm{K} + \\frac{1}{\\ell} \\right] L_{ijkl}(\\bm{K},\\hat{\\bm{q}})\n = \\frac{1}{4\\pi} (\\delta_{ik} - \\hat{q}_i \\hat{q}_k) (\\delta_{jl} - \\hat{q}_j \\hat{q}_l)\n + \\frac{3 A}{8 \\pi \\ell} (\\delta_{im} - \\hat{q}_i \\hat{q}_m) (\\delta_{jn} - \\hat{q}_j \\hat{q}_n)\n \\int p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') L_{mnkl}(\\bm{K}, \\hat{\\bm{q}}') d\\hat{\\bm{q}}',\n \\end{equation}\n\\end{widetext}\nwhere an implicit summation over $m$ and $n$ is assumed.\nThis expression takes the form of a radiative transfer equation (RTE) for the polarization-resolved specific intensity.\nIt differs from the standard vector radiative transfer equation~\\cite{Chandrasekhar1960} in the sense that it is not written\nin terms of the Stokes vector, but in terms of a fourth-order tensor representing the specific intensity for polarized light, relating two\ninput and two output polarization components. Nevertheless, the various terms in Eq.~(\\ref{eq:RTE}) have a very clear\nphysical meaning. 
The first and second terms on the left-hand side respectively describe the total variation of specific\nintensity along direction $\\hat{\\bm{q}}$ and the extinction of the ballistic light due to scattering (i.e.\\@\\xspace, Beer-Lambert's\nlaw). The first and second terms on the right-hand side describe the increase of specific intensity along direction\n$\\hat{\\bm{q}}$ due to the presence of a source, and to the light originally propagating along direction $\\hat{\\bm{q}}'$\nand being scattered along $\\hat{\\bm{q}}$, respectively.\n\nConservation of energy requires the scattering losses to be compensated by the gain due to scattering after\nintegration over all angles. The energy conservation relation has to be written for the intensity, i.e.\\@\\xspace by setting $i=j$ and\nsumming over polarization components in Eq.~(\\ref{eq:RTE}), in the form\n\\begin{multline}\n \\frac{1}{\\ell}\\sum_i \\int L_{iikl}(\\bm{K},\\hat{\\bm{q}}) d\\hat{\\bm{q}} = \\frac{3 A}{8\\pi\\ell}\n\\\\\n \\times \\sum_{i,m} \\int (\\delta_{im} - \\hat{q}_i \\hat{q}_m)^2 p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') L_{mmkl}(\\bm{K}, \\hat{\\bm{q}}') d\\hat{\\bm{q}}' d\\hat{\\bm{q}}.\n\\end{multline}\nThis leads to the following relation for the coefficient $A$:\n\\begin{equation}\\label{eq:energy_conservation_constant}\n \\frac{3}{8\\pi} \\sum_m \\int (\\delta_{im} - \\hat{q}_i \\hat{q}_m)^2 p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') d\\hat{\\bm{q}} = \\frac{1}{A},\n\\end{equation}\nwhich fixes the value of $A$ for a given phase function. At this stage, we have obtained a transport equation for polarized light\n[Eq.~(\\ref{eq:RTE})] that takes the form of a RTE. 
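As a consistency check (our own, not part of the original derivation): for an isotropic phase function ($p=1$, i.e., $g=0$), the angular integral in the conservation relation equals $8\pi/3$ for every component $i$, so that $A=1$ for uncorrelated disorder. A symbolic sketch using sympy:

```python
import sympy as sp

# Spherical parametrization of the unit vector q_hat
mu, phi = sp.symbols('mu phi', real=True)
q = sp.Matrix([sp.sqrt(1 - mu**2) * sp.cos(phi),
               sp.sqrt(1 - mu**2) * sp.sin(phi),
               mu])

# (3/8pi) * sum_m int (delta_im - q_i q_m)^2 dOmega, for each component i,
# with an isotropic phase function p = 1 (i.e. g = 0)
vals = []
for i in range(3):
    integrand = sum(((1 if i == m else 0) - q[i] * q[m])**2 for m in range(3))
    integral = sp.integrate(sp.integrate(integrand, (phi, 0, 2 * sp.pi)), (mu, -1, 1))
    vals.append(sp.simplify(sp.Rational(3, 8) / sp.pi * integral))

assert vals == [1, 1, 1]  # hence 1/A = 1, i.e. A = 1, for uncorrelated disorder
```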
This equation stems directly from the Dyson and Bethe-Salpeter equations, \nfulfills energy conservation, and is valid for dilute media and short-range correlated disorder.\n\n\\section{Diffusion of polarization}\\label{sec:3}\n\n\\subsection{$P_1$ approximation}\n\nIn short-range correlated media, the phase function $p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}')$ is expected to be\nquasi-isotropic. It can therefore be expanded into a Legendre series, which, to order $\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}'$,\nreads\n\\begin{equation}\\label{eq:phasefunction}\n p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') = 1 + 3g (\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}'),\n\\end{equation}\nwhere $g$ is the scattering anisotropy factor, defined as\n\\begin{equation}\n g =\\frac{1}{4\\pi} \\int p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') \\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}' d\\hat{\\bm{q}},\n\\end{equation}\nwhich satisfies\n\\begin{equation}\n g \\hat{\\bm{q}}' =\\frac{1}{4\\pi} \\int p(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') \\hat{\\bm{q}} d\\hat{\\bm{q}}.\n \\end{equation}\nInserting Eq.~(\\ref{eq:phasefunction}) into Eq.~(\\ref{eq:RTE}), the RTE can be rewritten as\n\\begin{multline}\\label{eq:newRTE}\n \\left[ i \\hat{\\bm{q}} \\cdot \\bm{K} + \\frac{1}{\\ell} \\right] L_{ijkl}(\\bm{K},\\hat{\\bm{q}})\n = \\frac{1}{4\\pi} (\\delta_{ik} - \\hat{q}_i \\hat{q}_k) (\\delta_{jl} - \\hat{q}_j \\hat{q}_l)\n\\\\\n + \\frac{3 A}{2 \\ell} (\\delta_{im} - \\hat{q}_i \\hat{q}_m) (\\delta_{jn} - \\hat{q}_j \\hat{q}_n)\n \\\\\n \\times \\left[\n L^{(0)}_{mnkl}(\\bm{K})+ \\frac{3g}{4 \\pi } \\bm{j}_{mnkl}(\\bm{K}) \\cdot \\hat{\\bm{q}}\\right],\n\\end{multline}\nwhere $L^{(0)}_{ijkl}$ and $\\bm{j}_{ijkl}$ are the (polarization-resolved)\nirradiance and radiative flux vector, respectively, defined as\n\\begin{align}\n L^{(0)}_{ijkl}(\\bm{K}) & = \\frac{1}{4\\pi} \\int L_{ijkl}(\\bm{K},\\hat{\\bm{q}}) d\\hat{\\bm{q}},\n\\\\\n \\bm{j}_{ijkl}(\\bm{K}) & = \\int \\hat{\\bm{q}} L_{ijkl}(\\bm{K},\\hat{\\bm{q}}) 
d\\hat{\\bm{q}}.\n\\end{align}\n\nTo gain insight into the effect of short-range correlations on the propagation of polarized light, it is convenient to\ninvestigate the diffusion limit, which is reached after propagation over distances much larger than the \nscattering mean free path $\\ell$. In this limit, the specific intensity becomes quasi-isotropic. \nExpanding $L_{ijkl}$ into Legendre polynomials $P_n$ to first order in $\\hat{\\bm{q}}$, we have \n\\begin{equation}\\label{eq:P1_approx}\n L_{ijkl}(\\bm{K},\\hat{\\bm{q}}) = L^{(0)}_{ijkl}(\\bm{K}) + \\frac{3}{4\\pi} \\bm{j}_{ijkl}(\\bm{K}) \\cdot \\hat{\\bm{q}},\n\\end{equation}\nwhich is the so-called $P_1$ approximation.\nInserting Eq.~(\\ref{eq:P1_approx}) into Eq.~(\\ref{eq:newRTE}) and calculating the zeroth and first moments of the\nresulting equation (which amounts to performing the integrations $\\int (\\cdot) \\, d\\hat{\\bm{q}}$ and $\\int (\\cdot) \\, \\hat{\\bm{q}} \\, d\\hat{\\bm{q}}$,\nrespectively), we eventually arrive at a pair of equations relating $L^{(0)}_{ijkl}$ and $\\bm{j}_{ijkl}$:\n\\begin{widetext}\n \\begin{align}\n i \\bm{K} \\cdot \\bm{j}_{ijkl}(\\bm{K}) + \\frac{4\\pi}{\\ell} L^{(0)}_{ijkl}(\\bm{K})\n & = \\frac{2}{3} S_{ijkl} + \\frac{4\\pi}{\\ell} A S_{ijmn} L^{(0)}_{mnkl}(\\bm{K}), \\label{eq:zeroth_moment}\n \\\\\n -\\frac{4\\pi}{3} K^2 \\ell L^{(0)}_{ijkl}(\\bm{K}) + i \\bm{K} \\cdot \\bm{j}_{ijkl}(\\bm{K})\n & = i g \\frac{9A}{8\\pi} \\int (\\delta_{im} - \\hat{q}_i \\hat{q}_m) (\\delta_{jn} - \\hat{q}_j \\hat{q}_n)\n \\left( \\bm{j}_{mnkl}(\\bm{K}) \\cdot \\hat{\\bm{q}} \\right) \\left( \\bm{K} \\cdot \\hat{\\bm{q}} \\right)\n d\\hat{\\bm{q}}.\\label{eq:first_moment}\n \\end{align}\n\\end{widetext}\nHere, we have defined\n\\begin{equation}\n S_{ijkl}=\\frac{3}{8\\pi} \\int (\\delta_{ik} - \\hat{q}_i \\hat{q}_k) (\\delta_{jl} - \\hat{q}_j \\hat{q}_l) d\\hat{\\bm{q}},\n\\end{equation}\nand used the relations $\\int (\\delta_{im} - \\hat{q}_i \\hat{q}_m) (\\delta_{jn} - \\hat{q}_j \\hat{q}_n) 
\\hat{\\bm{q}}\nd\\hat{\\bm{q}}=0$, $\\int \\hat{q}_i \\hat{q}_j d\\hat{\\bm{q}}= (4\\pi \/ 3) \\, \\delta_{ij}$ and $\\int \\hat{q}_i \\hat{q}_j\n\\hat{q}_k d\\hat{\\bm{q}}= 0$. The additional complexity of the polarization mixing due to structural correlations can be\napprehended from Eq.~(\\ref{eq:first_moment}), where the relation between $L^{(0)}_{ijkl}$ and $\\bm{j}_{ijkl}$ in terms\nof input and output polarization components becomes particularly intricate as soon as $g \\neq 0$. Much deeper insight\ninto the diffusion of polarized light can be gained via an eigenmode decomposition, as shown below.\n\n\\subsection{Polarization eigenmodes}\n\nAnalytical expressions for all terms in the $L_{ijkl}^{(0)}(\\bm{K})$ and $\\bm{j}_{ijkl}(\\bm{K})$ tensors can be\nobtained by solving Eqs.~(\\ref{eq:zeroth_moment}) and (\\ref{eq:first_moment}), which we have done imposing $\\bm{K}$ to\nbe along one of the main spatial directions, without loss of generality, and using the software Mathematica~\\cite{Mathematica}. The\nobtained expressions at this stage are long and complicated, containing in particular high-order terms in powers of $K$\nand $g$ (that are not physical and will be neglected below). We now introduce a polarization-resolved energy density\n$U_{ijkl}=(6\\pi \/ c) L_{ijkl}^{(0)}$ and decompose it in terms of ``polarization eigenmodes''\nas in Refs.~\\cite{Ozrin1992, Muller2002, Vynck2014}:\n\\begin{equation}\\label{eq:eigenmode_decomposition}\n U_{ijkl}(\\bm{K}) = \\sum_p U^{(p)}(\\bm{K}) \\left|ij\\right\\rangle_p \\left\\langle kl \\right|_p.\n\\end{equation}\nThe eigenvalues $U^{(p)}$ provide the characteristic length and time scales of the diffusion of each eigenmode and the projectors\n$\\left|ij\\right\\rangle_p \\left\\langle kl \\right|_p$, which will be denoted by ``polarization eigenchannels'', relate input polarization pairs\n$(k,l)$ to output polarization pairs $(i,j)$. 
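The angular moment identities quoted above (second moment equal to $(4\pi/3)\,\delta_{ij}$, vanishing odd moments) can be verified with a simple product quadrature on the unit sphere; a sketch of our own, not from the paper:

```python
import numpy as np

# Product quadrature on the unit sphere: Gauss-Legendre nodes in cos(theta),
# uniform grid in phi (exact for the low-order polynomial integrands below).
n_mu, n_phi = 8, 16
mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
phi = 2 * np.pi * np.arange(n_phi) / n_phi
w = np.outer(w_mu, np.full(n_phi, 2 * np.pi / n_phi)).ravel()  # weights, sum to 4*pi
st = np.sqrt(1 - mu**2)
q = np.stack([np.outer(st, np.cos(phi)).ravel(),
              np.outer(st, np.sin(phi)).ravel(),
              np.outer(mu, np.ones(n_phi)).ravel()], axis=1)   # unit vectors, shape (N, 3)

# int qhat_i qhat_j dOmega = (4*pi/3) delta_ij
M2 = np.einsum('n,ni,nj->ij', w, q, q)
assert np.allclose(M2, (4 * np.pi / 3) * np.eye(3), atol=1e-12)

# int qhat_i qhat_j qhat_k dOmega = 0 (odd moment)
M3 = np.einsum('n,ni,nj,nk->ijk', w, q, q, q)
assert np.allclose(M3, 0, atol=1e-12)

# int (delta_im - qhat_i qhat_m)(delta_jn - qhat_j qhat_n) qhat dOmega = 0
P = np.eye(3)[None, :, :] - q[:, :, None] * q[:, None, :]      # transverse projector
M5 = np.einsum('n,nab,ncd,ne->abcde', w, P, P, q)
assert np.allclose(M5, 0, atol=1e-12)
```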
The tensor $U_{ijkl}$ is represented as a $9 \\times 9$ matrix (9 pairs of\npolarization components in input and output) and is diagonalized using Mathematica, leading again to full analytical expressions.\n\nAt this stage, the obtained expressions still depend on the coefficient $A$, originally defined in\nEq.~(\\ref{eq:correlation-phase}) and used to ensure energy conservation in the RTE, Eq.~(\\ref{eq:RTE}). To predict how\n$A$ depends on structural correlations, we rely on the particular case of the Henyey-Greenstein (HG) phase function~\\cite{Henyey1941}\n\\begin{equation}\\label{eq:HG-phasefunction}\n p_\\text{HG}(\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}') = \\frac{1-g^2}{\\left[1+g^2-2g (\\hat{\\bm{q}} \\cdot \\hat{\\bm{q}}')\\right]^{3\/2}}.\n\\end{equation}\nThe HG phase function is very convenient since it provides a closed-form expression with $g$ as a single parameter, and\napproximates the phase functions of a wide range of disordered media (e.g.\\@\\xspace, interstellar dust clouds, biological\ntissues). The energy conservation equation, Eq.~(\\ref{eq:energy_conservation_constant}), can be solved analytically in\nthis case, yielding the surprisingly simple relation\n\\begin{equation}\\label{eq:A_HG}\n A_\\text{HG} = \\frac{1}{1+g^2\/2}.\n\\end{equation}\nNote that the modification in energy conservation due to structural correlations appears at order $g^2$.\n\nWe can finally insert Eq.~(\\ref{eq:A_HG}) into the eigenvalues and eigenvectors found from\nEq.~(\\ref{eq:eigenmode_decomposition}) and develop analytical expressions valid to orders $K^2$ (diffusion\napproximation) and $g$ (weakly correlated disorder). 
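The HG moments behind this result can be checked numerically. With the normalization $\int p\,d\hat{\bm{q}} = 4\pi$ (which requires a $1-g^2$ numerator in the HG form), one has $\langle\mu\rangle = g$ and $\langle\mu^2\rangle = (1+2g^2)/3$. Evaluating the conservation integral for a polarization component transverse to $\hat{\bm{q}}'$ (our reading of the calculation, using the azimuthal average $\langle\hat{q}_i^2\rangle = (1-\mu^2)/2$) then gives $1/A = 1+g^2/2$. A numerical sketch:

```python
import numpy as np
from scipy.integrate import quad

g = 0.3  # example anisotropy value

def p_hg(mu):  # Henyey-Greenstein phase function, normalized to 4*pi over the sphere
    return (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5

norm = quad(lambda m: 0.5 * p_hg(m), -1, 1)[0]           # (1/4pi) int p dOmega = 1
first = quad(lambda m: 0.5 * m * p_hg(m), -1, 1)[0]      # asymmetry parameter = g
second = quad(lambda m: 0.5 * m**2 * p_hg(m), -1, 1)[0]  # second moment = (1+2g^2)/3

assert abs(norm - 1) < 1e-7
assert abs(first - g) < 1e-7
assert abs(second - (1 + 2 * g**2) / 3) < 1e-7

# Transverse-component evaluation of the conservation integral (our assumption):
# (3/4) int (1 + mu^2)/2 * p_HG dmu = 1/A = 1 + g^2/2
inv_A = quad(lambda m: 0.75 * (1 + m**2) / 2 * p_hg(m), -1, 1)[0]
assert abs(inv_A - (1 + g**2 / 2)) < 1e-7
```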
The eigenvectors take the expressions already obtained for\nuncorrelated disorder~\\cite{Ozrin1992, Muller2002, Vynck2014}\n\\begin{align}\\label{eq:eigenvectors}\n \\left|kl\\right\\rangle_{1} & = \\frac{1}{\\sqrt{3}} \\delta_{kl}, \\nonumber \\\\\n \\left|kl\\right\\rangle_{2,3,4} & = \\frac{1}{\\sqrt{2}} (\\delta_{ka} \\delta_{lb} - \\delta_{kb} \\delta_{la}), \\nonumber \\\\\n \\left|kl\\right\\rangle_{5} & = \\frac{1}{\\sqrt{2}} (\\delta_{ka} \\delta_{la} - \\delta_{kb} \\delta_{lb}), \\nonumber \\\\\n \\left|kl\\right\\rangle_{6,7,8} & = \\frac{1}{\\sqrt{2}} (\\delta_{ka} \\delta_{lb} + \\delta_{kb} \\delta_{la}), \\nonumber \\\\\n \\left|kl\\right\\rangle_{9} & = \\frac{1}{\\sqrt{6}} (\\delta_{ka} \\delta_{la} + \\delta_{kb} \\delta_{lb}- 2 \\delta_{kc} \\delta_{lc}).\n\\end{align}\nThe first eigenchannel is the scalar mode: it uniformly relates the pairs of identical polarization components ($xx$, $yy$ and\n$zz$), which describe the classical intensity, to one another. The other eigenchannels either redistribute\nnonuniformly the energy between pairs of identical polarization components ($p=5$ and $9$), thereby participating as well in the\npropagation of the classical intensity, or are concerned with pairs of orthogonal polarizations ($xy$, $xz$, etc.), which\ncan contribute, for instance, in magneto-optical media in which light polarization can rotate~\\cite{MacKintosh1988,\nVanTiggelen1996, VanTiggelen1999}.\n\nThe eigenvalues take the form of the solution of the diffusion equation in reciprocal space\n\\begin{equation}\\label{eq:diffusionsolution}\n U^{(p)}(\\bm{K}) = \\frac{1}{\\mathcal{D}^{(p)} K^2 + \\mu_a^{(p)} c},\n\\end{equation}\nwhere $\\mathcal{D}^{(p)}$ and $\\mu_a^{(p)}$ are the diffusion constant and attenuation coefficient of the $p$th\npolarization mode. 
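In real space, this reciprocal-space solution corresponds to the attenuated diffusion Green's function $e^{-\kappa R}/(4\pi\mathcal{D}R)$ with $\kappa = \sqrt{\mu_a c/\mathcal{D}}$. This Fourier pair can be checked numerically with the radial transform $U(K) = (4\pi/K)\int_0^\infty U(R)\sin(KR)\,R\,dR$; a sketch with arbitrary parameter values of our choosing:

```python
import numpy as np
from scipy.integrate import quad

# Check that U(R) = exp(-kappa*R)/(4*pi*D*R), kappa = sqrt(mu_a*c/D), is the
# 3D inverse Fourier transform of U(K) = 1/(D*K^2 + mu_a*c).
D, mu_a_c, K = 0.7, 0.4, 1.3          # arbitrary positive parameters (units dropped)
kappa = np.sqrt(mu_a_c / D)

U_R = lambda R: np.exp(-kappa * R) / (4 * np.pi * D * R)
lhs, _ = quad(lambda R: U_R(R) * np.sin(K * R) * R, 0, np.inf)
lhs *= 4 * np.pi / K                  # radial 3D Fourier transform of U(R)
rhs = 1 / (D * K**2 + mu_a_c)
assert abs(lhs - rhs) < 1e-8
```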
The eigenmode energy densities in real space therefore read\n\\begin{equation}\n U^{(p)}(\\bm{R}) = \\frac{1}{4\\pi \\mathcal{D}^{(p)} R} \\exp\\left[- \\frac{R}{\\ell_\\text{eff}^{(p)}} \\right],\n\\end{equation}\nwith $R=|\\bm{R}|$ and $\\ell_\\text{eff}^{(p)}=\\sqrt{\\mathcal{D}^{(p)}\/(\\mu_a^{(p)} c)}$, which is an effective attenuation\nlength, describing the depolarization process.\n\nTable~\\ref{tab:diffcorr} summarizes the diffusion constants, attenuation coefficients and effective attenuation lengths\nof the different polarization eigenchannels. As in the case of uncorrelated\ndisorder previously studied in Ref.~\\cite{Vynck2014}, all modes exhibit different diffusion constants, thereby spreading at different speeds, and\nonly the scalar mode persists at large distances ($\\ell_\\text{eff}^{(1)}=\\infty$), all other modes being attenuated on a\nlength scale on the order of a mean free path.\n\n\\begin{table*}\n \\caption{Summary of the diffusion constants $\\mathcal{D}^{(p)}$, attenuation coefficients $\\mu_a^{(p)}$ and effective\n attenuation lengths $\\ell_\\text{eff}^{(p)}$ characterizing the diffusion properties of the energy density through the\n individual polarization eigenchannels and the depolarization process. Note that all quantities are given to order\n $g$. 
Quite remarkably, structural correlations, via the scattering asymmetry factor $g$, are found to affect\n differently and independently each mode.}\\label{tab:diffcorr}\n \\begin{ruledtabular}\n \\def\\arraystretch{1.5}\n \\begin{tabular}{c|c|c|c|c|c|c}\n $p$ & 1 & 2 & 3,4 & 5,6 & 7,8 & 9 \\\\ \\hline\n $\\mathcal{D}^{(p)}$ & $ \\left(1-g\\right)^{-1} \\frac{c\\ell}{3}$ & $\\left(\\frac{1}{2}-\\frac{9}{20}g\\right)^{-1} \\frac{c\\ell}{3}$ & $\\left(\\frac{1}{2}-\\frac{3}{20}g\\right)^{-1} \\frac{c\\ell}{3}$ & $\\left(\\frac{7}{10}-\\frac{69}{100}g\\right)^{-1} \\frac{c\\ell}{3}$ & $\\left(\\frac{7}{10}-\\frac{39}{100}g\\right)^{-1} \\frac{c\\ell}{3}$ & $\\left(\\frac{7}{10}-\\frac{29}{100}g\\right)^{-1} \\frac{c\\ell}{3}$ \\\\ \\hline\n \n $\\mu_a^{(p)}$ & $0$ & $\\frac{1}{\\ell}$ & $\\frac{1}{\\ell}$ & $\\frac{3}{7\\ell}$ & $\\frac{3}{7\\ell}$ & $\\frac{3}{7\\ell}$ \\\\ \\hline\n $\\ell_\\text{eff}^{(p)}$ & $\\infty$ & $\\left( 1-\\frac{9}{20}g \\right)^{-1} \\sqrt{\\frac{2}{3}}\\ell$ & $\\left( 1-\\frac{3}{20}g \\right)^{-1} \\sqrt{\\frac{2}{3}}\\ell$ & $\\left( 1-\\frac{69}{140}g \\right)^{-1} \\frac{\\sqrt{10}}{3}\\ell$ & $\\left( 1-\\frac{39}{140}g \\right)^{-1} \\frac{\\sqrt{10}}{3}\\ell$ & $\\left( 1-\\frac{29}{140}g \\right)^{-1} \\frac{\\sqrt{10}}{3}\\ell$\n \n \\end{tabular}\n \\end{ruledtabular}\n\\end{table*}\n\nMore interestingly, our study brings new information on the influence of short-range structural correlations on transport and depolarization.\nLet us first remark that we properly recover the diffusion constant of the scalar mode, $\\mathcal{D}=c\\ell^*\/3$ with $\\ell^*=\\ell\/(1-g)$ the\ntransport mean free path, which is a good indication of the validity of the model. The second and\nmore interesting finding in this study is the fact that the propagation characteristics of each polarization mode are\naffected independently and differently by short-range structural correlations. 
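Internal consistency of the tabulated values can be verified: expanding $\ell_\text{eff}^{(p)}=\sqrt{\mathcal{D}^{(p)}/(\mu_a^{(p)}c)}$ to first order in $g$ reproduces the tabulated attenuation lengths for the five attenuated columns. A sympy sketch of our own:

```python
import sympy as sp

g, ell, c = sp.symbols('g ell c', positive=True)
s23, s10 = sp.sqrt(sp.Rational(2, 3)), sp.sqrt(10)

# (diffusion constant, attenuation coefficient, tabulated ell_eff) per column
rows = [
    (1 / (sp.Rational(1, 2) - sp.Rational(9, 20) * g) * c * ell / 3, 1 / ell,
     s23 * ell / (1 - sp.Rational(9, 20) * g)),
    (1 / (sp.Rational(1, 2) - sp.Rational(3, 20) * g) * c * ell / 3, 1 / ell,
     s23 * ell / (1 - sp.Rational(3, 20) * g)),
    (1 / (sp.Rational(7, 10) - sp.Rational(69, 100) * g) * c * ell / 3, 3 / (7 * ell),
     s10 * ell / 3 / (1 - sp.Rational(69, 140) * g)),
    (1 / (sp.Rational(7, 10) - sp.Rational(39, 100) * g) * c * ell / 3, 3 / (7 * ell),
     s10 * ell / 3 / (1 - sp.Rational(39, 140) * g)),
    (1 / (sp.Rational(7, 10) - sp.Rational(29, 100) * g) * c * ell / 3, 3 / (7 * ell),
     s10 * ell / 3 / (1 - sp.Rational(29, 140) * g)),
]

# ell_eff = sqrt(D / (mu_a * c)) must match the tabulated value to first order in g
checks = [sp.simplify(sp.series(sp.sqrt(D / (mua * c)) - leff, g, 0, 2).removeO()) == 0
          for D, mua, leff in rows]
assert all(checks)
```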
One may have anticipated that the\ndiffusion constant of each polarization mode would be simply rescaled by the $(1-g)^{-1}$ factor relating scattering and\ntransport mean free paths. Instead, we show that a transport mean free path can be defined for each polarization mode,\n$\\ell^{*(p)}=3\\mathcal{D}^{(p)}\/c$, whose dependence on the anisotropy factor $g$ can change significantly from mode to mode, as shown in Fig.~\\ref{fig1}(a). This, in\nturn, implies that the spatial attenuation of each polarization mode (due to depolarization) is affected differently by\nstructural correlations, as shown in Fig.~\\ref{fig1}(b).\n\n\\begin{figure}\n \\centering\n\t\\includegraphics[width=0.5\\textwidth]{fig1}\n \\caption{(Color online only) Evolution of (a) the\n transport coefficient, $1\/\\ell^{*(p)}$, and (b) the attenuation coefficient, $1\/\\ell_\\text{eff}^{(p)}$, of\n polarization eigenmodes with short-range structural correlations. The coefficients are given in units of $1\/\\ell$ and\n shown on a restricted range of $g$ since the model is expected to remain valid to first order near $g=0$. The scalar\n mode ($p=1$, cyan solid curve) has a transport coefficient scaling as $(1-g)$ and an attenuation coefficient equal to\n zero (not shown). The polarization modes $p=2$--$4$ (gray dashed curves) and $p=5$--$9$ (orange dot-dashed curves)\n exhibit different slopes, indicating that their transport and attenuation properties are affected differently by short-range\n structural correlations.}\n \\label{fig1}\n\\end{figure}\n\n\\section{Discussion}\\label{sec:4}\n\nPrevious studies based on the multiple scattering theory for the propagation of polarized light relied on the\n\\textit{direct} resolution of the Bethe-Salpeter equation, Eq.~(\\ref{eq:BS_green_fourier}), using an expansion of the average Green's\ntensor and its correlation function to order $K^2$ (diffusion approximation). This strategy is however\npossible only for uncorrelated disorder, for which $f(\\bm{q}-\\bm{q}')=1$. 
Here, we proposed an alternative strategy based on\nthe derivation of a transport equation taking the form of an RTE, which allowed us to reach the same final goal (eigenmode decomposition) including\nshort-range structural correlations. This strategy, however, involves an additional approximation that has some implications. To\nclarify this point, let us consider our predictions in the limit of an uncorrelated disorder. Setting $g=0$ in the\npredictions of Table~\\ref{tab:diffcorr} yields the values reported in Table~\\ref{tab:diffuncorr}. An alternative\nstraightforward derivation from Eqs.~(\\ref{eq:zeroth_moment}) and (\\ref{eq:first_moment}), which yields the same\nresults, is proposed in Appendix~\\ref{sec:A3}. Compared to previous results (see, e.g.\\@\\xspace, Ref.~\\onlinecite{Vynck2014}), we\nobserve that the eigenvectors, or polarization eigenchannels, remain unchanged, but the eigenvalues are now 1-, 3- and\n5-fold degenerate, yielding the same attenuation coefficients $\\mu_a^{(p)}$ but different diffusion constants\n$\\mathcal{D}^{(p)}$. This apparent discrepancy can be explained by the on-shell approximation, which ``smoothes out''\nthe polarization dependence in the correlation function of Green's tensor. 
Nevertheless, it is important to note that the\n\\textit{average} diffusion constants for the various degenerate modes are strictly identical:\n\\begin{equation}\n\\frac{1}{3} \\left(\\frac{6}{5}c\\ell + 2\\frac{2}{5}c\\ell \\right) = 2 \\frac{c\\ell}{3},\n\\end{equation}\nand\n\\begin{equation}\n\\frac{1}{5} \\left(2\\frac{230}{343}c\\ell + 2\\frac{130}{343}c\\ell + \\frac{290}{1029}c\\ell \\right) = \\frac{10}{7}\\frac{c\\ell}{3}.\n\\end{equation}\nThis brings us to the conclusion that the model is consistent with the approximations that have been made.\n\n\\begin{table}[H]\n \\caption{Summary of the diffusion constants $\\mathcal{D}^{(p)}$, attenuation coefficients $\\mu_a^{(p)}$ and effective\n attenuation lengths $\\ell_\\text{eff}^{(p)}$ characterizing the diffusion properties of the energy density through the\n individual polarization eigenchannels for an uncorrelated disorder ($g=0$).}\\label{tab:diffuncorr}\n \\begin{ruledtabular}\n \\def\\arraystretch{1.5}\n \\begin{tabular}{l|l|l|l}\n $p$ & 1 & 2-4 & 5-9 \\\\ \\hline\n $\\mathcal{D}^{(p)}$ & $\\frac{c\\ell}{3}$ & $2 \\frac{c\\ell}{3}$ & $\\frac{10}{7}\\frac{c\\ell}{3}$ \\\\ \\hline\n $\\mu_a^{(p)}$ & $0$ & $\\frac{1}{\\ell}$ & $\\frac{3}{7\\ell}$ \\\\ \\hline\n $\\ell_\\text{eff}^{(p)}$ & $\\infty$ & $\\sqrt{\\frac{2}{3}}\\ell$ & $\\frac{\\sqrt{10}}{3}\\ell$\n \\end{tabular}\n \\end{ruledtabular}\n\\end{table}\n\nA second point deserving a comment is the fact that the attenuation length $1\/\\mu_a^{(p)}$ of the polarization eigenmodes\ndoes not depend on $g$ to first order, the effect of short-range structural correlations on the spatial\ndecay of polarization away from the source being implemented via the definition of mode-specific transport mean free\npaths. 
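As an aside, the two degeneracy averages quoted above (for the uncorrelated limit) can be confirmed with exact rational arithmetic; a trivial sketch, with diffusion constants expressed in units of $c\ell$:

```python
from fractions import Fraction as F

# Modes 2-4: one mode with D = (6/5) c*ell and two with D = (2/5) c*ell
avg_24 = (F(6, 5) + 2 * F(2, 5)) / 3
assert avg_24 == F(2, 3)          # = 2 * (c*ell/3)

# Modes 5-9: D = 230/343 (twice), 130/343 (twice), 290/1029 (times c*ell)
avg_59 = (2 * F(230, 343) + 2 * F(130, 343) + F(290, 1029)) / 5
assert avg_59 == F(10, 21)        # = (10/7) * (c*ell/3)
```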
This picture contrasts with previous studies based on the phenomenological transfer matrix approach~\\cite{Xu2005,\nRojas-Ochoa2004a}, which relate the depolarization length $\\ell_p$ for linearly polarized light to the \\textit{scalar}\ntransport mean free path via a linear relation with $g$. In this sense, our model provides a different perspective on\nthis basic problem of light transport in disordered media. Intuitively, this picture also appears more physically sound,\nsince it is known that the relation between depolarization and transport mean free path varies with the incident\npolarization (linear, circular) or in the presence of magneto-optical effects~\\cite{MacKintosh1988, VanTiggelen1996}.\n\nRelated to this point, it is also important to discuss the validity of the diffusion limit to retrieve depolarization\ncoefficients. Reaching the regime of diffusive transport typically requires light to experience several multiple\nscattering events. However, as pointed out previously (see, e.g.\\@\\xspace, Ref.~\\onlinecite{Gorodnichev2014}), this limit can hardly be\nachieved for the polarization modes, for which the depolarization occurs on the scale of a mean free path. It is then\nlegitimate to question the accuracy of the expressions reported in Table~\\ref{tab:diffcorr}. Nevertheless,\nwe do not expect this question to impact our claim that different polarization modes are individually and differently\naffected by short-range structural correlations. Actually, the established RTE for the polarization-resolved specific\nintensity, Eq.~(\\ref{eq:RTE}), like the standard vector radiative transfer equation, does not assume diffusive\ntransport. 
On this aspect, our study constitutes a very good starting point to investigate the validity of the diffusion\napproximation, which may be done either numerically by solving the RTE by Monte-Carlo methods, or analytically by including\nhigher-order Legendre polynomials $P_n$ in the expansion of the specific intensity.\n\nFinally, let us remark that the results of our model, in which disorder is described by a continuous and randomly fluctuating function of position [Eq.~(\\ref{eq:disorder})], should apply not only to heterogeneous materials with complex \\textit{connected} morphologies (e.g., porous media) but also to random ensembles of finite-size scatterers. Indeed, the Fourier transform of the structural correlation $f(\\bm{r}-\\bm{r}')$ directly leads to the definition of the phase function $p(\\hat{\\bm{q}}\\cdot\\hat{\\bm{q}}')$ [Eq.~(\\ref{eq:correlation-phase})], which is the same function at which one arrives when investigating light scattering by finite-size scatterers (it is, in this case, defined from the differential scattering cross-section). For the sake of broadness of applications and convenience, the final results here have been given for the HG phase function [Eq.~(\\ref{eq:HG-phasefunction})] but other phase functions (e.g., Mie for spherical scatterers) may be used to describe specific disordered media. Note that for ensembles of finite-size scatterers, the short-range correlation approximation restricts the validity range of the model to small scatterers.\n\n\\section{Conclusion}\\label{sec:5}\n\nTo conclude, we have proposed a model based on multiple scattering theory to describe the propagation of polarized light\nin disordered media exhibiting short-range structural correlations. Our results assume weak disorder ($k_0\\ell \\gg 1$),\nshort-range structural correlations (first order in $g$), and are obtained in the ladder approximation. 
Starting from the exact\nDyson and Bethe-Salpeter equations for the average field and the field correlation, we have derived a RTE for the\npolarization-resolved specific intensity [Eq.~(\\ref{eq:RTE})] and applied the $P_1$ approximation to investigate the\npropagation of polarized light in the diffusion limit. Interestingly, we have found that the polarization modes, described\nso far for uncorrelated disorder only, are independently and differently affected by short-range structural\ncorrelations. In practice, each mode is described by its own transport mean free path, which does not trivially depend\non $g$ (see Table~\\ref{tab:diffcorr}). \n\nIn essence, our study partly unveils the intricate relation between the complex morphology of disordered media and the\npolarization properties of the scattered intensity. The road towards a possible description of polarization-related\nmesoscopic phenomena in correlated disorder is long, yet we hope that the present work, which highlights several\ntheoretical challenges when dealing with polarized light and structural correlations, will motivate future\ninvestigations. The model may be generalized, for instance, by including the maximally crossed diagrams in the derivation to\nenable the study of phenomena such as weak localization, or frequency dependence to investigate ---via a generalized\nRTE--- the temporal response to incident light pulses. Another line of research could be to study the impact of\nshort-range structural correlations on spatial coherence properties, which appears extremely relevant to the optical\ncharacterization of complex nanostructured media~\\cite{Dogariu2015}.\n\n\\section*{Acknowledgements}\n\nThe authors acknowledge John Schotland for stimulating discussions. 
This work is supported by LABEX WIFI (Laboratory of\nExcellence within the French Program ``Investments for the Future'') under references ANR-10-LABX-24 and\nANR-10-IDEX-0001-02 PSL$^*$, by INSIS-CNRS via the LILAS project and the CNRS ``Mission for Interdisciplinarity''\nvia the NanoCG project.\n