\section{Introduction}

Small-scale hydropower plants are in many cases of ``run-of-river'' type (ROR), meaning that any dam or barrage is small, usually just a weir, and generally little or no water can be stored. ROR hydropower plants preserve the natural flow of the river (except, of course, at the location of the power plant) and are therefore among the most environmentally benign existing energy technologies.
Due to a low installation cost, ROR hydropower plants are often cost-efficient and, as such,
useful both for rural electrification in developing countries and for new hydropower developments in industrialized countries \cite{P02}.
ROR hydropower plants are common in smaller rivers but also exist in larger sizes such as the Niagara Falls hydroelectric plants (Canada/USA), the Chief Joseph dam on the Columbia River (Washington, USA) or the Saint Marys Falls hydropower plant in Sault Sainte Marie (Michigan, USA).

The optimal sizing of ROR hydropower plants has been considered by several authors, see e.g. \cite{AP07, BMM11, F83, GZV09}
and the references therein. In the estimation of the performance of a given power plant, all these authors omit the cost of switching between different production modes. Doing so, the optimal management strategy can be found trivially by starting the machine when flow is sufficient and stopping it when flow is insufficient. However, in rivers with large and rapid flow fluctuations, which is typically the case in smaller unregulated rivers, such strategies can lead to a large number of switches, the cost of which cannot be neglected. For example, starting and stopping the turbines induces wear and tear on the machines and may also require intervention from personnel. Moreover, each start and stop involves a risk which can be considered a cost. To give an example, the major breakdown in the Akkats hydropower plant (Lule river, Sweden) in 2002 was caused by a turbine being stopped too quickly, resulting in rushing water destroying the foundation of both the turbine and the generator \cite{AKKATS1, AKKATS2}.

In this paper we create fully automatic production schemes for ROR hydropower plants, with stochastic flow of water and with non-zero cost of switching between different states of production. Our method is based on optimal switching and provides, to the best of our knowledge, a novel way to handle hydropower production planning using stochastic control theory. Along the way of deriving the production strategies, we also create a model for the random flow of water based on stochastic differential equations (SDEs) and fit this model to historical data. The stochastic flow model can incorporate a given flow forecast in order to capture short-term fluctuations.

Our main focus lies on the management of a single power plant without storage capacity or means to control the flow of water, i.e.,~a ROR hydropower plant. Although this is rather restrictive from a practitioner's point of view, we stress that the mathematical framework presented here can be used also for more intricate hydropower production optimization problems, e.g.,~those including dams or pumped-storage hydropower.
We have chosen to stay within a simple setup here to highlight the specific features of optimal switching and intend to treat more involved hydropower plants in a series of forthcoming papers.

The rest of this paper is outlined as follows. In Section \ref{sec:litreview} we give a literature survey and explain the contribution of this paper. Section \ref{sec:optimalsw} contains an introduction to optimal switching problems and, in particular, an outline of the theory in the context of hydropower planning. Sections \ref{sec:flow} and \ref{sec:powerplants} contain models for the flow of water and the power plants under consideration, respectively. Section \ref{sec:numericsPDE} contains a very brief outline of the numerical approach taken to solve the variational inequalities that appear. We thereafter present the results of our parameter estimation and the performance of the constructed strategy in Section \ref{sec:results}. We end with Section \ref{sec:discussion}, in which we discuss our approach in general, and our results in particular, and make some concluding remarks.

\setcounter{equation}{0} \setcounter{theorem}{0}

\subsection{Literature survey and our contribution} \label{sec:litreview}

Optimal switching is a relatively new and fast-growing field of mathematics combining optimization, SDEs and partial differential equations (PDEs) \cite{BGSS20, BJK10, DHP10, AF12, AH09, HM12, HT07, K14, LNO14, LNO14b, LOO17, M16_1, M16_2, P18, P20, P20_2}. However, a literature survey shows that,
although the mathematical theory is well developed, applications of optimal switching to real-life problems are a far less explored area. A possible explanation for this discrepancy could be the difficulty of formulating real problems in mathematical terms and, conversely, of interpreting the theoretical results in practical terms.

Most commonly, applications are found in the context of real options, see \cite{BO94, BS85, CL08, DZ01, GKL17}. In \cite{ACLP12} the authors provide a probabilistic numerical scheme for optimal switching problems and test their scheme on a fictitious problem of investment in electricity generation. In \cite{CL10} valuation of energy storage is studied with the help of optimal switching, and in \cite{P20_3, PE17} the authors study how the framework can be used to track electricity demand.

This paper extends the use of optimal switching by applying the general framework described in Section \ref{sec:optimalsw} to two canonical examples of ROR hydropower plants, consisting of one and two adjustable units, respectively, more thoroughly described in Section \ref{sec:powerplants}.
In particular, we construct a management strategy by solving a variational inequality related to the optimal switching problem (see \eqref{pde}). However, for this solution to be practically meaningful, we first adapt and calibrate the optimal switching problem to the case of ROR power plants.

A main feature of optimal switching-based production planning is that it allows for random factors influencing the production strategy (for details see Section \ref{sec:optimalsw} below). This randomness is in general given by a Markovian stochastic process and can incorporate any number of different variables. However, to reduce computational strain we will in this paper focus on how the flow of water impacts the production.
Popular streamflow models include linear time series models, such as ARMA models with Gaussian or GARCH noise \cite{MO13, MNK90, WGVM05}, and non-linear time series models, such as SETAR models \cite{FFODN19}. Modern approaches also include neural networks and machine learning techniques, see, e.g.,~\cite{MBFZ17, MS17} and the references therein. Another suitable approach, and the approach that we have adopted here due to its natural relation to the optimal switching framework, is SDEs driven by Gaussian white noise and/or compound Poissonian impulses \cite{BTU87}.
In particular, we develop a new stand-alone SDE-based model for the flow of water $Q$, based on historical data and driven by Gaussian white noise, which mimics the long term seasonal variations of the flow while still allowing for short term fluctuations and flow forecasts.

When trying to maximize monetary profit, the electricity spot price $P$ at which the electricity produced is sold is of course also of interest when planning the production. In general, $P$ is an exogenous stochastic process which, in principle, can be incorporated into our model similarly to the water flow by constructing an SDE for $P$ and applying optimal switching theory. However, modelling electricity prices is a non-trivial task and prices are usually not It\^o diffusions, but rather discontinuous jump processes, see, e.g., \cite{HL12, WBT04, WSW04}, making the operator in the variational inequality to be solved non-local (see, e.g., \cite{LNO14, LNO14b}). For simplicity, we therefore let $P$ be a continuous \textit{deterministic} process in the current paper.
We stress, however, that our approach can readily be extended to random electricity prices (and streamflow models with compound Poisson impulses) at the cost of increased computational complexity.

We have chosen not to let deeper knowledge of different types of generators and how they operate be a prerequisite for the understanding of this paper and we hence avoid going into any such technical details. A comprehensive survey of different turbines and generators and their distinct characteristics can be found in, e.g., \cite{B15, B01, W84}.

\section{Optimal switching in the context of hydropower} \label{sec:optimalsw}

Tailored to the setting of hydropower plants, the optimal switching problem can be described as follows. We consider a manager of a hydropower plant with several units, each unit being a sub-power plant, i.e.,~a turbine and a generator. Each unit can be started and stopped separately in order to adjust the production of the power plant to the supply of water or to the production demand. This implies that the manager has the option to run the plant in $m \geq 2$ production modes, corresponding to running different combinations of units. Starting and stopping units induces wear and tear on the units and therefore the manager finds herself in a trade-off, weighing the benefits of changing production state against the costs induced by making these changes.

Let $X = \{X_t\}_{t\geq 0}$ denote a Markovian stochastic process representing the features which influence the production. For small hydropower plants, $X_t$ may represent the flow of the river, but it may also be interpreted as, e.g.,~production demand for a frequency regulating plant, the spot price of electricity, or a varying cost of production such as for an oil-driven power plant.
The process $X$ may be multi-dimensional and hence incorporate all of the above and more, but we will in this paper consider $X = (Q,P)$ where $Q$ and $P$ are one-dimensional processes representing flow of water and spot price of electricity, respectively. Moreover, we let $f_i(X_t,t)$ denote the instantaneous payoff generated in production mode $i$ at time $t$, when the state of the underlying process is $X_t$. Depending on the interpretation of $X$ and the choice of $f$, $f_i(X_t,t)$ can be interpreted as, e.g., the power delivered by a power plant or as the instantaneous monetary profit per unit time. Finally, we associate to each start and stop of a unit a cost $c_{ij}$ for switching from production mode $i$ to production mode $j$, where $i,j \in \{1,\dots,m\}$.

The manager of the power plant controls the production by choosing a \textit{management strategy}, i.e.,~a combination of a non-decreasing sequence of stopping times $\{\tau_k\}_{k\geq 0}$, where, at time $\tau_k$, the manager decides to switch the production from its current mode to another, and a sequence of indicators $\{\xi_k\}_{k\geq 0}$, taking values in $\{1,\dots,m\}$, indicating the mode to which the production is switched. More precisely, at $\tau_k$ the production is switched from mode $\xi_{k-1}$ to $\xi_k$ and when starting in mode $i$ at time $t$, we have $\tau_0 = t$ and $\xi_0 = i$. We stress that $\tau_k$ is required to be a \textit{stopping time} and as such it is adapted to the filtration $\mathcal F^X$ generated by the underlying process $X$. In less mathematical terms, this simply means that the decision to switch at time $t$ must be based solely on the information made available up to time $t$, i.e., the manager cannot ``peek into the future'' when making her decision\footnote{Although this is obviously impossible from a practical point of view, it must be stated as an explicit restriction in the mathematical formulation.}.

A strategy $(\{\tau_k\}_{k\geq 0},\{\xi_k\}_{k\geq 0})$ can be represented by the random function $\mu : [0, T] \to \{1,...,m\}$ defined as
\begin{align*}
\mu_s \equiv \mu(s) = \sum_{k\geq 0} \mathbb{I}_{[\tau_{k}, \tau_{k+1})}(s)\xi_{k},
\end{align*}
which indicates the mode of production at time $s$. Here, $\mathbb{I}_{[\tau_{k}, \tau_{k+1})}(s)$ denotes the indicator function, i.e.,~$\mathbb{I}_{[\tau_{k}, \tau_{k+1})}(s) = 1$ if $s\in [\tau_{k}, \tau_{k+1})$ and 0 otherwise. When the production is run using a strategy $\mu$, defined by $(\{\tau_k\}_{k\geq 0},\{\xi_k\}_{k\geq 0})$, over a finite horizon $[0, T]$, the total expected payoff is
\begin{align*}
E\biggl [\int \limits_0^T f_{\mu_s}(X_s,s) ds - \sum_{k\geq 1,\tau_k < T} c_{\xi_{k-1},\xi_k} \biggr ].
\end{align*}
Similarly, given that the stochastic process $X$ starts from $x$ at time $t$, the profit made using strategy $\mu$ starting in $\xi_0=i$, over the time horizon $[t, T]$, is
\begin{align} \label{eq:OSP}
J_i(x,t,\mu) = E\biggl [\int \limits_t^T f_{\mu_s}(X_s,s) ds - \sum_{k\geq 1,\tau_k < T} c_{\xi_{k-1},\xi_k}\bigg| X_t = x \biggr ].
\end{align}

The task of the manager is now to maximize the expected payoff, i.e.~to find the most profitable trade-off between switching to more efficient states and minimizing the total cost of switches.
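As an aside, for any \emph{fixed} strategy the expected payoff \eqref{eq:OSP} can be estimated by straightforward Monte Carlo simulation over sample paths of $X$. A minimal sketch in Python (all names are our own illustrative choices; the path simulator, payoff functions and strategy rule are placeholders):

\begin{verbatim}
import numpy as np

def expected_payoff(f, c, simulate_X, strategy, i0, T, dt, n_paths=1000):
    """Monte Carlo estimate of the expected payoff J_i of (eq:OSP).

    f[i](x, t)  -- running payoff in production mode i
    c[i][j]     -- cost of switching from mode i to mode j
    simulate_X  -- returns one sample path of X on the grid 0, dt, ..., T
    strategy    -- maps (x, t, current mode) to the mode to run next
    """
    total = 0.0
    n_steps = int(round(T / dt))
    for _ in range(n_paths):
        X = simulate_X()              # array of length n_steps + 1
        mode, payoff = i0, 0.0
        for n in range(n_steps):
            t = n * dt
            new_mode = strategy(X[n], t, mode)
            if new_mode != mode:      # pay the switching cost
                payoff -= c[mode][new_mode]
                mode = new_mode
            payoff += f[mode](X[n], t) * dt   # accumulate running payoff
        total += payoff
    return total / n_paths
\end{verbatim}

Maximizing over all admissible strategies is, of course, the difficult part, and this is the topic of the remainder of this section.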
In general, this problem, often referred to as an optimal switching problem, consists in finding the value function
\begin{align*}
u_i(x, t) = \sup_{\mu\in\mathcal A_i} J_i(x, t, \mu),
\end{align*}
where $\mathcal A_i$ is a set of strategies starting from $\xi_0=i$, and the optimal management strategy $\mu^* \in \mathcal A_i$, defined by $(\{\tau^*_k\}_{k\geq 0},\{\xi^*_k\}_{k\geq 0})$,
such that
\begin{align*}
J_i(x, t, \mu^*) \geq J_i(x, t, \mu)
\end{align*}
for any other strategy $\mu \in \mathcal A_i$. We note that the value function and optimal strategy of an optimal switching problem depend on the set of available modes $\{1,\dots,m\}$, the (finite) time horizon $T$, the running payoff functions $f_i$, $i\in \{1,\dots,m\}$, the switching costs $c_{ij}$, $i,j\in \{1,\dots,m\}$, as well as the dynamics of the underlying stochastic process $X$.

From the dynamic programming principle, see, e.g.,~\cite{TY93}, one can derive that the value function $u(x, t) = (u_1(x, t), \dots, u_m(x, t))$ satisfies the following system of variational inequalities
\begin{align}\label{pde}
&\min \left\{ -\partial _{t}u_{i}\left( x,t\right) - \mathcal{L} u_{i}\left( x,t\right) -f_i\left( x,t\right),
\;u_{i}\left( x,t\right) -\max\limits_{j\neq i}\left\{ u_{j}\left(
x,t\right) -c_{ij}\left( x,t\right) \right\} \right\} = 0, \\
&u_{i}(x,T) = g_{i}\left( x\right) , \notag
\end{align}
for $x\in \mathbb{R}^{d_X}$, $t\in [0,T]$, $i\in \{1,...,m\}$, and where
$$\mathcal{L} u(x,t) : = \sum_{i=1}^{d_X} b_i(x,t) \frac{\partial }{\partial x_i} u(x,t)+ \sum_{i,j=1}^{d_X} \frac{1}{2} (\sigma \sigma^\ast)_{i,j}(x,t) \frac{\partial^2}{\partial x_i \partial x_j} u(x,t),
$$
is the infinitesimal generator of the underlying $d_X$-dimensional stochastic process $X$ with dynamics
$$
dX_t = b (X_t,t) dt + \sigma(X_t,t) dW_t.
$$
The operator $\mathcal{L}$ is a linear operator whenever the process $X$ is driven by Gaussian white noise (which we assume in this work).

Moreover, the optimal strategy from state $i$ is given iteratively in terms of the solution to \eqref{pde} by
\begin{align*}
&\tau^\ast_0=0 \qquad \xi^\ast_0=i \\
&\tau^\ast_k = \inf\left \{t > \tau^\ast_{k-1}: u_{\xi^{\ast}_{k-1}}(X_t,t) \leq \max_{j \neq \xi^\ast_{k-1}}\{ u_j(X_t,t) - c_{\xi^\ast_{k-1}j} \} \right \}
\\
& \xi^\ast_k = \underset{j}{\text{argmax}} \left \{ u_{j}(X_{\tau^\ast_{k}},\tau^\ast_{k}) - c_{\xi^\ast_{k-1}j} \right \}.
\end{align*}

Last, we remark that system \eqref{pde} consists of $m$ PDEs with interconnected obstacles and that a unique Lipschitz continuous solution to this system exists under the assumptions of this paper, see, e.g., \cite{ADPS09, AH09, LNO14b}.

\section{Modeling river flow with an SDE} \label{sec:flow}
In this section we construct an SDE whose solution resembles the actual flow of water in the river under investigation.
This resemblance must of course hold in the long-term (seasonal) sense, but must also allow for short-term fluctuations due to inter-yearly variations.
More specifically, we use historical data to find functions $b_s$ and $\sigma_s$, such that the solution to the stochastic differential equation
\begin{align*}
dQ_t =& b_s(Q_t,t)\, dt + \sigma_s(Q_t,t)\, d\tilde{W}_t,\quad \text{if }t\in [s,T], \notag \\
Q_s=&q
\end{align*}
where $\tilde{W}_t$ is a standard Brownian motion, is similar (in some appropriate sense) to the actual flow of water.

Results indicate that log-transformation of river flow data may increase prediction accuracy of flow models, see \cite{AYKA17}, and we therefore work with the logarithm of the flow rather than the flow directly. We treat the seasonal and short-term resemblance separately, as outlined below.

\subsection{Seasonal variations}

Starting with the long term seasonal variations, we define $R_t = \log Q_t$. We let $r_t$ be a function describing the expected value of the logarithm of the flow at time $t$, independent of current observations. Defined as such, $r_t$ reflects the expected seasonal variation in flow due to spring flood, autumn rains etc., but without any consideration taken to observations from the current year\footnote{As an example, one would expect that heavy snowfall from January to March would increase the probability of a long and intensive spring flood. Such considerations are \textbf{not} included in $r_t$.}, and we may thus estimate the deterministic function $r_t$ from historical flow. More precisely, we will construct $r_t$ as a one-week moving average of the logarithm of the flow, see Section \ref{sec:parametervalues}. The choice of a one-week moving average here is a trade-off between capturing seasonal variations, such as the spring flood, without letting the mean flow depend too much on the flows of particular years.

\subsection{Short term fluctuations}

Next, we consider the fluctuations around the expected yearly mean, $S_t=R_t-r_t$, and assume that these fluctuations are given by an Ornstein-Uhlenbeck process reverting towards $0$, i.e.
\begin{equation*}
dS_t = -\kappa S_t \, dt + \sigma\, dW_t,
\end{equation*}
where $\kappa>0$ and $\sigma>0$ are constants to be determined and $W_t$ is a standard Brownian motion. By standard It\^o calculus, the flow $Q_t = \exp\left(r_t+S_t\right)$, then satisfies the following stochastic differential equation
\begin{equation} \label{eq:sde}
dQ_t = \left(r'_t+\frac{1}{2}\sigma^2-\kappa\left(\log Q_t - r_t\right)\right)Q_t\, dt + \sigma Q_t\,dW_t.
\end{equation}
Note that the particular form of \eqref{eq:sde} ensures that $Q_t$ stays positive.

To estimate the parameters $\kappa,\sigma$ we consider the asymptotic variance and asymptotic autocorrelation for lag $\tau$ of an Ornstein-Uhlenbeck process, which are given by
\begin{equation}
\text{Variance}=\frac{\sigma^2}{2\kappa}\qquad\text{and}\qquad\text{Autocorrelation}=e^{-\kappa\tau}.\label{eq:varacf}
\end{equation}
For a given set of historical flow data, we first calculate the logarithm of the data and subtract the running-mean $r_t$ from above to obtain an empirical time series for $S_t$. We calculate the sample autocorrelation function of the time series and estimate the value of $\kappa$ by a linear regression with the logarithm of the sample autocorrelation function as the dependent variable and the lag as the covariate.
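As an illustration, this regression step takes only a few lines of Python. The sketch below assumes the detrended log-flow series $S_t$ is available as a NumPy array \texttt{S}, sampled once per day; all function and variable names are our own:

\begin{verbatim}
import numpy as np

def estimate_kappa(S, max_lag=30):
    """Estimate the mean-reversion rate kappa from the sample ACF of S,
    using Autocorrelation(tau) = exp(-kappa * tau) from (eq:varacf)."""
    S = S - S.mean()
    n = len(S)
    acf = np.array([np.dot(S[k:], S[:n - k]) for k in range(1, max_lag + 1)])
    acf /= np.dot(S, S)                 # normalize by the lag-0 value
    lags = np.arange(1, max_lag + 1)
    mask = acf > 0                      # the log requires positive ACF values
    # regression of log(ACF) on lag; the slope estimates -kappa
    slope, _ = np.polyfit(lags[mask], np.log(acf[mask]), 1)
    return -slope
\end{verbatim}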
Finally, we calculate the sample variance of the time series and deduce an estimate of $\sigma$ from the first equation in \eqref{eq:varacf} and the estimated value of $\kappa$.

\subsection{Forecasts}
The SDE constructed so far respects seasonal variations whilst still allowing for inter-yearly variations. However, in order to perform optimally, our model must also be able to treat short-term fluctuations based on, e.g.,~weather forecasts or upstream measurements of the flow. Such forecasts will be included in the model by altering the drift in the dynamics of $Q$, i.e.,~by changing \eqref{eq:sde} on a short time span close to the present time $s$.

More precisely, we let for $t \geq s$
\begin{equation} \label{eq:sde!}
d Q_t = b_s(Q_t,t) \, dt + \sigma_s(Q_t, t) \, dW_t,
\end{equation}
where
\begin{equation}\label{eq:sigma}
\sigma_s(Q, t) = \sigma Q
\end{equation}
as before and
\begin{align} \label{eq:b}
b_s(Q, t) &=
\begin{cases}
b_s^{f}(Q, t) &\textrm{if}\quad s \leq t \leq s+l+\ell,\\
\left(r'_{t}+\frac{1}{2}\sigma^2-\kappa\left(\log Q- r_{t}\right)\right) Q & \textrm{if}\quad s+l+\ell < t,
\end{cases}
\end{align}
where $b^f_s$ is a function constructed from the long term (log-)mean $r$, the flow $Q_s$ at time $s$ and the forecast of the flow during $(s, s+l)$ as follows. In the above, $l$ and $\ell$ are parameters representing the length of the forecast and the (estimated) time it takes for the forecasted flow to return to the long term (log-)mean $r$.
Starting at time $s$ with current flow $Q_s$ and given a forecast $\{F_t\}_{s< t \leq s+l}$ of the future flow at times $t \in (s,s+l)$ we set
$$
g_s(t) = \begin{cases}
\log(Q_s) & \mbox{for $t=s$} \\
\log (F_t) & \mbox{for $t\in (s,s+l]$} \\
\log(F_{s+l}) +(t-(s+l)) \frac{ r_{s+l+\ell}- \log(F_{s+l}) }{\ell} & \mbox{for $t \in (s+l, s+l+\ell] $}
\end{cases}
$$
and let
$$
b_s^{f} (Q,t) = \left(g_s'({t})+\frac{1}{2}\sigma^2-\kappa\left(\log Q - g_s({t})\right)\right)Q.
$$

More explicitly, we calculate the drift $b^f_s$ as in \eqref{eq:sde} but with the (log-)mean $r$ replaced by $g_s$, where $g_s(t)$ is given directly by the forecast for $t \in (s, s+l]$, coincides with $r_t$ for $t > s+l+\ell$ and is linearly interpolated between $s+l$ and $s+l+\ell$. The impact of such forecasts is illustrated in Figure \ref{fig:forecasts}.

We stress that as time evolves, the forecast will be updated and the function $b_s^f$ needs to be updated accordingly each time $s$ that a new forecast becomes available. As the starting time $s$ will be clear from context, we will drop the subscript $s$ and simply write $b^f(Q,t)$ in the following, although this function varies with $s$ as a parameter.

\section{Modeling the payoff structure of power plants} \label{sec:powerplants}
As mentioned above, we will consider two canonical examples of ROR hydropower plants and outline the payoff structure of these in the current section. We will measure the performance in monetary units (m.u.), but one can easily modify the setup below to have, e.g.,~total electricity produced as the trait for optimization.

\subsection{Power plant I: One adjustable unit}\label{sec:model1}
We consider first the simple case of a hydropower plant having a single unit. The unit is designed for the flow $Q_d$, but can be run over a wide flow range $[Q_{min}, Q_{max}]$ with lower efficiency.
We assume that the unit, automatically and at negligible cost, adjusts to the available flow, and the task of the manager is thus to find out when to start and stop the unit. In the setting of optimal switching, we model the above power plant as follows. The power plant can be run in two states, `1' and `0', representing `on' and `off', respectively, and for each switch from $0$ to $1$ or from $1$ to $0$ the manager must pay a cost $c_{01}$ or $c_{10}$, respectively. We assume that the electricity output (in Watts) of the unit when in state $1$ is given by
\begin{equation*}
W_1(Q) =
\begin{cases}
0 & \textrm{if}\quad Q < Q_{min},\\
c\, \eta (Q) \, Q & \textrm{if}\quad Q_{min} \leq Q < Q_{max},\\
c\, \eta (Q_{max}) Q_{max} & \textrm{if}\quad Q_{max} \leq Q,
\end{cases}
\end{equation*}
where the constant $c$ is given by $c = \rho\, g\, h$, with $\rho = 10^3$ kg/m$^3$ the (approximate) density of water, $g = 9.82$ m/s$^2$ the (approximate) gravitational acceleration, and $h$ the water head in meters, and where $\eta(Q) \in (0,1)$ is the flow-dependent efficiency. In practice, this latter function is given by the specific characteristics of the turbine and generator, but in general it is a concave down function with a maximum at the design flow $Q_d$, see, e.g., \cite[Figure 9]{IPCC}. To mimic such behaviour, we let
\begin{align}\label{eq:efficiency-curve-assumption}
\eta(Q) = \alpha - \beta \left( \frac{Q}{Q_d} - 1\right)^2,
\end{align}
where $\alpha$ is the efficiency at design flow and $\beta > 0$ quantifies the concavity. Multiplying $W_1$ with the spot electricity price and integrating over time gives the income when running the plant in state $1$.

To be able to determine an optimal strategy in terms of monetary profit, we need to consider not only the produced electricity, but also running costs of the plant as well as electricity prices.
We assume that the running cost of the unit is $c_{run}$ per unit time, the electricity price at time $t$ is $P_t$, and that a large additional cost $c_{low}$ must be paid for each unit of time that the production unit is run with insufficient flow $Q < Q_{min}$.

[...]

\begin{algorithm}
\While{totalerror $>$ totaltolerance}
{
	$u_{old} = u$;

	totalerror = 0;

	\For{time from $t_{N-1}$ to $t_k$}{

		\While{spatialerror $>$ spatialtolerance}{

			spatialerror = 0;

			\For{all spatial points $q_j$}{

				\For{$i \in \{1,\dots, m\}$}{

					Dummy variable $y$ = the Crank-Nicolson scheme for $u_i(q_j,t)$;

					$obstacle_i = \max_{j \neq i} \{u_j - c_{ij}\}$;

					$y = \max \{y, obstacle_i \}$;

					spatialerror = spatialerror + $|u_i(q_j,t)-y|$;

					$u_i(q_j,t)=y$;
				}
			}
			Interpolate linearly at the edges of the spatial grid
		}
	}
	totalerror = $|u_{old} - u|$;
}
\caption{A numerical algorithm for estimating solutions to \eqref{pde}.}
\label{alg:pseudo}
\end{algorithm}

\section{Results} \label{sec:results}

This section contains results of our parameter estimation and shows the performance of our PDE-based strategy.
\subsection{Parameter values} \label{sec:parametervalues}
We test our model using flow data from the Swedish river Sävarån\footnote{The choice of Sävarån as our data source is made for no other reason than that the river lies close to Umeå University, where most of the work on this paper was done.} during the years $1980-2018$, of which we use the first 35 years ($1980-2014$) for model calibration and the last four ($2015-2018$) for benchmarking. We also use data from $1980-2014$ for a long term evaluation of the performance of our scheme. Leap days are excluded in favour of a coherent presentation of the results.

We study optimization over a one year horizon with the possibility to change the state of production once every day, i.e.~$T=365$ days and $\Delta t= 1$ day. Our main reason for sticking with this coarse time-discretization is that the flow data available to us is given with one data point per day.

\begin{remark}
For hydropower plants with several units, i.e.,~plant II, the total flow of Sävarån is somewhat low and a non-regulated river with higher flow would be more suitable for our model. For simplicity of the presentation, however, we have chosen to limit ourselves to a single river. Specific parameters, such as running costs, deterioration costs, etc., vary over time and between different hydropower plants and the parameters used here should, as with the choice of flow data, be seen merely as an example.
\end{remark}

\subsubsection*{Flow parameters}
From the historical flow data\footnote{Flow data were gathered from SMHI (http://vattenwebb.smhi.se/station) on June 15, 2020.} of Sävarån, the estimated values of the parameters in \eqref{eq:varacf} are $\kappa = 0.0208$ and $\sigma=0.1018$. Figure \ref{fig:flowmodel} shows $e^{r_t}$ and the flow during $2015-2018$ and Figure \ref{fig:simulatedflows} shows independent random realisations of \eqref{eq:sde} with these parameters.

\begin{figure}[h!]
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[width=1\linewidth]{flow.jpg}
\caption{The seasonal model $e^{r_t}$ together with actual flows $2015-2018$.}
\label{fig:flowmodel}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{simulated_flows.jpg}
\caption{The seasonal model $e^{r_t}$ together with 5 random realisations of \eqref{eq:sde}.}
\label{fig:simulatedflows}
\end{subfigure}
\caption{A visual comparison of the seasonal model $e^{r_t}$, solutions to \eqref{eq:sde} and actual flow during $2015-2018$.}
\end{figure}

\subsubsection*{Power plant parameters}

The parameters used in modeling our power plants are summarized in Table \ref{tab:constants}.

Starting with the efficiency curve of our power plant units, i.e., the coefficients of \eqref{eq:efficiency-curve-assumption}, Figure \ref{fig:johej_eta} shows measured efficiency of two Swedish Kaplan units together with the assumed efficiency curve in \eqref{eq:efficiency-curve-assumption} with least-squares fitted coefficients $\alpha$ and $\beta$. Based on this data and the flow in Sävarån, we found it reasonable to choose the unit parameters as in Table \ref{tab:constants}.

The running cost is estimated from \cite{S12} to be approximately $1/5$ of the electricity price. The power of our unit is approximately $500$ kW, and with $P_t=P_0\equiv1$ it is thus reasonable to set $c_{run} = 100$~$m.u./h$.
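For concreteness, the least-squares fit of the efficiency coefficients mentioned above reduces to ordinary linear regression, since \eqref{eq:efficiency-curve-assumption} is linear in $\alpha$ and $\beta$. A sketch (assuming arrays \texttt{Q} and \texttt{eta} of measured operating points; all names are our own):

\begin{verbatim}
import numpy as np

def fit_efficiency(Q, eta, Q_d):
    """Least-squares fit of eta(Q) = alpha - beta * (Q/Q_d - 1)**2,
    cf. (eq:efficiency-curve-assumption)."""
    x = (np.asarray(Q, dtype=float) / Q_d - 1.0) ** 2
    # design matrix for the linear model eta = alpha * 1 + beta * (-x)
    A = np.column_stack([np.ones_like(x), -x])
    (alpha, beta), *_ = np.linalg.lstsq(A, np.asarray(eta, dtype=float),
                                        rcond=None)
    return alpha, beta
\end{verbatim}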
It is difficult to estimate the cost $c_{low}$ reflecting the running loss when the machine is run on too little water. The rotational speed of the turbine may drop so that the frequency of the produced electricity falls, resulting in a non-sellable production. We choose a value simply by multiplying $c_{run}$ by 10 and note that such a choice forces our algorithm to shut down the plant efficiently when the water supply fails.

It is not a trivial task to estimate a reasonable value of the switching costs. These may depend heavily on machine parameters related to the intake and specific properties of the turbine, tubes and the generator. Cost of personnel and environmental parameters such as local fish habitat may also be included, as well as the type of contract to which the electricity is sold. Due to these difficulties, we handle the switching costs as a parameter. In particular, we assume the switching costs to be constant and study the impact of varying this constant in Section \ref{sec:results}.

Moreover, when considering power plant II, we assume that switching directly from state $0$ to state $2$, and vice versa, is cheaper than passing through the intermediate state $1$. In particular, this implies that at any fixed time $t$, at most one switch is made. Exactly how much cheaper a direct switch from mode $0 \to 2$ (or vice versa) should be compared to the alternative $0 \to 1 \to 2$ depends on the actual power plant under consideration. Here, we simply assume that the alternative route is $50 \%$ more expensive. We summarize the switching costs in Table \ref{tab:swcosts}. Note that we have assumed all switching costs to be positive. When applicable, negative switching costs, representing a gain rather than a cost when, e.g.,~reducing production capacity or moving to a more environmentally friendly production mode, can be used as well.

\begin{figure}
\centering
\includegraphics[scale = 0.8]{efficiency.eps}
\caption{Efficiency values of two Swedish Kaplan units with design flow of approximately 100 $m^3/s$ (red squares) and 170 $m^3/s$ (black diamonds).
The curves are $\eta(Q)$ defined in \eqref{eq:efficiency-curve-assumption} with least-squares fitted coefficients.
For the smaller unit $\alpha = 0.917$, $\beta = 0.430$ (red dotted) and for the larger unit $\alpha = 0.935$, $\beta = 0.464$ (black solid).}
\label{fig:johej_eta}
\end{figure}

\begin{table}
\begin{center}
 \begin{tabular}{| l | l | l | l | }
 \hline
$P_0$ & 1 $[m.u./kWh]$ & $c_{low}$ & $1000$ $[m.u./h]$ \\ \hline
$h$ & 5 $[m]$ & $c_{run}$ & $100$ $[m.u./h]$ \\ \hline
$T$ & 365 [days] & $Q_{min}$ & 5 $[m^3/s]$ \\ \hline
$\alpha$ & 0.92 & $Q_{max}$ & 13 $[m^3/s]$ \\ \hline
$\beta$ & 0.45 & $Q_{d}$ & 10 $[m^3/s]$ \\ \hline
\end{tabular}
\end{center}
\caption{Parameter values used in our numerical investigation.}
\label{tab:constants}
\end{table}

\begin{table}
\begin{center}
 \begin{tabular}{| l | c | c | c |}
 \hline
$c_{ij} $ & $j= 0$ & $j= 1$ & $ j= 2 $ \\ \hline
$i=0 $ & $0$ & $C $ & $1.5C$ \\ \hline
$i=1$ & $ C$ & $0$ & $C $ \\ \hline
$i=2$ & $1.5C$ & $C$ & $0$ \\ \hline
\end{tabular}
\end{center}
\caption{Relative switching costs.
$C$ is a fixed constant determined in Section \ref{sec:results}.}
\label{tab:swcosts}
\end{table}
\subsubsection*{Forecasts}
As already stated in the introduction, our main purpose is to highlight the use of optimal switching theory in production planning for ROR hydropower plants. The ambition is \textbf{not} to provide methods for forecasting river flow, and to avoid such discussions, we will simply use the true flow as forecast. However, we keep the stochastic component in the dynamics \eqref{eq:sde!} unchanged so that, even with forecast applied, our model does not know the future flow with certainty. Instead, this ``forecast'' only provides our model with a more accurate estimate of the average flow during the validity period of the forecast. We depict the impact of forecasts in Figure \ref{fig:forecasts}.

\begin{figure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{forecast1.jpg}
 \caption{Plot of $e^{r(t)}$, the true flow for year 2015, \\and $e^{g_k(t)}$ for $k=120$, $l=5$ and $\ell=10$.}
 \label{fig:sub-first}
\end{subfigure} \hfill
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{forecast1zoom.jpg}
 \caption{Zoom of figure (a)}
 \label{fig:sub-second}
\end{subfigure}

\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{forecast2.jpg}
 \caption{Plot of $e^{r(t)}$, the true flow for year 2015, \\and $e^{g_k(t)}$ for $k=120$, $l=10$ and $\ell=20$.}
 \label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{forecast2zoom.jpg}
 \caption{Zoom of figure (c)}
 \label{fig:sub-second}
\end{subfigure}
\caption{The impact of forecasts in relation to the long-term expected value $r_t$. Note that a stochastic term is also present in the flow model \eqref{eq:sde!} and hence the plot above only corresponds to an estimate of the expected value of the future flow.}
\label{fig:forecasts}
\end{figure}

\subsection{Comparison of different strategies}
We will benchmark the performance of our strategy against the \textit{a fortiori} optimum, i.e.,~the optimal strategy in hindsight, with all information available. From a practical point of view, this value can only be achieved with certainty by ``looking into the future'', using the future flow of the river when making decisions. As this is of course impossible in practice, our comparison is slightly skewed to the disadvantage of the model presented here. However, this value, the theoretical maximum output of the plant, is indeed achievable and should therefore be considered as the ultimate goal in any attempt to construct a production strategy. We stress that, although the benchmark strategy can be found only in hindsight, our PDE-based strategy uses no other information when making a decision at time $t_k$ than historical information up to that point and the forecast starting at $t_k$.

As a comparison, we also show the result of using a \textit{na\"{\i}ve strategy} in which the manager always switches to the production mode which momentarily has the highest payoff.
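In code, the na\"{\i}ve rule is essentially a one-liner; a sketch (all names are our own):

\begin{verbatim}
def naive_strategy(x, t, f):
    """Return the mode with the highest instantaneous payoff f[i](x, t);
    switching costs and future flow are ignored."""
    return max(range(len(f)), key=lambda i: f[i](x, t))
\end{verbatim}

Wrapped as \texttt{lambda x, t, mode: naive\_strategy(x, t, f)}, this rule can be plugged directly into a strategy evaluator such as the Monte Carlo sketch in Section \ref{sec:optimalsw}.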
To ease the presentation, we assume in all cases that the starting state is $0$, i.e.,~that the plant is ``off'' at the beginning of the planning period.

To get comparable results, we first calculate a fixed constant value $D$, depending on the power plant under consideration, but not on the switching costs or the flow of the river. More precisely, $D$ is calculated as the profit generated by the plant if it works at maximum capacity for a full year without interruptions (and starting in the most beneficial state), i.e.,
$$
D= f_i(Q_{max}, P_0, t) \cdot 365,
$$
where $i =1$ or $i=2$ depending on the plant under consideration and $t \in [0,365]$ is arbitrary (since \eqref{eq:payoff} and \eqref{eq:f3} are independent of $t$). After determining $D$, the switching cost constant $C$ (cf. Table \ref{tab:swcosts}) is taken as a fixed percentage of $D$.

Lastly, for comparison of strategies, we use the quotient $\gamma(\mu)$, defined as the total payoff from the strategy $\mu$ divided by $D$,
$$
\gamma(\mu) = \frac{J_0(q_0,0,\mu)}{D},
$$
where $q_0$ is the flow at the starting time $t_0=0$ and $J_0$ is as defined in \eqref{eq:OSP}.

Results are provided for $T=365$ days with forecast lengths $l \in \{0,5,10\}$ days and with linear return to the long-term mean $e^{r_t}$ over $\ell=20$ days. Here, $l=0$ means no forecast. We give detailed descriptions of the suggested strategies for 2015 and show summarized results for $2016-2018$. Moreover, we provide results on the long term performance of our strategy by comparing results from the years $1980-2014$ for a fixed parameter set.\footnote{It should be noted that, for convenience, this data set is the same as that used for calibrating parameters. Thus, for each year in the long term evaluation, the data tested is part of the data used for calibration. However, a single year out of the 35 used has minimal impact on the end calibration and the long term results are therefore still valid.}

\begin{figure}
\centering
\includegraphics[scale=0.6]{savarp1resulty1}

\caption{Strategies for plant I during 2015 with $C/D=0.01$. The black line represents water flow and the yellow and red lines represent running payoff for states 1 and 0, respectively.
Yellow circles (asterisks) represent the action of moving to state 1 and red circles (asterisks) the action of moving to state 0 under the PDE strategy with $l=10$ (optimal strategy).}
\label{fig:p1y1results}
\end{figure}

\begin{table}
 \centering
\begin{tabular}{l | l}
\textbf{Strategy} & \textbf{(Day of action, Move to state)} \\
\hline
Optimal & (102,1) \quad (225,0) \\ \hline
PDE, $l=0$ & (103,1) \quad (226,0) \\ \hline
PDE, $l=5$ & (103,1) \quad (225,0) \\ \hline
PDE, $l=10$ & (103,1) \quad (225,0) \\ \hline
Na\"ive & (102,1) \quad (225,0) \quad (262,1) \quad (265,0) \quad (324,1) \quad (327,0) \\ \hline
\end{tabular}
\caption{Strategies for plant I during 2015 with $C/D=0.01$.}
\label{tab:strategyp1}
\end{table}

\begin{figure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp1relativesy1}
 \caption{2015}
 \label{fig:sub-first}
\end{subfigure} \hfill
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp1relativesy2}
 \caption{2016}
 \label{fig:sub-second}
\end{subfigure}

\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp1relativesy3}
 \caption{2017}
 \label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp1relativesy4}
 \caption{2018}
 \label{fig:sub-second}
\end{subfigure}
\caption{Relative payoff for plant I during $2015-2018$ as a function of $C/D$.}
\label{fig:relativesp1}
\end{figure}

\begin{figure}
\centering
 \includegraphics[scale=0.6]{averageresultsp1}
 \caption{The PDE strategy consistently gives payoff close to the optimal. Here, $C/D=0.01$ and $l=10$. The average quotient $\gamma$ for plant I over the years $1980-2014$ is $0.2674$, $0.2628$, and $0.2466$ for the optimal strategy, the PDE strategy and the na\"{\i}ve strategy, respectively. (Lines are present for visual aid only.)}
 \label{fig:averageresultsp1}
\end{figure}

\begin{figure}
\centering
\includegraphics[scale=0.6]{savarp2resulty1}

\caption{Strategies for plant II during 2015 with $C/D=0.01$. The black line represents the water flow and the green, yellow and red lines represent running payoff for states 2, 1, and 0, respectively.
Green circles (asterisks) represent the action of moving to state 2, yellow circles (asterisks) the action of moving to state 1, and red circles the action of moving to state 0 under the PDE strategy with $l=10$ (optimal strategy).}
\label{fig:p2y1results}
\end{figure}

\begin{table}
 \centering
\begin{tabular}{l | l}
\textbf{Strategy} & \textbf{(Day of action, Move to state)} \\
\hline
Optimal & (102,1) \quad (111,2) \quad (159,1) \quad (225,0) \\ \hline
PDE, $l=0$ & (103,1) \quad (112,2) \quad (162,1) \quad (226,0) \\ \hline
PDE, $l=5$ & (103,1) \quad (112,2) \quad (162,1) \quad (225,0) \\ \hline
PDE, $l=10$ & (103,1) \quad (112,2) \quad (162,1) \quad (225,0) \\ \hline
Na\"ive & (102,1) \quad (111,2) \quad (159,1) \quad (225,0) \quad (262,1) \quad (265,0) \quad (324,1) \quad (327,0) \\ \hline
\end{tabular}
\caption{Strategies for plant II during 2015 with $C/D=0.01$.}
\label{tab:strategyp2}
\end{table}

\begin{figure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp2relativesy1}
 \caption{2015}
 \label{fig:sub-first}
\end{subfigure} \hfill
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp2relativesy2}
 \caption{2016}
 \label{fig:sub-second}
\end{subfigure}

\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp2relativesy3}
 \caption{2017}
 \label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
 \centering
 \includegraphics[width=1\linewidth]{savarp2relativesy4}
 \caption{2018}
 \label{fig:sub-second}
\end{subfigure}
\caption{Relative payoff for plant II during $2015-2018$ as a function of $C/D$.}
\label{fig:relativesp2}
\end{figure}

\begin{figure}
\centering
 \includegraphics[scale=0.6]{averageresultsp2}
 \caption{The PDE strategy consistently gives payoff close to the optimal. Here, $C/D=0.01$ and $l=10$. The average quotient $\gamma$ for plant II over the years $1980-2014$ is $0.1708$, $0.1624$, and $0.1244$ for the optimal strategy, the PDE strategy and the na\"{\i}ve strategy, respectively. (Lines are present for visual aid only.)}
 \label{fig:averageresultsp2}
\end{figure}

The different production schemes (optimal, PDE-based, na\"{\i}ve) for plant I (plant II) during 2015, $l \in \{0,5,10\}$ and $C/D=0.01$ are presented in Figure \ref{fig:p1y1results} (Figure \ref{fig:p2y1results}) and Table \ref{tab:strategyp1} (Table \ref{tab:strategyp2}). The relative payoffs as a function of $C/D$ for $l \in \{0,5,10\}$ are given for all years in Figure \ref{fig:relativesp1} (Figure \ref{fig:relativesp2}). The long term performance of the PDE-based strategy for plant I (II) with $C/D=0.01$ is presented in Figure \ref{fig:averageresultsp1} (Figure \ref{fig:averageresultsp2}).

The PDE-based strategy in most cases performs very close to the optimal strategy, with a difference of less than 2 \% (5 \%) from the theoretical maximum in the long term tests for plant I (plant II) with $C/D=0.01$ and a 10-day forecast. The most common mistake of the PDE-based strategy compared to the optimal is delaying the decision to change production mode. In all but a few cases, a longer forecast also means better results.
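For reference, the comparison quotient introduced above is cheap to compute once a strategy has been simulated; a sketch (all names are our own, and \texttt{f\_top} denotes the running payoff of the most beneficial state):

\begin{verbatim}
def gamma(total_payoff, f_top, Q_max, P_0, T=365):
    """Relative payoff gamma(mu) = J / D, where D is the payoff from
    running the most productive mode at full capacity for a year."""
    D = f_top(Q_max, P_0, 0) * T    # the payoff is independent of t here
    return total_payoff / D
\end{verbatim}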
\section{Discussion} \label{sec:discussion}

The optimal switching theory is designed for maximizing the average payoff over a long period of time, but as our results show, it performs well also on single years. When comparing our strategy to the optimal one, we see that the differences between the strategies arise as our strategy occasionally recommends switching mode too late due to uncertainty regarding the future flow, see, e.g., Figure \ref{fig:p2y1results}. Most often, the decision is only late by one day and with a finer time discretization, these differences would most likely disappear or, at least, the discrepancy would be much smaller. We also see that by introducing forecasts, we are able to remove this gap entirely in many cases, see Figures \ref{fig:relativesp1} and \ref{fig:relativesp2}.

Our (artificial) forecast includes uncertainty from the first forecasted day, and a reduction in this uncertainty, which is reasonable and achievable by, e.g., upstream measurements, would also help remove any delay in the decision making. Indeed, in our SDE model, we assumed the uncertainty to be the same regardless of whether a forecast was introduced or not. If the uncertainty of the forecast is known, one could introduce a new parameter $\sigma^f_k$ in a similar fashion as for $b^f_k$ and let the forecast influence also the stochastic volatility of the flow, having lower volatility close to the current time $t_k$ and increasing volatility further in the future. We have chosen not to alter the volatility $\sigma$ during the forecast period, partly to keep our model as simple as possible, and partly to avoid the need to construct forecasts (which is outside the scope of the current paper).

Already without forecast, our PDE-based strategy outperforms the na\"{\i}ve strategy in most cases, even for small values of $C/D$, and in many cases it also finds the truly optimal strategy or something very close to optimal. In the rare event that the na\"{\i}ve strategy performs as well as or better than the PDE-based strategy, it is because the na\"{\i}ve strategy happens to be optimal. In these cases, the difference between the optimal and the PDE-based strategies is small.

Our results show, as should be expected, that longer forecasts mean better results. However, on a few occasions, this is not the case, see, e.g., Figure \ref{fig:relativesp1} (c), where, for $C/D \geq 0.016$, we perform worse with a 10-day forecast than with a 5-day forecast. The reason for this (rather unsatisfactory) result is that, when large fluctuations up and down in the flow happen within just a few days, a longer forecast may capture both directions of the movement whereas a shorter one captures only the first. If the current flow is at or close to a level at which different payoff functions intersect, the shorter forecast then triggers a decision to open/close, whereas with the longer forecast, the uncertainty introduced by also forecasting a rapid downward movement delays the decision slightly when the cost of switching is high. When run repeatedly over a large number of years, the decision made from the longer forecast would perform better \textit{on average}, but it may come up short in a single year.
Luckily, as in the comparison between the optimal and na\"{\i}ve strategies, the deviation in the final payoff is small on these occasions.

Our model is calibrated to a constant electricity spot price $P_0$ for convenience when interpreting results, and we repeat that a time-varying deterministic electricity price causes no problems other than parsing the results. However, our model could be calibrated to a random price process $P_t$ as well. There is no (theoretical) restriction on the number of underlying Markovian It\^o processes our model can handle, so allowing for such calibration is merely a question of computational power. In fact, not even the Markov property is a restriction, as any discretized random process can be made Markovian at the cost of increasing its dimension. However, the cost of increasing the number of random sources is that the underlying optimization problem, which here is solved by PDE methods, increases in dimensionality at the same rate. In practice, as long as the random sources are few, say 2 or 3, our approach based on numerical solutions of PDEs can be used to find a solution. When the dimensionality increases even further, the PDE methods become computationally heavy and other ways of attacking the resulting optimal switching problem may be preferable, e.g.~Monte Carlo methods as in \cite{ACLP12,BGSS20}. In the current setting, the algorithm for obtaining our strategy runs in only a few minutes on a standard laptop computer and is thus more than sufficient for the purpose of the current paper.

\subsection{Concluding remarks and future research} \label{sec:concludingremarks}
In this paper we have, to the authors' knowledge for the first time, used the mathematical theory of optimal switching to create hydropower production plans which can incorporate random water flow and non-negligible costs of switching between different operational modes. Although our setup is somewhat simplified to keep the analysis of the results tractable, the results are satisfying, showing that automatic optimal switching schemes can perform close to the theoretical maximum already with small computational effort. Moreover, in our study the difference between our model and a na\"{\i}ve approach increased with the number of available production modes, indicating that the decision support provided by optimal switching theory becomes increasingly valuable as the complexity of the underlying problem increases.

An interesting theoretical continuation of the work initiated in this paper would be to study hydropower plants with a dam or hydropower plants of pumped-storage type. At the practitioners' level, a natural next step would be to adapt the current scheme to a real hydropower plant and to benchmark its performance against the production strategy currently in use.

It is our firm belief that continued work in this direction, utilizing advanced stochastic control theory for production planning, will play an important role in making today's energy system more effective. Indeed, the theory outlined in this paper has the potential to become as useful in the energy sector as the closely related theory of optimal stopping and singular and non-singular control has become in financial mathematics.

\section*{Acknowledgements}
The work of Niklas L. P.
Lundstr\\\"om was partially supported by the Swedish research council grant 2018-03743.\nMarcus Olofsson gratefully acknowledges the support from the Centre for Interdisciplinary Mathematics at Uppsala University.\n\n\\bibliographystyle{amsalpha}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMilitary confrontation is one of the key factors that have shaped human history. While instances of large-scale interstate warfare have decreased in the post-World War era, internal wars and low-intensity conflicts carried out by non-state armed groups (NAGs) against nation-states have become increasingly common \\cite{San-Akca:2016,Maoz:2012,Horowitz:2014,Phillips:2018,LaFree:2009,Phillips:2015,Freilicha:2015,Pinker:2012,Gleditsch:2013,Gleditsch:2016,Byman:2001,Kalyvas:2010}. NAGs include rebel, insurgent, guerrilla, or terrorist groups that engage in violent activities targeting the government, citizens, or institutions of nation-states. Many of these NAGs are cultivated by a complex network of supporting host states (HSs) --- states external to the locus of the internal \nconflict between the NAG and the target government. Examples include U.S. support to the Contras in Nicaragua in the 1980s, Israeli and Iranian support for Kurdish rebels in Iraq during the 1960s and 1970s, and NATO support for the Libyan rebels in 2011 \\cite{San-Akca:2016}. The\nHSs support NAGs by providing them with military and economic aid, sanctuary to leaders or members of NAGs, logistics, or training.\nThe bipartite relation between NAGs and host states has become a common mode of foreign policy conduct \\cite{San-Akca:2016,Maoz:2012,Gleditsch:2016,Byman:2001,Kalyvas:2010}, yet we lack reliable real-time data on NAG-HS relations as most of these interactions are covert; public information usually emerges \\emph{ex-post facto.} The few existing studies have focused on specific NAG-HS relations \\cite{LaFree:2009,Freilicha:2015,Popovic:2017,Salehyan:2010,Gleditsch:2008}. \nThe entire ecosystem of NAG and HS interactions and its evolution over time remains an uncharted territory.\nRevealing the underlying patterns and mechanisms behind the NAG-HS support network is important for analytical insights and policy implications.\n\nWe construct the global network of bipartite support between NAGs and HSs involving all military conflicts over the period of 1945-2010 and measure its global topological characteristics. Two types of links are present in the network: A HS can provide a NAG with either intentional (``active'') or de facto (``passive\") support. Active support entails a deliberate decision by a HS to form an alliance with a NAG in order to further common interests, such as the Israeli and Iranian support for the Kurds in the 1960s and 1970s \\cite{maoz2006}. Passive support refers to situations where a NAG operates or extracts resources from a HS, while the HS does not engage in a deliberate action or create channels to provide any support.\nExamples of passive support include the IRA operations in the Irish community in the US during the 1980s and 1990s \\cite{guelke1996}, or the fundraising for Palestinian insurgents among Islamic communities in the US prior to September 11, 2001 \\cite{levitt2008}.\nLikewise, a NAG may operate in a HS whose institutions are weak and incapable of resisting the NAG's activities (e.g., the PLO operation in Lebanon over the 1970-1982 period) \\cite{San-Akca:2016, San-Akca:2014}. 
\n\n\nBy tracking the changes in the players and supporting relations, we find that the network evolves via a mechanism of generalized preferential attachment and detachment: highly connected players are more likely to both gain and lose connections relative to peripheral players. Such evolution leads to characteristic patterns typically associated with ecological networks, such as plant-pollinator networks, seed-frugivore networks \\cite{Bascompte:2003,Olesen:2007} and \nhost-parasite networks \\cite{Poulin:2010,Graham:2009}, which similarly involve tradeoffs through cooperative or exploitative relations. \nIn particular, we find a robust nested and modular architecture over a broad time span. Nestedness entails a nonrandom pattern of generalist-specialist structure, which is a crucial factor determining the structural stability of a network \\cite{Okuyama:2008,Bastolla:2009,Rohr:2014}. Modularity indicates the extent to which the network is organized into clearly delimited clusters. Such patterns are also frequently observed in socio-economic systems, from the network of bankers \\cite{May:2008} to the designer-manufacturer network of New York garment industry \\cite{Saavedra:2009}. \n\nUsing a temporal community detection algorithm we find that there are nine major modules that persist over time and many small, short-lived modules. The persistent modules reveal both regional and trans-regional composition and historical connections. These major modules display significant variation in their membership, whereas the transitory ones show high consistency. Furthermore, we identify the roles of nodes and find that if a NAG or HS is involved in both the active and passive support subnetworks they play the similar role of hub or peripehral in both subnetworks. The discussion section addresses some of the implications of these findings. We suggest that our understanding of robustness and interventions in ecological bipartite networks may be useful for developing efficient disruptions of malign relations in the NAG-HS network, where we can treat the support as mutualistic and parasitic relationships. \n\n\n\n\\section{Results}\nWe focus on the relations between NAGs and external state supporters, extracted from the Dangerous Companions Database covering the time period from 1945 to 2010 \\cite{San-Akca:2016}. This considers only violent NAGs (that is NAGs whose operations resulted in 25 or more deaths per year of operation). A HS is a country that provides any type of material support to a given NAG, including safe haven to NAG members, training, or military or financial aid. A temporal network is constructed from these bipartite relations, including both active and passive support. During the time period of study, the network experiences substantial growth with an almost monotonic increase peaking in the early 1990s, corresponding to first years in the post-Cold War era, followed by a decrease in both numbers of nodes and links (Fig. \\ref{NAG_CDF_H_N_INSET}a and Fig. S1 in SI), whereas the connectance (link density) shows an opposite trend (Fig. \\ref{NAG_CDF_H_N_INSET}b).\n\n\nFirst we track the assembly and disassembly of the NAG-HS network, focusing on the relative probabilities of acquisition and disassociation of actors (NAGs and HSs) and links (NAG-HS relations). 
Then, we show that the network evolves to have\na particular architecture exhibiting both nested and modular patterns.\nFinally, based on this characteristic architecture, we determine the roles of individual NAGs and HSs.\\\\\n\n\n\n\\noindent\\textbf{Fitness of actors.} \nWe consider two indicators of a NAG's or HS's fitness. The first indicator is based on duration over time, specifically the lifespan of a NAG or the hosting time of a HS (how long the state participates in the supporting activities in any capacity). Generally, we find that these follow an exponential-like distribution, as shown in Fig. \\ref{NAG_CDF_H_N_INSET}c and \\ref{NAG_CDF_H_N_INSET}d. The second indicator is based on a snapshot in time, specifically the extent of connectivity of the NAG or HS measured by its degree, $k_{NH}$ or $k_{HN}$ respectively. The insets of Fig. \\ref{NAG_CDF_H_N_INSET} show the Pearson coefficients $\\rho$ between the lifespan\/hosting time and node degree ($p<0.0001$ using a two-sided t-test); a NAG tends to endure longer with more supporting HSs, while the hosting time of a HS is more likely to increase the more NAGs rely on it.\n\nThese results suggest that ``successful\" NAGs---with longer survival time---depend on their ability to extract external support. Likewise, ``active\" HSs---states with a longer record of support for NAGs---tend to be generalists in that they support more NAGs, compared to the activity duration of ``specialist\" HSs.\\\\\n\n\n\\begin{figure}[t]\n\\hskip 1.8cm\n\\includegraphics[width=0.75\\textwidth]{NAG_Fig1.png}\n\\caption{\\label{NAG_CDF_H_N_INSET} Evolving NAG-HS support network. \\textbf{a}, Changing size of the bipartite network over time, in terms of the number of all actors and the components. \\textbf{b}, Network connectance, defined as the ratio of the actual number of links to the maximum possible number of links ($n_{NAG}*n_{HS}$). Inset: fractions of links that correspond to active, passive and both types of support, respectively. \\textbf{c, d}, Cumulative probability distributions of life spans of NAGs and hosting time of host states (exponential-like distribution). Insets: Correlation of life span or hosting time and the number of attached actors (degree). }\n\\end{figure}\n\n\n\\noindent\\textbf{Network assembly and disassembly.} To understand how the actors (nodes) and the associated relations (links) attach to and detach from the network, we examine the assembling and disassembling processes that unfold on it. Specifically, we observe how actors acquire (``$+$\") and remove (``$-$\") counterpart actors by comparing the adjacency matrices in two consecutive years, as illustrated in Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}a. From the perspective of HSs, the histograms show that both attachment and detachment of NAGs occur more frequently at HSs with a lower degree (Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}b and d), but this is biased since the degree distribution of the incumbent HSs is already skewed before those events. It is thus particularly revealing to calculate the relative probability of a HS gaining or losing a link from a NAG, which is equal to the absolute probability divided by the probability from random selection \\cite{Newman:2001,Saavedra:2008} (so that any unevenness in the degree distribution is \noffset), over the entire time span of 66 years. A nonrandom distribution of relative probability is observable if there is an intrinsic bias in the attaching or detaching process. 
\n\nIndeed, both attachment and detachment of NAGs are found to occur preferentially at incumbent HSs of higher fitness as measured by their degrees. First, we consider the attachment of NAGs, that is, the relative probability for an incumbent HS to acquire links from newly arriving NAGs (NAGs that start to receive support from at least one HS) as a function of the degree of the HS, denoted $T^+(k_{HN})$, and find that a newly arriving NAG is more likely to attach to a generalist (high-degree) HS. Explicitly, the relative probability $T^+(k_{HN}) = P^+(k_{HN})\/P_0^+(k_{HN})$ is defined as the ratio of the absolute probability $P^+(k_{HN})$ that a NAG joining the network in year $t$ connects to an incumbent HS that has already been supporting $k_{HN}$ NAGs divided by the probability $P_0^+(k_{HN})$ that a HS with degree $k_{HN}$ is uniformly selected at random (see Methods). $T^+(k_{HN})$ is shown \nin Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}c, which is approximately fit by $T^+(k_{HN})\\sim k^{0.86}_{HN}$, indicating a \n(slightly sublinear) preferential attachment (PA) process \\cite{Yule:1925,Barabasi:1999}.\nTurning to detachment, we find the same tendency holds. Specifically, \na NAG departing the supporting network (a NAG that no longer has support from any HS or has been dismissed) is more likely to detach from a counterpart HS supporting more NAGs, as shown by the linearly increasing relative probability $T^-(k_{HN})\\sim k^{1.1}_{HN}$ with the degree of the latter (see Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}f). This (slightly superlinear) preferential detachment (PD) is somewhat counter-intuitive as it suggests that the network tends to disassemble at higher-fitness nodes. \n\n\\begin{figure}[t]\n\\hskip 1cm\n\\includegraphics[width=0.84\\textwidth]{fig2_new2_newfit.png}\n\\caption{\\label{NAG_REL_PROB_TK_RK_ALL2}Network assembly and disassembly. \\textbf{a}, Link addition (``$+$\") and removal (``$-$\") as the adjacency matrix evolves from $M_{t-1}$ to $M_t$ in consecutive years. $M_t$ is divided into four blocks ($B_1$ to $B_4$) by separating incumbent and newly joining actors. \\textbf{b, e}, Histograms for the frequencies of acquiring and losing links from NAGs by HSs with degree $k_{HN}$ (zero-degree for new HSs). Inset of panel b: frequencies of adding links in the four blocks. \\textbf{c}, Relative probability $T^+(k_{HN})$ that an incumbent HS with $k_{HN}$ previous connections acquires links from newly joining NAGs. \\textbf{d}, Relative probability $R^+(k_{HN})$ that an incumbent HS acquires new links from an incumbent NAG. \\textbf{f}, Relative probability $T^-(k_{HN})$ that an incumbent HS loses links from NAGs that depart the network. \\textbf{g}, Relative probability $R^-(k_{HN})$ that an incumbent HS loses links from an incumbent NAG.} \n\\end{figure}\n\nSecond,\nbesides the contribution from newly arriving NAGs, an incumbent NAG (a NAG that receives support from at least one HS) of the supporting network may also add or delete links to another incumbent HS over time. To include such cases, \nwe measure the relative probability $R^+(k_{HN})$ (slightly superlinear) that an incumbent HS that has $k_{HN}$ links in the year prior to $t$ will acquire links from incumbent NAGs in year $t$, excluding those from newly arriving NAGs (Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}d). Similarly, Fig. \\ref{NAG_REL_PROB_TK_RK_ALL2}g shows the case for incumbent HSs losing links from incumbent NAGs (remaining in the network), as indicated by $R^-(k_{HN})$ (linear). 
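\n\nTo make the estimator concrete, the following minimal sketch computes $T^+(k_{HN})$ from yearly snapshots, following the weighted-histogram estimator of \\cite{Newman:2001} detailed in Methods; this is not the code used for the paper, and the data layout (one dictionary per year, mapping each HS to the set of NAGs it supports) and all names are our own illustrative assumptions:
\\begin{verbatim}
from collections import defaultdict

def relative_attachment(snapshots):
    # snapshots: list of dicts, one per year, mapping HS -> set of NAGs.
    # Returns T+(k): the relative probability that an incumbent HS of
    # degree k in year t-1 gains a link from a NAG joining in year t.
    weight = defaultdict(float)   # weighted counts per degree k
    events = 0
    for prev, curr in zip(snapshots, snapshots[1:]):
        deg = {h: len(nags) for h, nags in prev.items()}
        n_s = defaultdict(int)    # number of incumbent HSs per degree
        for k in deg.values():
            n_s[k] += 1
        N_s = len(deg)            # number of all incumbent HSs
        old_nags = set().union(*prev.values()) if prev else set()
        for h, nags in curr.items():
            if h not in deg:
                continue          # only incumbent HSs are considered
            for nag in nags - old_nags:    # links from newly arriving NAGs
                weight[deg[h]] += N_s \/ n_s[deg[h]]   # Newman-style weight
                events += 1
    # a flat profile T+(k) = 1 signals purely random attachment
    return {k: w \/ events for k, w in sorted(weight.items())} if events else {}
\\end{verbatim}
Running the same routine on departing NAGs and the links they lose would yield the detachment profile $T^-(k_{HN})$.\n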
All these attachment and detachment patterns show that NAGs interact preferentially with HSs of greater fitness. We can also observe the tendencies from the perspective of NAGs, in terms of acquiring and losing links from HSs. Yet, the dependencies are less clear-cut (though the overall positive dependencies are still observable), due to the paucity of data points at some degrees (see Fig. S5 in SI). \n\nThe above analyses thus show that the assembly and disassembly of the supporting network, including the joining and departure of NAGs, and rewiring between incumbent actors, proceed with propensities that are essentially dependent on nodal degree, which seems to reflect the fitness of the HS supporters. \nHowever, these analyses also show two conflicting processes:\nwhile attachment is driven by an attempt to maximize individual fitness, having higher fitness can also lead to a higher probability of losing counterpart actors, which has not been observed in other bipartite network disassembly \\cite{Saavedra:2008}. \nAs discussed later, this could reflect the fact that an actor with many alliances may benefit from having fewer interactions of higher quality.\n\\\\\n\n\\noindent\\textbf{Nested structures.} Despite significant variations in the network size (numbers of actors and links), the network exhibits a robust nested and modular architecture over a long time span. The measure of nestedness simply reflects to what extent actors share the same set of partners. It is calculated with the NODF metric (``nestedness metric based on overlap and decreasing fill\") \\cite{AlmeidNeto:2008}. To show how it evolves, we calculate the nestedness of the aggregated network in a 5-year sliding window, as shown in Fig. \\ref{NG_NODF_MOD_LL_CORR}a (this is to avoid statistical fluctuation due to the sparsity of entries in single yearly network slices). Between 1945 (end of World War II) and 1975 (end of Vietnam War), the nestedness remains at a level that is comparable to that of a random network. Starting from 1975, the empirical nestedness score significantly exceeds the random scores, peaking in 1987 (towards the end of the Cold War). The increase parallels the cumulative broadening of the degree distribution due to the preferential attachment and detachment processes (see the comparisons in Fig. S4 in SI). This is then followed by a dip and then an upswing back to a similarly high level in 2010. For comparison, random networks are used as the null model, where the probability of each entry of the adjacency matrix being occupied is the average of the occupation probabilities of the row and column \\cite{Bascompte:2003}. The significantly high nestedness implies that actors (say NAGs) are more likely to share common partners (say HSs), which tends to make these common partners generalists. This renders a highly unbalanced structure of relationships: specialists (say NAGs focusing on only a few HSs) are more likely to be attached to generalists (HSs supporting many NAGs) rather than to other specialists (HSs supporting only a few NAGs). \n\n\n \n\n\\begin{figure}[tb]\n\\hskip 0.3cm\n\\includegraphics[width=0.95\\textwidth]{NG_Fig3_new.png}\n\\caption{\\label{NG_NODF_MOD_LL_CORR}Evolution of structural measures of the temporal NAG-HS network: (a) nestedness NODF and (b) modularity Q. The nestedness is measured for the 5-year aggregated networks in a sliding window; the modularity is obtained by applying a temporal detection algorithm (note that the optimization algorithm can result in slightly different partitions of the same network). 
The results for the empirical network are compared with corresponding null models: I. average occupation probability (labeled ``null occu.\") and II. random slice permutation (labeled ``null perm.\"). A confidence interval of one standard deviation is used. (c) Negative correlation of modularity and nestedness in the NAG-HS supporting network, in comparison to the random relation between the two measures for the null (occu.) model in the inset. (d) Temporal \ncommunity detection algorithm. The example temporal bipartite network (circles for NAGs and squares for HSs) consists of four time slices that are linked by connecting nodes in one slice to themselves in the adjacent slices (demonstrated for nodes $i$ and $j$). The partitioning is based on the entire temporal network and the yearly modularity is then obtained according to the partition in each slice (each color representing one module). }\n\\end{figure}\n\nA heuristic explanation can be suggested for the formation of the nested architecture, following the principle of fitness maximization in ecology \\cite{Suweis:2013,San-Akca:2009}: For those NAGs and HSs maintaining mutualistic relations, a successful military action taken by a NAG may improve the status of the counterpart HS and thus increase the state's propensity and willingness to support more NAGs; the successful NAG may also attract support from more HSs intending to confront their rivals in this way. Similarly, for parasitic NAG-HS relations, the feasibility of one NAG gaining resources from a HS may encourage more NAGs to attach to that state. \nThis mechanism is weakened by the detaching events, which, however, occur much less frequently than the attaching events up to the mid-1990s (see the number of links in Fig. S1a). This has led to ``net\" preferential attachment. \n\nA special note should be made of the indirect benefit for the NAGs in the nested configuration \\cite{Bastolla:2009,Maoz:2012,Phillips:2018}. The NAGs that engage in mutualistic support from the same HSs may avoid or reduce possible competition among themselves for territory or resources. Since successful military actions by a given NAG may improve the willingness of a HS to support additional NAGs, NAGs can be thought of as indirectly contributing to each other's support levels. This complies with the general ecological principle of mutualism \\cite{Bastolla:2009}: a more nested architecture is able to pack more actors into the community by minimizing competition among like-type actors. This higher capacity marks a more structurally stable architecture \\cite{Rohr:2014}. \\\\\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.98\\textwidth]{area_H_N_JJ_new.png}\n\\caption{\\label{NG_MOD_YR_HN}Temporal modular structure in the (a) HS and (b) NAG guilds. The height of the curve represents the number of actors in the submodule. The modules are numbered (vertical axis); 9 of them each contain more than 3 different actors, while the other 24 are small groups or individual actors. 
Inset: Positive correlation of the Jaccard indices of the NAG- and HS-submodules of two of the major modules shows coherent fluctuation in their membership (Pearson coefficient $\\rho = 0.71$ with $p = 6.1 \\times 10^{-9}$ (two-sided t-test) for mod \\#19; $\\rho = 0.56$ with $p = 9.9 \\times 10^{-7}$ for mod \\#3).}\n\\end{figure}\n\n\n\n\n\n\\noindent\\textbf{Modular structures.} Concurrently, the network exhibits a nontrivial modular structure, in which actors are more densely connected to partners within the same module than across modules. The modularity measure is usually calculated by seeking a partition of an individual network that maximizes a modularity quality function \\cite{Newman:2006}. However, for this temporal network, partitioning the yearly slices individually would lose the inter-slice relations, making the modules in successive years independent of each other. Here we use a temporal community detection algorithm based on Markov stability, proposed by Mucha et al. \\cite{Mucha:2010}. Instead of partitioning separate time slices, this algorithm relies on the systematic optimization of a single quality function for all network slices in the 66 years. It has the merit of allowing the same module to be identified through all years, and thus the evolution of modules to be tracked \\cite{Fortunato:2010}. \n\nThe yearly modularity of the network, as measured via the metric $Q(t)$, is significantly higher than that of the corresponding null models, as shown in Fig. \\ref{NG_NODF_MOD_LL_CORR}b (note that the partition is not unique). There are two appropriate null models: 1. randomly permuted yearly slices (see Fig. S11a) and 2. randomly shuffled within-slice links that maintain the same degree sequence (see Fig. S11b). The comparisons demonstrate that both the intra-slice linkage and the temporal order of network slices contribute to the nontrivial modular structure \\cite{Bassett:2011}. \nThe modularity reaches a minimum around 1990, a time point marking the end of the Cold War. \n\n\n\\begin{figure}[t]\n\\hskip -0.2cm\n\\includegraphics[width=1.02\\textwidth]{Y1970-2010_shapes.png}\n\\caption{\\label{Y1970-2000F} Partitioned NAG-HS networks for representative years (a) 1970 and (b) 2000. Nodes are coloured differently according to their bipartite module-membership; the colour of a given module does not change between years. NAGs are represented by circles and HSs by squares.\n}\n\\end{figure}\n\n \n\n\nWe then seek a representative partition of the network. This is realized by using a stabilizing technique for robust module detection \\cite{Bassett:2013} and maximizing the Adjusted Mutual Information (AMI) (see Methods and Sec. 2 in SI). Our more detailed analyses are then based on this unique representative partition. The supporting network over the 66 years is partitioned into 33 modules, of which 9 major modules each contain more than 3 different actors, while the other 24 modules are separate nodes or tiny groups. In Fig. \\ref{NG_MOD_YR_HN}, we show the sizes (numbers of actors) of the modules in all years in the HS- and NAG-guilds (the NAG- and HS-submodules of each bipartite module), respectively (see also the modular structures in representative years in Fig. S9 in SI). The contrast is sharp: while the major modules can persist for significantly long time spans, the others are merely temporary. 
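\n\nReturning briefly to the structural measures of Fig. \\ref{NG_NODF_MOD_LL_CORR}, both the NODF score and the occupancy null model can be computed in a few lines. The sketch below is a stand-alone, illustrative version under our own naming assumptions; the analyses reported here rely on the BiMat library (see Code Availability):
\\begin{verbatim}
import numpy as np

def nodf(A):
    # NODF of a binary bipartite matrix A (rows x columns), in [0, 100].
    def pair_score(M):
        tot = M.sum(axis=1)
        score, pairs = 0.0, 0
        for i in range(len(M)):
            for j in range(i + 1, len(M)):
                pairs += 1
                hi, lo = (i, j) if tot[i] > tot[j] else (j, i)
                if tot[hi] > tot[lo] > 0:   # 'decreasing fill' condition
                    overlap = np.logical_and(M[hi], M[lo]).sum()
                    score += 100.0 * overlap \/ tot[lo]
        return score, pairs
    A = np.asarray(A, dtype=bool)
    r, nr = pair_score(A)       # paired nestedness over rows
    c, nc = pair_score(A.T)     # paired nestedness over columns
    return (r + c) \/ (nr + nc)

def null_model(A, rng=np.random.default_rng()):
    # Entry (i,j) is occupied with probability equal to the average of
    # the fill fractions of row i and column j (the occupancy null model).
    A = np.asarray(A, dtype=float)
    p = (A.mean(axis=1)[:, None] + A.mean(axis=0)[None, :]) \/ 2.0
    return (rng.random(A.shape) < p).astype(int)
\\end{verbatim}
Comparing the empirical score with the distribution of scores over many null draws reproduces the kind of significance assessment shown in Fig. \\ref{NG_NODF_MOD_LL_CORR}a.\n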
\n\nWhile the 24 temporary modules are almost constant in their membership, all 9 major modules have experienced significant turnovers (see module membership in 1970 and 2000 shown in Tab. S1 and S2 in SI). This is consistent with the general tendency found in unipartite coauthorship and phone-call networks \\cite{Palla:2007}: small modules survive when they are stationary in their membership, while large modules survive if they experience sufficient changes. For the 9 major modules, we further observe that the fluctuations in the NAG- and HS-submodules are coherently correlated. Quantitatively, we calculate the temporal Jaccard index, which compares the contents of successive submodules, to show the stationarity of the membership of a submodule \n\\begin{equation}\n\\label{eq:jac}\nJ_{\\alpha}(t,t-1)=\\frac {|A_{\\alpha}(t)\\cap A_{\\alpha}(t-1)|}{|A_{\\alpha}(t)\\cup A_{\\alpha}(t-1)|}\n\\end{equation}\nwhere $A_{\\alpha}$ denotes the members of a specific submodule in focus \\cite{Palla:2007,Bassett:2013}. Thus, the Jaccard index describes the similarity of successive modules (auto-correlation), ranging from 0 (no common member) to 1 (identical). For all submodules of the 9 major modules, the Jaccard indices averaged over the active years turn out to be high (see values in Fig. S6 in the SI), indicating high cross-time stability of these modules. At the level of individual modules, for 7 of the 9 major ones, the temporal Jaccard indices of the corresponding NAG- and HS-submodules, $J_{NAG}(t)$ and $J_{HS}(t)$, are positively correlated ($p<0.1$; two-sided t-test), as illustrated for the two modules with the highest correlation coefficients in the inset of Fig. \\ref{NG_MOD_YR_HN} (see Fig. S7 in the SI for all major modules). This implies that the constituents of most major bipartite modules are co-changing: NAGs and their attached states tend to join or leave a module together.\n\nViewing the nestedness and modularity measures jointly, we find a clear negative correlation between them through the 66 years, as shown in Fig. \\ref{NG_NODF_MOD_LL_CORR}c. This is strikingly different from what is observed for the null model. Heuristically, on the one hand, NAGs may favor a nested structure by attaching to common HSs, so that these HSs may become more willing to provide support. On the other hand, the benefits of support could be diluted if the difference between the traits (including geographic, political and religious factors) of a NAG and a HS is too large, which may lead them to fall into different modules. This is consistent with the ecological principle that, at any given time, the network tends to find an optimal structure balancing nestedness and modularity, which trades off benefits against disadvantages \\cite{Bastolla:2009,Suweis:2013,Cai:2020}. \\\\\n\n\n\n\\noindent\\textbf{Roles of actors.} Based on the representative partition, we may further identify the roles of actors in the network setting. We calculate two characteristics of each actor, the standardized within-module degree $\\{d_i\\}$ and the participation coefficient $\\{c_i\\}$ \\cite{Guimera:2005}. The c-score is 0 if all the node's links are within its own module and approaches 1 if they are distributed evenly among modules. The two scores characterize the contribution of a node to the degree distribution (and thus to the nestedness) and to the modular structure. The roles can then be classified into four categories: peripherals, module hubs, connectors and network hubs. 
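\n\nAs an illustration of how the two scores can be obtained for a yearly network slice (see Methods for the formal definitions), the following minimal sketch assumes an edge list of support links and a module assignment taken from the representative partition; all names are our own and this is not the code used for the paper:
\\begin{verbatim}
from collections import defaultdict
from statistics import mean, pstdev

def role_scores(edges, module):
    # edges:  iterable of (nag, hs) support links for one yearly slice
    # module: dict mapping every actor to its module label
    neigh = defaultdict(set)
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    # within-module degree of each node
    k_in = {i: sum(1 for j in ns if module[j] == module[i])
            for i, ns in neigh.items()}
    # d-score: z-score of k_in relative to the node's own module
    by_mod = defaultdict(list)
    for i, k in k_in.items():
        by_mod[module[i]].append(k)
    stats = {m: (mean(ks), pstdev(ks)) for m, ks in by_mod.items()}
    d = {i: (k_in[i] - stats[module[i]][0]) \/ stats[module[i]][1]
            if stats[module[i]][1] > 0 else 0.0
         for i in neigh}
    # c-score: 1 minus the sum over modules of (k_it \/ k_i) squared
    c = {}
    for i, ns in neigh.items():
        per_mod = defaultdict(int)
        for j in ns:
            per_mod[module[j]] += 1
        c[i] = 1.0 - sum((k \/ len(ns)) ** 2 for k in per_mod.values())
    return d, c
\\end{verbatim}
Thresholding the two scores (e.g., treating nodes with $d_i \\geq 2.5$ as hubs, as in \\cite{Guimera:2005}) then assigns each actor to one of the four categories.\n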
Figure \\ref{NAG_ROLES_ALL2}a shows the two scores of all actors in the networks in the two representative years 1970 (I) and 2000 (II), respectively. It is notable that although the number of actors, including both NAGs and HSs, has increased from 98 to 160, the distribution of the roles of all actors has remained very similar (mean value: $\\langle d^I \\rangle = \\langle d^{II} \\rangle = 0$, $\\langle c^I \\rangle = 0.1086$ vs $\\langle c^{II} \\rangle = 0.1402$, standard deviation: $\\sigma(d^I) = 0.9250$ vs $\\sigma(d^{II}) = 0.9582$, $\\sigma(c^I) = 0.2133$ vs $\\sigma(c^{II}) = 0.2348$). For each actor, we measure the fluctuation of its yearly role scores by the standard deviation throughout its lifespan (for NAGs) or hosting time (for HSs). As shown in Fig. \\ref{NAG_ROLES_ALL2}b, most actors turn out to be consistent in their d-scores, with $82\\%$ of all actors experiencing fluctuations of less than 0.5. For the c-score, $67\\%$ of the actors remain the same or have a fluctuation of less than 0.1. Most actors have thus kept their within-module degree (reflecting their fitness) unchanged and have meanwhile retained a similar role in connecting different modules. In this sense, the roles of the actors (being hubs or peripherals) tend to be maintained throughout the network evolution, regardless of the addition and deletion of actors. \n\n\\begin{figure}[t]\n\\hskip 0.2cm\n\\includegraphics[width=0.95\\textwidth]{NAG_ROLES_ALL3.png}\n\\caption{\\label{NAG_ROLES_ALL2} Roles of actors. \\textbf{a}, Standardized within-module degree $d$ and participation coefficient $c$ for the network in year 1970 and 2000, respectively. According to the two scores, any node falls into one of the four predefined categories. \\textbf{b}, Histograms of fluctuation in d- (left panel) and c-scores (right panel) measured by the standard deviation. \\textbf{c}, Correlation of temporal d-scores (upper panel) and c-scores (lower panel) in the active and passive support subnetworks, for the actors that are involved in both relations. Insets: p-values of the temporal correlations. }\n\\end{figure}\n\nAnalogous to ecological mutualistic and parasitic relations, the NAG-HS network consists of active and passive support relations, which comprise two subnetworks, each containing only one type of support. Now we analyze the roles of the actors that are involved in both relations. We project the partition of the whole network onto the subnetworks (so an actor's module-membership in a subnetwork is determined by the partition of the whole network).\nWe calculate the d- and c-scores of the actors in each year in each of the subnetworks, respectively, and then the Pearson coefficients between the yearly d-\/c-scores of all actors for the two subnetworks, as shown in Fig. \\ref{NAG_ROLES_ALL2}c. Except for a few years, the correlation of the d-scores is significantly high in the majority of the 66 years $(p<0.01$; two-sided t-test). Similarly, the correlation of the c-scores shows an increasing tendency over time from 1975 on and becomes significantly high from 1992 on $(p<0.01)$. \n\nThe correlation indicates that an actor involved in both active and passive relations generally plays a consistent role in both subnetworks. Concretely, if the actor plays a role as a peripheral\/connector\/module-hub\/network-hub in one subnetwork, it tends to play a similar role in the other subnetwork. 
In terms of ecology, this means that if a NAG relies heavily on mutually beneficial relations with supporting states, it is very likely to develop substantial parasitic relations as well, such as extracting funds, transporting weapons or finding safe havens without the approval or notice of the HS \\cite{San-Akca:2009,Phillips:2015}. It is notable here that a HS supporting a set of NAGs actively may support another set of NAGs passively. For example, the US provided active support to the Mujahedin in Afghanistan in the 1980s, while it was passively exploited by the IRA, the PLO, and Hamas during the same period \\cite{guelke1996, levitt2008, abu1993}.\n\n\n\n\n\\section*{Discussion}\n\nAs for any ecological system, identifying the resources that cultivate the actors is crucial for understanding the actors' life cycles and their collective behaviours. Although NAGs are not persistently owned or controlled by any state, they acquire resources from selected states in a nonrandom manner. The support network is self-organized through a process where every actor aims to maximize its individual fitness \\cite{Suweis:2013,Cai:2020}, which is known as ``cumulative advantages\" \\cite{deSollaPrice1976,Perc2014}. This is reflected by the preferential selection of partnerships and results in characteristic network patterns. This means that NAGs select HSs that are most likely to provide support for longer periods of time. For a similar reason, \nlong-lasting NAGs tend to attract HSs, which tend to cluster around the same NAGs. \n\n\nThe tendency of attaching to higher-degree nodes is consistent with other network formation processes, such as unipartite scientific collaborations, bipartite plant-pollinator relations \\cite{Bascompte:2003}, or designer-contractor partnerships \\cite{Saavedra:2008}. Yet the generalized preferential detachment in the disassembly of the NAG-HS network is unique. It means that an actor with more associated partners (if mutualistic) or exploiters (if parasitic) may be more likely to break a link, which is most probably sub-optimal.\nThis may be due to the fact that an actor with more links tends to become more selective over time, to minimize its dependence on others. Since supporting a NAG is an act of animosity and might attract retaliation, limiting interactions is a smart strategy for HSs to pursue. By the same token, if we consider that HSs might also constrain the operations of NAGs, the latter would try to keep their links limited. For example, Arafat was particularly careful about getting support from Syria and Egypt in the 1970s since both of these countries had strong leaders who prevented Palestinian infiltration to Israel from their own territory, for fear of Israeli retaliation. Instead he opted to form strong ties with Jordan (in the late 1960s) and with Lebanon in the 1970s and early 1980s, because he believed that these states had weak governments that could be exploited \\cite{Sayigh:1997,Rubin:2003, maoz2006}.\n\n\nDespite continuous attachment and detachment, the self-organized nontrivial architecture persists over a wide time span. Moreover, the independent measures of nestedness and modularity turn out to be negatively correlated in the NAG-HS supporting network, whereas no correlation appears in the null model. While nestedness can result from the compound effect of PA and PD, modularity seems to be the consequence of the matching of actors' traits, which are determined by geographical, political and religious factors. 
A similar negative correlation is also observed between the dyadic measures of a large ensemble of ecological mutualistic networks, across a wide range of geographical factors and constituents \\cite{Cai:2020}. The emergence of a nested, modular structure and their negative correlation suggests that this network of military support and mutualistic ecological networks may share a common ecological principle, that is, optimizing fitness in light of tradeoffs. \n\nThe temporal community detection algorithm confirms the existence of an almost unique partition of the support network (see Sec. 2 in SI). Most importantly, this approach is able to track the temporal changes of modules. Unlike the modules in the US congress roll call network \\cite{Mucha:2010}, partitioned using the same algorithm, where modules with a wide range of durations appear, the modules in the NAG-HS supporting network are either long-lasting ones containing a variety of members or short-lived ones consisting of fewer than three pairs of interacting actors. This suggests that a module has to contain sufficiently many actors and internal interactions to persist; it cannot survive on only a few isolated interactions.\n\nGeographic similarity is shared by most actors in each of the nine established modules (see Tab. S1 and S2 in SI for their contents in two representative years), as most prominently\ndemonstrated by the HSs; \nthis lends credit to regional studies that focus essentially on dense internal interactions \\cite{San-Akca:2016,San-Akca:2009,Phillips:2015}. However, it is notable that remote actors beyond the region are also often present in the same module. Most of them are major powers with global force and financial projection capacity, connecting several modules, including the US, the UK, Russia (the Soviet Union) and China; these were related to the Africa- or Mideast-centered regions (see Fig. \\ref{Y1970-2000F} for the partitions in 1970 and 2000). In addition, a few states may have transregional influences due to specific historical reasons; for instance, the leaders of GAM, a separatist group in Indonesia, are known to have lived in and operated from Sweden for most of the 1980s and 1990s, before the group's dissolution in 2005 \\cite{Schulze:2004}. This suggests that the temporal partition of modules may reveal deeper relations of intimately connected actors beyond the simple methodology of regional division, and thus justifies the modules as proper study objects.\n\n\nRevealing the nontrivial structure enhances our understanding of the roles of the actors. It allows us to capture the manner in which certain NAGs act as connectors, and in which some states form large connecting hubs for multiple NAGs. This may open the door to studying policies that cut off key actors, which may cause malfunction only within a module, or lead to cascades whose influence extends over multiple modules. These research avenues can benefit from the large body of recent studies on percolation and cascading failure in modular and\/or nested networks \\cite{Saavedra:2011,Memmott:2004,Burgos:2007,KaiserBunbury:2010,Gleeson:2008,Snyder2020}, as well as on their structural stability and capacity \\cite{Rohr:2014}. \n\n\nIn all, quantifying the ecosystem of NAG-HS interactions provides a unique and important perspective towards the comprehension of the origin, evolution and termination of NAGs, the states supporting them, and their collective behaviours. 
Ecological organizing principles, originally discovered in natural systems, seem to be shared by this military network, as well as by a variety of previously studied socio-economic networks \\cite{May:2008,Saavedra:2009}. We believe further investigations of the NAG-HS network will be fruitful for the domain of warfare and conflicts, as well as for developing theories to explain the origin of the ubiquitous network patterns beyond that domain.\n\n\n\n\n\n\n\n\n\n\\section*{Methods}\n\n\\textbf{Dataset of NAGs.} Our study objects are the bipartite adjacency matrices between NAGs and their attached states of support, constructed from the Dangerous Companions Database \\cite{San-Akca:2016}. The dataset covers the period between 1945 and 2010 and provides a profile for each NAG, with information on the foundation year, objectives and ideational characteristics. All NAGs are engaged in a violent conflict against one or more governments, and only the major conflicts are collected, namely those that caused more than 25 battle-related deaths (BRD). \nA HS can provide multiple types of support to a NAG,\neach of which is classified as active if it is intentional or passive (de facto) if it is inadvertent. \\\\ \n\n\n\\noindent\\textbf{Relative probability.} We calculate the relative probability $T^+(k_{HN})$ that a NAG added in year $t$ connects to an existing state which is connected to $k_{HN}$ NAGs in year $t-1$. The relative probability $T^+(k_{HN}) = P^+(k_{HN},t)\/P_0^+(k_{HN},t)$ is defined as the ratio of the absolute probability $P^+(k_{HN},t)$ that a NAG connects to a HS with degree $k_{HN}$ to the probability $P_0^+(k_{HN},t)$ that a HS with degree $k_{HN}$ is uniformly selected at random, where $P_0^+(k_{HN},t) = n_s(k_{HN},t)\/N_s(t)$ with $n_s(k_{HN},t)$ being the number of HSs with degree $k_{HN}$ prior to the addition of this NAG and $N_s(t)$ being the number of all HSs. Then $T^+(k_{HN})$ can be estimated by making a histogram of the degrees $k_{HN}$ of the HSs to which each NAG is added, in which each sample is weighted by a factor of $N_s(t)\/n_s(k_{HN},t)$ \\cite{Newman:2001}. In the random case, $T^+(k_{HN})$ should equal 1 for all $k_{HN}$. If it is a nonconstant function of $k_{HN}$, some bias is involved in the attachment process. In particular, a linearly increasing function corresponds to the standard preferential attachment (PA), while a general monotonically increasing function corresponds to generalized PA. Similarly, we calculate the relative probability $T^-(k_{HN})$ that a NAG detaches from a HS with degree $k_{HN}$. Furthermore, we calculate the relative probabilities that a HS acquires or loses a connection from an incumbent NAG, namely $R^+(k_{HN})$ and $R^-(k_{HN})$. From the perspective of NAGs, we can also calculate the relative probability that a NAG acquires or loses a connection from a HS (see Fig. S5 in SI). \\\\\n\n\n\n\\noindent\\textbf{Temporal detection of modules.} We use a temporal community detection algorithm, GenLouvain, based on Markov stability, proposed by Mucha et al. \\cite{Mucha:2010}. The merit of the temporal algorithm is its ability to find a systematic optimal partition of the network by optimizing a single quality function throughout all years. Note that by using this optimization algorithm, there can still be arbitrariness due to different partitions $\\{P_s\\}$ of the same network that give comparable modularity values close to the optimum. 
A specific $s$th partition is denoted by $P_s(t)=\\cup_i M_{s,i}(t)$, where the subscript $i$ denotes the $i$th module in year $t$. \n\n\nWe then seek a representative partition if the network has a truly nontrivial modular structure (for details see Sec. S2 in SI). This is realized by using a further stabilizing technique for robust module detection \\cite{Bassett:2013}. We construct a nodal association matrix $T$, each entry of which is the frequency with which a pair of actors is partitioned into the same module according to the partitions $\\{P_s\\}$ (only statistically significant contributions are retained). We then partition this matrix $T$, which reveals the network modular structure from all partitions $\\{P_s\\}$. It turns out that all partitions $\\{\\tilde{P}_s\\}$ of $T$, after applying this technique, are almost identical to each other, indicating the existence of a highly robust modular structure of the studied network. Finally, to remove the remaining slight uncertainty, we select a representative partition $\\tilde{P}^*$, which has a maximum average Adjusted Mutual Information (AMI) with all other partitions $\\{\\tilde{P}_s\\}$ \\cite{Vinh:2010}.\\\\ \n\n\\noindent\\textbf{Roles of actors.} We calculate two characteristics of each actor, the standardized within-module degree and the participation coefficient \\cite{Guimera:2005}\n\\begin{eqnarray}\n\\label{eq:zscs}\nd_i&=&(k_{is}-\\langle \\{k_{is}\\} \\rangle)\/\\sigma(\\{k_{is}\\}) \\\\\nc_i&=&1-\\sum_{t=1}^{N_m}(k_{it}\/k_i)^2\n\\end{eqnarray}\nwhere $k_i$ is the degree of node $i$, $k_{is}$ is the within-module degree (the number of links of node $i$ to other nodes in its own module $s$), $\\langle \\{k_{is}\\} \\rangle$ and $\\sigma(\\{k_{is}\\})$ are the average and standard deviation of the within-module degrees of all nodes in $s$, and $k_{it}$ is the number of links from node $i$ to nodes in module $t$, of $N_m$ modules in total.\n\n\n\\section*{Data Availability}\nThe ``Dangerous Companions\" Dataset used in this study is available at http:\/\/armedgroups.net.\n\n\\begin{sloppypar}\n\n\\section*{Code Availability}\nCodes for analyzing the network are available upon request. The code for measuring nestedness from the library ``BiMat\" is available at https:\/\/bimat.github.io. The code for temporal community detection from the library ``GenLouvain\" is available at https:\/\/github.com\/GenLouvain\/GenLouvain. 
\n\n\\end{sloppypar}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section*{Introduction}\nThe research presented in this paper is part of an ongoing investigation into the suitability of classical logic in the context of programming languages with control.\nRather than looking at how to encode known control features into calculi like the $`l$-calculus \\cite{Church'36,Barendregt'84}, Parigot's $\\lmu$-calculus \\cite{Parigot'92}, or $\\Lmu$\\Paper{\\footnote{The name $\\Lmu$ was first introduced in \\cite{Saurin'05}, which also introduced a different notation for terms, placing names \\emph{behind} terms, rather than in front, as done by Parigot and de Groote; we use their notation here.}} \\cite{deGroote'94}, as has been done in great detail by others, we focus on trying to understand what exactly is the notion of computation that is embedded in calculi like $\\lmu$; we approach that problem here by presenting a fully abstract interpretation for that calculus into the (perhaps better understood) $`p$-calculus \\cite{Milner'92}.\n\nIn the past, many researchers investigated\ninterpretations into the $`p$-calculus of various calculi that have their foundation in classical logic\\Paper{, as done in, for example, \\cite{Honda-Yoshida-Berger'04,vBCV-CLaC'08,CiminiCS'10,Beffara-Mogbil'12}}.\nFrom these papers it might seem that the interpretation of such `classical' calculi comes at a great expense; for example, to encode \\emph{typed} $\\lmu$, \\cite{Honda-Yoshida-Berger'04} defines an extension of Milner's encoding and considers a \\Paper{version of the $`p$-calculus that is strongly typed}\\CLAC{strongly typed $`p$-calculus}; \\cite{vBCV-CLaC'08} shows preservation of reduction in $\\X$ \\cite{Bakel-Lescanne-MSCS'08} only with respect to $\\Ctg$, the contextual ordering (so not with respect to $\\equivC$, contextual equivalence, nor with respect to weak bisimilarity); \\cite{CiminiCS'10} defines a non-compositional interpretation of $\\lmmt$ \\cite{Curien-Herbelin'00} that strongly depends on recursion, and does not regard the logical aspect\\Paper{ at all}.\n\n\\CLAC{In \\cite{Bakel-Vigliotti-IFIPTCS'12} we started our investigations by presenting an interpretation for de Groote's variant $\\Lmu$ into the $`p$-calculus \\cite{Milner'92} and proved a soundness result; here we show that this interpretation is fully abstract, but we have to limit the interpretation to $\\lmu$ terms. \nWe study an output-based encoding of $\\lmu$ into the $`p$-calculus that is an extension of the one we defined for the $`l$-calculus \\cite{Bakel-Vigliotti-CONCUR'09} and is a natural variant of that for $\\Lmu$ in \\cite{Bakel-Vigliotti-IFIPTCS'12}.\nIn those papers, we have shown that our encoding respects \\emph{single-step explicit head reduction} (which only ever replaces the head variable of a term) modulo $\\equivC$\\Paper{; here we restate those properties with respect to $\\wbisimilar$, weak bisimulation}.\n\\CLAC{\n\nWe will here address the natural question that arises next: are two terms that are equal under the interpretation also operationally equivalent, \\emph{i.e.}: is the interpretation \\emph{fully abstract}?\nWe answer that question positively, using a new approach to showing full abstraction, for our interpretation of $\\lmu$-terms (rather than $\\Lmu$ as used in \\cite{Bakel-Vigliotti-IFIPTCS'12}) and thereby also for the standard $`l$-calculus. 
\n\\Paper{Using a new technique, we will show that our interpretation is fully abstract, \\emph{i.e.}~two terms that are equal under the interpretation are also operationally equivalent; our approach will be to first show that our interpretation respects single-step explicit head reduction $\\redxh$ modulo \\emph{weak bisimilarity} $\\wbisim$ and to \n\\CLAC{Following the approach of \\cite{Bakel-Vigliotti-IFIPTCS'12} we can show that our interpretation respects single-step explicit head reduction $\\redxh$ modulo \\emph{weak bisimilarity} $\\wbisim$ (rather than $\\equivC$ as used in \\cite{Bakel-Vigliotti-IFIPTCS'12}; we omit the details here).\nWe \nextend this result to $\\equivwxh$, the equivalence relation generated by $\\redxh$ that also equates terms without a (weak) normal form with respect to $\\redxh$.\nThe main proof of the full abstraction result is then achieved through showing that $\\equivwxh$ coincides with $\\equivwbmu$, the equivalence relation generated by standard reduction that also equates terms without a weak head normal form\\Paper{: this latter result is stated entirely within $\\lmu$}.\n\nThis technique is considerably different from the one used by Sangiorgi, who has shown a full abstraction result \\cite{Sangiorgi'94,Sangiorgi-Walker-Book'01} for Milner's encoding $\\MilSem[M] a $ of the lazy $`l$-calculus \\cite{Milner'92}.\n\\Paper{The characterisation of $\\MilSem[M] a \\wbisim \\MilSem[N] a $, left as an open problem in \\cite{Milner'92}, was achieved through showing that the equivalence classes under weak bisimilarity of Milner's encoding form a model for the lazy $`l$-calculus in the sense that they provide a denotational semantics, similar to Cor.~\\ref{semantics} below. \nTo achieve full abstraction, Sangiorgi proves that $\\MilSem[M] a \\wbisim \\MilSem[N] a $ if and only if $M \\AppBis N$, where $\\AppBis$ is the \\emph{applicative bisimilarity} on $`l$-terms \\cite{Abramsky-Ong'93b}\\CLAC{. }\\Paper{, an operational notion of equivalence on terms of the lazy $`l$-calculus as defined by Abramsky and Ong, rather than $`b$-equality.\n\n\nHowever, this result comes at a price, since applicative bisimulation equates \\Paper{the }terms \\Paper{$ x (x \\Theta `D`D) \\Theta $ and $ x (`l y . x \\Theta `D`D y ) \\Theta $ (with $`D = `lx.xx$, and $\\Theta$ is such that, for every $N$, $\\Theta N$ is reducible to an abstraction) whereas these terms}\\CLAC{that} are not weakly bisimilar under \\Paper{the interpretation }$\\MilSem[`.] `. 
$: in order to achieve full abstraction, Sangiorgi had to extend Milner's encoding to $`L_c$, a $`l$-calculus enriched with constants, and to exploit a more abstract encoding into the \\emph{Higher Order} $`p$-calculus, a variant of the $`p$-calculus with higher-order communications.\nSangiorgi's result then essentially states that the interpretations of closed $`L_c$-terms $M$ and $N$ are contextually equivalent if and only if $M$ and $N$ are applicatively bisimilar; in \\cite{Sangiorgi'94} he shows that the interpretations of terms in $`L_c$ in the standard $`p$-calculus are weakly bisimilar if and only if the terms have the same L\\'evy-Longo tree\\Paper{ \\cite{Levy'76,Longo'83} (a lazy variant of B\\\"ohm trees \\cite{Barendregt'84})}.\n\n\nWe would like to stress that in order to achieve full abstraction for our interpretation, \\emph{we did not need to extend the interpreted calculus, and we use a first-order $`p$-calculus}.\nIn fact, \\CLAC{the main contribution of this paper and }the novelty of our proof is the \\emph{structure of the proof} of the fact that our interpretation gives a fully abstract semantics.\nTo wit, we define a choice of operational equivalences for the $\\lmu$-calculus, both with and without explicit substitution.\nWe define the \\emph{weak explicit head equivalence} $\\equivwxh$ and show that this is exactly the relation that is naturally representable in the $`p$-calculus; we define \\emph{weak head equivalence} $\\equivxh$ and show that for $\\lmu$-terms without explicit substitution, $\\equivwxh$ corresponds to $\\equivxh$.\nThe relation $\\equivwxh$ essentially equates terms that have the same L\\'evy-Longo tree, now of course defined for $\\lmu$, which is shown through a notion of weak approximation.\nWe then show that the relations $\\equivwA$, which expresses that terms have the same set of weak approximants, $\\equivwxh$, and $\\equivwbmu$ all coincide.\n\n\\CLAC{The combined results of \\cite{Bakel-Vigliotti-CONCUR'09,Bakel-Vigliotti-IFIPTCS'12} and the full abstraction}\\Paper{The} results we present here stress that the $`p$-calculus constitutes a very powerful abstract machine indeed: although the notion of structural reduction in $\\lmu$ is very different from normal $`b$-reduction, no special measures had to be taken in order to be able to express it through our interpretation.\nIn fact, the distributive character of application in $\\lmu$, and of both term and context substitution, is dealt with \\Paper{entirely }by congruence in $`p$\\Paper{ (see also Example~\\ref{redex example})}, and both naming and $`m$-binding are dealt with entirely statically by the interpretation.\n\n\n\n\\Comment{\nThe results of \\cite{Bakel-Vigliotti-IFIPTCS'12} concentrated on the relation between $\\Lmux$ and the $`p$-calculus, and in particular on Soundness and Completeness (Theorem~\\ref{rtc soundness} here); in this paper, we will focus on full abstraction for $\\lmu$.\nWe will define a notion of equivalence $\\equivwxh$ between terms of $\\lmux$ that equates also terms that have no weak head-normal form, and show that terms are equivalent with respect to $\\equivwxh$ if and only if their images under $ \\PilmuTerm[`.] `. 
$ are contextually equivalent.\nWe will then generalise this to equivalences generated by head reduction, standard reduction, and weak approximation, respectively, which all equate terms that have no weak head-normal form, and show that they all coincide: this will lead to our main result: $M \\equivwbmu N \\Iff \\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $.\n\n\n\n \\vspace{2mm}\n\\noindent\n\\textbf{Organisation of this paper:}\nWe start by revisiting the $\\lmu$-calculus in Section~\\ref{lambda mu calculus} and define a notion of \\emph{head reduction} $\\redh$.\nIn Section~\\ref{pi with pairing} we revisit the $`p$-calculus, enriched with \\emph{pairing}.\nIn Section~\\ref{lmux section} we define $\\lmux$, a version of $\\lmu$ with \\emph{explicit substitution}, as well as a notion of \\emph{explicit head reduction}, and in Section~\\ref{lambda interpretation} we define our \\emph{logical interpretation} of $\\lmux$ into $`p$.\n\nWorking towards our full abstraction result, in Section~\\ref{weak reduction} we will define notions of weak reduction, in particular \\emph{weak head reduction} and \\emph{weak explicit head reduction}.\nWe then define the two notions of equivalence these induce, which also equate terms without weak head-normal form, and show that these notions coincide on pure $\\lmu$ terms (\\emph{i.e.}~without explicit substitutions).\nWe also define the equivalence $\\equivwbmu$ induced by $\\redbmu$ on pure $\\lmu$ terms, which also equates terms without weak head-normal form.\nIn Section~\\ref{approximation section}, we define a notion of \\emph{weak approximation} for $\\lmu$, and show that the semantics this induces, $\\equivwA$, is fully abstract with respect to both $\\equivwh$ and $\\equivwbmu$.\nWe show that our logical interpretation is fully abstract with respect to weak bisimilarity $\\wbisim$ on processes and $\\equivwxh$, $\\equivwh$, $\\equivwA$, and $\\equivwbmu$ on pure $`l`m$-terms.\n\n\n\n\\vspace*{2mm}\n\\noindent\n\\textbf{Notation:}\nWe will use a vector notation $\\Vect{`.}$ as an abbreviation for any sequence: for example, $\\Vect{x_i}$ stands for $x_1, \\ldots, x_n$, for some $n$, or for $\\Set{x_1, \\ldots, x_n}$, \\CLAC{and\n}$\\langle\\Vect{ `a_i := N_i `. `b_i }\\rangle$ for $ \\excontsub`a_1 := N_1 . `b_1 \\,$ $\\dots\\, \\excontsub `a_n := N_n . 
`b_n $,\\Paper{ $\\Vect{M_i \\eqh N_i}$ for $\\Forall 1 \\seq i \\seq n \\Pred [ M_i \\eqh N_i ] $,} etc.\nWhen possible, we will drop the indices.\n\n\n \\section{The $\\lmu$~calculus\\CLAC{ and explicit substitution}} \\label{lambda mu calculus}\n\nIn this section, we will briefly discuss Parigot's $\\lmu$-calculus~\\cite{Parigot'92}; we assume the reader to be familiar with the $`l$-calculus and its notion of reduction $\\bred$ and equality $=_{`b}$.\n\n$\\lmu$ is a proof-term syntax for classical logic, expressed in Natural Deduction, defined as an extension of the Curry type assignment system for the {\\LC} by adding the concept of \\emph{named} terms, and adding the functionality of a \\emph{context switch}, allowing arguments to be fed to subterms.\n\n\\Paper{In the next section we will define explicit head reduction for $\\lmux$, a variant of $\\lmu$ with explicit substitution \\emph{\\`a la} $\\Lx$ \\cite{Bloo-Rose'95}, and will show full abstraction results for $\\lmux$; since $\\lmux$ implements $\\lmu$-reduction, this implies that, automatically, our main results also hold for standard reduction (with implicit substitution).\n\n \\begin{definition}[Syntax of $\\lmu$] \\label{lm-terms}\n\\Paper{The $\\lmu$-\\emph{terms} we consider are defined over the set of \\emph{variables} represented by Roman characters, and \\emph{names}, or \\emph{context} variables, represented by Greek characters, through the grammar:}%\n\\CLAC{The $\\lmu$-\\emph{terms} we consider are defined over the set of \\emph{variables} (Roman characters) and \\emph{names}, or \\emph{context} variables (Greek characters), through:}\n\\Paper{\n \\[ \\begin{array}{@{}rrl@{\\quad}l}\nM,N &::=& x & \\textit{variable} \\\\\n& \\mid & `l x.M & \\textit{abstraction} \\\\\n& \\mid & MN & \\textit{application} \\\\\n& \\mid & \\muterm`a.[`b]M & \\textit{context switch}\n \\end{array} \\]\n\n\\CLAC{\n \\[ \\begin{array}{rcl}\nM,N &::=& x \\mid `l x.M \\mid MN \\mid \\muterm`a.[`b]M\n \\end{array} \\]\n\nWe will occasionally write $\\Cmd$ for the pseudo-term $[`a] M$.\n \\end{definition}\n\\Paper{In fact, the main difference between $\\Lmu$ and $\\lmu$ is that in the former, $[`a] M$ is considered to be a term.}\n\nAs usual, $`l x.M$ binds $x$ in $M$, and $\\muterm `a .\\Cmd$ binds $`a $ in $\\Cmd$, and the notions of free variables $\\fv(M)$ and names $\\fn(M)$ are defined accordingly; the notion of $`a $-conversion extends naturally to bound names, and we assume Barendregt's convention, in that free and bound variables and names are always distinct, using $`a$-conversion when necessary.\n\\Paper\nAs usual, we call a term \\emph{closed} if it has no free variables.\n\nSimple type assignment for $\\lmu$ is defined as follows:\n \\begin {definition}[Types, Contexts, and Typing] \\label{types} \\label {typing for lmu}\n \\begin {enumerate}\n\n \\item\nTypes are defined by:\n \\[ \\begin{array}{rcl}\nA,B & ::= & \\tvar \\mid A \\arrow B\n \\end{array} \\]\nwhere $\\tvar$ is a basic type, of which there are infinitely many.\n\n \\item\nA \\emph{context of inputs} $`G$ is a mapping from term variables to types, denoted as a finite set of \\emph{statements} $\\stat{x}{A}$, such that the \\emph{subjects} of the statements ($x$) are distinct.\nWe write $\\G_1,\\G_2$ for the \\emph{compatible} union of $\\G_1$ and $\\G_2$ (if $\\stat{x}{A_1} \\ele \\G_1$ and $\\stat{x}{A_2} \\ele \\G_2$, then $A_1 = A_2$), and write $`G, \\stat{x}{A}$ for $`G, \\{\\stat{x}{A}\\}$.\n\n \\item\nContexts of \\emph{outputs} $`D$ as 
mappings from names to types, and the notions $`D_1,`D_2$ and $`a{:}A,`D$ are defined similarly.\n\n \\item\nType assignment for $\\lmu$ is defined by the following natural deduction system.\n \\[ \\kern-5mm\n \\begin{array}{@{}rl@{\\quad}rl}\n(\\Ax) : &\n\\Inf\t{ \\derLmu `G,x{:}A |- x : A | `D }\n&\n (`m) : &\n \\Inf\t[`a \\notele `D]\n\t{ \\derLmu `G |- M : B | `a{:}A,`b{:}B,`D\n\t}{ \\derLmu `G |- \\muterm`a.[`b]M : A | `b{:}B,`D }\n\\quad\n \\Inf\t[`a \\notele `D]\n\t{ \\derLmu `G |- M : A | `a{:}A,`D\n\t}{ \\derLmu `G |- \\muterm`a.[`a]M : A | `D }\n\\end{array} \\] \\[ \\begin{array}{@{}rl@{\\dquad}rl}\n(\\arrI) : &\n\\Inf\t[x \\notele `G]\n\t{ \\derLmu `G,x{:}A |- M : B | `D\n\t}{ \\derLmu `G |- `l x.M : A\\arrow B | `D }\n&\n(\\arrE) : &\n\\Inf\t{ \\derLmu `G |- M : A\\arrow B | `D\n\t \\quad\n\t \\derLmu `G |- N : A | `D\n\t}{ \\derLmu `G |- MN : B | `D }\n \\end{array} \\]\n\n \\end {enumerate}\n \\end{definition}\nSo, for the context $`G, \\stat{x}{A}$, we have either $\\stat{x}{A} \\in `G$, or $`G$ is not defined on $x$.\n\nIn $\\lmu$, reduction of terms is expressed via implicit substitution; a%\n\n\\CLAC{A}s usual, $M[N \\For x]$ stands for the substitution of all occurrences of $x$ in $M$ by $N$, and $M [ N{`.}`g \\For `a ]$, the \\emph{structural substitution}, for the term obtained from $M$ when every (pseudo) sub-term of the form $[`a]M'$ is replaced by $[`g]M' N$.\\Paper{\\footnote{This notion is often defined as $M [ N \\For `a ]$, where every (pseudo) sub-term of the form $[`a]M'$ is replaced by $[`a]M' N$; in our opinion, this creates confusion, since $`a$ gets `reintroduced'; it is in fact a new name, which is illustrated by the fact that, in the typed version $`a$ then changes type.}}\n\\CLAC{(We omit the formal definition here; see Def.~\\ref{definition lmux} for the variant with \\emph{explicit} structural substitution.)}\n\\Paper\nFor reasons of clarity, and because below we will present a version of $\\lmu$ that makes the substitution explicit, we define the $`m$-substitution formally.\n\n \\begin{definition} [Structural substitution] \\CLAC{\\hfill}\nWe define $M [ N{`.}`g \\For `a ]$ \\Paper{(where every sub-term $[`a]L$ of $M$ is replaced by $[`g]LN$) }by induction over the structure of (pseudo-)terms by:\n \\[ \\begin{array}{@{}r@{\\,}lcl@{\\quad}l}\n([`a]M) & [ N{`.}`g \\For `a ] & \\ByDef & [`g] ( M \\, [ N{`.}`g \\For `a ] ) N\n\t\\\\\n([`b]M)&[ N{`.}`g \\For `a ] & \\ByDef & [`b] ( M \\, [ N{`.}`g \\For `a ] ) & (`a\\not=`b)\n\t\\\\\n(\\muterm`b.\\Cmd) & [ N{`.}`g \\For `a ] & \\ByDef & \\muterm`b. ( \\Cmd \\, [ N{`.}`g \\For `a ] )\n\t\\\\\nx & [ N{`.}`g \\For `a ] & \\ByDef & x\n\t\\\\\n(`lx.M) & [ N{`.}`g \\For `a ] & \\ByDef & `lx . (M \\, [ N{`.}`g \\For `a ] )\n\t\\\\\n(M_1M_2) & [ N{`.}`g \\For `a ] & \\ByDef & M_1 \\, [ N{`.}`g \\For `a ]~M_2 \\, [ N{`.}`g \\For `a ]\n \\end{array}\\]\n \\end{definition}\n\nWe have the following rules of computation in $\\lmu$:\n\n\n \\begin{definition}[$\\lmu$ reduction] \\label{lmu reduction}\n\\CLAC{Reduction on $\\lmu$-terms is defined as the contextual closure of the rules:}\n\\Paper{$\\lmu$ has a number of reduction rules: two \\emph{computational rules}: }\n \\[ \\begin{array}{@{}rrcll}\n\\textit{logical } (`b): & (`l x . M ) N & \\red & M [ N \\For x ]\n\t\\\\\n\\textit{structural } (\\mu): & (\\muterm `a . \\Cmd ) N & \\red & \\muterm`g. 
( \\Cmd [N{`.}`g \\For `a] )\n\n\\Paper{ \\end{array} \\]\nas well as the \\emph{simplification rules}:\n \\[ \\begin{array}{@{}rrcll}\n}\\CLAC{\\\\ }\n\\textit{renaming} : & \\muterm `d . [`b](\\muterm`g.[`a]M) & \\red & \\muterm `d . [`a] M[`b \\For `g]\n\t\\\\\n\\textit{erasing} : & \\muterm `a . [`a] M & \\red & M & (`a \\notele \\fn(M))\n \\end{array} \\]\n\\Paper{which are added mainly to simplify the presentation of results.\nWe use the contextual rules:\\footnote{Normally the contextual rules are not mentioned but are left implicit; here we need to state them, since we will below consider notions of reduction that do not permit all contextual rules.}\n \\[ \\begin {array}[t]{@{}r@{~~}c@{~~}l}\nM \\red N &\\Then&\n \\begin{cases}\nML &\\red& NL \\\\\nLM &\\red& LN \\\\\n`lx.M &\\red& `lx.N \\\\\n\\muterm`a.[`b]M & \\red & \\muterm`a.[`b]N\n \\end{cases}\n \\end {array} \\]}%\nWe use ${}\\rtcredbmu{}$ for the pre-congruence\\Paper{\\footnote{A \\emph{pre-congruence} is a reflexive and transitive relation that is preserved in all contexts; a \\emph{congruence} is symmetric pre-congruence.}} based on these rules, ${}\\eqbmu{}$ for the congruence, write $M \\rtcredbmu[\\nf] N$ if $M \\rtcredbmu N$ and $N$ is in normal form, $M \\rtcredbmu[\\hnf] N$ if $M \\rtcredbmu N$ and $N$ is in head-normal form, $M \\Shows $ if there exists a finite reduction path starting from $M$,\\Paper{\\footnote{Note that this does not imply that \\emph{all} paths are finite.}} and $M \\Diverges$ if this is not the case; we will use these notations for other notions of reduction as well.\n\n\n \\end{definition}\n\nThat this notion of reduction is confluent was shown in \\cite{Py-PhD'98}; so we have:\n\n \\begin{proposition} \\label{confluence eq}\nIf $M \\eqbmu N $ and $M \\rtcredbmu P $, then there exists $Q$ such that $ P \\rtcredbmu Q $ and $N \\rtcredbmu Q $.\n \\end{proposition}\n\n\n\n\\Paper\nFor convenience, Parigot also considers $[`a]M$ and $\\muterm`a.M$ as \\emph{pseudo}-terms, that represent \\emph{deactivation} and \\emph{activation}.\\footnote{In fact, \\cite{Parigot'92} formulates the renaming rule as $ [`b](\\muterm`g.M) \\red M[`b \\For `g] $.}\n\nThe intuition behind the structural rule is given by \\cite{deGroote'94}: ``\\emph{in a $\\lmu$-term $\\muterm`a.M$ of type $A \\arr B$, only the subterms named by $`a$ are \\emph{really} of type $A \\arr B$ (\\ldots); hence, when such a $`m$-abstraction is applied to an argument, this argument must be passed over to the sub-terms named by $`a$.}''\nWe can think of $[`a]M$ as storing the type of $M$ amongst the alternative conclusions by giving it the name $`a $.\n\n\\cite{Parigot'93} has shown that typeable terms are strongly normalisable.\nIt also defines the extensional rules:\n\\[ \\begin{array}{@{}r@{\\quad}rcl@{\\quad}l}\n(\\eta): & `lx.Mx & \\red & M & (x\\not\\in \\fv(M))\n\t\\\\\n(`h`m) : & \\muterm`a . [`b] M & \\red & `lx\\muterm`g . 
[`b] M[x{`.}`g \For `a]\n \end{array} \]\nHere we do not consider these rules: the model we present through our interpretation is not extensional, and we therefore cannot show that those rules are preserved by the interpretation (see Rem.~\ref{not extensional}).\n\n \begin{example} \label{lmu double negation}\nAs an example illustrating the fact that this system is more powerful than the system for the \LC, here is a proof showing that it is possible to inhabit Peirce's Law (due to \cite{Ong-Stewart'97}):\n \[ \begin{array}{@{}c}\n \Inf\t[\arrI]\n\t{\Inf\t[`m]\n\t\t{\Inf\t[\arrE]\n\t\t\t{\Inf\t[\Ax]\n\t\t\t\t{ \derLmu x{:}(A\arrow B)\arrow A |- x : (A\arrow B)\arrow A | `a{:}A }\n\t\t\t \Inf\t[\arrI]\n\t\t\t\t{\Inf\t[`m]\n\t\t\t\t\t{\Inf\t[\Ax]\n\t\t\t\t\t\t{ \derLmu x{:}(A\arrow B)\arrow A,y{:}A |- y : A | `b{:}B }\n\t\t\t\t\t}{ \derLmu x{:}(A\arrow B)\arrow A,y{:}A |- \muterm`b.[`a]y : B | `a{:}A }\n\t\t\t\t}{ \derLmu x{:}(A\arrow B)\arrow A |- `ly.\muterm`b.[`a]y : A\arrow B | `a{:}A }\n\t\t\t}{ \derLmu x{:}(A\arrow B)\arrow A |- x(`ly.\muterm`b.[`a]y) : A | `a{:}A }\n\t\t}{ \derLmu x{:}(A\arrow B)\arrow A |- \muterm`a.[`a](x(`ly.\muterm`b.[`a]y)) : A | {} }\n\t}{ \derLmu { } |- `lx.\muterm`a.[`a](x(`ly.\muterm`b.[`a]y)) : ((A\arrow B)\arrow A)\arrow A | {} }\n \end{array} \]\nThe underlying logic of the system of Def.~\ref{typing for lmu} corresponds to \emph{minimal classical logic} \cite{Ariola-Herbelin'03}.\n \end{example}\n\nWe also consider the notion of head reduction; it is defined in \cite{Wadsworth'76} for the $`l$-calculus by first defining the head-redex of a term as the subterm $(`lx.M)N$ in a term of the form\n \[ `lx_1x_2\dots x_n.((`lx.M)N)L_1L_2\dots L_m \quad (n \geq 0, m \geq 0) \]\nHead reduction is then that notion in which each step is determined by contraction of the head redex (when it exists), and head-normal forms (the normal forms with respect to head reduction) are of the generic shape\n \[ `lx_1x_2\dots x_n.yL_1L_2\dots L_m \quad (n \geq 0, m \geq 0) \]\nand $y$ in this term is called the head variable.\nIn $\lmu$, given the naming and $`m$-binding features, the notion of head redex is not as easily defined; rather, here we define head reduction by not allowing reductions to take place in the right-hand side of applications (in the context of the $`l$-calculus, this gives the original notion); we also define a notion of head-normal form for $\lmu$.\n\n\n \begin{definition} [Head reduction for $\lmu$ (cf.~\cite{Lassen'06})] \label{head reduction definition}\n \begin{enumerate}\n\n \item\nWe define \emph{head reduction} $\redh$ as the restriction of $\redbmu$ by removing the contextual rule:\n \CLAC{$}\Paper{\[} \begin{array}{rcl@{}}\nM \red N &\Then& LM \red LN\n \end{array} \CLAC{$}\Paper{\]}\n\n \item\nThe $\lmu$ \emph{head-normal forms} (\HNF) are defined through the grammar:\n \[ \begin {array}{@{}rrl@{\quad}l}\n\lmuHNF & ::= &\n`lx. 
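\n% Added illustration (not part of the definition): since \redh removes the contextual\n% rule LM \red LN, the argument of an application is frozen, e.g.\n%   (`lx.x)((`ly.y)z)  \redh  (`ly.y)z  \redh  z ,\n% whereas contracting the inner redex first would require the removed rule.\n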
\\lmuHNF\n\t\\\\ &\\mid &\nxM_1\\dots M_n & (n \\geq 0)\n\t\\\\ &\\mid &\n\\muterm`a.[`b] \\lmuHNF\n\t& (`b \\not= `a \\textrm{ or } `a \\in \\lmuHNF, \\textrm{ and }\n\t\\lmuHNF \\not= \\muterm`g.[`d] \\lmuHNF' )\n \\end {array} \\]\n\n \\end{enumerate}\n \\end{definition}\n\\Paper{Notice that the $\\redbmu$ \\HNF s are $\\redh$-normal forms.}\n\nThe following is straightforward:\n \\begin{proposition} [$\\redh$ implements $\\lmu$'s head reduction]\n\\label{head reduction}\nIf $ M \\rtcredbmu N $ with $N$ in {\\HNF} (so $ M \\rtcredbmu[\\hnf] N $), then there exists $\\lmuHNF$ such that $ M \\rtcredh[\\nf] \\lmuHNF $ (so $\\lmuHNF$ is in $\\redh$-normal form) and $ \\lmuHNF \\rtcredbmu N $ without using $\\redh$.\n\n \\end{proposition}\n\n\\Paper\nNotice that\n $ \\begin{array}[t]{@{}lcl}\n`lf.(`lx.f(xx))(`lx.f(xx))\n\t& \\redh &\n`lf.f((`lx.f(xx))(`lx.f(xx)))\n \\end{array} $\nand this last term is in \\HNF, and in $\\redh$-normal form.\n\n\n\n\n\n \\section{The synchronous \\texorpdfstring{$`p$}{}-calculus with pairing} \\label{pi with pairing}\n\nThe notion of $`p$-calculus that we consider in this paper was already considered in \\cite{Bakel-Vigliotti-CONCUR'09} and is different from other systems studied in the literature \\cite{Honda-Tokoro'91} in that it adds \\emph{pairing} and uses a {\\proc{let}}-construct to deal with inputs of pairs of names that get distributed, similar to that defined in \\cite{Abadi-Gordon'97}; in contrast to \\cite{vBCV-CLaC'08,Bakel-Vigliotti-CONCUR'09}, we do not consider the asynchronous version of this calculus.\n\n\n \\begin{definition}[Processes] \\label{pi calculus}\n\\emph{Channel names} and \\emph{data} are defined by:\n \\[ \\begin{array}{rcl@{\\quad}l}\n& & a,b,c,\\dchan,x,y,z &\\textsl{names}\n \\end{array} \\hspace*{1cm} \\begin{array}{rcl@{\\quad}l}\n\\proc{p} & ::= & a\\mid \\PiPair(a,b) ~ & \\textsl{data}\n \\end{array} \\]\n\\Paper{(the pairing in data is \\emph{not} recursive.)}\nProcesses are defined by:\n\\Paper{\n \\[ \\begin{array}{@{}rrl@{\\quad}l}\n\\proc{P}, \\proc{Q} & ::= & \\Zero & \\textsl{nil}\n\t\\\\\n&\\mid& \\proc{P} \\Par \\proc{Q} & \\textsl{composition}\n\t\\\\\n&\\mid& \\Bang\\proc{P} & \\textsl{replication}\n\t\\\\ %\n&\\mid& \\New{a} \\proc{P} & \\textsl{restriction}\n\t\\\\\n&\\mid& \\In a(x) . \\proc{P} & \\textsl{input}\n\t\\\\\n&\\mid& \\Out a <\\procp> . \\proc{P} & \\textsl{output}\n\t\\\\\n&\\mid& \\Let = \\proc{p} in \\proc{P}\t& \\textsl{let construct}\n \\end{array} \\]\n}\\CLAC{\\[ \\begin{array}{@{}rrl@{\\quad}l}\n\\proc{P}, \\proc{Q} & ::= & \\Zero \\mid \\proc{P} {\\Par} \\proc{Q} \\mid \\Bang\\proc{P} \\mid \\New{a} \\proc{P} \\mid\n\\In a(x) . \\proc{P} \\mid \\Out a <\\procp> . \\proc{P} \\mid \\Let = \\proc{p} in \\proc{P}\n \\end{array} \\]\n}\nWe see, as usual, $`n$ as a binder, and call the name $n$ \\emph{bound} in $\\New n \\proc{P} $, $x$ bound in $\\In a(x) . \\proc{P} $ and $x,y$ bound in $\\Let = \\proc{p} in \\proc{P} $; we write $\\bn(\\proc{P})$ for the set of bound names in \\proc{P}; $n$ is \\emph{free} in $\\proc{P}$ if it occurs in $\\proc{P}$ but is not bound, and we write $\\fn(\\proc{P})$ for the set of free names in $\\proc{P}$.\n\\Paper{We call $a$ in $\\New a \\proc{P}$ a \\emph{hidden} channel.\nA {\\it context} $ \\Cont[ \\cdot]$ is a process with a hole $[ \\,]$; we call $\\In a (x) $ and $\\Out a <\\procp> $ \\emph{guards}, and call \\proc{P} in $\\In a (x) . \\proc{P} $ and $\\Out a <\\procp> . 
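\n% Added illustration of the binding conventions (assuming only the definition above):\n% in \New a ( \In a (x) . \Out b <x> ) the names a and x are bound and b is free; and\n% since pairing in data is not recursive, \Out a <\PiPair(b,c)> is a process, whereas\n% \Out a <\PiPair(\PiPair(b,c),d)> is not.\n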
\\proc{P} $ a process \\emph{under guard}.}\n\n \\end{definition}\n\nNotice that data occurs only in two cases: $\\Out a <{\\proc{p}}> $ and $\\Let = \\proc{p} in \\proc{P} $, and that then $\\proc{p}$ is either a single name, or a pair of names; we therefore do not allow $\\In a ({ \\PiPair}) . \\proc{P} $, nor $\\Out a <{ \\PiPair<{ \\PiPair},d>}> . \\proc{P} $, nor $\\Out {\\PiPair} <{\\proc{p}}> . \\proc{P} $, nor $\\New \\PiPair \\proc{P} $, nor $\\Let <\\mbox{$\\PiPair$},y> = \\proc{p} in \\proc{P} $, etc.\n\\Paper{So substitution $\\proc{P}[p\\For x]$ is a partial operation, which depends on the places in $\\proc{P}$ where $x$ occurs; whenever we use $\\proc{P}[p\\For x]$, we will assume it is well defined.\nIt is worthwhile to point out that using pairing is not the same as working with the polyadic (or even dyadic) $`p$-calculus, because there each channel has a fixed arity, whereas we allow data to be sent, which is either a name or a pair of names.}\n\nWe abbreviate $\\In {a} (x) . \\Let = x in \\proc{P} $ by $\\In {a} (y,z) . \\proc{P} $, \\Paper{as well as }$\\New m \\New n \\proc{P} $ by $\\New mn \\proc{P} $, and write $\\Out a <\\procp> $ for $\\Out a <\\procp> . \\Zero $.\nAs in \\cite{Sangiorgi-Walker-Book'01}, we write $\\Eq a=b $ for the \\emph{forwarder} $ \\In a (x) . \\Out b $ \\Paper{and $\\BOut x (w) . \\proc{P} $ for $ \\New w ( \\Out x . \\proc{P} ) $}.\n\n \\begin{definition}[Structural Congruence]\nThe \\emph{structural congruence} is the smallest congruence generated by the rules:\n\\noindent\n \\[ \\begin{array}{c@{\\quad}c} \\begin{array}{rcll}\n\\proc{P} \\Par \\Zero &\\StrCon& \\proc{P}\n\t\\\\\n\\proc{P} \\Par \\proc{Q} &\\StrCon & \\proc{Q} \\Par \\proc{P}\n\t\\\\\n\\Bang \\proc{P} &\\StrCon& \\proc{P} \\Par \\Bang \\proc{P}\n\t\\\\\n\\New n \\Zero &\\StrCon& \\Zero\n\\CLAC{\t\\end{array} & \\begin{array}{rcll} }\n\\Paper{\\\\ }\n(\\proc{P} \\Par \\proc{Q} ) \\Par \\proc{R} &\\StrCon& \\proc{P} \\Par (\\proc{Q} \\Par \\proc{R})\n\t\\\\\n\\New m \\New n \\proc{P} &\\StrCon& \\New n \\New m \\proc{P}\n\t\\\\\n\\New n (\\proc{P} \\Par \\proc{Q}) &\\StrCon& \\proc{P} \\Par \\New n \\proc{Q} & (n \\notin \\fn(\\proc{P}))\n\t\\\\\n\\Let = \\PiPair(a,b) in \\proc{P} &\\StrCon& \\proc{P} [a \\For x,b \\For y]\n \\end{array} \\end{array}\n\\]\n\n \\end{definition}\n\nAs usual, we will consider processes modulo congruence and $`a$-conversion: this implies that we will not deal explicitly with the process $ \\Let = \\PiPair(a,b) in \\proc{P} $, but rather with $ \\proc{P} [a \\For x,b \\For y] $.\nBecause of rule $(\\proc{P} \\Par \\proc{Q} ) \\Par \\proc{R} \\StrCon \\proc{P} \\Par (\\proc{Q} \\Par \\proc{R}) $, we will \\Paper{normally }not write brackets in a parallel composition of more than two processes.\n\nComputation in the $`p$-calculus with pairing is expressed via the exchange of \\textsl{data}.\n \\begin{definition}[Reduction] \\label{pi reduction}\n\\Paper{ \\begin{enumerate}\n\n \\item}\nThe \\emph{reduction relation} over the processes of the $`p$-calculus is defined by the following (elementary) rules:\n \\[ \\begin{array}{rcl@{~}l}\n\\Out a <\\procp> . \\proc{P} \\Par \\In a (x) . 
\\proc{Q} &\\redPi& \\proc{P} \\Par \\proc{Q} [\\proc{p} \\For x]\n\t\\Paper{ & (\\textsl{synchronisation}) }\n\t\\\\\n\\proc{P} \\redPi \\proc{P'} &\\Then& \\New n \\proc{P} \\redPi \\New n \\proc{P}' \\quad\n\t\\Paper{ & (\\textsl{hiding}) }\n\t\\\\\n\\proc{P} \\redPi \\proc{P}' &\\Then& \\proc{P} \\Par \\proc{Q} \\redPi \\proc{P}' \\Par \\proc{Q}\n\t\\Paper{ & (\\textsl{composition}) }\n\t\\\\\n \\proc{P} \\StrCon \\proc{Q} \\And \\proc{Q} \\redPi \\proc{Q}' \\And \\proc{Q}' \\StrCon \\proc{P}'\n\t& \\Then& \\proc{P} \\redPi \\proc{P}'\n\t\\Paper{ & (\\textsl{congruence}) }\n \\end{array} \\]\n\\Paper{We write $ \\proc{P} \\redPi(c) \\proc{Q} $ if $\\proc{P}$ reduces to $\\proc{Q}$ in a single step via a synchronisation over channel $c$, and write $\\redPi(=_{`a})$ if we want to point out that $`a$-conversion has taken place during the synchronisation.\nWe say that $ \\proc{P} \\redPi(c) \\proc{Q} $ takes place \\emph{over a hidden channel} if $c$ is hidden in $\\proc{P}$.}\n\n\\Paper{ \\item\nWe say that a $\\proc{P}$ is \\emph{irreducible} (is in \\emph{normal form}) if it does not contain a possible synchronisation.\n\n \\end{enumerate}}\n \\end{definition}\nNotice that the first rule is only allowed if $\\proc{Q} [\\proc{p} \\For x]$ is a well-defined process.\n\\Comment{As usual, we write $\\rtcredPi[+]$ for the transitive closure of $\\redPi$, and $\\rtcredPi$ for its reflexive and transitive closure; we write $\\redPi(a)$ if we want to point out that a synchronisation took place over channel $a$}\n\\Paper\nAlso,\n \\[ \\begin{array}[t]{@{}lclclcl}\n\\Out a \\Par \\In a(x,y) . \\proc{Q}\n\t& \\ByDef &\n\\Out a <{\\PiPair(b,c)}> \\Par \\In a(z) . \\Let = z in \\proc{Q} \\\\\n\t& \\redPi &\n\\Let = \\PiPair(b,c) in \\proc{Q} \\\\\n\t& \\StrCon &\n\\proc{Q} [b \\For x,c \\For y]\n \\end{array} \\]\n\n\n\nThere are several notions of equivalence defined for the $`p$-calculus: the one we consider here, and will show is related to our encoding, is that of weak-bisimilarity.\n\n \\begin{definition} [Weak-bisimilarity] \\label{bisimilarity} \\label{other reduction}\n \\begin{enumerate}\n\n \\item\nWe write $\\proc{P} \\outson n $ and say that $\\proc{P}$ \\emph{outputs on} $n$ (or $\\proc{P}$ exhibits an output barb on $n$) if $\\proc{P} \\StrCon \\New \\Vect{b} ( \\Out n <\\procp> . \\proc{Q} \\Par \\proc{R} ) $, where $ n \\notele \\Vect{b}$, and $\\proc{P} \\inson n $ ($\\proc{P}$ \\emph{inputs on} $n$) if $\\proc{P} \\StrCon \\New \\Vect{b} ( \\In n (x) . 
\\proc{Q} \\Par \\proc{R} ) $, where $ n\\notele \\Vect{b}$.\n\n \\item\nWe write $\\proc{P} \\Outson n $ ($\\proc{P}$ \\emph{will output on} $n$) if there exists $\\proc{Q}$ such that $\\proc{P} \\rtcredPi \\proc{Q}$ and $\\proc{Q} \\outson n $, and $\\proc{P} \\Inson n $ ($\\proc{P}$ \\emph{will input on} $n$) if there exists $\\proc{Q}$ such that $\\proc{P} \\rtcredPi \\proc{Q}$ and $\\proc{Q} \\inson n $.\n\n \\item A \\emph{barbed bisimilarity} $\\bbisimilar$ is the largest symmetric relation such that $\\proc{P} \\bbisimilar \\proc{Q}$ satisfies:\n\n \\begin{itemize}\n\n \\item for every name $n$: if $\\proc{P} \\outson n $ then $\\proc{Q} \\Outson n $, and if $\\proc{P} \\inson n $ then $\\proc{Q} \\Inson n $;\n\n \\item for all $\\proc{P}'$, if $\\proc{P} \\rtcredPi \\proc{P}'$, then there exists $\\proc{Q}'$ such that $\\proc{Q} \\rtcredPi \\proc{Q}'$ and $\\proc{P}' \\bbisimilar \\proc{Q}'$;\n\n\n \\end{itemize}\n\n \\item \\emph{Weak-bisimilarity} is the largest relation $\\wbisimilar$ defined by: $ \\proc{P} \\wbisimilar \\proc{Q}$ if and only if $\\Cont[ \\proc{P}] \\bbisimilar \\Cont[\\proc{Q}]$ for any context $\\Cont[`.]$.\n\n \\end{enumerate}\n \\end{definition}\n\n\n\\Paper\nThe following is easy to show.\\footnote{This property was stated in \\cite{Bakel-Vigliotti-IFIPTCS'12} using contextual equivalence rather than weak bisimilarity.}\n \\begin{proposition} \\label{reduction hidden}\nLet $\\proc{P}, \\proc{Q}$ not contain $a$ and $a \\not= b$, then\n \\[ \\begin{array}{rcl}\n\\New a ( \\Out a <{\\proc{p}}> . \\proc{P} \\Par \\In a (x) . \\proc{Q} )\n\t& \\wbisim &\n\\proc{P} \\Par \\proc{Q}[\\proc{p} \\For x]\n\t\\\\ [1mm]\n\\New a ( \\Bang \\Out a <{\\proc{p}}> . \\proc{P} \\Par \\In a (x) . \\proc{Q} )\n\t& \\wbisim &\n\\proc{P} \\Par \\proc{Q}[\\proc{p} \\For x]\n \\end{array} \\]\n \\end{proposition}\nThis expresses that synchronisation over hidden (internal) channels is unobservable.\n\nThe following property is needed in the proof of Theorem~\\ref{soundness}.\n\n \\begin{lemma} [\\cite{Bakel-Vigliotti-JLC'14}] \\label {replication lemma}\nLet $x$ only be used as input channel in $\\proc{P}$ and $\\proc{Q}$, and not appear in \\proc{R}, then:\n\n \\[ \\begin{array}{@{}rcl@{}}\n\\New x (\\proc{P} \\Par \\proc{Q} \\Par \\Bang \\Out x . \\proc{R})\n\t&\\wbisimilar&\n\\New x (\\proc{P} \\Par \\Bang \\Out x . \\proc{R}) \\Par \\New x (\\proc{Q} \\Par \\Bang \\Out x . \\proc{R})\n\t\\\\\n \\New x ( \\Bang \\proc{P} \\Par \\Bang \\Out x . \\proc{R})\n\t&\\wbisimilar&\n\\Bang \\New x ( \\proc{P} \\Par \\Bang \\Out x . \\proc{R})\n\t\\\\\n\\New x ({ \\In c (y) . \\proc{P} \\Par \\Bang \\Out x . \\proc{R} })\n\t&\\wbisimilar&\n\\In c (y) . ({ \\New x ({ \\proc{P} \\Par \\Bang \\Out x . \\proc{R} }) })\n\t\\\\\n\\New x ({ \\Out c . \\proc{P} \\Par \\Bang \\Out x . \\proc{R} })\n\t&\\wbisimilar&\n \\Out c . ({ \\New x ({ \\proc{P} \\Par \\Bang \\Out x . \\proc{R} }) })\n \\end{array} \\]\n \\end{lemma}\n\n\n \\begin{lemma} \\label {replication lemma}\n \\begin{enumerate}\n\n \\item \\label{out distributes}\nAssume that $a$ does not occur in $\\proc{p}$, \\proc{P}, and is only used for input in \\proc{R} and \\proc{Q}. Then:\n \\[ \\begin{array}{lcl}\n\\New a (\\Bang \\Out a <\\procp> . \\proc{P} \\Par \\proc{Q} \\Par \\proc{R} )\n\t& \\wbisim &\n{ \\New a (\\Bang \\Out a <\\procp> . \\proc{P} \\Par \\proc{Q} ) \\Par \\New a (\\Bang \\Out a <\\procp> . 
\\proc{P} \\Par \\proc{R} ) }\n \\end{array} \\]\n\n\n\n\n \\item \\label{out splits}\nAssume \\proc{P}, \\proc{Q} only output on $a$ and $a$ does not appear in \\proc{R} or \\proc{p}. Then:\n \\[ \\begin{array}{lcl}\n\\New a ( \\Out a <\\procp> . \\proc{P} \\Par \\proc{Q} \\Par \\Bang \\In a (x) . \\proc{R} )\n\t& \\wbisim &\n{ \\New a ( \\New b ({ \\Out b <\\procp> . \\proc{P} \\Par \\proc{Q} \\Par \\Bang \\In b (x) . \\proc{R} }) \\Par \\Bang \\In a (x) . \\proc{R} ) }\n \\end{array} \\]\n\n \\item \\label{in distributes}\nAssume that $a$ does not appear in \\proc{P} and is only used for output in \\proc{R} and \\proc{Q}. Then:\n \\[ \\begin{array}{lcl}\n\\New a (\\Bang \\In a (x) . \\proc{P} \\Par \\proc{Q} \\Par \\proc{R} )\n\t& \\wbisim &\n{ \\New a (\\Bang \\In a (x) . \\proc{P} \\Par \\proc{Q} ) \\Par \\New a (\\Bang \\In a (x) . \\proc{P} \\Par \\proc{R} ) }\n \\end{array} \\]\n\n \\end{enumerate}\n\n \\end{lemma}\n\n \\begin{Proof} Straightforward. \\quad\\usebox{\\proofbox}\n \\end{Proof}\n\nThe following properties follow directly from the definition of $\\wbisim$ and Proposition~\\ref{reduction hidden}:\n\n \\begin{proposition} \\label{wbisim lemma}\n \\begin{enumerate}\n\n \\item If for all $\\proc{P}'$, $\\proc{Q}'$ such that $ \\proc{P} \\rtcredPi \\proc{P}' $, $ \\proc{Q} \\rtcredPi \\proc{Q}' $ over hidden channels, we have $ \\proc{P}' \\wbisim \\proc{Q}' $, then $ \\proc{P} \\wbisim \\proc{Q} $.\n\n \\item Let \\proc{P}, \\proc{Q} be such that no interaction is possible between them, then: $ \\proc{P} \\wbisim \\proc{P}' $ and $ \\proc{Q} \\wbisim \\proc{Q}'$ if and only if $ \\proc{P} \\Par \\proc{Q} \\wbisim \\proc{P}' \\Par \\proc{Q}' $.\n\n\n \\end{enumerate}\n \\end{proposition}\n\n\nThe $`p$-calculus is equipped with a rich type theory \\cite{Sangiorgi-Walker-Book'01}, from the basic type system for counting the arity of channels \\cite{Pierce-Sangiorgi'96} to session types \\cite{Honda'93} and sophisticated linear types in \\cite{Honda-Yoshida-Berger'04}.\nThe notion of type assignment we use here is the one first defined in \\cite{vBCV-CLaC'08} and differs from systems presented in the past in that types do not contain channel information, and in that it expresses \\emph{implication}, \\emph{i.e.}~has functional types and describes the `\\emph{input-output interface}' of a process.\n\n \\begin {definition}[Context assignment for $`p$ \\cite{vBCV-CLaC'08}] \\label{type assignment for pi}\nFunctional type assignment for the {\\PiC} is defined by the following sequent system:\\footnote{Type assignment is classical in nature (\\emph{i.e.}~not intuitionistic), since we can have more than one conclusion.}\n{\\def\\Turn{\\Turn}\n \\[ \\begin{array}{rlrlrl}\n(\\Zero): &\n \\Inf\t{ \\Pider {\\Zero} : `G |- `D }\n &\n(\\InRule) : &\n \\Inf\t{ \\Pider \\proc{P} : `G, x{:}A |- x{:}A,`D }\n\t{ \\Pider \\In a(x) . \\proc{P} : `G,a{:}A |- `D }\n &\n~ (\\OutRule) : &\n \\Inf\t[a \\not= b]\n\t{ \\Pider \\proc{P} : `G,b{:}A |- b{:}A,`D }\n\t{ \\Pider \\Out a . 
\\proc{P} : `G,b{:}A |- a{:}A,b{:}A,`D }\n \\\\ [5mm]\n(!): &\n \\Inf\t{ \\Pider \\proc{P} : `G |- `D }\n\t{ \\Pider \\Bang \\proc{P} : `G |- `D }\n &\n(`n): &\n \\Inf\t{ \\Pider \\proc{P} : `G,a{:}A |- a{:}A,`D }\n\t{ \\Pider \\New a \\proc{P} : `G |- `D }\n &\n(\\Par): &\n \\Inf\t{ \\Pider \\proc{P} : `G |- `D \\quad \\Pider \\proc{Q} : `G |- `D }\n\t{ \\Pider \\proc{P} \\Par \\proc{Q} : `G |- `D }\n\\\\ [5mm]\n&&(\\Let) : &\n \\multicolumn{3}{l}{\n \\Inf\t[y,z \\notele `D; x \\notele `G]\n\t{ \\Pider \\proc{P} : `G,y{:}B |- x{:}A,`D }\n\t{ \\Pider \\Let = z in \\proc{P} : `G,z{:}A\\arr B |- `D }\n}%\n\\\\ [5mm]\n& \\multicolumn{2}{r}{(\\PairOut)} : &\n \\multicolumn{3}{l}{\n\\Inf\t[a \\notele `D; a,c \\notele `G]\n\t{ \\Pider \\proc{P} : `G,b{:}A |- c{:}B,`D }\n\t{ \\Pider \\Out a . \\proc{P} : `G,b{:}A |- a{:}A\\arr B,c{:}B,`D }\n}%\n \\end{array} \\] }\n\nWe write $\\Pider \\proc{P} : `G |- `D $ if there exists a derivation using these rules that has this expression in the conclusion.\n \\end {definition}\nWe should perhaps stress that it is not known if this system has a relation with logic, other than the one established in \\cite{vBCV-CLaC'08}.\n\nNotice that the `\\emph{input-output interface of a $`p$-process}' property is nicely preserved by all the rules; handling of arrow types is restricted by the type system to the rules $(\\Let)$ and $(\\PairOut)$.\n\n\n \\begin {example}\nThe inference rules\n \\[ \\begin{array}{@{~}rlrl}\n(\\textit{Weak}): &\n\\Inf\t[`G' \\supseteq `G,`D' \\supseteq `D]\n\t{ \\Pider \\proc{P} : `G |- `D }\n\t{ \\Pider \\proc{P} : `G' |- `D' }\n &\n(\\PairIn) : &\n \\Inf\t[y,a \\not\\in `D, \\, x \\not\\in `G]\n\t{ \\Pider \\proc{P} : `G,y{:}B |- x{:}A,`D\n\t}{ \\Pider \\In a (x,y) . \\proc{P} : `G,a{:}A\\arr B |- `D }\n \\\\ [5mm]\n(\\OutRule\\,') : &\n\\Inf\t{ \\Pider \\Out a : `G,b{:}A |- a{:}A,b{:}A,`D }\n &\n(\\PairOut\\,') : &\n\\Inf\t{ \\Pider \\Out a : `G,b{:}A |- a{:}A\\arr B,c{:}B,`D }\n \\\\ [5mm]\n({\\forwarder}) : &\n\\Inf\t{ \\Pider \\BEq u=a : u{:}A |- a{:}A }\n &\n(\\PiOverline{`n}) : &\n\\Inf\t{ \\Pider \\PilmuTerm [Q] w : `G |- w{:}B,`D\n\t}{ \\Pider { \\BOut b (w) . { \\PilmuTerm [Q] w } } : `G |- b{:}B,`D }\n \\end{array} \\]\nare admissible.\nThat weakening is admissible follows by a straightforward reasoning over the structure of derivations; for the other rules, consider:\n \\[ \\begin{array}{ccc}\n\\Inf\t[\\InRule]\n\t{\\Inf\t[\\Let]\n\t\t{\\InfBox{ \\Pider \\proc{P} : `G,y{:}B |- x{:}A,`D } }\n\t\t{ \\Pider \\Let = z in \\proc{P} : `G,z{:}A\\arr B |- `D }\n\t}\n\t{ \\Pider \\In a (z) . \\Let = z in \\proc{P} : `G,a{:}A\\arr B |- `D }\n \\end{array} \\]\n \\[ \\begin{array}{ccc}\n \\Inf\t[\\OutRule]\n\t{\\Inf\t[\\Zero]\n\t\t{\\Pider {\\Zero} : `G,b{:}A |- b{:}A,`D }\n\t}{ \\Pider \\Out a . \\proc{P} : `G,b{:}A |- a{:}A,b{:}A,`D }\n &&\n\\Inf\t[\\PairOut]\n\t{\\Inf\t[\\Zero]\n\t\t{ \\Pider {\\Zero} : `G,b{:}A |- c{:}B,`D }\n\t}{ \\Pider \\Out a . {\\Zero} : `G,b{:}A |- a{:}A\\arr B,c{:}B,`D }\n \\end{array} \\]\n \\[ \\begin{array}{ccc}\n\\Inf\t[\\InRule]\n\t{\\Inf\t[\\OutRule\\,']\n\t\t{ \\Pider \\Out a : `G,w{:}A |- a{:}A,w{:}A }\n\t}{ \\Pider \\In u (w) . \\Out a : `G,u{:}A |- a{:}A }\n &&\n\\Inf\t[`n]\n\t{\\Inf\t[\\OutRule]\n\t\t{\\InfBox{ \\Pider \\PilmuTerm [Q] w : `G |- w{:}B,`D }\n\t\t}{ \\Pider \\Out b . \\PilmuTerm [Q] w : `G |- b{:}B,w{:}B,`D }\n\t}{ \\Pider \\New w ({ \\Out b . 
\PilmuTerm [Q] w }) : `G |- b{:}B,`D }\n \end{array} \]\n \end {example}\n\n\nSince weakening is included, we allow ourselves to be a little less precise when we construct derivations, and freely switch to multiplicative style where rules join contexts whenever convenient, by using, for example, the rule\n \[ \begin{array}{@{}rl}\n(\mid): &\n \Inf\t{ \Pider \proc{P}_1 : \G_1 |- `D_1 \quad \dots \quad \Pider \proc{P}_n : \G_n |- `D_n }\n\t{ \Pider \proc{P}_1 \Par\dots \Par \proc{P}_n : \G_1, \ldots, \G_n |- `D_1, \ldots, `D_n }\n \\ [4mm]\n \end{array} \]\n\n\Comment\nThe main soundness result for our notion of type assignment for $`p$ is stated as:\n \begin {theorem}[Witness reduction \cite{vBCV-CLaC'08}] \label{Witness reduction}\nIf $\Pider \proc{P} : `G |- `D $ and $\proc{P} \redPi \proc{Q}$, then $\Pider \proc{Q} : `G |- `D $.\n \end {theorem}\n\n\n\n \section{Context and background of this paper} \label{context and background}\n\nIn the past, there have been several investigations of interpretations of the $`l$-calculus into the $`p$-calculus.\nResearch in this direction started with Milner's interpretation $\MilTerm [`.] `. $ of $`l$-terms \cite{Milner'92}; Milner's interpretation is input-based, and the interpretation of closed $`l$-terms respects large-step \emph{lazy} reduction $\redlazy$ \cite{Abramsky'90} to normal form up to substitution; this was later generalised to $`b$-equality, but using weak bisimilarity \cite{Sangiorgi-Walker-Book'01}.\n\nFor many years, it seemed that the first and final word on the interpretation of the $`l$-calculus had been said by Milner; in fact, input-based interpretations of the $`l$-calculus into the $`p$-calculus\nhave become the \emph{de facto} standard, and most published systems are based on Milner's interpretation.\nThe various interpretations studied in \cite{Sangiorgi-Walker-Book'01} constitute examples, also in the context of the higher-order $`p$-calculus; \cite{Honda-Yoshida-Berger'04} used Milner's approach with a typed version of the $`p$-calculus; \cite{Thielecke'97} used it in the context of continuation-passing style languages.\n\nMilner's interpretation of the $`l$-calculus into the (synchronous, monadic) $`p$-calculus is defined by:\n\n \begin {definition}[Milner's interpretation \cite{Milner'92}] \label{Milner's interpretation}\nThe \emph{input-based interpretation} of $`l$-terms into the $`p$-calculus is defined by:\n \[ \begin{array}{rcll}\n\MilSem [v x] a & \ByDef & \Milvar x a & (x \not= a)\n \\\n\MilSem [l x . M] a & \ByDef & \MilTerm [l x . 
M] a & (b \textit{ fresh})\n\\\n\MilSem [a M N] a & \ByDef & \MilTerm [a M N] a & (c,z \textit{ fresh})\n\\\n\MilSem [x := M] & \ByDef & \MilSubX [M] x & (w \textit{ fresh})\n \end{array} \]\nMilner calls $\MilSem [x := M] $ an ``\emph{environment entry}''; it could be omitted from the definition above, but is of use separately.\n \end {definition}\n\nNotice that, in $\MilSem [a M N] a $, the interpretation of the operand $N$ is placed under output, and thereby blocked from running; this comes at a price: now $`b$-reductions that occur in the operand can no longer be mimicked.\nCombined with using input to model abstraction, this means that a redex can only be contracted if it occurs on the outside of a term (is the \emph{top} redex): the modelled reduction is \emph{lazy}.\n\nMilner states an Operational Soundness result:\n \begin{theorem}[\cite{Milner'92}] \label{Milner's result}\nFor closed\n$M$, either $M \diverges$ ($M$ diverges) and $\MilSem[M] u \diverges$, or $M \redlazy `l y . R \Vect{[N \For x ]} $, and\n \[ \begin{array}{rcl}\n\MilSem[M] u & \rtcredPi & (\Vect{`nx}) \, ( \MilSem[`ly.R] u \Par \Vect{\MilSem[x:=N ]} ).\n \end{array} \]\n \end{theorem}\n\n\n\nAlthough obviously the intention is that the substitutions $\Vect{[N \For x ]}$ are generated by the lazy reduction, the way the result is stated, this need not be the case.\nThis `glitch' was fixed in \cite{Bakel-Vigliotti-CONCUR'09} by reformulating Milner's result using \emph{explicit} substitution.\nThat paper also\npresented a \emph{logical}, \emph{output-based} interpretation $\PiLSem [`.] `. $ that interprets abstraction $`lx.M$ not using \emph{input}, but via an asynchronous \emph{output} which leaves the interpretation of the body $M$ free to reduce.\nThat interpretation is defined as:\n\n \begin{definition} [Spine interpretation \cite{Bakel-Vigliotti-CONCUR'09}] \label{head interpretation} ~\n\def\PiSTerm{\PiSTerm}\n \[ \begin {array}{rcll}\n\PiLSem [v x] a & \ByDef & \PiHTerm [v x] a\n \\\n\PiLSem [l x . M] a & \ByDef & \PiHTerm [l x . 
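\n% Added worked instance of Thm.~\ref{Milner's result} (the term is ours): for\n% M = (`lx.`lz.x)(`ly.y) we have M \redlazy (`lz.x)[`ly.y \For x], and correspondingly\n%   \MilSem[M] u  \rtcredPi  \New x ( \MilSem[`lz.x] u \Par \MilSem[x := `ly.y] ) ,\n% where the environment entry keeps the operand available without ever contracting it.\n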
M] a\n \\\n\PiLSem [a M N] a & \ByDef & \PiHTerm [a M N] a\n \\\n\PiLSem [s M x := N] a & \ByDef & \PiHTerm [s M x := N] a\n \end {array} \]\n \end{definition}\nFor this interpretation, \cite{Bakel-Vigliotti-CONCUR'09} showed Operational Soundness and Type Preservation, but with respect to the notion of \emph{explicit head reduction} $\redxh$, similar to the notion defined in Def.~\ref{explicit head reduction}, and the notion of type assignment defined in Def.~\ref{type assignment for pi}.\nThe main results shown are (using $\Diverges$ to denote non-termination):\n \begin{theorem} [\cite{Bakel-Vigliotti-CONCUR'09}] \label{head reduction preservation}\n \begin{enumerate}\n\n \item If $M \Diverges $ then $\PiLSem[M] a \Diverges$, and if $M \redxh N $ then $ \PiLSem[M] a \rtcredPi\equivC \PiHTerm[N] a $.\n\n \item If $\der `G |- M : A $ then $ \derPi \PiLSem[M] a : `G |- a{:}A $.\n\n \end{enumerate}\n \end{theorem}\n\n\nAs argued in \cite{Bakel-Vigliotti-CONCUR'09}, to show the above result, which formulates a direct \emph{step-by-step} relation between $`b$-reduction and the synchronisation in the $`p$-calculus, it is necessary to make the substitution explicit.\nThis is a direct result of the fact that, in the $`p$-calculus, $`l$'s implicit substitution gets `implemented' \emph{one variable at a time}, rather than all in one fell swoop.\nSince we aim to show a similar result for $\lmu$, we will therefore define a notion of explicit substitution.\n\nAlthough termination is not studied in that paper, it is easily achieved through restricting the notion of reduction in the $`p$-calculus by not allowing reduction to take place inside processes whose output cannot be received, or by placing a guard on the replication as we do in this paper. \\\n\n\noindent\nA natural extension of this line of research is to see if the $`p$-calculus can be used to interpret more complex calculi as well, such as calculi that relate not to intuitionistic logic but to classical logic, like $\lmu$, $\lmmt$, or $\X$.\nThere are, to date, a number of papers on this topic.\n\nIn \cite{Honda-Yoshida-Berger'04} an interpretation of Call-by-Value $\lmu$ is defined that is based on Milner's.\nContrary to what is claimed in that paper, this interpretation itself is not based on Milner's \CBV-interpretation, but rather is a variant of $\MilTerm [`.] `. $; this implies that, \emph{a priori}, a process like $\sem{(`lx.x)((`lx.x)(`lx.x))} \, a $ cannot be reduced: the top redex should not be contractable since the interpretation should respect the \CBV-reduction, and the contraction of the right-hand redex cannot be modelled, since that term is placed under a guard.\nThe authors address this problem by considering \emph{typed processes only}, and using a much more liberal notion of reduction on processes. The syntax of processes considered there is\n \[ \begin{array}{rcl}\nP &::=& \Bang \In x (\Vect{y}) . \proc{P} ~\mid~ \New \Vect{y} ( \Out x <{\Vect{y}}> \Par \proc{P} ) ~\mid~ \proc{P} \Par \proc{Q} ~\mid~ \New x \proc{P} ~\mid~ \Zero\n \end{array} \]\nand the notion of reduction on processes is extended to that of $\searrow$, defined as the least compatible relation over typed processes (\emph{i.e.}~closed under typed contexts), taken modulo $\congruent$, that includes:\n \[ \begin{array}{rcl}\n\Bang \In x (\Vect{y}) . \proc{P} \Par \New \Vect{a} ( \Out x <{\Vect{a}}> \Par \proc{Q} ) & \red & \Bang \In x (\Vect{y}) . 
\\proc{P} \\Par \\New \\Vect{a} ( \\proc{P}[\\Vect{a \\For y}] \\Par \\proc{Q} )\n \\end{array} \\]\nas the basic synchronisation rule, as well as\n\\[ \\begin{array}{rcl}\n\\Cont[{ \\New \\Vect{a} ( \\Out x <{\\Vect{a}}> \\Par \\proc{P} ) }] \\Par \\Bang \\In x (\\Vect{y}) . \\proc{Q} &\\searrow_r&\n\\Cont[{ \\New \\Vect{a} ( \\proc{P}[\\Vect{a \\For y}] \\Par \\proc{Q} ) }] \\Par \\Bang \\In x (\\Vect{y}) . \\proc{Q}\n\t\\\\\n\\New x ( \\Bang \\In x (\\Vect{y}) . \\proc{Q} ) & \\searrow_g & \\Zero \\qquad\n \\end{array} \\]\nwhere $\\Cont[`.]$ is an arbitrary (typed) context; note that $\\searrow$ synchronises with any occurrence of $\\Out x <{\\Vect{a}}> $, no matter what guards they may be placed under.\nThe resulting calculus is thereby very different from the original $`p$-calculus.\nTypes for processes prescribe usage of names, and name passing is restricted to \\emph{bound (private) name passing}.\\footnote{This is a feature of all related interpretations into the $`p$-calculus.}\n\nOn the relation between Girard's linear logic \\cite{Girard'87} and the $`p$-calculus,\n\\cite{Bellin-Scott'94} give a treatment of information flow in proof-nets; only a small fragment of Linear Logic was considered, and the translation between proofs and $`p$-calculus was left rather implicit as also noted by \\cite{Caires-Pfenning'10}.\n\nTo illustrate this, notice that Bellin and Scott use the standard syntax for the polyadic $`p$-calculus\n \\[ \\begin{array}{rrl@{\\quad}l}\n\\proc{P},\\proc{Q} & ::= &\n\\Zero\n\\mid \\proc{P} \\Par \\proc{Q}\n\\mid \\Bang \\proc{P}\n\\mid \\New a \\proc{P}\n\\mid \\In a ({ \\Vect{x} }) . \\proc{P}\n\\mid \\Out a < {\\Vect{p} }> . \\proc{P}\n \\end{array} \\]\nsimilar to the one we use here (see Def.~\\ref{pi calculus}) but for the fact that for us output is not synchronous, and there the $\\Let$-construct is not used.\nHowever, the encoding of a `cut' in linear logic\n\\[ \\def \\Turn} \\begin{array}[t]{@{}c@{\\quad}c@{\\quad}c{\\Turn}\n\\Inf\t{\\derLK {} |- x{:}A\\otimes B,y{:}{(A\\otimes B)^\\perp}\n\t \\quad\n\t \\Inf\t{\\derLK {} |- n{:}A,m{:}{A^\\perp}\n\t\t \\quad\n\t\t \\derLK {} |- z{:}B,w{:}{B^\\perp}\n\t\t}{\\derLK {} |- m{:}A^\\perp,w{:}B^\\perp,v{:}A\\otimes B }\n\t}{ \\derLK {} |- x{:}A\\otimes B,m{:}A^\\perp,w{:}{B^\\perp} }%\n\\]\n\\emph{i.e.}~the `term' $ x{:}A\\otimes B,m{:}A^\\perp, w{:}B^\\perp$, gets translated into a `language of proofs' which looks like:\n \\[\nCut^k(I,\\bigotimes^{n,z}_ v(I,I)m w z )x,(m,w) = (\\nu k) \\big( I[k\/y ] \\mid \\bigotimes^{n,z}_ v(I,I)m w z[k\/v ] \\big)\n \\]\nwhere the terms $Cut$ and $I$ are (rather loosely) defined in.\nNotice the use of arbitrary application of processes to channel names, and the operation of pairing; the authors do not specify how to relate this notation to the above syntax of processes they consider.\n\nHowever, even if this relationship is made explicit, also then a different $`p$-calculus is needed to make the encoding work.\nTo clarify this point, consider the translation in the $`p$-calculus of the term above, which according to the definition given in \\cite{Bellin-Scott'94} becomes:\n \\[\n(\\nu k) \\big ( x(a).\\underbrace{k(a)} \\mid (\\nu nz) (\\underbrace{\\Vect k(n,z).}\\big( n(b).m(b) \\mid z(b).w(b) \\big) ) ) .\n \\]\nAlthough intended, no communication is possible in this term.\nWe have `underbraced' the desired communication which is impossible, as the arity of the channel $k$ does not match.\nTo overcome this kind of problem, Bellin and Scott would need the 
\Let-construct with use of pairs of names as introduced in this paper in Def.~\ref{pi calculus}.\nMoreover, there is no relation between the interpreted terms and proofs stated in \cite{Bellin-Scott'94} in terms of logic, types, or provable statements; here, we make a clear link between interpreted proofs and the logic through our notion of type assignment for the $`p$-calculus.\n\nIn \cite{vBCV-CLaC'08} an interpretation into $`p$ is defined of the sequent calculus $\X$, which enjoys the Curry-Howard isomorphism for Gentzen's {\LK} \cite{Gentzen'35}; the interpretation is shown to respect reduction.\nHowever, this result is only partial, as it is formulated as ``\emph{if $ P \redX Q $, then $\PiSem[P] \gtC \PiSem[Q] $}'', allowing $\PiSem[P] $ to have more observable behaviour than $\PiSem[Q] $; the main reason for this is that reduction in $\X$ is non-confluent.\nAlthough in \cite{vBCV-CLaC'08} it is reasoned that this is natural in the context of non-confluent, symmetric sequent calculi, and it is shown that the interpretation preserves types, it is a weaker result than could perhaps be expected.\n\nAn interpretation of $\lmmt$ is studied in \cite{CiminiCS'10}; the interpretation defined there strongly depends on recursion, is not compositional, and preserves only outermost reduction; no relation with types is shown.\n\n\n\n\n\section{$\lmux$: $\lmu$ with explicit substitution}\n\label{lmux section}\n\nOne of the main achievements of \cite{Bakel-Vigliotti-CONCUR'09} is that it establishes a strong link between reduction in the $`p$-calculus and step-by-step \emph{explicit substitution} \cite{Bloo-Rose'95} for the $`l$-calculus, by formulating a result\nwith respect to explicit head reduction and the spine interpretation defined there.\n\nIn view of this, for the purpose of our interpretation it was natural to study a variant of $\Lmu$ in \cite{Bakel-Vigliotti-IFIPTCS'12} with explicit substitution as well; since here we work with $\lmu$, we present $\lmux$, a variant of the $\Lmux$ presented in that paper.\nExplicit substitution treats substitution as a first-class operator, both for the logical and the structural substitution, and describes all the necessary steps to effectuate both.\n \begin {definition}[$\lmux$] \label{definition lmux}\n\n \begin {enumerate}\n\n \item\nThe syntax of the \emph{explicit $\lmu$ calculus}, $\lmux$, is defined by:\n \[ \begin {array}{@{}rrl}\nM,N & ::= & x \mid `lx.M \mid MN \mid \Sub M x := N \mid\n\muterm`a.[`b]M \mid \ContSub M `a := N . `g\n \end {array} \]\nWe consider the occurrences of $x$ in $M$ bound in $ \Sub M x := N $, and those of $`a$ in $M$ in $ \Sub M `a := N{`.}`g $; by Barendregt's convention, $x$ and $`a$ do not appear outside $M$.\n\n\n \item\nThe reduction relation $\redx$ on \Paper{terms in }$\lmux$ is defined \CLAC{as the contextual closure of}\Paper{through} the following rules\Paper{ (for the sake of completeness, we list all)}:\n\n \begin{enumerate} \itemsep 0pt\n\n \item Main reduction rules:\n \[ \begin {array}{rcl@{\quad}l}\n (`l x.M) N &\red & \Sub M x := N \\\n(\muterm `a . {\Cmd} ) N & \red & \muterm`g . 
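\n% Added illustration of how \lmux decomposes implicit substitution (using only the\n% rules of this definition):\n%   (`lx.x)N  \redx  \Sub x x := N  \redx  N , and\n%   (`ly.x)N  \redx  \Sub x y := N  \redx  x , the latter since y \not\in \fv(x).\n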
\\Sub {\\Cmd} `a := N{`.}`g & (`g \\textit{ fresh}) \\\\\n\\muterm`b.[`b]M &\\red& M & (`b \\not\\in \\fn(M)) \\\\ {}\n[`b]\\muterm`g.\\Cmd &\\red& \\Cmd[`b \\For `g]\n \\end {array} \\]\n\n \\item Term substitution rules:\n \\[ \\begin {array}{rcl@{\\quad}l}\n\\Sub x x := N &\\red & N \\\\\n\\Sub M x := N &\\red & M & (x \\not\\in \\fv(M)) \\\\\n\\Sub (`ly.M) x := N &\\red & `l y.(\\Sub M x := N ) \\\\\n\\Sub (PQ) x := N &\\red & (\\Sub P x := N )(\\Sub Q x := N ) \\\\\n\\Sub (\\muterm`a.[`b]M) x := N & \\red & \\muterm`a.[`b](\\Sub M x := N)\n \\end {array} \\]\n\n \\item Structural rules:\n \\[ \\begin {array}{rcl@{\\quad}l}\n\\Sub (\\muterm`d.\\Cmd) `a := N{`.}`g & \\red & \\muterm`d.(\\Sub {\\Cmd} `a := N{`.}`g ) \\\\\n\\Sub ([`a]M) `a := N{`.}`g & \\red & [`g](\\Sub M `a := N{`.}`g ) N \\\\\n\\Sub ([`b]M) `a := N{`.}`g & \\red & [`b](\\Sub M `a := N{`.}`g ) & (`a \\not= `b) \\\\\n\\Sub M `a := N{`.}`g & \\red & M & (`a \\not\\in \\fn(M)) \\\\\n\\Sub (`lx.M) `a := N{`.}`g & \\red & `lx.\\Sub M `a := N{`.}`g \\\\\n\\Sub (PQ) `a := N{`.}`g & \\red & (\\Sub P `a := N{`.}`g )( \\Sub Q `a := N{`.}`g )\n \\end {array} \\]\n\n\\Paper\n \\item Contextual rules\\CLAC{ ~ \\vspace*{-10pt} }\n \\[ \\begin {array}[t]{rcl}\nM \\red N &~\\Then ~~ &\n \\begin{cases}\n`lx.M &\\red& `lx.N \\\\\nML &\\red& NL \\\\\nLM &\\red& LN \\\\\n\\muterm`a.[`b]M & \\red & \\muterm`a.[`b]N \\\\\n\\Sub M x := L &\\red& \\Sub N x := L \\\\\n\\Sub L x := M &\\red& \\Sub L x := N \\\\\n\\Sub M `a := L{`.}`g &\\red& \\Sub N `a := L{`.}`g \\\\\n\\Sub L `a := M{`.}`g &\\red& \\Sub L `a := N{`.}`g \\\\ [1mm]\n \\end{cases}\n \\end {array} \\]\n\n\n \\end {enumerate}\n\n \\item \\label{redxsub definition}\nWe use $\\redxsub$ for the notion of reduction where only term substitution and structural rules are used (so not the main reduction rules)\\Paper{, and $\\eqx$ for the congruence generated by $\\redx$}.\n\n \\end {enumerate}\n\n \\end{definition}\n\n\\Paper\nThis is a system different from that of \\cite{Audebaud'94}, where a version with explicit substitution is defined for a variant of $\\lmu$ that uses de Bruijn indices \\cite{deBruijn'72}.\n\nNotice that since reduction in $\\lmux$ \\Paper{actually }is formulated via term rewriting rules \\cite{Klop'92}, reduction is allowed to take place also inside the substitution term.\n\\Paper{\n\nExplicit substitution describes explicitly the process of executing a $`b`m$-reduction, \\emph{i.e.}~expresses syntactically the details of the computation as a succession of atomic steps (like in a first-order rewriting system), where the implicit substitution of each $`b`m$-reduction step is split up into reduction steps.\nThereby t}\\CLAC{T}he following is straightforward:\n\n \\begin{proposition} [$\\lmux$ implements $\\lmu$-reduction]\n\\label{lmu vs lmux}\n \\begin{enumerate}\n \\item $ M \\redbmu N \\Implies M \\rtcredx N $.\n \\item $M \\ele \\lmu \\And M \\redx N \\Implies \\Exists L \\ele \\lmu \\Pred [ N \\rtcredxsub L ] $.\n \\end{enumerate}\n \\end{proposition}\n\n\\Paper\nThe notion of type assignment on $\\lmux$ is a natural extension of the system for the $\\lmu$-calculus of Def.~\\ref{typing for lmu} by adding rules $(\\TCut)$ and $(\\CCut)$.\n \\begin {definition}\nUsing the notion of types in Def.~\\ref{types}, type assignment for $\\lmux$ is defined by:\n \\[ \\begin {array}{@{}r@{~~}l@{\\dquad}r@{~~}l}\n(\\Ax) : &\n\\Inf\t{ \\derLmu `G,x{:}A |- x : A | `D }\n&\n (`m) : &\n \\Inf\t{ \\derLmu `G |- M : B | `a{:}A,`D\n\t}{ \\derLmu `G |- \\muterm`a.[`b]M : A | 
`b{:}B,`D }\n\quad\n \Inf\t{ \derLmu `G |- M : A | `a{:}A,`D\n\t}{ \derLmu `G |- \muterm`a.[`a]M : A | `D }\n\\ [5mm]\n(\arrI) : &\n\Inf\t{ \derLmu `G,x{:}A |- M : B | `D\n\t}{ \derLmu `G |- `l x.M : A\arrow B | `D }\n&\n(\arrE) : &\n\Inf\t{ \derLmu `G |- M : A\arrow B | `D\n\t \quad\n\t \derLmu `G |- N : A | `D\n\t}{ \derLmu `G |- MN : B | `D }\n \end{array} \]\n \[ \begin {array}{@{}r@{~~}l@{\dquad}r@{~~}l}\n(\TCut): &\n\Inf\t{ \derLmu `G,x{:}A |- M : B | `D \quad \derLmu `G |- N : A | `D }\n\t{ \derLmu `G |- {\Sub M x := N } : B | `D }\t\t\n&\n(\CCut): &\n\Inf\t{ \derLmu `G |- M : C | `a{:}A\arr B,`D \quad \derLmu `G |- N : A | `D }\n\t{ \derLmu `G |- {\Sub M `a := N{`.}`g } : C | `g{:}B,`D }\n \end {array} \]\nWe write $\derLmux `G |- M : A $ for judgements derivable in this system.\n\n \end {definition}\n\n\n\Comment\n \begin{lemma}\nIf $M \redx N$, and $\derLmu `G |- M : A $ then $\derLmu `G |- N : A $.\n \end{lemma}\n\n \begin{Proof}\n{ To do.}\n\n\n\nIn the context of head reduction and explicit substitution, we can economise further on how substitution is executed, and perform only those substitutions that are essential for the continuation of reduction.\nWe will therefore limit substitution to allow it to \emph{only replace} the head variable of a term.\n(This principle is also found in Krivine's machine \cite{Krivine'07}.)\,\nThe results of \cite{Bakel-Vigliotti-CONCUR'09} show that this is exactly the kind of reduction that the $`p$-calculus naturally encodes.\n\n \begin{definition}[Explicit head reduction]\n \label{explicit head reduction}\n\nThe\n\emph{head variable} of $M$, $\hv(M)$, is defined as expected, adding $ \hv(\Sub M x := N ) = \hv(M) $ if $ \hv(M) \not= x $, and the \emph{head name} $\hn(M)$ is defined by $ \hn(\muterm`a.[`b] \lmuHNF) = `b $, $ \hn(\Sub M x := N ) = \hn(M) $, and $ \hn(\ContSub M `a := N . `g ) = \hn(M) $ if $ \hn(M) \not= `a $.\n\nWe define \emph{explicit head reduction} $\redxh$ on $\lmux$ as $\redx$, but change and add a few rules\n{ (we only give the changes)}:\n\n\Comment{\n \begin{enumerate}\n \item \Paper{Main reduction rules:\n \[ \begin {array}{rcl@{\quad}l}\n (`l x.M) N &\red & \Sub M x := N & \\\n(\muterm `a . M ) N & \red & \muterm`g . \Sub M `a := N{`.}`g & (`g \textit{ fresh}) \\\n\muterm`a.[`a]M &\red& M & (`a \not\in \fn(M)) \\\n\muterm`a.[`b]\muterm`g.\Cmd &\red& \muterm`a.\Cmd[`b \For `g]\n \end {array} \]\n\n \item \label{term rules} \n\label{changed rules}\nTerm substitution rules:\n \Paper{ \[ }\CLAC{$} \begin {array}{rcl@{\quad}l}\n\Paper{\Sub x x := N &\red & N \\\n\Sub M x := N &\red & M & (x \not\in \fv(M)) \\\n\Sub (`ly.M) x := N &\red & `l y.(\Sub M x := N ) & (x = \hv(M)) \\ \n\Sub (PQ) x := N &\red & \Sub {( \Sub P x := N \,Q )} x := N & (x = \hv(P)) \Paper{ \\\n\Sub (\muterm`a.[`b]M) x := N & \red & \muterm`a.[`b](\Sub M x := N) & (x = \hv(M)) \\ \n \end {array} \Paper{ \] }\CLAC{$}\n\n \item\label{structural rules}\nThere are only two structural rules:\n \[ \begin {array}{rcl@{\quad}l}\n\Sub (\muterm`b.[`a]M) `a := N{`.}`g & \red & \muterm`b.[`g](\ContSub M `a := N . 
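\n% Added illustration of the role of head variables: \hv(yx) = y \not= x, so in\n% \Sub {\Sub yx x := Q } y := R the inner substitution cannot act, and without further\n% rules the outer one would be stuck behind it; the `jumping' rules of\n% case~\ref{new rules} below are added precisely to resolve this, as discussed after\n% the definition.\n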
`g ) N & (`a = \\hn(\\muterm`b.[`a]M)) \\\\\n\\Sub M `a := N{`.}`g & \\red & M & (`a \\not\\in \\fn(M)) \\\\\n \\end {array} \\]\n\n \\item\nWe remove the following contextual rules:\n \\[ \\begin {array}[t]{rcl}\nM \\red N &\\Then&\n \\begin{cases}\n LM &\\red& LN \\\\\n \\Sub L x := M &\\red& \\Sub L x := N \\\\\n \\Sub L `a := M{`.}`g &\\red& \\Sub L `a := N{`.}`g \\\\\n \\end{cases}\n \\end {array} \\]\nso no longer allow reduction inside the substitution or inside the right-hand side of an application.\n\n \\item \\label{substitution rules} \\label{new rules}\nWe add two substitution rules:\n \\[ \\kern-3mm \\begin {array}{@{}rcll}\n\\Sub {\\Sub M x := N } y := P &\\red&\n\t{ \\Sub { \\Sub { \\Sub M y := P } x := N } y := P }\n\t& (y = \\hv(M))\n\t\\\\\n\\Sub {\\Sub M `a := N{`.}`g } `b := L{`.}`d &\\red&\n\t{\n\t\\Sub { \\Sub { \\Sub M `b := L{`.}`d } `a := N{`.}`g } `b := L{`.}`d }\n\t& (`b = \\hn(M))\n \\end {array} \\]\n\n \\end {enumerate}\n\n\n \\begin{enumerate}\n \\item \\label{changed rules} \nWe replace the term substitution rule for application and add side-conditions:%\n \\[ \\begin {array}{rcl@{\\quad}l}\n\\Sub (`ly.M) x := N &\\red & `l y.(\\Sub M x := N ) & (x = \\hv(`ly.M)) \\\\\n\\Sub (PQ) x := N &\\red & \\Sub {( \\Sub P x := N \\,Q )} x := N & (x = \\hv(PQ)) \\\\\n\\Sub (\\muterm`a.[`b]M) x := N & \\red & \\muterm`a.[`b](\\Sub M x := N) & (x = \\hv(\\muterm`a.[`b]M))\n \\end {array} \\]\n\n \\item\\label{structural rules}\nThere are only two structural rules:\n \\[ \\begin {array}{rcl@{\\quad}l}\n\\Sub (\\muterm`b.[`a]M) `a := N{`.}`g & \\red & \\muterm`b.[`g](\\ContSub M `a := N . `g ) N\n\\\\\n\\Sub M `a := N{`.}`g & \\red & M & (`a \\not\\in \\fn(M)) \\\\\n \\end {array} \\]\n\n \\item\nWe remove the following contextual rules:\n \\[ \\begin {array}[t]{rcl}\nM \\red N &\\Then&\n \\begin{cases}\n LM &\\red& LN \\\\\n \\Sub L x := M &\\red& \\Sub L x := N \\\\\n \\Sub L `a := M{`.}`g &\\red& \\Sub L `a := N{`.}`g \\\\\n \\end{cases}\n \\end {array} \\]\n\n \\item \\label{substitution rules} \\label{new rules}\nWe add four substitution rules:\n \\[ \\kern-3mm \\begin {array}{@{}rcll}\n\\Sub {\\Sub M x := N } y := L &\\red& \\Sub { \\Sub { \\Sub M y := L } x := N } y := L\n\t& (y = \\hv(M)) \\\\\n\\Sub {\\ContSub M `a := N . `b } y := L &\\red& \\Sub { \\ContSub { \\Sub M y := L } `a := N . `b } y := L\n\t& (y = \\hv(M)) \\\\\n\\ContSub {\\ContSub M `a := N . `g } `b := L . `d &\\red& \\ContSub { \\ContSub { \\ContSub M `b := L . `d } `a := N . `g } `b := L . `d\n\t& (`b = \\hn(M)) \\\\\n\\ContSub {\\Sub M x := N } `b := L . `d &\\red& \\ContSub { \\Sub { \\ContSub M `b := L . `d } x := N } `b := L . 
`d\n\t& (`b = \\hn(M))\n \\end {array} \\]\n\n \\end {enumerate}\n \\end{definition}\nNotice that, for example, in case\\CLAC{~}\\ref{changed rules}, \\Comment{the fourth of }the clause\\Comment{s} postpones the substitution $ \\exsub x := N $ on $Q$ until such time that an occurrence of the variable $x$ in $Q$ becomes the head-variable of the full term, and that we no longer allow reduction inside the substitution or inside the right-hand side of an application.\n\n\\Paper\nNotice that we do not add rules like\n $ \\Sub { \\Sub M x := N } y := L \n\t\\red\n\\Sub { \\Sub M y := L } x := {\\Sub N y := L }\n$;\nas in \\cite{Bloo-Rose'95}, this would introduce undesired non-termination.\n\nRemark that we need to add the rules of case~\\ref{new rules}: if we take the term $\\Sub { \\Sub P x := Q } y := R $, under `normal' reduction $\\redx$, the innermost substitution has to complete first before the outermost can run.\nWhen moving towards explicit head reduction, this would mean that we cannot reduce, for example, $\\Sub { \\Sub yx x := Q } y := R $; the innermost substitution cannot advance, since $x$ is not the head variable, and the outermost cannot advance since it is not defined for the substitution term it has to work on, $\\Sub yx x := Q $ in this case.\nTo allow outermost substitution to `jump' the innermost, we need to add the extra rules.\nThis, potentially, introduces non-termination, but by demanding that the variable concerned is actually the head-variable of the term, we avoid this.\n\nNormal forms of explicit head reduction are naturally defined as follows:\n \\begin{definition} [cf.~\\cite{Lassen'06}]\nThe normal forms with respect to $\\redxh$ are defined through the grammar:\n \\[ \\begin {array}{@{}rrll}\n\\lmuNF & ::= &\n`lx. \\lmuNF\n\t\\\\ &\\mid &\nxM_1\\dots M_n & (n \\geq 0)\n\t\\\\ &\\mid &\n\\muterm `a . [`b] \\lmuNF & (`a \\not= `b \\Or `a \\in \\lmuNF', \\lmuNF \\not= \\muterm`g.[`d] \\lmuNF' )\n\t\\\\ &\\mid &\n\\Sub {\\lmuNF} x := M & (\\hv(\\lmuNF) \\not= x)\n\t\\\\ &\\mid &\n\\ContSub {\\lmuNF} `a := M . 
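\n% Added example instance of this grammar (cf.~Ex.~\ref{redxh examples} below):\n% \Sub f(xx) x := `lx.f(xx) is in \redxh-normal form, since \hv(f(xx)) = f \not= x;\n% the pending substitution is simply carried along.\n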
`g & (\\hn(\\lmuNF) \\not= `a)\n \\end {array} \\]\n \\end{definition}\n\n\n\nThe following proposition states the relation between explicit head reduction, head reduction, and explicit reduction.\n \\begin{proposition} \\label{explicit versus head}\n \\begin{enumerate}\n\n \\item\nIf $ M \\rtcredh N $, then there exists $L \\ele \\lmux$ such that $ M \\rtcredxh L $ and $ L \\rtcredxsub N $.\n\n\n \\item\nIf $ M \\rtcredxh[\\nf] N $ with $M \\ele \\lmu$, then there exists $ L \\ele \\lmu$ such that $ N \\rtcredxsub[\\nf] L $, and $ M \\rtcredh[\\nf] L $.\n\n \\item\n$M \\rtcredbmu[\\nf] N $ if and only if there exists $L \\ele \\lmux$ such that $ M \\rtcredxh[\\nf] L $ and $ L \\rtcredx[\\nf] N $.\n\n \\end{enumerate}\n \\end{proposition}\nThis result gives that we can show our main results for $\\lmux$ for reductions that reduce to \\HNF.\n\n\n\\Paper\nWe will give some examples that illustrate $\\lmux$ and $\\redxh$.\n \\begin{example} \\label{redxh examples} \\label{reductions example}\n \\begin{enumerate} \\itemsep1mm\n\n \\item\n$\n\\begin{array}[t]{lclcl}\n(`lx.xx)(`ly.y)\n\t& \\redxh &\n\\Sub xx x := `ly.y\n\t& \\redxh & \\\\\n\\Sub {(\\Sub x x := `ly.y x)} x := `ly.y\n\t& \\redxh &\n\\Sub {}(`ly.y)x x := `ly.y\n\t& \\redxh & \\\\\n\\Sub {\\Sub y y := x } x := `ly.y\n\t & \\redxh &\n\\Sub x x := `ly.y\n\t& \\redxh & \\\\\n\\Sub `ly.y x := `ly.y\n\t& \\redxh &\n`ly.y\n \\end{array} $\n\n \\item\nReduction in $\\redxh$ is not deterministic; notice that the term $\\LTerm$ can reduce in two ways:\n \\[ \\begin{array}{rcccl}\n\\LTerm\n\t&\\redxh&\n \\left \\{\n\\begin{array}{l}\n\\LTerm\n\t\\\\\n\\LTerm\n \\end{array}\n\\right \\}\n\t& \\redxh &\n\\LTerm\n \\end{array} \\]\n(the last step is not the only one possible).\n\n \\item \\label{mu reduction}\n$ \\begin{array}[t]{@{}lclcl}\n(\\muterm`a.[`b]\\muterm`d.[`a](`ly.y))(`lz.z)\n\t& \\redxh &\n\\Sub \\muterm`g.[`b]\\muterm`d.[`a](`ly.y) `a := `lz.z{`.}`g\n\t& \\redxh & \\\\\n\\Sub \\muterm`g.[`a](`ly.y)[`b\\For `d] `a := `lz.z{`.}`g\n\t& = &\n\\Sub \\muterm`g.[`a](`ly.y) `a := `lz.z{`.}`g\n\t& \\redxh & \\\\\n\\muterm`g.[`g](`ly.y)(`lz.z)\n\t& \\redxh &\n\\muterm`g.[`g]\\Sub y y := `lz.z\n\t& \\redxh & \\\\\n\\muterm`g.[`g]`lz.z\n\t& \\redxh &\n`lz.z\n \\end{array} $\n\n\n \\item\nSome reductions leave substitutions in place:\n\n\\[ \\begin{array}[t]{@{}lcl}\n`lf.(`lx.f(xx))(`lx.f(xx))\n\t& \\redxh &\n`lf.(\\Sub f(xx) x := `lx.f(xx) )\n \\end{array} \\]\n\nand the last term is in $\\redxh$-normal form.\n\n\\Comment\n \\item\n $ \\begin{array}[t]{@{}lclcl}\n\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y)\n\t& \\redxh &\n\\muterm`a.[`a]\\Sub q q := \\muterm`b.[`a]`ly.y\n\t& \\redxh & \\quad \\\\\n\\muterm`a.[`a]\\muterm`b.[`a]`ly.y\n\t& \\redxh &\n\\LTerm\n \\end{array} $\n\n\n \\item \\label{non-terminating reduction}\nOf course in $\\redxh$ we can have non-terminating reductions.\nWe know that in $\\redbmu$ and $\\redh$, $ (`lx.xx) (`lx.xx) $ reduces to itself; this is not the case for $\\redxh$, as is illustrated by:\n \\[ \\begin{array}[t]{@{}lclcl}\n`D`D ~=~ (`lx.xx) (`lx.xx)\n\t& \\redxh &\n\\Sub xx x := `D\n\t& \\rtcredxh & \\\\\n\\Sub {{}(\\Sub x x := `D x)} x := `D\n\t& \\redxh\n\t&\n\\Sub {{}(`ly.yy) x} x := `D\n\t& \\redxh & \\\\\n\\Sub {\\Sub yy y := x } x := `D\n\t& \\rtcredxh &\n\\Sub {\\Sub {(\\Sub y y := x y)} y := x } x := `D\n\t& \\redxh & \\\\\n\\Sub {\\Sub xy y := x } x := `D\n\t& \\redxh &\n\\Sub {\\Sub {\\Sub xy x := `D } y := x } x := `D\n\t& \\redxh & \\\\\n\\Sub {\\Sub {\\Sub {(\\Sub x x := `D y)} x := `D } y := x } 
x := `D\n\t& \\rtcredxh\n\t&\n\\Sub {\\Sub {{}(`lz.zz)y} y := x } x := `D\n\t& \\rtcredxh & \\\\\n\\Sub {\\Sub {\\Sub {{}(`lw.ww)z} z := y } y := x } x := `D\n\t& \\rtcredxh &\n\t\t\t\\dots\n \\end{array} \\]\n(notice the $`a$-conversions, needed to adhere to Barendregt's convention).\nThis reduction is deterministic and clearly loops.\nNotice that $`D`D$ does not run to itself; however,\n \\[ \\begin{array}[t]{@{}lclcl}\n\\Sub {\\Sub xy y := x } x := `D\n\t& \\rtcredxsub &\n\\Sub xx x := `D\n\t& \\rtcredxsub & `D`D\n \\end{array} \\]\nso, as stated by Proposition~\\ref{explicit versus head}, the standard reduction result can be achieved by reduction in $\\redxsub$ (we will use $`D$ again below).\n\n \\end{enumerate}\n \\end{example}\n\n \\Comment\n \\begin{example} \\label{reductions example}\nTo illustrate the\nnotions of reduction, take $ M = (`lf.(`lx.f(xx))(`lx.f(xx)))(`lab.a) $, and $N = `lxb.xx $; then\n \\begin{itemize}\n\n \\item $\\begin{array}[t]{@{}lclclclclc}\nM &\\rtcredbmu& NN\n\t&=& (`lxb.xx)N\n\t&\\rtcredbmu& `lb.NN\n\t&\\rtcredbmu& `lbb'.NN & \\dots\n \\end{array} $\n\n \\item $\\begin{array}[t]{@{}lclclclclc}\nM &\\rtcredh& NN\n\t&=& (`lxb.xx)N\n\t&\\rtcredh& `lb.NN\n\t&\\rtcredh& `lbb'.NN & \\dots\n \\end{array} $\n\n\n \\item $\\begin{array}[t]{@{}lclc}\nM &\\rtcredxh& \\Sub (`lx.f(xx))((`lx.f(xx))) f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub f(yy) y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub (`lab.a)(yy) y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub {\\Sub `lb.a a := yy } y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub `lb.yy y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub `lb.(`lz.f(zz))y y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub { \\Sub `lb.f(zz) z := y } y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub { \\Sub `lb.(`lab'.a)(zz) z := y } y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub { \\Sub {\\Sub `lbb'.a a := zz } z := y } y := `lx.f(xx) } f := `lab.a \\\\\n\t&\\rtcredxh& \\Sub {\\Sub { \\Sub `lbb'.zz z := y } y := `lx.f(xx) } f := `lab.a\n\t& \\dots\n \\end{array} $\n\n \\end{itemize}\n \\end{example}\n \n\n\n\n\n \\section{A logical interpretation of \\texorpdfstring{$\\lmux$}{}-terms to \\texorpdfstring{$`p$}{}-processes}\n\\label{lambda interpretation}\n\nWe will now define our logical,\\footnote{It is called \\emph{logical} because it has its foundation in the relation between natural deduction and Gentzen's sequent calculus\\CLAC{.}\\Paper{ \\LK; in particular, the case for application is based on the representation of \\emph{modus ponens}\n \\[ \\def \\Turn} \\begin{array}[t]{@{}c@{\\quad}c@{\\quad}c {\\Turn} \\begin{array}[t]{@{}c@{\\quad}c@{\\quad}c}\n\\Inf{ \\derLK `G |- A\\Arr B \\quad \\derLK `G |- A }{ \\derLK `G |- B }\n& \\textrm{ by } &\n \\Inf{ \\derLK `G |- A\\Arr B,`D \\quad \\Inf { \\derLK `G |- A,`D \\quad \\derLK `G,B |- `D }{ \\derLK `G,A\\Arr B |- `D }\n}{ \\derLK `G |- B,`D }\n \\end{array} \\]}}\noutput-based interpretation $\\PilmuTerm[M] a $ of the $\\lmux$-calculus into the $`p$-calculus\\Paper{. }\\CLAC{ (where $M$ is a $\\lmu$-term, and $a$ is the name given to its (anonymous) output), which is essentially the one presented in \\cite{Bakel-Vigliotti-IFIPTCS'12}, but no longer considers $[`a]M$ to be a term.\nThe reason for this change is the following: using the interpretation of \\cite{Bakel-Vigliotti-IFIPTCS'12},\n \\[ \\begin{array}{rcl}\n\\SPilmuTerm[\\muterm`a.`lx.x] a\n\t&=&\n\\PilmuTerm[U `a . {l x . 
{v x}}] a\n \\end{array} \\]\nis in normal form, and all inputs and outputs are restricted; thereby, it is weakly bisimilar to $\\Zero$ and to $\\SPilmuTerm[a {(l x . xx)} {(l x . xx)}] a $.\nSo using that interpretation, we cannot distinguish between \\emph{blocked} and \\emph{looping} computations, which clearly affects any full-abstraction result.\nWhen restricting our interpretation to $\\lmu$, this problem disappears: since naming has to follow $`m$-abstraction, $\\muterm`a.`lx.x$ is not a term in $\\lmu$.\nSince $\\lmu$ is a subcalculus of $\\Lmu$, this change clearly does not affect the results shown in \\cite{Bakel-Vigliotti-IFIPTCS'12} that all hold for the interpretation we consider here as well.\n\n}\nThe main idea behind the interpretation, as in \\cite{Bakel-Vigliotti-CONCUR'09}, is to give a name to the anonymous output of terms; it combines this with the inherent naming mechanism of $\\lmu$.\n\\Paper{As we will show in Thm.~\\ref{soundness},}\\CLAC{As shown in \\cite{Bakel-Vigliotti-IFIPTCS'12},} this encoding naturally represents explicit head reduction; we will need to consider weak reduction later for the full abstraction result, but not for soundness, completeness, or termination.\n\n \\begin{definition}[Logical interpretation\\Paper{ of $\\lmux$ terms}\\CLAC{ \\cite{Bakel-Vigliotti-IFIPTCS'12}}] \\label{lmu interpretation}\nThe interpretation of $\\lmux$ terms into the $`p$-calculus is defined by:\n \\[ \\begin {array}{rcl@{\\quad}l}\n\\SPilmuTerm [x] a & \\ByDef & \\PilmuTerm [v x] a & (u \\textit{ fresh})\n\t\\\\\n\\SPilmuTerm [`lx.M] a & \\ByDef & \\PilmuTerm [l x . M] a & (b \\textit{ fresh})\n\t\\\\\n\\SPilmuTerm [MN] a & \\ByDef & \\PilmuTerm [a M N] a & (c,v,d \\textit{ fresh})\n\t\\\\\n\\SPilmuTerm [\\Sub M x := N ] a & \\ByDef & \\PilmuTerm [S M x := N] a\n\t\\\\\n\\SPilmuTerm [x := N] {\\textcolor{white}{a}} & \\ByDef & \\PiExsub x := N & (w \\textit{ fresh})\n\t\\\\\n\\SPilmuTerm [\\muterm`g.\\Cmd] a & \\ByDef & \\PilmuTerm [m `g . \\Cmd] a & (s \\textit{ fresh})\n\t\\\\\n\\SPilmuTerm [{[`b]M}] a & \\ByDef & \\PilmuTerm [n `b M] a\n\t\\\\\n\\SPilmuTerm [\\Sub M `b := N{`.}`g ] a & \\ByDef & \\PilmuTerm [C M `b := N . `g] a\n\t\\\\\n\\SPilmuTerm [`a := M{`.}`g] {\\textcolor{white}{a}} & \\ByDef & \\PiExContsub `a := N . `g & (v,d \\textit{ fresh})\n \\end {array} \\]\n \\end{definition}\n\n \\Comment\nAs we will show below, this interpretation improves on the results of \\cite{Bakel-Vigliotti-CONCUR'09} in that we encode not only explicit head reduction for the $`l$-calculus, but also explicit head reduction for the $\\lmu$-calculus.\n\nIn \\cite{Bakel-Vigliotti-IFIPTCS'12}, this interpretation was defined for $\\Lmu$, where naming and $`m$-abstraction are separated; in fact, there the definition above contained:\n \\[ \\begin {array}{rcll}\n\\SPilmuTerm [\\muterm`g.\\Cmd] a & \\ByDef & \\PilmuTerm [m `g . \\Cmd] a , & s \\textit{ fresh}\n\t\\\\\n\\SPilmuTerm [[`b]M] a & \\ByDef & \\PilmuTerm [n `b M] a\n \\end {array} \\]\n\\Paper\nNote that the this encoding very elegantly expresses that the main computation in $\\muterm`g.\\Cmd$ is blocked: the name $s$ is fresh and bound, so the main output of $\\PilmuTerm [m `g . \\Cmd] a $ cannot be received.\nMoreover, n}%\n\\CLAC{\\noindent N}otice that\n \\CLAC{$ \\begin{array}{rclclcl}\n\\SPilmuTerm [\\muterm`g.[`b]M] a\n\t&\\ByDef&\n\\PilmuTerm [m `g . {{}[`b]M}] a\n\t&\\ByDef&\n\\PilmuTerm [m `g . n `b M] a\n\t& \\congruent &\n\\PilmuTerm [u `g . 
`b M] a\n \\end{array} $}%\n\\Paper{ \\[ \\begin{array}{rclclcl}\n\\SPilmuTerm [\\muterm`g.[`b]M] a\n\t&\\ByDef&\n\\PilmuTerm [m `g . {{}[`b]M}] a\n\t&\\ByDef&\n\\PilmuTerm [m `g . n `b M] a\n\t& \\congruent &\n\\PilmuTerm [u `g . `b M] a\n \\end{array} \\]\nwhich implies that we can add $ \\SPilmuTerm [\\muterm`g.[`b]M] a\n\t\\ByDef\n\\PilmuTerm [u `g . `b M] a $ to our encoding.\n\n\\Paper\nNote that we could have avoided the implicit renaming in the case for $`m$-abstraction by defining:\n $ \\SPilmuTerm [\\muterm`g.\\Cmd] a \\ByDef \\New {s} ({ \\PilmuTerm[\\Cmd] {s} \\Par \\BEq `g=a }) $\nwhich is operationally the same as $ \\PilmuTerm [m `g . \\Cmd] a $ (they are, in fact, weakly bisimilar) but then we could not show that terms in {\\HNF} are translated to processes in normal form (Lem.~\\ref{head pi normal form}), a property that is needed in the proof of termination.\n\nThere is a strong relation between this interpretation and the abstract machine defined in \\cite{Crolard'99}, but for the fact that that only represents lazy reduction.\n\nAs in \\cite{Bakel-Vigliotti-JLC'14}, we can make the following observations:\n\n \\begin{remark} \\label{natural observations}\n \\begin{itemize}\n\n \\item\nThe synchronisations generated by the encoding only involve processes of the shape:%\n \\[ \\begin{array}{c@{\\dquad}c@{\\dquad}c}\n\\Picaps & \\Out `b & \\In z (`b,y) . (\\proc{P} \\Par \\proc{Q})\n \\end{array} \\]\nso in particular, substitution is always well defined.\nThese synchronisations are of the shape:\n \\[ \\begin{array}{rcl}\n\\New c ({\\New y`b ({\\proc{P} \\Par \\Out c }) \\Par \\In c (v,d) . ({ \\proc{R} \\Par \\In d (w) . \\Out a }) })\n\t&\\redPi&\n\\New y`b ({\\proc{P} \\Par \\proc{R}[y\\For v] \\Par \\In b (w) . \\Out a })\n \\end{array} \\]\nand after the synchronisation over $c$, $\\proc{P}$ can receive over $y$ from $\\proc{R}[y\\For `a]$ and send over $b$ to $\\In b (w) . \\Out a $; or of the shape\n \\[ \\begin{array}[t]{@{}lcl@{}}\n\\New c ({\\New yb ({\\proc{P} \\Par \\Out c }) \\Par \\Picaps })\n\t&\\redPi&\n\\New yb ({\\proc{P} \\Par \\Out a })\n \\end{array} \\]\n\n \\item\nAll synchronisation takes place \\emph{only} over channels whose names are bound connectors in the terms that are interpreted.\n\n \\item\nTo underline the significance of our results, notice that the encoding is not trivial, since%\n \\[ \\begin{array}{rcl}\n\\SPilmuTerm [`lyz.y] a & = & \\PilmuTerm [l y . {l z . {v y}}] a\n\\\\\n\\SPilmuTerm [`lx.x] a &=& \\PilmuTerm [l x . 
{v x}] a\n \\end{array} \\]\nwhich are two processes that differ under $\\wbisimilar$.\n\n \\end{itemize}\n \\end{remark}\n\n\nNotice that the context switches do not really influence the structure of the process that is created by the interpretation since they have no representation in $`p$, but are statically encoded through renaming.\nThis strengthens our view that, as far as our interpretation is concerned, $`m$-reduction is not a separate computational step, but is essentially static administration, a reorganisation of the applicative structure of a term, which is dealt with by our interpretation statically rather than by synchronisation between processes.\nIn other words, from the point of view of the $`p$-calculus, modelling $`b$-reduction involves a computational step, but context switches are dealt with by congruence; this is only possible, of course, because the interpretation of the operand in application uses replication.\n\nIn \\cite{Bakel-Vigliotti-CONCUR'09} the case for application in the interpretation for $`l$-terms was defined as:\n \\[ \\begin {array}{rcll}\n\\PiLSem [a M N] a & \\ByDef & \\PiHTerm [a M N] a\n \\end {array} \\]\nIn particular, there the input on name $c$ is \\emph{not replicated}: this corresponds to the fact that for $`l$-terms, in $\\PiHTerm[M] c $, the output $c$ is used \\emph{exactly once}, which is not the case for the interpretation of $\\lmu$-terms: for example, $`a$ might appear many times in $M$, and since $\\SPilmuTerm[u `a . `a M] a = \\PilmuTerm[u `a . `a M] a = \\PilmuTerm[{M[a\/`a]}] a $, the name $a$ appears many times in the latter.\n\nWe would like to stress that, although inspired by logic, our interpretation does not depend on types \\emph{at all}; in fact, we can treat untypeable terms as well, and can show that $ \\SPilmuTerm [{}(`lx.xx)(`lx.xx)] a $ (perhaps the prototype of a non-typeable term) runs forever without generating output (see Example~\\ref{DD is a zero process}; this already holds for the interpretation of \\cite{Bakel-Vigliotti-CONCUR'09}).\n\nNotice that, as is the case for Milner's interpretation and in contrast to the interpretation of \\cite{Bakel-Vigliotti-CONCUR'09}, a guard is placed on the replicated terms.\nThis is not only done with an eye on proving preservation of termination, but more importantly, to make sure that $ \\New x ( \\PiExSub x := N ) \\wbisim \\Zero $, a property we need for our full abstraction result: since a term can have named sub-terms, the interpretation will generate output not only for the term itself, but also for those named terms, so $ \\New x ( \\PilmuTerm[N] x ) $ -- as used in \\cite{Bakel-Vigliotti-CONCUR'09} -- \\emph{can have} observable behaviour, in contrast to here, where $ \\New x ( \\PiExSub x := N ) = \\New x ( \\PiExsub x := N ) $ \\emph{is} weakly bisimilar to $\\Zero$.\n\n\nObserve the similarity between\n \\[ \\begin{array}{rcll}\n\\Paper{\\hspace*{13mm} }\\SPilmuTerm[a M N] a &\\ByDef& \\PilmuTerm[a M N] a & \\textrm{and} \\\\\n\\SPilmuTerm [\\Sub M c := N{`.}`g ] a &\\ByDef&\n\\PilmuTerm [C M c := N . `g ] a \\\\ &\\ByDef&\n\\PilmuTerm [c M c := N . 
`g ] a\n \\end{array} \\]\nThe first communicates $N$ via the output channel $c$ of $M$\\CLAC{ (which might occur more than once inside $\\PilmuTerm[M] c $, so replication is needed)}, whereas the second communicates with all the sub-terms that have $c$ as output name, and changes the output name of the process to $`g$.\\Paper{\\footnote{A similar observation can be made for the interpretation of $\\lmu$ in $\\X$ \\cite{Bakel-Lescanne-MSCS'08}.}}\nIn other words, application is just a special case of explicit structural substitution;\nthis allows us to write $ \\PilmuTerm[A M N] a\n$ for $\\SPilmuTerm[a M N] a $.\n\\Paper{This very elegantly expresses exactly what the structural substitution does: it `connects' arguments with the correct position in a term.}\nThis stresses that the $`p$-calculus constitutes a very powerful abstract machine indeed: although the notion of structural reduction in $\\lmu$ is very different from normal $`b$-reduction, no special measures had to be taken in order to be able to express it; the component of our interpretation that deals with pure $`l$-terms is almost exactly that of \\cite{Bakel-Vigliotti-CONCUR'09} (ignoring for the moment that substitution is modelled using a guard, which affects also the interpretation of variables), but for the use of replication in the case for application.\n\n\n\\Paper\nThe operation of \\emph{renaming} we will use below is defined and justified via the following lemma, which states that we can safely rename the output of an interpreted $\\lmu$-term.\nFirst we need to show:\n\n \\begin{proposition} [\\cite{Bakel-Vigliotti-JLC'14}] \\label{aux-renaming lemma}\n \\[ \\begin{array}{rcl}\n\\New xb ({ \\In c(v,d) . ( \\proc{P} \\Par \\BEq d=e ) }) &\\wbisim& \\CLAC{ \\\\ }\n\\New a ({ \\BEq a=e \\Par \\New xb ({ \\In c(v,d) . ( \\proc{P} \\Par \\BEq d=a ) }) })\n \\end{array} \\]\n\n \\end{proposition}\n\nWe use this result to show the following:\n\n \\begin{lemma} [Renaming lemma] \\label{renaming lemma}\n \\begin{enumerate}\n\n \\item \\label{renaming outside}\nLet $ a \\not\\in \\fv(M)$, then\n $ \\begin{array}[t]{@{}lcl@{}}\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [M] a ) & \\wbisim & \\SPilmuTerm [M] e .\n \\end{array} $\n\n \\item \\label{renaming inside}\n $ \\begin{array}[t]{@{}lcl@{\\quad}l@{}}\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [M] b ) & \\wbisim & \\SPilmuTerm [{M[e \\For a]}] b & (b \\not= a)\n \\end{array} $.\n\n \\end{enumerate}\n \\end{lemma}\n\n\n \\begin{Proof}\nBy induction on the structure of $\\lmux$-terms.\n\n \\begin{description} \\itemsep 4pt\n\n \\item [$ M = x $]\n$ \\begin{array}[t]{@{}lclclcl}\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [v x] a )\n\t& \\ByDef &\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [v x] a )\n\t& \\wbisim &\n\\PilmuTerm [v x] e\n\t& \\ByDef &\n\\SPilmuTerm [v x] e\n \\end{array} $\n\n\n \\myitem [$ M = `lx.N $]\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [l x . N] a )\n\t& \\ByDef &\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [l x . N] a )\n\t& \\redPi (a) \\\\\n\\New axb ( \\BEq a=e \\Par \\PilmuTerm [N] b \\Par \\Out e )\n\t& \\wbisim & (a \\not= x \\Then a \\notele \\fv(N)) \\\\\n\\New a ( \\BEq a=e ) \\Par \\PilmuTerm [l x . N] e\n\t& \\wbisim &\n\\SPilmuTerm [l x . 
N] e\n \\end{array} $\n\n\n \\myitem [$ M = PQ $]\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [a P Q] a )\n\t& \\ByDef & \\\\\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [X P Q] a )\n\t& \\wbisim & (\\Ref{aux-renaming lemma}, a \\not=c, a \\notele \\fv(P,Q)) \\\\\n\\PilmuTerm [X P Q] e\n\t& \\ByDef &\n\\SPilmuTerm [PQ] e\n \\end{array} $\n\n\n \\myitem [$ M = \\Sub P x := Q $]\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [s P x := Q] a )\n\t& \\ByDef & \\\\\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [s P x := Q] a )\n\t& \\wbisim & (\\IH\\Ref{renaming outside}, x \\not= a, a \\notele \\fv(P,Q)) & \\\\\n\\PilmuTerm [s P x := Q] e\n\t& \\ByDef &\n\\SPilmuTerm [s P x := Q] e\n \\end{array} $\n\n\n \\myitem [{$ M = \\muterm`b. [`g] N $}]\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [{\\muterm`b. [`g] N}] a )\n\t\\kern-2cm &&\n\t& \\ByDef & \\\\\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [u `b . `g N] a )\n\t& \\wbisim & \\multicolumn{3}{l}{ (\\IH\\Ref{renaming inside}, a \\not=`g, a \\notele \\fv(N)) } \\\\\n\\PilmuTerm [u `b . `g N] a \\, [e \\For a]\n\t& \\wbisim &\n\\PilmuTerm [u `b . `g N] e\n\t& \\ByDef &\n\\SPilmuTerm [u `b . `g N] e\n \\end{array} $\n\n\n \\myitem [$ M = \\Sub P `b := Q{`.}`g $]\n\\New a ( \\BEq a=e \\Par \\SPilmuTerm [\\Sub P `b := Q{`.}`g ] a )\n\t& \\ByDef & \\\\\n\\New a ( \\BEq a=e \\Par \\PilmuTerm [C P `b := Q . `g] a )\n\t& \\wbisim & (\\IH\\Ref{renaming outside}, a\\not= `b, a \\notele \\fv(P,Q)) \\\\\n\\PilmuTerm [C {P[e \\For a]} `b := {Q[e \\For a]} . `g] e\n\t& \\ByDef & \\\\\n\\PilmuTerm [C P `b := Q . `g] e\n\t& \\ByDef &\n\\SPilmuTerm [C P `b := Q . `g] e \\quad\\usebox{\\proofbox}\n \\end{array} $\n\n \\end{description}\n \\end{Proof}\n\nFor reasons of clarity, we use some auxiliary notions of equivalence, which are used in Thm.~\\ref{soundness}.\n\n \\begin{definition} \\label{sbired definition}\n \\begin{enumerate}\n\n \\item\nWe define a \\emph{garbage collection} bisimilarity by: $ \\proc{P} \\redGC \\proc{Q} $\nif and only if there exists $\\proc{R}$ such that $ \\proc{P} = \\proc{Q} \\Par \\proc{R} $ and $ \\proc{R} \\wbisimilar \\Zero $.\n\nWe call a process that is weakly bisimilar to $\\Zero$ \\emph{garbage}.\n\n \\item\nWe define $\\redR$ as the largest, symmetric equivalence such that:\n\n \\begin{enumerate}\n\n \\item for all $\\proc{P}$ such that the name $a$ is the only free output of $\\proc{P}$ and is only used for output, and only once: $ \\New a ( \\proc{P} \\Par \\Eq a=e ) \\redR \\proc{P}[e \\For a]$,\n\n \\item for all contexts $\\Cont$, if $ \\proc{P} \\redR \\proc{Q} $ then $ \\Cont [\\proc{P}] \\redR \\Cont [\\proc{Q}] $.\n\n \\end{enumerate}\n\nWe will use $\\redR$ when we want to emphasise that two processes are equivalent just through renaming.\n \\item\nWe define $ \\conredPi $ as $ \\lpired $.\n\n \\end{enumerate}\n \\end{definition}\nNotice that ${\\redGC} \\subset {\\wbisimilar}$ and ${\\redR} \\subset {\\wbisimilar}$.\n\n\n\nUsing the renaming lemma, we can show the following:\n \\begin{example} \\label{redex example}\nThe interpretation of the $`b$-redex $(`lx.P)Q$ reduces as follows:\n \\[ \\def\\arraystretch{1.1} \\begin{array}[t]{@{}l\\CLAC{@{~}}c@{~}l}\n\\SPilmuTerm [a {(l x . P )} Q] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [a {l x . P} Q] a\n\t& \\redPi & (c) \\\\\n\\New c ( \\New bx ({ \\PilmuTerm[P] b \\Par \\BEq b=a \\Par \\PilmuTerm[x := Q] {} }) \\Par\n\\PiExContsub c := Q . 
a )\n\t& \\congruent & (c \\notele \\fn(P,Q)) \\\\\n\\New bx ({ \\PilmuTerm[P] b \\Par \\BEq b=a \\Par \\PiExSub x := Q }) \\Par \\New c ( \\PiExContSub c := Q . a )\n\t& \\wbisimG & (\\ast) \\\\\n\\New bx ({ \\PilmuTerm[P] b \\Par \\BEq b=a \\Par \\PiExSub x := Q })\n\t& \\wbisimR & (\\Ref{renaming lemma}) \\\\\n\\PilmuTerm[S P x := Q] a\n\t& \\ByDef &\n\\SPilmuTerm[s P x := Q] a\n \\end{array} \\]\n\nThis shows that $`b$-reduction is implemented in $`p$ by at least one synchronisation.\nNotice that, in step $(\\ast)$, the process\n $ \\New c ( \\PiExContSub c := Q . a ) \\ByDef \\New c ( \\PiExContsub c := Q . a ) $\nis weakly bisimilar to $\\Zero$.\n\n\\Comment\nRenaming is not always needed; for a concrete example, take\n \\[ \\begin{array}[t]{@{}l@{~}c@{~}l}\n\\SPilmuTerm [a {(l x . x )} (`ly.y)] a\n\t& \\ByDef & \\\\\n \\PilmuTerm [a {l x . x} `ly.y] a\n\t& \\redPi & (c) \\\\\n\\New bx ({ \\PilmuTerm[x] b \\Par \\BEq b=a \\Par \\PilmuTerm[x := `ly.y ] {} }) \\Par {}\n\\New c ( \\PiExContsub c := `ly.y . a )\n\t& \\wbisimG & \\\\\n\\New bx ( \\PilmuTerm[x] b \\Par \\BEq b=a \\Par \\PilmuTerm[x := `ly.y] {} )\n\t& \\ByDef & \\\\\n\\New bx ( \\PilmuTerm[v x] b \\Par \\BEq b=a \\Par \\PiExsub x := `ly.y )\n\t& \\redPi & (x) \\\\\n\\New bw ({ \\BEq w=b \\Par \\BEq b=a \\Par \\PilmuTerm[`ly.y ] w \\Par \\New x ( \\PiExsub x := `ly.y ) })\n\t& \\wbisimG & \\\\\n\\New bw ( \\BEq w=b \\Par \\BEq b=a \\Par \\PilmuTerm[`ly.y ] w )\n\t& \\ByDef & \\\\\n\\New bw ( \\BEq w=b \\Par \\BEq b=a \\Par \\PilmuTerm[l y . y] w )\n\t& \\redPi & (w) \\\\\n\\New bw ( \\BEq w=b \\Par \\BEq b=a ) \\Par \\PilmuTerm[l y . y] a\n\t& \\wbisimG &\n\\SPilmuTerm[l y . y] a\n \\end{array} \\]\nwhich shows that renaming is not needed here; however, we need it for cases like\n \\[ \\begin{array}[t]{@{}l@{~}c@{~}l}\n\\SPilmuTerm [a {(l x . x )} y] a\n\t& \\ByDef & \\\\\n \\PilmuTerm [a {l x . x} y] a\n\t& \\redPi & (c) \\\\\n\\New bx ({ \\PilmuTerm[x] b \\Par \\BEq b=a \\Par \\PilmuTerm[x := y ] {} }) \\Par\n\\New c ( \\PiExContsub c := y . a )\n\t& \\wbisimG & \\\\\n\\New bx ( \\PilmuTerm[x] b \\Par \\BEq b=a \\Par \\PilmuTerm[x := y] {} )\n\t& \\ByDef & \\\\\n\\New bx ( \\PilmuTerm[v x] b \\Par \\BEq b=a \\Par \\PiExsub x := y )\n\t& \\redPi & (x) \\\\\n\\New bw ({ \\BEq w=b \\Par \\BEq b=a \\Par \\PilmuTerm[v y] w \\Par \\New x ( \\PiExsub x := y ) })\n\t& \\wbisimG & \\\\\n\\New bw ( \\BEq w=b \\Par \\BEq b=a \\Par \\PilmuTerm[v y] w )\n\t& \\wbisimR & (\\Ref{renaming lemma}) \\\\\n\\PilmuTerm[v y] a\n\t& \\ByDef &\n\\SPilmuTerm[y] a\n \\end{array} \\]\n\nOn the other hand, as mentioned above, $`m$-reduction consists of a reorganisation of the structure of a term by changing its applicative structure.\nSince application is modelled through parallel composition, this implies that the interpretation of a $`m$-redex is essentially dealt with by congruence and renaming.\nFor example,\n \\[ \\begin{array}[t]{@{}lll}\n\\SPilmuTerm [a {(u `b . `b P)} Q] a\n\t& \\,\\ByDef & \\\\\n\\PilmuTerm [a {(u `b . `b P)} Q] a\n\t& =_{`a} & \\\\\n\\New `b ({ \\PilmuTerm [P] `b \\Par \\PiExContsub `b := Q . a })\n \\end{array} \\]\nWe can show, using Lem.~\\ref{replication lemma}, this last process is weakly bisimilar to\n \\[ \\begin{array}[t]{@{}lllll}\n\\New `g ( \\New `b ({ \\PilmuTerm [P] `g \\Par \\PiExContSub `b := Q . a }) \\Par\n\\PiExContSub `g := Q . a )\n\t& \\ByDef &\n\\SPilmuTerm [a {c P `b := Q . 
a} Q] a\n \\end{array} \\]\n(notice that\nwe have separated out the outside name of the term $P$, being $`b$, which we renamed to $`g$; this leaves two context substitutions, one dealing with the occurrences of $`b$ inside $P$, and one with $`g$.\\footnote{This corresponds to the behaviour of rule $(\\Riii)$ in $\\X$.})\nThis illustrates that structural substitution is dealt with by structural congruence and $`a$-conversion.\n\n\n\n \\end{example}\nThis example stresses that all synchronisations in the image of $\\PilmuTerm[`.] `. $ are over hidden channels, so by Proposition~\\ref{reduction hidden} are in $\\wbisim$.\n\nWe can also show that typeability is preserved: first, we need to prove a substitution lemma.\n\n \\begin{proposition}[Substitution] \\label{substitution lemma}\nIf $\\Pider \\proc{P} : `G,x{:}A |- x{:}A,`D $, then $\\Pider \\proc{P}[b\/x] : `G,b{:}A |- b{:}A,`D $ for $b$ fresh or $b{:}A \\ele `G \\union `D$.\n \\end{proposition}\n\n \\begin{Proof} Easy.\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\n \\begin {theorem} [{$\\SPilmuTerm [`.] `. $} preserves $\\lmux$ types]\n \\label{typeability is preserved}\nIf $\\derLmux `G |- M : A | `D $, then $\\Pider \\SPilmuTerm [M] a : `G |- a{:}A,`D $.\n \\end {theorem}\n\n \\begin{Proof}\nBy induction on the structure of derivations in $\\TurnLmux$:\n\n \\begin {description} \\itemsep3mm\n\n \\item [$(\\Ax)$]\nThen $M= x$, and $ `G = `G ', x{:}A$.\nNotice that $\\PiSTerm[v x] a = \\PilmuTerm [v x] a $, and that\n \\[ \\begin {array}{@{}c}\n \\Inf\t[\\InRule]\n\t{\\Inf\t[\\Bang]\n\t\t{\\Comment{%\n\t\t\\Inf\t[\\InRule]\n\t\t\t{\\Inf\t[\\OutRule]\n\t\t\t\t{\\Inf\t[\\Zero]\n\t\t\t\t\t{\\Pider {\\Zero} : `G',w{:}A |- w{:}A }\n\t\t\t\t}{ \\Pider \\Out a : `G',w{:}A |- a{:}A,w{:}A }\n\t\t\t}{ \\Pider \\PiHTerm[v u] a : `G',u{:}A |- a{:}A }\n\t\t \\Inf\t[\\forwarder]\n\t\t\t{ \\Pider \\Eq u=a : `G',u{:}A |- a{:}A }\t\n\t\t}{ \\Pider \\Bang \\PiHTerm[v u] a : `G',u{:}A |- a{:}A }\n\t}{ \\Pider \\PilmuTerm[v x] a : `G',x{:}A |- a{:}A }\n \\end {array} \\]\n\n\n \\item [$(\\arrI)$]\nThen $M = `lx.N$, $A = C \\arr D$, and $\\derLmux `G,x{:}C |- N : D | `D $; by definition, $\\SPilmuTerm[l x . N ] a = \\PilmuTerm [l x . N ] a $.\nThen, by induction, $ \\D \\dcol \\Pider \\SPilmuTerm [N] b : `G,x{:}C |- b{:}D,`D $ exists, and we can construct:\n \\[ \\def\\Turn{\\Turn}\n \\Inf\t[`n]\n\t{\\Inf\t[`n]\n\t\t{\\Inf\t[\\mid]\n\t\t\t{ \\InfBox{\\D}{ \\Pider \\SPilmuTerm [N] b : `G,x{:}C |- b{:}D,`D }\n\t\t\t \\quad\n\t\t\t \\Inf\t[\\PairOut\\,']\n\t\t\t\t{ \\Pider \\Out a : x{:}C |- a{:}C\\arr D,b{:}D }\n\t\t\t}{ \\Pider \\SPilmuTerm [N] b \\Par \\Out a : x{:}C |- a{:}C\\arr D,b{:}D,`D }\n\t\t}{ \\Pider {\\New b ( \\SPilmuTerm [N] b \\Par \\Out a )} : `G,x{:}C |- a{:}C\\arr D,`D }\n\t}{ \\Pider {\\PilmuTerm[l x . N ] a } : `G |- a{:}C\\arr D,`D }\n \\]\nNotice that $\\PilmuTerm[l x . N ] a = \\SPilmuTerm [`lx . N ] a $.\n\n\n \\item [$(`m)$]\nThen $M = \\muterm `a.[`b]N$, and either:\n\n \\begin{description}\n\n \\item[$`a \\not= `b $]\nThen $\\derLmux `G |- N : A | `a{:}A,`b{:}B,`D $.\nBy induction, there exist a derivation for $ \\Pider \\SPilmuTerm [N] `b : `G |- `a{:}A,`b{:}B,`D $; then by Lem.~\\ref{substitution lemma}, also $ \\Pider \\PilmuTerm [u `a . `b N] a : `G |- a{:}A,`b{:}B,`D $ as well, and $\\PilmuTerm [u `a . `b N] a = \\SPilmuTerm [u `a . 
`b N] a $.\n\n \\item[$`a = `b $]\nThen $\\derLmux `G |- N : A | `a{:}A,`D $.\nBy induction, there exist a derivation for $ \\Pider \\SPilmuTerm [N] a : `G |- a{:}A,`a{:}A,`D $; then by Lem.~\\ref{substitution lemma}, also $ \\Pider \\PilmuTerm [u `a . `a N] a : `G |- a{:}A,`D $ as well, and $\\PilmuTerm [u `a . `a N] a = \\SPilmuTerm [u `a . `a N] a $.\n\n \\end{description}\n\n\n \\item [$(\\CCut)$]\nThen $M = \\ContSub P `a := Q . `g $ and we have $\\derLmux `G |- P : C | `a{:}A\\arr B,`D $ and $\\derLmux `G |- Q : A | `g{:}B,`D $ for some $B$.\nBy induction, there exist derivations $ \\D_1 \\dcol \\Pider { \\SPilmuTerm [P] a } : `G |- a{:}C,`a{:}A\\arr B,`D $ and, since $a$ is fresh,\n$ \\D_2 \\dcol \\Pider \\SPilmuTerm [Q] w : `G |- w{:}B,`D $, and we can construct\nthe derivation\n \\[ \\def\\Turn{\\Turn}\n\\kern-1mm \\begin{array}{@{}c}\n\\Inf\t[`n]\n\t{\\Inf\t[\\Par]\n\t\t{\\InfBox{\\D_1}{ \\Pider \\PilmuTerm [P] a : `G |- a{:}B\\arr A,`D }\n\t\t \\Inf\t[!]\n\t\t\t{\\Inf\t[\\PairIn]\n\t\t\t\t{\\Inf\t[\\mid]\n\t\t\t\t\t{\\Inf\t[!]\n\t\t\t\t\t\t{\\Inf\t[\\PiOverline{`n}]\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t\t\t{\\InfBox{\\D_2}{ \\Pider \\PilmuTerm [Q] w : `G |- w{:}B,`D }\n\t\t\t\t\t\t\t\t}{\n\t\t\t\t\t\t\t\t\\Pider \\BOut b (w) . { \\PilmuTerm [Q] w } : `G |- b{:}B,w{:}B,`D }\n\t\t\t\t\t\n\t\t\t\t\t\t}{ \\Pider \\Bang {\n\t\t\t\t\t\t\t\\BOut b (w) . { \\PilmuTerm [Q] w } } : `G |- b{:}B,`D }\n\t\t\t\t\t \\Inf\t[!]\n\t\t\t\t\t\t{\\Comment\n\t\t\t\t\t\t \\Inf\t[\\InRule]\n\t\t\t\t\t\t\t{\\Inf\t[\\OutRule]\n\t\t\t\t\t\t\t\t{\\Inf\t[\\Zero]\n\t\t\t\t\t\t\t\t\t{\\Pider {\\Zero} : w{:}A |- w{:}A }\n\t\t\t\t\t\t\t\t}{ \\Pider \\Out `g : w{:}A |- `g{:}A,w{:}A }\n\t\t\t\t\t\t\t}{ \\Pider \\Eq d=`g : d{:}A |- `g{:}A }\n\t\t\t\t\t\t \\Inf\t[\\forwarder]\n\t\t\t\t\t\t\t{ \\Pider \\Eq d=`g : d{:}A |- `g{:}A }\n\t\t\t\t\t\t}{ \\Pider \\BEq d=`g : d{:}A |- `g{:}A }\n\t\t\t\t\t}{ \\Pider \\PiExSub b := Q \\Par \\BEq d=`g : `G,d{:}A |- `g{:}A,b{:}B,`D }\n\t\t\t\t}{ \\Pider \\In `a (b,d) . ( \\PiExSub b := Q \\Par \\BEq d=`g ) : `G,`a{:}B\\arr A |- `g{:}A,`D }\n\t\t\t}{ \\Pider \\Bang \\In `a (b,d) . ( \\PiExSub b := Q \\Par \\BEq d=`g ) : `G,`a{:}B\\arr A |- `g{:}A,`D }\n\t\t}{ \\Pider \\PilmuTerm [P] a \\Par \\Bang \\In `a (b,d) . ( \\PiExSub b := Q \\Par \\BEq d=`g ) : `G,`a{:}B\\arr A |- `g{:}A,`D }\n\t}{ \\Pider \\PilmuTerm [c P `a := Q . `g] a : `G |- `g{:}A,`D }\n \\end{array} \\]\nand $ \\PilmuTerm [c P `a := Q . `g] a = \\SPilmuTerm[c P `a := Q . `g] a = \\PilmuTerm [C P `a := Q . `g] a $.\n\n \\item [\\CLAC{$\\arrE)$, $(\\TCut$}\\Paper{$(\\arrE)$, $(\\TCut)$}]\nNotice that since $ \\SPilmuTerm[a P Q] a = \\PilmuTerm[A P Q] a $ and $\\SPilmuTerm[\\Sub M x := N ] a = \\PilmuTerm[S M x := N ] a $, these cases are very similar to that for $(\\CCut)$.\n\\quad\\usebox{\\proofbox}\n \\end {description}\n \\end{Proof}\n\n\nWe can show a witness reduction result for our encoding, for which we need the property:\n\n \\begin{lemma}[Contraction \\cite{Bakel-Vigliotti-JLC'14}] \\label{Pi witness reduction}\n \\begin{enumerate}\n\n \\item If $ \\Pider \\New bc ( \\proc{P} \\Par \\Out a ) \\Par \\In a (x) . \\Out e : `G,a{:}C |- a{:}C,`D $, $a$ does not occur in $\\proc{P}$, and $a \\not= e$, then $ \\Pider \\New bc ( \\proc{P} \\Par \\Out e ) : `G |- `D $.\n\n \\item If $a$ does not occur in $\\proc{P}$ and $\\proc{Q}$ and $ \\Pider \\New bc ( \\proc{P} \\Par \\Out a ) \\Par \\In a (x,y) . 
\\proc{Q} : `G,a{:}C |- a{:}C,`D $, then \\\\ $ \\Pider \\New bc ( \\proc{P} \\Par \\proc{Q} [b \\For x,c \\For y] ) : `G |- `D $.\n\n \\end{enumerate}\n \\end{lemma}\n\nUsing this result, we can also show a witness reduction result:\n\n \\begin{theorem}\nIf $ \\Pider \\PilmuTerm[P] a : `G |- `D $, and $ \\PilmuTerm[P] a \\rtcredPi \\proc{Q} $, then $ \\Pider \\proc{Q} : `G |- `D $.\n \\end{theorem}\n\n \\begin{Proof}\nBy Remark~\\ref{natural observations}, Theorem~\\ref{typeability is preserved}, and Lemma~\\ref{Pi witness reduction}.\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\n\nNotice that, as for Milner's and Sangiorgi's interpretations, ours is not extensional, since\n$ \\PilmuTerm[`D`D] a \\wbisim \\Zero $, but not $ \\PilmuTerm[`lx.`D`Dx] a \\wbisim \\Zero $ (see Lem.~\\ref{zero lemma} and \\ref{bmu reduction characterisation}).\n\n \\begin{remark} \\label{not extensional}\nWe could not have represented the extensional rules: note that\n \\[ \\begin{array}{@{}lcl}\n\\PilmuTerm [`lx.yx] a\n\t& \\ByDef &\n\\PilmuTerm [l x . {A y x}] a\n \\end{array} \\]\ndoes not reduce to $\\PilmuTerm[y] a $, and neither does\n \\[ \\begin{array}{@{}lclcl}\n\\PilmuTerm [{`m`a . [`b] y}] a\n\t& \\ByDef &\n\\PilmuTerm [u `a . `b y] a\n\t& = &\n\\PilmuTerm [y] `b\n \\end{array} \\]\nreduce to:\n \\[ \\begin{array}{@{}lclclcl}\n\\New xb ( \\PilmuTerm[y] `b \\Par \\Out a )\n\t& = &\n\\PilmuTerm [l x . {u `g . `b y}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [l x . {u `g . `b y}] a\n\t& \\ByDef &\n\\PilmuTerm [l x . {`m`g . {[`b]} y }] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [{`lx.`m`g . [`b] y }] a\n\t& = &\n\\PilmuTerm [{`lx.`m`g . [`b] y [x{`.}`g \\For `a]}] a\n \\end{array} \\]\n \\end{remark}\n\n\\Comment{ \\Double }\n\n\nTo illustrate the expressiveness and elegance of our interpretation, we give some examples:\n \\begin{example} \\label{our example}\n \\begin{enumerate} \\itemsep 1mm\n\n \\item\nThe interpretation of the fixed-point combinator runs as follows:\n\n\\[ \\begin{array}[t]{lcl}\n\\PilmuTerm [`lf.(`lx.f(xx))(`ly.f(yy))] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [l f . (`lx.f(xx))(`ly.f(yy))] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [l f . {a `lx.f(xx) `ly.f(yy)}] a \\quad\n\t& \\ByDef & \\\\\n\\PilmuTerm [l f . {d {l x . {{}f(xx)}} `ly.f(yy)}] a\n\t& \\redPi(c),\\wbisimG & \\\\\n\\PilmuTerm [l f . {S {}f(xx) x := `ly.f(yy)}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [l f . {t\n\t\t{a {v f} {(xx)}} x := `ly.f(yy)}] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [l {}f . 
{t {a {}f {(xx)}} x := `ly.f(yy)}] a\n \\end{array} \\]\n\n\\noindent\nThis last process is in normal form, \\emph{i.e.}~cannot be reduced further: since the only `reachable inputs' are $c$ and $f$, and the only `reachable output' is $x$, which is different from both $c$ and $f$, no synchronisation is possible.\nMoreover, this is not a lazy reduction.\n\n\\Comment\n \\item\nIn Fig.~\\ref{figure double} we run the interpretation of the term $(`lx.x)(\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y))$\nfrom Example~\\ref{redxh examples}\n$\\PilmuTerm[{}(`lx.x)(\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y))] a $,\n \\[ \\begin{array}{@{}lcl}\n(`lx.x)(\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y)) & \\red \\\\\n\\sub x x := \\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y) & \\red \\\\\n\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y) & \\red \\\\\n\\muterm`a.[`a](\\sub q q := \\muterm`b.[`a]`ly.y ) & \\red \\\\\n\\muterm`a.[`a]\\muterm`b.[`a]`ly.y & \\red \\\\\n\\muterm`a.[`a]`ly.y\n \\end{array} \\]\nas an example of a term that generates two outputs over $`a$, and highlights the need for the repeated use of replication; notice that the individual reduction steps return (translated) in that figure.\n\nNotice that\n \\[ \\begin{array}{@{}lcl}\n(`lx.x)(\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y))\n\t& \\redxh & \\\\\n\\Sub x x := \\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y)\n\t& \\redxh & \\\\\n\\muterm`a.[`a](`lq.q)(\\muterm`b.[`a]`ly.y)\n\t& \\redxh & \\\\\n\\muterm`a.[`a]\\Sub q q := \\muterm`b.[`a]`ly.y\n\t& \\redxh & \\quad \\\\\n \\multicolumn{3}{l}{\n\\muterm`a.[`a]\\muterm`b.[`a]`ly.y\n\t\\hfill \\redxh\n\\LTerm\n }\n \\end{array} \\]\n\n \\item\n $ \\begin{array}[t]{@{}lclcl}\n\\PilmuTerm[\\Sub M x := N L ] a & = \\\\\n\\PilmuTerm[A {S M x := N } L ] a & \\congruent \\\\\n\\PilmuTerm[S {a M L} x := N ] a & = \\\\\n\\PilmuTerm[\\Sub ML x := N ] a\n \\end{array} $\n\n\n \\item\nWhen running the reduction of Example \\ref{redxh examples}\\ref{mu reduction}, the naming features are taken care of by the interpretation:\n \\[ \\begin{array}{lcl}\n\\PilmuTerm[{}(\\muterm`a.[`b]\\muterm`d.[`a](`ly.y))(`lz.z)] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a {u `a . `b \\muterm`d.[`a](`ly.y)} `lz.z] a\n\t& = & \\\\\n\\PilmuTerm[a {\\muterm`d.[c](`ly.y)} `lz.z] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a {u `d . c `ly.y} `lz.z] a\n\t& = & \\\\\n\\PilmuTerm[a `ly.y `lz.z] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm[a (`ly.y) (`lz.z)] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a {l y . y} `lz.z] a\n\t& \\redPi (c) & \\\\\n\\New yb ({ \\PilmuTerm[y] b \\Par \\PiExSub y := `lz.z \\Par \\BEq b=a })\n\t& \\wbisimilar & (\\Ref{renaming lemma}) \\\\\n\\PilmuTerm[S y y := `lz.z] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[f {v y} y := {l z . z}] a\n\t& \\red (y) \\\\\n\\New w ( \\BEq w=a \\Par \\PilmuTerm[l z . z] w ) \\Par \\New y ( \\PiExSub y := `lz.z )\n\t& \\red (w) \\\\\n\\PilmuTerm[l z . z] w \\Par \\New w ( \\BEq w=a ) \\Par \\New y ( \\PiExSub y := `lz.z )\n\t& \\wbisim &\n\\PilmuTerm[`lz.z] a\n \\end{array} \\]\n\n\\Comment\n \\[ \\begin{array}{@{}lcl}\n\\SPilmuTerm[(\\muterm`a.[`b]\\muterm`d.[`a](`ly.y))(`lz.z)] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[A (\\muterm`a.[`b]\\muterm`d.[`a](`ly.y)) {`lz.z}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a {u `a . `b {\\muterm`d.[`a](`ly.y)}} {`lz.z}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[A {n `b {\\muterm`d.[c](`ly.y)}} {`lz.z}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[A {n `b {u `d . c (`ly.y)}} {`lz.z}] a\n\t& = & \\\\\n\\PilmuTerm[A {`ly.y} {`lz.z}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a {l y . 
y} {`lz.z}] a\n\t\\quad & \\redPi & (c) \\\\\n\\New yb ( \\PilmuTerm[y] b \\Par \\PiExSub y := `lz.z \\Par \\BEq b=a ) \\Par\n\\New c ( \\PiExContSub c := {`lz.z} . a )\n\t& \\wbisimG & \\\\\n\\New yb ( \\PilmuTerm[y] b \\Par \\PiExSub y := `lz.z \\Par \\BEq b=a )\n\t& \\ByDef & \\\\\n\\New yb ( \\PilmuTerm[v y] b \\Par \\PiExsub y := `lz.z \\Par \\BEq b=a )\n\t& \\redPi & (y) \\\\\n\\New bw ( \\BEq w=b \\Par \\PilmuTerm[`lz.z] w \\Par \\BEq b=a ) \\Par \\New y ( \\PiExSub y := `lz.z )\n\t& \\wbisimG & \\\\\n\\setcounter{indb}{1}\t\n \\New bw ( \\BEq w=b \\Par {} \\PilmuTerm[l z . z] w \\Par \\BEq b=a )\n\t& \\redPi & (w,b) \\\\\n\\PilmuTerm[l z . z] a \\Par {} \\New bw ( \\BEq w=b \\Par \\BEq b=a )\n\t& \\wbisimG &\n\\SPilmuTerm[l z . z] a\n \\end{array} \\]\n\n\n\n \\item\n$ \\begin{array}[t]{@{}lcl}\n\\SPilmuTerm [a {a P Q} R] a\n\t& \\ByDef,\\congruent &\n\\NewA cc' ( \\PilmuTerm [P] c' \\Par \\PiExContsub c' := Q . c \\Par\n\\PiExContsub c := R . a )\n \\end{array} $\n\n\\noindent\nso components of applications are placed in parallel under the interpretation.\nSimilarly, we have\n $\n\\SPilmuTerm [C {C M `a := Q . `b } `g := L . `d ] a\n=\n\\New `g`a ( \\PilmuTerm [M] a \\Par \\PiExContSub `a := Q . `b \\Par \\PiExContSub `g := L . `d )\n $, so repeated structural substitutions are also placed in parallel under the interpretation and can be applied independently.\n\n \\end{enumerate}\n \\end{example}\n\n \\section{Soundness, completeness, and termination}\n \\label{Soundness, completeness, and termination}\n\n\n\nWe can now show a reduction-preservation result for explicit head reduction for $\\lmux$, by showing that $\\PilmuTerm [`.] {`.} $ preserves $\\redxh$ up to weak bisimilarity, stated using $\\equivC$ in \\cite{Bakel-Vigliotti-IFIPTCS'12}.\n\\Paper\nNotice that we prove the result for $\\lmux$ terms, that we do not require the terms to be closed, and that the result is shown for single-step reduction.\n\n\n\n \\begin{theorem} [Soundness] \\label{soundness} \\label{head reduction simulation}\nIf $ M \\redxh N $, then $\\SPilmuTerm [M] a \\conredPi\n\\SPilmuTerm [N] a $.\n \\end {theorem}\n\n \\begin{Proof}\nBy induction on the definition of explicit head reduction\\Comment{; for convenience, we separate naming and $`m$-binding, so actually use the encoding of \\cite}.\n\n \\begin{description} \\itemsep5pt\n\n \\item [Main reduction rules]\n\n \\myitem[$ (`l x.M) N \\red \\Sub M x := N $]\n\\SPilmuTerm [a {(l x . M )} N ] a\n\t& \\ByDef & \\\\\n\\multicolumn{3}{@{}l}{ \\PilmuTerm [a {l x . M} N] a\n\t\\quad \\redPi(c),\\wbisimG } \\\\\n\\New b ({ \\PilmuTerm[M] b \\Par {\\PiExSub x := N } \\Par \\BEq b=a })\n\t& \\wbisimR (\\Ref{renaming lemma}) &\n\\PilmuTerm[S M x := N] a\n \\end{array} $\n\n\n \\myitem[{$(\\muterm `b . [`a]M ) N \\red \\muterm`g . \\Sub {[`a]M} `b := N{`.}`g , ~ `g \\textit{ fresh} $}]\n\\kern-20mm && \\kern10mm\n\\SPilmuTerm [a {(u `b . `a M)} N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [A {(\\muterm `b . [`a]M)} N] a\n\t& \\ByDef &\n\\PilmuTerm [A {(u `b . `a M)} N] a\n\t& =_{`a} & \\\\\n\\New `b ( \\PilmuTerm[M] `a \\Par \\PiExContSub `b := N . a )\n\t& = &\n\\PilmuTerm[u `g . `a {C M `b := N . `g }] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[u `g . `a {\\Sub M `b := N{`.}`g }] a\n\t& \\ByDef &\n\\SPilmuTerm[u `g . `a {s M `b := N{`.}`g }] a\n \\end{array} $\n\n\n \\myitem[{$ \\muterm`b.[`b]M \\red M \\hfill \\textit{ if } `b \\not\\in \\fn(M)$}]\n\\SPilmuTerm [\\muterm`b.[`b]M] a\n\t& \\ByDef &\n\\PilmuTerm [u `b . 
`b M] a\n\t& \\ByDef (`b \\not\\in \\fn(M) ) &\n\\PilmuTerm [M] a\n \\end{array} $\n\n\n \\myitem[{$\\muterm`a.[`b]\\muterm`g.[`d]M \\red \\muterm`a.[`d]M[`b \\For `g], `g \\not= `d$}]\n\\kern-35mm &&&& \\kern-10mm\n\\SPilmuTerm [\\muterm`a.[`b]\\muterm`g.[`d]M] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [u `a . `b {\\muterm`g.[`d]M}] a\n\t& \\ByDef &\n\\PilmuTerm [u `a . `b {u `g . `d M}] a\n\t& =&\n\\PilmuTerm [u `a . `d {M[`b \\For `g]}] a\n\t& \\ByDef& \\\\\n\\SPilmuTerm [u `a . `d {M[`b \\For `g]}] a\n \\end{array} $\n\n\n \\myitem[{$\\muterm`a.[`b]\\muterm`g.[`g]M \\red \\muterm`a.[`b]M[`b \\For `g]$}]\n\\kern-35mm &&&& \\kern-25mm\n\\SPilmuTerm [\\muterm`a.[`b]\\muterm`g.[`g]M] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [u `a . `b {\\muterm`g.[`g]M}] a\n\t& \\ByDef &\n\\PilmuTerm [u `a . `b {u `g . `g M}] a\n\t& =&\n\\PilmuTerm [u `a . `b {M[`b \\For `g]}] a\n\t& \\ByDef& \\\\\n\\SPilmuTerm [u `a . `b {M[`b \\For `g]}] a\n \\end{array} $\n\n\n \\item [Term substitution rules]\n\n \\myitem[$ \\Sub x x := N \\red N $]\n\\SPilmuTerm [S x x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [s {v x} x := N] a\n\t& \\redPi (x) & \\\\\n\\New w ( \\BEq w=a\n\\Par \\PilmuTerm [N] w ) \\Par \\New x ( \\PiExSub x := N )\n\t& \\wbisimR (\\Ref{renaming lemma}) & \\\\\n\\PilmuTerm [N] a \\Par \\New x ( {\\PiExSub x := N } )\n\t& \\wbisimG &\n\\SPilmuTerm [N] a\n \\end{array} $\n\n\n \\myitem[$ \\Sub M x := N \\red M, ~ x \\not\\in \\fv(M) $]\n\\SPilmuTerm [S M x := N] a\n\t& \\ByDef &\n\\PilmuTerm [s M x := N] a\n\t& \\congruent & \\\\\n\\PilmuTerm[M] a \\Par \\New x ( \\PiExsub x := N )\n\t& \\wbisimG &\n\\SPilmuTerm [M] a\n \\end{array} $\n\n\n\\myitem [$\\Sub (PQ) x := N \\red \\Sub {(\\Sub P x := N Q)} x := N , ~ x = \\hv(P) $]\n\\SPilmuTerm [S (PQ) x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {A P Q} x := N] a\n\t& \\wbisim (\\Ref{replication lemma}) & \\\\\n\\PilmuTerm [S {A {S P x := N} Q } x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {A {\\Sub P x := N } Q} x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {\\Sub P x := N Q } x := N] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [S {(a {\\Sub P x := N } Q)} x := N] a\n \\end{array} $\n\n\n \\myitem[$ \\Sub (`ly.M) x := N \\red `l y.(\\Sub M x := N ) , ~ x = \\hv(M) $]\n\\kern-16mm && \\kern10mm \\SPilmuTerm [S {(l y . M)} x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {l y . M} x := N] a\n\t& \\congruent & \\\\\n\\PilmuTerm [l y . {S M x := N}] a\n\t& \\ByDef &\n\\SPilmuTerm [l y . {S M x := N}] a\n \\end{array} $\n\n\n \\myitem[{$ \\Sub (\\muterm`a.[`b]M) x := N \\red \\muterm`a.[`b](\\Sub M x := N) , ~ x = \\hv(M)$}]\n\\kern-35mm && \\kern10mm\n\\SPilmuTerm [S {(u `a . `b M)} x := N] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {(u `a . `b M)} x := N] a\n\t& \\congruent (`a \\notele \\fn(N) ) &\n\\PilmuTerm [u `a . `b {S M x := N}] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [u `a . `b {S M x := N}] a\n \\end{array} $\n\n\n\n \\item [Structural rules]\n\n\\myitem[{$\\Sub (\\muterm`d.\\Cmd) `a := N{`.}`g \\red \\muterm`d.(\\Sub {\\Cmd} `a := N{`.}`g ) N$}] \\kern-10mm &&\n\\SPilmuTerm [S {(m `d . \\Cmd)} `a := N{`.}`g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {m `d . \\Cmd} `a := N . `g ] a\n\t& =_{`a} & (`d \\not\\in \\fn(\\exsub `a := {N{`.}`g} ) ) \\\\\n\\PilmuTerm [m `d . {C {\\Cmd} `a := N . `g }] a\n\t& \\ByDef &\n\\PilmuTerm [m `d . { {\\Cmd} \\excontsub `a := N . 
`g }] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [\\muterm`d.(\\Sub {\\Cmd} `a := N{`.}`g ) ] a\n \\end{array} $\n\n\n\n\\myitem[{$\\Sub ([`a]M) `a := N{`.}`g \\red [`g](\\Sub M `a := N{`.}`g ) N$}]\n\\kern-10mm && \\kern-10mm\n\\SPilmuTerm [S {(n `a M)} `a := N{`.}`g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {n `a M} `a := N . `g ] a\n\t& \\wbisim (\\Ref{replication lemma}) &\n\t\\\\\n\\PilmuTerm [n `g {A {C M `a := N . `g } N}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [n `g {A {M \\excontsub `a := N . `g } N}] a\n\t& \\ByDef &\n\\SPilmuTerm [A {C M `a := N . `g } N] `g\n\t& \\ByDef & \\\\\n\\SPilmuTerm [n `g {A {c M `a := N . `g } N}] a\n \\end{array} $\n\n\n\\myitem[{$\\Sub ([`b]M) `a := N{`.}`g \\red [`b]\\Sub M `a := N{`.}`g , ~ `b \\not= `a$}]\n\\kern-30mm && \\kern20mm\n\\SPilmuTerm [C {(n `b M)} `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {[`b]M} `a := N . `g ] a\n\t& \\ByDef &\n\\PilmuTerm [C {n `b M} `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [n `b {M \\excontsub `a := N . `g }] a\n\t& \\ByDef &\n\\SPilmuTerm [n `b {c M `a := N . `g }] a\n \\end{array} $\n\n \\myitem[$ \\Sub M `a := N{`.}`g \\red M , ~ `a \\not\\in \\fn(M) $]\n\\kern-7mm &&\n\\SPilmuTerm [C {M} `a := N . `g ] a\n\t& \\ByDef &\n\\PilmuTerm [C {M} `a := N . `g ] a\n\t& \\congruent & \\\\\n\\SPilmuTerm [M] a \\Par \\New `a ( \\PiExContSub `a := N . `g )\n\t& \\wbisimG &\n\\SPilmuTerm [M] a\n \\end{array} $\n\n\n \\myitem[$ \\Sub (`lx.M) `a := N{`.}`g \\red `lx.\\Sub M `a := N{`.}`g $]\n\\kern-7mm && \\kern-10mm\n\\SPilmuTerm [C {(l x . M)} `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {l x . M} `a := N . `g ] a\n\t& \\congruent & \\\\\n\\PilmuTerm [l x . {C M `a := N . `g }] a\n\t& \\ByDef &\n\\SPilmuTerm [l x . {C M `a := N . `g }] a\n \\end{array} $\n\n\n \\myitem[$ \\Sub (PQ) `a := N{`.}`g \\red \\Sub {(\\Sub P `a := N{`.}`g Q )} `a := N{`.}`g $]\n\\SPilmuTerm [C (PQ) `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {A P Q} `a := N . `g ] a\n\t& \\wbisim (\\Ref{replication lemma}) &\n\t\\\\\n\\PilmuTerm [C {A {C P `a := N . `g } Q } `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {C P `a := N . `g Q } `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [C {(a {C P `a := N . `g } Q)} `a := N . `g ] a\n \\end{array} $\n\n\n \\item [Substitution rules]\n\n \\myitem[$ \\Sub {\\Sub M x := N } y := P \\red \\Sub { \\Sub { \\Sub M y := P } x := N } y := P $]\n&& \\kern-15mm\n\\SPilmuTerm [S {S M x := N } y := P ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S {S M x := N } y := P ] a\n\t& \\wbisim (\\Ref{replication lemma}) &\n\t\\\\\n\\PilmuTerm [S {S {S M y := P } x := N } y := P] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [S {S {S M y := P } x := N } y := P] a\n \\end{array} $\n\n\n\\item[$ \\Sub {\\Sub M `a := N{`.}`g } `b := L{`.}`d \\red \\Sub { \\Sub { \\Sub M `b := L{`.}`d } `a := N{`.}`g } `b := L{`.}`d $] ~\n\n\n$ \\begin{array}{lclcl}\n\\SPilmuTerm [C {C M `a := N . `g } `b := L . `d ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {C M `a := N . `g } `b := L . `d ] a\n\t& \\wbisim (\\Ref{replication lemma}) &\n\t\\\\\n\\Paper{\\PilmuTerm [C {C {C M `b := L . `d } `a := N . `g } `b := L . `d ] a }\n\\CLAC{\\PilmuTerm [C {C {B M `b := L . `d } `a := N . `g } `b := L . `d ] a }\n\t& \\ByDef & \\\\\n\\SPilmuTerm [C {C {C M `b := L . `d } `a := N . `g } `b := L . `d ] a\n \\end{array} $\n\n\n \\item [Contextual rules] By induction. 
\\quad\\usebox{\\proofbox}\n\n\\Comment\n \\myitem[$ M \\red N \\Then ML \\red NL $]\n\\SPilmuTerm [a M L] a\n\t& \\ByDef &\n\\PilmuTerm [A M L] a\n\t& \\wbisim (\\IH) & \\\\\n\\PilmuTerm [A N L] a\n\t& \\ByDef &\n\\SPilmuTerm [a N L] a\n \\end{array} $\n\n\n \\myitem[$ M \\red N \\Then `lx.M \\red `lx.N $]\n\\SPilmuTerm [l x . M] a\n\t& \\ByDef &\n\\PilmuTerm [l x . M] a\n\t& \\wbisim (\\IH) & \\\\\n\\PilmuTerm [l x . N] a\n\t& \\ByDef &\n\\SPilmuTerm [l x . N] a\n \\end{array} $\n\n\n \\myitem[{$ M \\red N \\Then \\muterm`a.[`b]M \\red \\muterm`a.[`b]N$}]\n \\kern-10mm &&\n\\SPilmuTerm [u `a . `b M] a\n\t& \\ByDef &\n\\PilmuTerm [u `a . `b M] a\n\t& \\wbisim (\\IH) &\n\t\\\\\n\\PilmuTerm [u `a . `b N] a\n\t& \\ByDef &\n\\SPilmuTerm [u `a . `b N] a\n \\end{array} $\n\n\n \\myitem[$ M \\red N \\Then \\Sub M x := L \\red \\Sub N x := L $]\n\\kern-15mm &&\n\\SPilmuTerm [S M x := L] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [S M x := L] a\n\t& \\wbisim (\\IH) &\n\\PilmuTerm [S N x := L] a\n\t& \\ByDef &\n\\SPilmuTerm [S N x := L] a\n \\end{array} $\n\n\n \\myitem[$ M \\red N \\Then \\Sub M `a := L{`.}`g \\red \\Sub N `a := L{`.}`g $]\n\\kern-15mm &&\n\\SPilmuTerm [C M `a := L . `g] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C M `a := L . `g] a\n\t& \\wbisim (\\IH) &\n\\PilmuTerm [C N `a := L . `g] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [C N `a := L . `g] a \\quad\\usebox{\\proofbox}\n \\end{array} $\n\n\n \\end {description}\n \\end{Proof}\n\nRemark that, by Prop.~\\ref{reduction hidden}, all proper reductions in this proof are in $\\wbisim$.\nAlso, the proof \\CLAC{in \\cite{Bakel-Vigliotti-IFIPTCS'12} }shows that $`b$-reduction is implemented in $`p$ by at least one synchronisation.\n\nWe can now easily show:\n\n\n \\begin{theorem} [Operational Soundness\\Paper{ for $\\redwxh$}\\CLAC{ \\cite{Bakel-Vigliotti-IFIPTCS'12}}] \\label{Operational Soundness redxh} \\label{rtc soundness}\n\n \\begin{enumerate}\n \\item $M \\rtcredxh N \\Then \\SPilmuTerm [M] a \\wbisim \\SPilmuTerm [N] a $.\n\n \\item If $M \\Divergesxh $ then $\\PilmuTerm[M] a \\Diverges $.\n\n \\end{enumerate}\n \\end {theorem}\n\\Paper\n\n \\begin{Proof}\nThe first is shown by induction using Theorem~\\ref{head reduction simulation}, using Proposition~\\ref{reduction hidden}; the second follows from Example~\\ref{redex example}, and the fact that $`m$-reduction and substitution do not loop \\cite{Py-PhD'98} (\\emph{i.e.}~non-termination is caused only by $`b$-reduction).%\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\n\\CLAC{The proof in \\cite{Bakel-Vigliotti-IFIPTCS'12} shows that $`b$-reduction is implemented in $`p$ by at least one synchronisation.}\n\n\\Comment\nThis result states that our interpretation gives, in fact, a semantics for the explicit head reduction for $\\lmu$.\n By Lem.~\\ref{explicit versus head} and Prop.~\\ref{lmu vs lmux} we can show the same for $\\redh$ and $\\redbmu$.\n\nBy Lem.~\\ref{explicit versus head}, we can also show:\n\n \\begin{theorem} [Operational Soundness for $\\redh$] \\label{Operational Soundness redh} \\CLAC{~}\n\n \\begin{enumerate}\n \\item If $M \\rtcredh N $, then $ \\SPilmuTerm [M] a \\wbisimilar \\SPilmuTerm [N] a $.\n \\item $M \\divergesh \\Implies \\SPilmuTerm [M] a \\divergesPi $.\n \\end{enumerate}\n \\end {theorem}\n\n \\begin{Proof}\n \\begin{enumerate}\n \\item\n\nIf $M \\rtcredh N $, then, by Lem.~\\ref{explicit versus head}, there exists $L$ such that $ M \\rtcredxh L $ and $ L \\rtcredxsub N $; \\Paper{then, }by Theorem~\\ref{Operational Soundness redxh}, $ \\SPilmuTerm [M] a 
\\wbisimilar \\SPilmuTerm [L] a $.\n\n \\item\nSince every $`b$-reduction step in $\\redh$ invokes at least one step in $\\redxh$.\n\\quad\\usebox{\\proofbox}\n\n \\end{enumerate}\n \\end{Proof}\nand similarly, by Prop.~\\ref{lmu vs lmux}, we can show:\n\n \\begin{theorem} [Operational Soundness for $\\redbmu$] \\label{Operational Soundness}\n\n \\begin{enumerate}\n \\item\n$M \\rtcredbmu N \\Then \\SPilmuTerm [M] a \\wbisimilar \\SPilmuTerm [N] a $.\n \\item\n$M \\divergess \\Implies \\SPilmuTerm [M] a \\divergesPi $.\n \\end{enumerate}\n \\end {theorem}\n\n \\begin{Proof}\nIf $M \\rtcredbmu N $, then, by Prop.~\\ref{lmu vs lmux}, there exists $L$ such that $ M \\rtcredx L $ and $ L \\rtcredxsub N $; then, by Theorem~\\ref{Operational Soundness}, $ \\SPilmuTerm [M] a \\wbisimilar \\SPilmuTerm [L] a $,\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\\def \\Lmux\t{\\Lmux}\n\nWe can also show operational completeness for $\\redxh$.\n \\begin{theorem} [Operational Completeness for $\\redxh$] \\label{Operational Completeness for Lmux}\nIf $ \\PilmuTerm [M] a \\redPi \\proc{P} $, then there exists $N$ such that $ \\proc{P} \\wbisim \\PilmuTerm [N] a $ and $ M \\rtcredxh N $.\n \\end {theorem}\n\n \\begin{Proof}\nBy easy induction on the structure of terms, using the fact that all reductions that are possible in $\\PilmuTerm [M] a $ are generated by the interpretation, and correspond to $`b$-contractions or are the result of reduction over $y$ in\nthe process\n$ \\New y ( \\PilmuTerm [l x . M] y \\Par \\Eq y=a ) $, and is then the result of executing a substitution $\\Sub y y := `lx.M $.%\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\nThis in turn can be used to show:\n \\begin{lemma} \\label{Completeness}\n \\begin{enumerate}\n\n \\item\nLet $M$ be a term in $\\lmux$.\nIf $ \\PilmuTerm[M] a \\rtcredPi \\PilmuTerm[N] a $ then $ M \\rtcredxh N $.\n\n \\item\nLet $M \\ele \\lmu$, \\emph{i.e.}~a (pure) $\\lmu$-term.\nIf $ \\PilmuTerm[M] a \\redPi \\proc{P} $ then there exists $N \\ele \\lmux$ and $L \\ele \\lmu$ such that $ \\proc{P} \\wbisimilar \\PilmuTerm[N] a $, and $ M \\rtcredxh N $ and $ N \\rtcredxsub L $.\n\n \\end{enumerate}\n \\end {lemma}\n\n \\begin{Proof}\nThe first is an obvious consequence of Theorem~\\ref{Operational Completeness for Lmux}, the second follows from Lem.~\\ref{head reduction}, Lem.~\\ref{lmu vs lmux}, and Theorem~\\ref{Operational Completeness for Lmux}.\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\nSince renaming can be executed for abstractions, we can even generalise this result for lazy reduction $\\redlazy$ \\cite{Abramsky'90} and explicit lazy reduction $\\redxl$ \\cite{Bakel-Vigliotti-CONCUR'09} on closed $`l$-terms:\n \\begin{theorem}\nLet $M$ be closed. 
\n\nTheorem~\\ref{Operational Completeness for Lmux} in turn can be used to show:\n \\begin{lemma} \\label{Completeness}\n \\begin{enumerate}\n\n \\item\nLet $M$ be a term in $\\lmux$.\nIf $ \\PilmuTerm[M] a \\rtcredPi \\PilmuTerm[N] a $ then $ M \\rtcredxh N $.\n\n \\item\nLet $M \\ele \\lmu$, \\emph{i.e.}~a (pure) $\\lmu$-term.\nIf $ \\PilmuTerm[M] a \\redPi \\proc{P} $ then there exist $N \\ele \\lmux$ and $L \\ele \\lmu$ such that $ \\proc{P} \\wbisimilar \\PilmuTerm[N] a $, and $ M \\rtcredxh N $ and $ N \\rtcredxsub L $.\n\n \\end{enumerate}\n \\end {lemma}\n\n \\begin{Proof}\nThe first is an obvious consequence of Theorem~\\ref{Operational Completeness for Lmux}; the second follows from Lem.~\\ref{head reduction}, Prop.~\\ref{lmu vs lmux}, and Theorem~\\ref{Operational Completeness for Lmux}.\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\nSince renaming can be executed for abstractions, we can even generalise this result for lazy reduction $\\redlazy$ \\cite{Abramsky'90} and explicit lazy reduction $\\redxl$ \\cite{Bakel-Vigliotti-CONCUR'09} on closed $`l$-terms:\n \\begin{theorem}\nLet $M$ be closed. Then\n \\begin{enumerate}\n \\item $M \\redxl `ly.M' \\, \\Vect{\\exsub x := N } \\Implies \\CLAC{$ \\\\ $} \\SPilmuTerm [M] a \\rtcredPi\n {(`nx)} \\, ( \\SPilmuTerm [{`ly.M'}] a \\Par \\Vect{ \\Bang \\SPilmuTerm [x := N] {} } ) $.\n\n \\item $M \\redlazy `ly.M' \\, \\Vect{[N \\For x]} \\Implies \\CLAC{$ \\\\ $} \\SPilmuTerm [M] a \\rtcredPi\n {(`nx)} \\, ( \\SPilmuTerm [`ly.M'] a \\Par \\Vect{ \\SPilmuTerm [N] x } ) $.\n\n \\end{enumerate}\n \\end{theorem}\n\n\n\n\nWe can also show that equality with explicit substitution, $\\eqx$, is preserved under our encoding by weak bisimulation.\n\n \\begin{theorem} \\label{lambda mu x model}\nIf $M \\eqx N$, then $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $.\n \\end{theorem}\n\n \\begin{Proof}\nBy induction on the definition of $\\eqx$\\Paper\n; we only show the cases that are different from the proof of Theorem~\\ref{soundness}.\n\n \\begin{description} \\itemsep 4pt\n\n \\myitem [$ \\Sub (PQ) x := N \\red (\\Sub P x := N ) \\, (\\Sub Q x := N ) $]\n\\PilmuTerm[\\sub (PQ) x := N ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[S {A P Q} x := N ] a\n\t& \\wbisimilar (\\Ref{replication lemma}) \\\\\n\\New c ({ \\New x ({ \\PilmuTerm[P] c \\Par \\PiLExSub x := N }) \\Par\n\\quad {} \\\\ \\hfill\n\t\\New x ({ \\In c (v,d) . ({ \\PiExsub v := Q \\Par \\Eq d=a }) \\Par \\PiLExSub x := N })\n})\n\t& \\wbisimilar (\\Ref{replication lemma}) \\\\\n\\New c ({ \\New x ({ \\PilmuTerm[P] c \\Par \\PiLExSub x := N }) \\Par\n\t\\In c (v,d) . ({ \\New x ({ \\PiExsub v := Q \\Par \\PiExSub x := N }) \\Par \\Eq d=a })\n})\n\t& \\wbisimilar (\\Ref{replication lemma}) \\\\\n\\New c ({ \\New x ({ \\PilmuTerm[P] c \\Par \\PiExSub x := N }) \\Par\n\t\\In c (v,d) . ({ \\PiExsub v := {S Q x := N } \\Par \\Eq d=a }) })\n\t& \\ByDef & \\\\\n\\New c ({ \\New x ({ \\PilmuTerm[P] c \\Par \\PiExSub x := N }) \\Par\n\t\\In c (v,d) . ({ \\PiExsub v := {\\sub Q x := N } \\Par \\Eq d=a }) })\n\t& \\ByDef & \\\\\n\\PilmuTerm[A {\\sub P x := N } {\\sub Q x := N }] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[{}(\\sub P x := N ) (\\sub Q x := N )] a\n \\end{array} $\n\n\n \\myitem [$ \\Sub (PQ) `a := N{`.}`g \\red (\\Sub P `a := N{`.}`g ) \\, (\\Sub Q `a := N{`.}`g) $]\n\\SPilmuTerm [C (PQ) `a := N . `g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [C {X P Q} `a := N . `g ] a\n\t& \\wbisim (\\Ref{replication lemma}) & \\\\\n\\PilmuTerm [Y {(C P `a := N . `g)} {C Q `a := N . `g}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [a {(C P `a := N . `g)} {C Q `a := N . `g}] a\n\t& \\ByDef & \\\\\n\\PilmuTerm [A {C P `a := N . `g } {(\\ContSub Q `a := N . `g )}] a\n\t& \\ByDef & \\\\\n\\SPilmuTerm [a {(C P `a := N . `g)} {(C Q `a := N . 
`g )}] a\n \\end{array} $\n\n \\item [If $ M \\red N $ then $ LM \\red LN $, $\\sub L x := M \\red \\sub L x := N $, and $ \\Sub L `a := M{`.}`g \\red \\Sub L `a := N{`.}`g $] ~\n\nBy induction.\n\\quad\\usebox{\\proofbox}\n\n\\Comment\n \\myitem [$ M \\red N \\Then LM \\red LN $]\n\\PilmuTerm[LM] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[X L M] a\n\t& \\wbisimilar & (\\IH) \\\\\n\\PilmuTerm[X L N] a\n\t& \\ByDef &\n\\PilmuTerm[LN] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then \\sub L x := M \\red \\sub L x := N $]\n\\PilmuTerm[\\sub L x := M ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[s L x := M] a\n\t& \\wbisimilar & (\\IH) \\\\\n\\PilmuTerm[s L x := N] a\n\t& \\ByDef &\n\\PilmuTerm[\\sub L x := N ] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then \\Sub L `a := M{`.}`g \\red \\Sub L `a := N{`.}`g $]\n\\PilmuTerm[\\Sub L `a := M{`.}`g ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[F L `a := M . `g ] a\n\t& \\wbisimilar & (\\IH) & \\\\\n\\PilmuTerm[F L `a := N . `g ] a\n\t& \\ByDef &\n\\PilmuTerm[\\Sub L `a := N{`.}`g ] a\n \\end{array} $\n\n\n \\myitem [$ \\sub x x := L \\red L $]\n\\PilmuTerm[\\, \\sub x x := L ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[s {v x} x := L] a\n\t& \\wbisimilar (\\Ref{reduction hidden}) \\\\\n\\New w ({ \\Eq w=a \\Par \\PilmuTerm[L] w }) \\Par \\New x ({ \\PiLExSub x := N })\n\t& \\wbisimilar (\\Ref{other reduction},\\Ref{renaming lemma}) &\n\\PilmuTerm[L] a\n \\end{array} $\n\n \\myitem [$ \\sub M x := L \\red M \\hfill (x \\notele \\fv(M) ) $]\n\\SPiLTerm [s M x := N] a\n\t& \\ByDef &\n\\PilmuTerm [S M x := N] a\n\t& \\wbisimilar & \\\\\n\\PilmuTerm[M] a \\Par \\New x ( \\PiLExSub x := N )\n\t~ \\wbisimilar (\\Ref{other reduction}) ~\n\\SPiLTerm[M] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then ML \\red NL $]\n\\PilmuTerm[ML] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[A M L] a\n\t& \\wbisimilar (\\IH) \\\\\n\\PilmuTerm[A N L] a\n\t& \\ByDef &\n\\PilmuTerm[NL] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then LM \\red LN $]\n\\PilmuTerm[LM] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[a L M] a\n\t& \\wbisimilar (\\IH) \\\\\n\\PilmuTerm[a L N] a\n\t& \\ByDef &\n\\PilmuTerm[LN] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then `lx.M \\red `lx.N $]\n\\PilmuTerm[`lx.ML] a\n\t& \\ByDef &\n\\PilmuTerm[l x . M] a\n\t& \\wbisimilar (\\IH) & \\\\\n\\PilmuTerm[l x . 
N] a\n\t& \\ByDef &\n\\PilmuTerm[`lx.N] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then \\sub M x := L \\red \\sub N x := L $]\n\\kern -11mm &&\n\\PilmuTerm[\\sub M x := L ] a\n\t& \\ByDef &\n\\PilmuTerm[S M x := L] a\n\t& \\wbisimilar (\\IH) \\\\\n\\PilmuTerm[S N x := L] a\n\t& \\ByDef &\n\\PilmuTerm[\\sub N x := L ] a\n \\end{array} $\n\n \\myitem [$ M \\red N \\Then \\sub L x := M \\red \\sub L x := N $]\n\\PilmuTerm[\\sub L x := M ] a\n\t& \\ByDef & \\\\\n\\PilmuTerm[s L x := M] a\n\t& \\wbisimilar (\\IH) \\\\\n\\PilmuTerm[s L x := N] a\n\t& \\ByDef &\n\\PilmuTerm[\\sub L x := N ] a\n \\end{array} $\n\n \\item [$ M \\red N \\Then M \\rtcred N $]\nImmediate.\n\n \\item [$ M \\rtcred M $]\nImmediate.\n\n \\item [$ M \\rtcred N \\And N \\rtcred L \\Then M \\rtcred L $]\nSince $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $ and $ \\PilmuTerm[N] a \\wbisimilar \\PilmuTerm[L] a $ implies $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[L] a $.\n\n \\item [$ M \\rtcred N \\Then M \\eqb N $]\nImmediate.\n\n \\item [$ M \\eqb N \\Then N \\eqb M $]\nSince $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $ implies $\\PilmuTerm[N] a \\wbisimilar \\PilmuTerm[M] a $.\n\n \\item [$ M \\eqb N \\And N \\eqb L \\Then M \\eqb L $]\nSince $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $ and $ \\PilmuTerm[N] a \\wbisimilar \\PilmuTerm[L] a $ implies $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[L] a $.\\QED\n\n\n \\end{description}\nThe steps to a reflexive, transitive closure and equivalence relation follow directly from $\\wbisimilar$.\n\\quad\\usebox{\\proofbox}\n\n \\end{Proof}\n\nNow the following is an immediate consequence:\n\n \\begin{theorem}[Semantics] \\label{lambda model}\nIf $M \\eqbmu N$, then $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $.\n \\end{theorem}\n \\begin{Proof}\nBy induction on the definition of $\\eqbmu$.\nThe case $M \\rtcredbmu N$ follows from the fact that then, by Proposition~\\ref{lmu vs lmux}, also $M \\rtcredx N$, so by Theorem~\\ref{lambda mu x model}, we have $\\PilmuTerm[M] a \\wbisimilar \\PilmuTerm[N] a $.\nThe steps to an equivalence relation follow directly from $\\wbisimilar$.\\QED\n \\end{Proof}
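\n\nAs a simple instance of this theorem (our own example, using a term that appeared above): since $(`lx.x)y \\eqbmu y$, the interpretation identifies the redex and its reduct,\n \\[ \\begin{array}{rcl}\n\\PilmuTerm[{}(`lx.x)y] a & \\wbisimilar & \\PilmuTerm[y] a .\n \\end{array} \\]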
\n\nNotice that we cannot prove the exact reversal of Thm.~\\ref{lambda model}, since terms without head-normal form are all interpreted by $\\Zero$ (see also Lem.~\\ref{zero lemma}), but are not all related through $\\eqbmu$.\nUsing \\Paper{our notion of }weak equivalence, we can deal with the converse, and will do so in the last sections of this paper.\n\n\\Paper\n\nWe can show that the interpretation of a term in $\\redxh$-normal form is in normal form as well, a property that we need here.\n \\begin{lemma} \\label{head pi normal form}\n$\\lmuNF$ is a $\\redxh$-nf implies $ \\PilmuTerm[\\lmuNF] a $ is irreducible.\n \\end{lemma}\n\n \\begin{Proof}\nBy induction on the structure of terms in $\\redxh$-normal form.\n\n \\begin{description}\n\n \\myitem[$\\lmuNF = xM_1\\dots M_n ~ (n \\geq 0) $]\n\\textrm{Then } \\PilmuTerm [xM_1\\dots M_n] a\n\t& \\ByDef & \\\\\n\\New c_n (\\PilmuTerm [xM_1\\dots M_{n-1}] c_n \\Par \\PiExContSub c_n := M_{n-1} . a )\n\t& \\ByDef & \\\\\n\\NewA c_n ({ \\NewA c_{n-1} (\\PilmuTerm [xM_1\\dots M_{n-2}] c_{n-1} \\Par \\PiExContSub c_{n-1} := M_n . c_n }) \\Par \\dots \\Par \\PiExContSub c_n := M_n . a )\n\t& \\congruent & \\\\\n\\NewA c_nc_{n-1} (\\PilmuTerm [xM_1\\dots M_{n-2}] c_{n-1} \\Par \\PiExContSub c_{n-1} := M_n . c_n \\Par \\dots \\Par \\PiExContSub c_n := M_n . a )\n\t& \\ByDef & \\\\\n \\New c_n\\dots c_1 (\\PilmuTerm [x] c_1 \\Par \\PiExContSub c_1 := M_1 . c_2 \\Par \\dots \\Par\n\\PiExContSub c_n := M_n . a ) \\\\ [1mm]\n \\end{array} $\n\n\\noindent\nSince\n \\[ \\begin{array}[t]{rcl}\n \\SPilmuTerm [x] c_1 &=& \\PilmuTerm [v x] c_1 \\\\\n \\PiExContSub c_i := M_i . c_{i+1} &=& \\PiExContFsub c_i := M_i . c_{i+1}\n \\end{array} \\]\nall $\\PilmuTerm [M_i] w $ appear under input on $c_i$, so no synchronisation inside one of the $\\PilmuTerm [M_i] w $ is possible; since all $c_i$ are fresh, all are different from $x$ and no synchronisation is possible over one of the $c_i$.\nSo this process is in normal form.\n\n \\item[$\\lmuNF = `lx. \\lmuNF' $]\nThen $ \\SPilmuTerm [l x . \\lmuNF'] a = \\PilmuTerm [l x . \\lmuNF'] a $, and, by induction, $\\PilmuTerm [\\lmuNF'] b $ is in normal form; since $a$ is fresh, that process does not input over $a$, so $ \\SPilmuTerm [l x . \\lmuNF'] a $ is in normal form.\n\n \\item[{$\\lmuNF = \\muterm `a . [`b] \\lmuNF' ~ (`a \\not= `b \\Or `a \\in \\lmuNF', \\lmuNF' \\not= \\muterm`g.[`d] \\lmuNF'' )$}]\nThen $ \\SPilmuTerm [u `a . `b \\lmuNF'] a = \\PilmuTerm [u `a . `b \\lmuNF'] a $; the result follows immediately by induction.\n\n \\item[$\\lmuNF = \\Sub \\lmuNF' x := M ~ (\\hv(\\lmuNF') \\not= x ) $]\nThen $ \\SPilmuTerm [s \\lmuNF' x := M ] a = \\PilmuTerm [s \\lmuNF' x := M ] a $.\nBy induction, $\\PilmuTerm [\\lmuNF'] a $ is in normal form; since $x$ is not the head-variable of $\\lmuNF'$, the process $\\PilmuTerm [\\lmuNF'] a $ has no reachable input over $x$, so no synchronisation is possible over $x$; also, no synchronisation is possible inside $\\PilmuTerm [M] w $, as above.\n\n \\item[$\\lmuNF = \\Sub \\lmuNF' `a := M{`.}`g ~ (\\hn(\\lmuNF') \\not= `a ) $]\nThen $ \\SPilmuTerm [c {\\lmuNF'} `a := M . `g ] a = \\PilmuTerm [C {\\lmuNF'} `a := M . `g ] a $.\nBy induction, $\\PilmuTerm [\\lmuNF'] a $ is in normal form.\nNote that $`a$ is only a reachable output in $\\PilmuTerm [\\lmuNF'] a $ if $\\lmuNF'$ is an abstraction and $a = `a$; this is impossible, since we choose $a$ fresh.\nAs above, no synchronisation is possible inside $\\PilmuTerm [M] w $.\n\\quad\\usebox{\\proofbox}\n\n \\end{description}\n \\end{Proof}\n\nNotice that $ \\SPilmuTerm [u `a . `b {u `g . `d \\lmuNF}] a = \\PilmuTerm [u `a . `b {u `g . 
`d \\lmuNF}] a $, which is in normal form, so some reducible terms in $\\lmux$ are mapped to processes in normal form; this does not contradict the above result, of course.\n\nWe can now show the following termination results:\n \\begin{theorem} [Termination] \\label{lambda head termination}\n \\begin{enumerate}\n \\item If $M \\rtcredxh[\\nf] N $, then $ \\PilmuTerm [M] a \\convergesPi $.\n \\item If $M \\rtcredbmu[\\hnf] N $, then $ \\PilmuTerm [M] a \\convergesPi $.\n\n \\end{enumerate}\n \\end{theorem}\n\n \\begin{Proof}\n \\begin{enumerate}\n\n \\item\nBy Lem.~\\ref{head pi normal form}, if $N$ is in explicit head-normal from, then $ \\PilmuTerm [N] a $ is in normal form, and by Thm~\\ref{head reduction simulation}, $ \\PilmuTerm [M] a \\rtcredPi \\proc{P} $ with $\\proc{P} \\wbisim \\PilmuTerm [N] a $.\nSince in the proof of Thm~\\ref{head reduction simulation}, $\\wbisimG$ only removes processes in normal form, this implies that \\proc{P} is in normal form.\n \\item\nBy Lem.~\\ref{head reduction}, there exists $L$ in {\\HNF} such that $ M \\rtcredh[\\nf] L $;\nby Prop.~\\ref{explicit versus head}, there exists $\\lmuNF$ such that $ M \\rtcredxh[\\nf] \\lmuNF $; by the previous part, $ \\PilmuTerm [M] a \\convergesPi $.\n\\quad\\usebox{\\proofbox}\n\n \\end{enumerate}\n \\end{Proof}\n\nNotice also that this result is stronger than the formulation of the termination result for Milner's interpretation in \\cite{Sangiorgi-Walker-Book'01}, since it models reduction to head-normal form, not just lazy normal form.\nSince terms that have a normal form have a head-normal form as well, Theorem~\\ref{lambda head termination} immediately leads to:\n \\begin{corollary} \\label{lambda termination}\nIf $M \\convergesbmu $, then $ \\PiLmuSem [M] a \\convergesPi $.\n \\end{corollary}\n\n\n\n\n\n\\Paper{ \\section{Weak reduction for $\\lmu$ and $\\lmux$} \n \\label{weak reduction} }\n\\CLAC{ \\section{Weak \\Paper{reduction and }equivalences for $\\lmu$ and $\\lmux$}\n\\label{first full abstraction section} \n \\label{weak reduction}\n}\n\n\\label{weak equivalences}\n\n\n\\Paper{It seems widely accepted that bisimilarity-like equivalences have become the standard when studying interpretations of $`l$-calculi into the $`p$-calculus.\nThis creates a point of concern with respect to full abstraction.}\nSince $`D`D$ and $`W`W$ (where \\CLAC{$`D = `lx.xx$ and }$`W = `ly.yyy$\\Paper{; we will use $`W$ again below}) are closed terms that do not interact with any context, they are contextually equivalent; any well-defined interpretation of these terms into the $`p$-calculus, be it input based or output based, will therefore map those to processes that are weakly bisimilar to $\\Zero$, and therefore to weakly bisimilar processes.\nAbstraction, on the other hand, enables interaction with a context, and therefore the interpretation of $`lz.`D`D$ will \\emph{not} be weakly bisimilar to $\\Zero$.\nWe therefore cannot hope to model standard $`b`m$-equality in the $`p$-calculus in a fully-abstract way; rather, we need to consider a notion of reduction that considers \\emph{all} abstractions meaningful; therefore, the only kind of reduction on $`l$-calculi that can naturally be encoded into the $`p$-calculus is \\emph{weak} reduction.\n\n\\Paper\n\n\\DDinpi\n\n \\begin{example} \\label{DD is a zero process}\nConsider the reduction of $`D`D$ that was given in Example~\\ref{reductions example}; by Theorem~\\ref{head reduction simulation}, we have for example the reduction of $\\SPilmuTerm [`D `D] a \\wbisim 
\\SPilmuTerm [\\Sub {\\Sub xy y := x } x := `ly.yy ] a $ as shown in Figure~\\ref{DDinpi},\nwhich shows that the interpretation of $`D`D$ reduces without creating output over $a$ -- it always occurs inside a sub-process of the shape\n \\[ \\begin{array}{rcl}\n\\PiExContsub c := y . a\n \\end{array} \\]\nand does not input, since the head-variable is always bound, so $\\SPilmuTerm [`D `D] a $ is weakly bisimilar to $\\Zero$ (see also Lem.~\\ref{zero lemma}).\nTherefore,\n \\[ \\begin{array}{lclcl}\n\\PilmuTerm[`lz.`D`D] a\n\t&\\ByDef&\n\\PilmuTerm[l z . {`D`D}] a\n\t&\\wbisim& \\\\ &&\n\\New zb ( \\Zero \\Par \\Out a )\n\t&\\wbisim& \\\\ &&\n\\PilmuTerm[l z . {`W`W}] a\n\t&\\ByDef&\n\\PilmuTerm[`lz.`W`W] a\n \\end{array} \\]\nSo, for full abstraction, we are forced to consider $`lz.`D`D$ and $`lz.`W`W$ equivalent, and therefore, we need to consider \\emph{weak} equivalences on terms.\n \\end{example}\n\n\n\n\n\nGenerally, the concept of \\emph{weak} reduction refers to the variant of calculi that eliminate the contextual rule\n \\[ \\begin{array}{rcl}\nM \\red N &\\Then& `lx.M \\red `lx.N\n \\end{array} \\]\nFor the $`l$-calculus, then a closed normal form is an abstraction.\nNote that, in the context of $\\lmu$, this is no longer the case, since also context switches might occur inside the term.\n\nWe will now introduce the correct notions in $\\lmu$.\n\n\n \\begin{definition}\nWe define the notion $\\redwbmu$ of \\emph{weak $`b`m$-reduction} as in Def.~\\ref{lmu reduction}, the notion $\\redwh$ of \\emph{weak head reduction}\\footnote{This notion is also known as \\emph{lazy} reduction; for the sake of keeping our terminology consistent, we prefer to call it weak head reduction.} on $\\lmu$ as in Def.~\\ref{head reduction definition}, and the notion $\\redwxh$ of \\emph{weak explicit head reduction} on $\\lmux$ as in Def.~\\ref{explicit head reduction}, by (also) eliminating the rules:\n \\[ \\begin{array}{rcl}\n\\Sub (`ly.M) x := N &\\red & `l y.(\\Sub M x := N ) \\\\\n\\Sub (`lx.M) `a := N{`.}`g & \\red & `lx.(\\Sub M `a := N{`.}`g ) \\\\\nM \\red N &\\Then&\n`lx.M \\red `lx.N\n \\end{array} \\]\n \\end{definition}\n\nWe define the notion of weak head-normal forms, the normal forms with respect to weak head-reduction:\n\n \\begin{definition}[Weak head-normal forms for $\\lmu$] \\label{weak head reduction definition}\n \\begin{enumerate}\n\n \\item\nThe $\\lmu$ \\emph{weak head-normal forms} (\\WHNF) are defined through the grammar:\n \\[ \\begin {array}{rrl@{\\quad}l}\n\\lmuwHNF & ::= &\n`lx. M\n\t\\\\ &\\mid &\nxM_1\\dots M_n & (n \\geq 0)\n\t\\\\ &\\mid &\n\\muterm`a. [`b] \\lmuwHNF\n\t& (`a \\not= `b \\textrm{ or } `a \\in \\fn(\\lmuwHNF),\n\t\t\\Paper{\\textrm{ and }}\\lmuwHNF \\not= \\muterm `g.[`d] \\lmuwHNF' )\n \\end {array} \\]\n\n \\item\nWe say that $M$ \\emph{has a \\WHNF} if there exists $\\lmuwHNF$ such that $M \\rtcredwh \\lmuwHNF$.\n\n\\Comment{\n \\item\nWe define \\emph{values} as those \\WHNF s defined through:\n \\[ \\begin {array}{r\\CLAC{@{~}}r\\CLAC{@{~}}ll}\n\\lmuV & ::= & `lx. M\n\t\\\\ &\\mid &\n\\muterm`a. 
[`b] \\lmuV & (`a \\not= `b \\textrm{ or } `a \\not\\in \\lmuV, \\\\ &&& ~\n\t\\textrm{ and }\\lmuV \\not= \\muterm `g.[`d] \\lmuV' )\n \\end {array} \\]\n}\n\n \\end{enumerate}\n \\end{definition}\n\\Paper{As before, the normal forms of weak head reduction are the \\WHNF s.}\n\nThe main difference between \\HNF s and \\WHNF s is in the case of abstraction: where the definition of {\\HNF} only allows for the abstraction over a \\HNF, for \\WHNF s any term can be the body.\nMoreover, notice that both $\\LTerm$ and $\\LTerm$ are in \\WHNF.\n\nSince ${\\redwxh} \\subseteq {\\redxh}$, we can show the equivalent of Lem~\\ref{head reduction} and Thm.~\\ref{rtc soundness} also for \\emph{weak explicit head reduction}:\n\n \\Paper\n \\begin{proposition} \\label{weak head reduction}\nIf $ M \\rtcredbmu N $ with $N$ in {\\WHNF}, then there exists $\\lmuwxHNF$ such that $ M \\rtcredwh[\\nf] \\lmuwxHNF $ and $ \\lmuwxHNF \\rtcredbmu N $ without using $\\redwh$.\n \\end{proposition}\n\n \\begin{corollary} \\label{weak soundness}\n\n\\CLAC\n \\begin{theorem} [cf.~\\cite{Bakel-Vigliotti-IFIPTCS'12}] \\label{weak soundness}\n\n \\begin{enumerate}\n \\item\nIf $ M \\rtcredwxh N $, then $\\SPilmuTerm [M] a \\wbisim \\SPilmuTerm [N] a $.\n \\item\nIf $ M \\rtcredbmu N $ with $N$ in {\\WHNF}, then there exists $\\lmuwxHNF$ such that $ M \\rtcredwxh[\\nf] \\lmuwxHNF $ and $ \\lmuwxHNF \\rtcredx N $ without using $\\redwxh$.\n \\end{enumerate}\n\\CLAC\n \\end{theorem}\n\n \\Paper\n \\end{corollary}\n\n\n\nWe also define weak explicit head-normal forms.\n\n \\begin{definition}[Weak explicit head-normal forms\\Paper{ for $\\lmu$}] \\label{weak explicit head reduction definition}\n\n \\begin{enumerate}\n\n \\item\nThe $\\lmux$ \\emph{weak explicit head-normal forms \\CLAC{\\\\ }(\\WEHNF)} are defined through\\Paper{ the grammar}:\n \\[ \\def1.1} \\begin{array}{rcl{1.1} \\begin {array}{rrl@{\\qquad}l}\n\\lmuwxHNF & ::= &\n`lx. M \\, \\Vect{\\exsub y := N } \\, {\\Vexcontsub `s := Q . `t }\n\t\\\\ &\\mid &\nxM_1\\dots M_n \\, \\Vect{\\exsub y := N } \\, {\\Vexcontsub `s := Q . `t } & (n \\geq 0, ~ x \\notele \\Vect{y})\n\t\\\\ &\\mid &\n\\muterm `a . [`b] \\lmuwxHNF \\, \\Vect{\\exsub y := N } \\, {\\Vexcontsub `s := Q . `t }\n&\t\\\\ \\multicolumn{4}{r}\n\t{(`b \\notele \\Vect{`s}, ~ `a \\not= `b \\textrm{ or } `a \\in \\fn(\\lmuwxHNF),\n\t\\textrm{ and } \\lmuwxHNF \\not= `m`g.[`d] \\lmuwxHNF' ) }\n \\end {array} \\]\n\n \\item\nWe say that $M \\ele \\lmux$ \\emph{has an \\WEHNF} if there exists $\\lmuwxHNF$ such that $M \\rtcredwxh \\lmuwxHNF$.\n\n\\Comment{\n \\item\nWe define \\emph{values} as those weak head-normals defined through:\n \\[ \\begin {array}{rrll}\n\\lmuV & ::= & `lx. M\n\t\\\\ &\\mid &\n\\muterm`a. 
[`b] \\lmuV & (`a \\not= `b \\textrm{ or } `a \\not\\in \\lmuV, \\\\ &&& ~\n\t\\textrm{ and }\\lmuV \\not= \\muterm `g.[`d] \\lmuV' )\n \\end {array} \\]\n}\n\n \\end{enumerate}\n \\end{definition}\n\n \\begin{remark} \\label{substitution remark}\nIn the context of reduction (normal and weak), when starting from pure terms, the substitution operation can be left inside terms in normal form, as in\n \\[ \\begin{array}{rcl}\n(`lx.yM)NL &\\redxh& \\Sub yM x := N \\, L .\n \\end{array} \\]\nHowever, since, by Barendregt's convention, $x$ does not appear free in $L$, the latter term is operationally equivalent to $\\Sub yML x := N $; in fact, these two are equivalent under $\\equivwh$ (see Def.~\\ref{equivwh definition}), and also congruent when interpreted as processes.\n\\Paper\n \\[ \\begin{array}{lclcl}\n\\SPilmuTerm[a {S yM x := N } L] a\n\t& \\ByDef &\n\\PilmuTerm[A {S yM x := N } L] a\n\t& \\congruent & \\\\ &&\n\\PilmuTerm[S {A yM L} x := N] a\n\t& \\ByDef & \\\\ &&\n\\SPilmuTerm[S {a yM L} x := N] a\n \\end{array} \\]\n\nSince in weak reduction the reduction $\\Sub (`lx.M) y := N $ for $`lx . (\\Sub M y := N ) $ is not allowed, also this substitution can be considered to stay at the outside.\nTherefore, without loss of generality, for readability and ease of definition we will use a notation for terms that places all explicit substitutions on the outside.\\footnote{This is exactly the approach of Krivine's machine, where explicit substitutions are called \\emph{closures} that form an environment in which a term is evaluated.}\nSo actual terms can have substitutions inside, but they are written as if they appear outside.\nTo ease notation, we will use $\\Ssub$ for a set of substitutions of the shape $ \\exsub x := N $ or $ \\excontsub `a := N . `g $ when the exact contents of the substitutions is not relevant; we write $x \\ele \\Ssub $ if $\\exsub x := N \\ele \\Ssub $ and similarly for $`a \\ele \\Ssub $.\n\n \\end{remark}\n\nWe can show that the interpretation of a term without {\\WHNF} \\Paper{gives a process that }is weakly bisimilar to $\\Zero$.\n\n \\begin{lemma} \\label{zero lemma}\nIf $M$ has no {\\WEHNF} (so $M$ also has no \\WHNF), then $\\PilmuTerm[M] a \\wbisim \\Zero $.\n \\end{lemma}\n\n \\begin{Proof}\nIf $M$ has no {\\WEHNF}, then $M$ has no leading abstractions and all terms generated by reduction have a weak explicit head redex.\nIf $M = \\muterm `a . [`b] N $, then $\\PilmuTerm[M] a \\ByDef \\PilmuTerm[u `a . `b N] a \\wbisim \\Zero $, so also $ \\PilmuTerm[N] `b \\wbisim \\Zero $; therefore we can assume $M$ itself does not start with a context switch.\n\nWe reason by coinduction on the explicit weak head reduction sequence from $M$ and analyse the cases of weak explicit head reduction.\nFor example,\n \\[ \\def1.1} \\begin{array}{rcl{1.2} \\begin{array}{lcl}\n\\PilmuTerm [\\VSub {(`lx.P_1)P_2 \\dots P_n} y := Q \\, {\\Vexcontsub `a := R . `b }] a ~ \\ByDef\n\t\\\\ \\dquad\n\\NewA \\Vect{c} (\\PilmuTerm [l x . P_1] c_1 \\Par \\Comment{ {} \\\\ \\dquad \\dquad } {\\VPiExContSub c_{i-1} := P_i . c_i } \\Par {\\VPiExSub y := Q } \\Par \\, {\\VPiExContSub `a := R . `b })\n \\end{array} \\]\nwhere $c_{n-1} = a$\\CLAC{. }\\Paper{ and\n \\[ \\def1.1} \\begin{array}{rcl{1.1} \\begin{array}{rcl}\n\\PiExContSub c_{i-1} := P_i . c_i &=& \\PiExContFsub c_{i-1} := P_i . c_i \\\\\n\\PiExSub y_j := Q_j &=& \\PiExsub y_j := Q_j \\\\\n\\PiExContSub `a_k := R_k . `b_k &=& \\PiExContFsub `a_k := R_k . 
`b_k\n \\end{array} \\]}\nSince a synchronisation over $c_1$ is possible, the process is not in normal form.\nObserve that all outputs are over bound names or under guard, and since the result of the reduction has no head variable, no input is exposed.\nSo \\Paper{reduction of $\\PilmuTerm[M] a $ cannot exhibit an input or an output, so }$\\PilmuTerm[M] a \\wbisim \\Zero $.%\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\\Paper{\\begin{figure*\n\nThe reduction of $\\SPilmuTerm [a `D `D] a $ is given in Fig.~\\ref{DDnogarb}, which shows that the interpretation of $`D`D$ reduces without creating output over $a$; notice that the individual steps of the above reduction in $\\redxh$ in Example~\\ref{redxh examples} are respected in Fig.~\\ref{DDnogarb}.\n\nAs a direct consequence of this result,\\Paper{ as for Milner's and Sangiorgi's interpretations,} our interpretation is not extensional, since\n$ \\PilmuTerm[`D`D] a \\wbisim \\Zero $, whereas $ \\PilmuTerm[`lx.`D`Dx] a \\ByDef \\PilmuTerm[l x . {`D`Dx}] a \\not\\wbisim \\Zero $.\n\n\n\nWe can show the following property.\n\n \\begin{lemma} \\label{redwh to redwxh lemma} \\label{redbmu to redwxh lemma}\n\nLet $M$ and $N$ be pure $\\lmu$-terms; then\n$M \\rtcredwh[\\nf] N$ if and only if there exists $N'$, $\\Ssub$ such that $M \\rtcredwxh[\\nf] N' \\, \\Ssub $, and $ N' \\, \\Ssub\\rtcredxsub[\\nf] N $.\n\n\n \\end{lemma}\n\n\n\n\nWe will now define equivalences $\\equivwbmu$ and $\\equivwh$ between terms of $\\lmu$, and $\\equivwxh$ between terms of $\\lmux$ (the last two are defined coinductively as bisimulations), that are based on weak reduction, and show that the last two equate the same pure $\\lmu$-terms.\nThese notions all consider terms without {\\WHNF} equivalent.\nThis is also the case for the approximation semantics we present in the next section.\n\nFirst we define a weak equivalence generated by the reduction relation $\\redwbmu$.\n\n \\begin{definition} \\label{wbmu equivalence}\nWe define $\\equivwbmu$ as the smallest congruence that contains:\n \\[ \\begin{array}{rcl@{\\quad}l}\nM, N \\emph{ have no \\WHNF} &\\Then& M \\equivwbmu N \\\\\n(`l x . M ) N & \\equivwbmu & M [ N \\For x ] \\\\\n(\\muterm `a . \\Cmd ) N & \\equivwbmu & \\muterm`g. \\Cmd [N{`.}`g \\For `a] & (`g \\textit{ fresh}) \\\\\n\\quad \\muterm`a.[`b]\\muterm`g.[`d]M & \\equivwbmu & \\muterm`a.([`d]M[`b \\For `g]) \\\\\n\\muterm `a . [`a] M & \\equivwbmu & M & (`a \\notele M)\n\\Comment\nM \\equivwbmu N \\And N \\equivwbmu L &\\Then& N \\equivwbmu L \\\\\n \\\\\nM &\\equivwbmu& M \\\\\nM \\equivwbmu N & \\Then &\n\\multicolumn{2}{l}{ \\begin{cases}\nN \\equivwbmu M \\\\\n`lx.M \\equivwbmu `lx.N \\\\\nLM \\equivwbmu LN \\\\\nML \\equivwbmu NL \\\\\n\\muterm `a . [`b] M \\equivwbmu \\muterm `a . 
[`b] N \\\n \end{cases} }\n\n \end{array} \]\n \end{definition}\n\n\Paper\nNotice that $`D`D \equivwbmu `W`W$ and $`lz.`D`D \equivwbmu `lz.`W`W$, but $`D`D \not \eqbmu `W`W$; moreover, $\equivwbmu$ is closed under reduction.\nIn Section~\ref{approximation section} we will show that two terms are equivalent in $\equivwbmu$ if and only if they have the same set of weak approximants.\n\n\nSince reduction is confluent, the following is immediate.\n\n \begin{proposition} \label{confluence equiv}\nIf $M \equivwbmu N$ and $M \rtcredwbmu \lmuwHNF$, then there exists $\lmuwHNF'$ such that $\lmuwHNF \equivwbmu \lmuwHNF'$ and $N \rtcredwbmu \lmuwHNF'$.\n \end{proposition}\n\Paper{Notice that Prop.~\ref{confluence eq} is formulated with respect to $\eqbmu$, not $\equivwbmu$.}\n\nThe other two equivalences we consider are generated by \emph{weak head reduction} and \emph{weak explicit head reduction}.\nWe will show in Theorem~\ref{equivwh is equivwxh} that these coincide for pure, substitution-free terms.\n\n \begin{definition} [Weak head equivalence] \label{equivwh definition}\nThe relation $\equivwh$ is defined co-inductively as the largest symmetric relation such that: $M \equivwh N$ if and only if either $M$ and $N$ have both no \WHNF, or both $M \rtcredwh[\nf] M'$ and $ N \rtcredwh[\nf] N'$, and either:\n \begin {itemize}\n\n \item if $M' = xM_1\dots M_n$ $(n \geq 0)$, then $ N' = xN_1\dots N_n$ and $ M_i \equivwh N_i $ for all $1 \seq i \seq n$; or\n\n \item if $M' =`lx.M''$, then $N' = `lx.N''$ and $M'' \equivwh N''$; or\n\n \item if $M' = \muterm`a.[`b]M''$, then $N' = \muterm`a.[`b]N''$ (so $`a \not= `b$ or $`a \ele \fn(M'')$, $M'' \not= \muterm`g.[`d] R$, and similarly for $N''$), and $M'' \equivwh N''$.\n\n \end {itemize}\n \end{definition}\nNotice that $`lz.`D`D \equivwh `lz.`W`W$ because $`D`D \equivwh `W`W$, since neither has a \WHNF.\n\n\Paper\nWe perhaps need to clarify the details of this definition.\nThe notion of weak head equivalence captures the fact that, once weak head reduction has finished, there are sub-terms that can be reduced further.\nThis process will generate (in principle) infinite terms and the equivalence expresses when this produces equal (infinite) terms.\nHowever, it also equates terms that have no \WHNF.\nAs can be seen from Def.~\ref{weak head reduction definition}, a context switch $\muterm`a.[`b]N$ is in {\WHNF} only if $N$ is; so when we state in the third case that $M \rtcredwh[\nf] \muterm`a.[`b]M''$, by the fact that this reduction has terminated, we know that $M''$ \emph{is} in \WHNF.\n\n\nWe will now define a notion of weak explicit head equivalence, that, in approach, corresponds to the weak head equivalence but for the fact that now explicit substitutions are part of terms.\n\n \begin{definition} [Weak explicit head-equivalence] \label{equivwxh definition}\nThe relation $\equivwxh$ is defined co-in\-ductively as the largest symmetric relation such that: $M \equivwxh N$ if and only if either $M$ and $N$ have both no $\redwxh$-normal form, or both $M \rtcredwxh[\nf] M' \, \Ssub $ and $ N \rtcredwxh[\nf] N' \, \Ssub' $, and either:\n \begin {itemize}\n\n \item if $M' = xM_1\dots M_n $ $(n \geq 0)$, then $ N' = xN_1\dots N_n $ (so $x \notele \Ssub$, $x \notele \Ssub'$) and $M_i \, \Ssub \equivwxh N_i \, \Ssub ' $ for all $1 \seq i \seq n$; or\n\n \item if $M' =`lx.M''$, then $N' = `lx.N''$ and $M'' \, \Ssub \equivwxh N'' \, \Ssub' $; or\n\n \item if 
$M' = \\muterm`a.[`b]M''$, then $N' = \\muterm`a.[`b]N''$ (so $`a \\not= `b$ or $`a \\ele \\fn(M'')$, $M'' \\not= \\muterm`g.[`d] R$, so $`b \\notele \\Ssub$, $`b \\notele \\Ssub'$, and similarly for $N''$) and $M'' \\, \\Ssub \\equivwxh N'' \\, \\Ssub' $.\n\n \\end {itemize}\n \\end{definition}\nNotice that $\\muterm`a.[`b]`D`D \\equivwxh `D`D$.\n\n\nThe following results formulate the strong relation between $\\equivwh$ and $\\equivwxh$, and therefore between $\\redwh$ and $\\redwxh$.\nWe first show that pure terms that are equivalent under $\\equivwxh$ are also so under $\\equivwh$.\n\n\n \\begin{lemma}\nLet $M$ and $N$ be pure $\\lmu$-terms; then $M \\equivwh N $ if and only if there are $M'$, $N'$ such that $M' \\rtcredxsub[\\nf] M$ and $N' \\rtcredxsub[\\nf] N$, and $M' \\equivwxh N' $.\n\n \\begin{Proof}\n \\begin{description}\n\n \\item[only if]\nBy co-induction on the definition of $\\equivwh$.\nIf $M \\equivwh N $, then either:\n \\begin {itemize}\n\n \\item $M \\rtcredwh[\\nf] xM_1\\dots M_n$ and $N \\rtcredwh[\\nf] xN_1\\dots N_n$ and $ M_i \\equivwh N_i$, for all $1 \\seq i \\seq n$.\nThen, by Lem.~\\ref{redwh to redwxh lemma}, there are $\\Vect{M'_i}$ such that \\Paper{both}\n \\[ \\begin{array}{ccccc}\nM &\\rtcredwxh[\\nf]& xM'_1\\dots M'_n ~ \\Ssub &\\rtcredxsub& xM_1\\dots M_n\n\\\\\nN &\\rtcredwxh[\\nf]& xN'_1\\dots N'_n ~ \\Ssub' &\\rtcredxsub& xN_1\\dots N_n\n \\end{array} \\]\nBut then $M'_i \\, \\Ssub \\rtcredxsub[\\nf] M_i$ and $N'_i \\, \\Ssub' \\rtcredxsub[\\nf] N_i$, for all $1 \\seq i \\seq n$; then by induction, $M'_i \\, \\Ssub \\equivwxh N'_i \\, \\Ssub' $ for all $1 \\seq i \\seq n$.\nBut then $M \\equivwxh N$.\n\n\n\n\n \\end {itemize}\nThe other cases are similar.\n\n\n \\item[if]\nBy co-induction on the definition of $\\equivwxh$.\nIf there are $M'$, $N'$ such that $M' \\rtcredxsub[\\nf] M$ and $N' \\rtcredxsub[\\nf] N$, and $M' \\equivwxh N' $, then\neither:\n\n \\begin {itemize}\n\n \\item $M' \\rtcredwxh[\\nf] xM'_1\\dots M'_n ~ \\Ssub $, $N' \\rtcredwxh[\\nf] xN'_1\\dots N'_n ~ \\Ssub' $ and $ \\Vect{M'_i \\, \\Ssub \\equivwxh N'_i \\, \\Ssub' } $.\nLet, for all $1 \\seq i \\seq n$, $M'_i \\, \\Ssub \\rtcredxsub[\\nf] M_i$ and $N'_i \\, \\Ssub \\rtcredxsub[\\nf] N_i$ then by induction, $\\Vect{M_i \\equivwh N_i}$.\nNotice that we have $M' \\rtcredwxh[\\nf] xM'_1\\dots M'_n ~ \\Ssub \\rtcredxsub[\\nf]$ $xM_1\\dots M_n$.\nLet $M' = M'' \\, \\Ssub'' $, so $ M'' \\, \\Ssub'' \\rtcredwxh[\\nf] xM'_1\\dots M'_n ~ \\Ssub'\\,\\Ssub'' $, where $\\Ssub = \\Ssub'\\,\\Ssub'' $.\nLet $M'' \\, \\Ssub'' \\rtcredxsub[\\nf] M $, then by Lem.~\\ref{redwh to redwxh lemma}, we also have $M \\rtcredwxh[\\nf] xM''_1\\dots M''_n ~ \\Ssub' \\rtcredwh[\\nf] xM_1\\dots M_n $.\nThen, again by Lem.~\\ref{redwh to redwxh lemma}, $M \\rtcredwh[\\nf] xM_1\\dots M_n$; likewise, we have $N \\rtcredwh[\\nf] xN_1\\dots N_n$.\nBut then $M \\equivwh N$.\n\n\n\n\n \\end {itemize}\nThe other cases are similar.\\quad\\usebox{\\proofbox}\n\n \\end{description}\n \\end{Proof}\n\n \\end{lemma}\n\nNotice that this lemma in fact shows:\n\n \\begin{theorem} \\label{equivwh is equivwxh}\nLet $M,N \\ele \\lmu$, then $M \\equivwxh N \\Iff M \\equivwh N$.\n \\end{theorem}\n\n\\Comment\n\n \\begin{lemma} \\label{equivwxh implies equivwh}\nFor $M,N \\ele \\lmu$: $M \\equivwxh N \\Then M \\equivwh N$.\n \\end{lemma}\n\n \\begin{Proof}\nWe define $ {\\equivR} \\subseteq \\lmu^2 $ by: $ M \\equivR N $ if and only if $ M,N$ have no {\\WHNF} or there exist $M',N'\\ele \\lmux $ such that $ M' \\equivwxh N' $, 
$ M' \\rtcredxsub M $, and $ N' \\rtcredxsub N $.\nWe show that, for $M,N \\ele \\lmu$: $M \\equivwxh N \\Then M \\equivR N \\Then M \\equivwh N$.\n\\CLAC{ \\leftmargini 0pt}\n \\begin{description}\n\n \\item [$M \\equivR N \\Then M \\equivwh N$]\nLet $M',N'\\ele \\lmux$ be such that $M' \\equivwxh N'$, $M' \\rtcredxsub M$, and $N' \\rtcredxsub N $.\nWe reason by co-induction on the definition of $\\equivwxh$.\n\n \\begin {itemize} \\itemsep 3pt\n\n\n \\item $M$ and $N$ have both no \\WHNF; then also $M \\equivwh N$.\n\n\n \\item Assume $M' \\rtcredwxh[\\nf] xM'_1\\dots M'_n ~ \\Ssub $ and $N' \\rtcredwxh[\\nf] xN'_1\\dots N'_n ~ \\Ssub' $ such that $ M'_i \\, \\Ssub \\equivwxh N'_i \\, \\Ssub' $ for all $1 \\seq i \\seq n$.\n\nLet $M'_i \\, \\Ssub \\rtcredxsub[\\nf] M_i$ and $N'_i \\, \\Ssub \\rtcredxsub[\\nf] N_i$ (mark that $\\rtcredxsub[\\nf]$ just removes the explicit substitutions), then by definition of $\\equivR$ we have $M_i \\equivR N_i$, so by induction, $M_i \\equivwh N_i$, for all $1 \\seq i \\seq n$.\nThen also $xM'_1\\dots M'_n ~ \\Ssub \\rtcredxsub[\\nf] xM_1\\dots M_n$.\n\nAssume that $M'$ has some substitutions, so $M' = M'' \\, \\Ssub'' $, then also $ M'' \\, \\Ssub'' \\rtcredwxh[\\nf] xM'_1\\dots M'_n ~ \\Ssub'\\,\\Ssub'' $, where $\\Ssub = \\Ssub'\\,\\Ssub'' $.\n\nLet $M$ be the result of removing all substitutions in $M''$, so such that $M'' \\, \\Ssub'' \\rtcredxsub[\\nf] M $, then by Lem.~\\ref{redwh to redwxh lemma}, $M \\rtcredwh[\\nf] xM_1\\dots M_n$; likewise, we can show that $N \\rtcredwh[\\nf] xN_1\\dots N_n$.\nSo we have shown that $M_i \\equivwh N_i$ for all $1 \\seq i \\seq n$, and we obtain $M \\equivwh N$.\n\n\\Paper\n \\item Assume {$M' \\rtcredwxh[\\nf] `lx.M'_0 \\, \\Ssub $ and $N' \\rtcredwxh[\\nf] `lx.N'_0 \\, \\Ssub' $ such that $M'_0 \\, \\Ssub \\equivwxh N'_0 \\, \\Ssub' $}.\nLet $M'_0 \\, \\Ssub \\rtcredxsub[\\nf] M_0$ and $N'_0 \\, \\Ssub \\rtcredxsub[\\nf] N_0$,\nthen $M_0 \\equivR N_0$ by definition, so by induction, $M_0 \\equivwh N_0$.\nThen also $`lx.M'_0 \\, \\Ssub \\rtcredxsub[\\nf] `lx.M_0$.\n\nLet $M' = M'' \\, \\Ssub'' $, then also $ M'' \\, \\Ssub'' \\rtcredwxh[\\nf] `lx.M'_0 ~ \\Ssub'\\,\\Ssub'' $, where $\\Ssub = \\Ssub'\\,\\Ssub'' $.\nTake $M$ such that $M'' \\, \\Ssub'' \\rtcredxsub[\\nf] M $, then by Lem.~\\ref{redwh to redwxh lemma}, $M \\rtcredwh[\\nf] `lx.M_0$; likewise, we can show that $N \\rtcredwh[\\nf] `lx.N_0$.\nBut then $M \\equivwh N$.\n\n\n\n \\item Assume {$M' \\rtcredwxh[\\nf] \\muterm`a.[`b]M_0$ and $N' \\rtcredwxh[\\nf] \\muterm`a.[`b]N_0$, such that $M_0 \\equivh N_0 $ and both have a \\WHNF.}\nLet $M_0 \\, \\Ssub \\rtcredxsub[\\nf] M'_0$ and $N_0 \\, \\Ssub \\rtcredxsub[\\nf] N'_0$,\nthen $M'_0 \\equivR N'_0$, so by induction, $M_0 \\equivwh N_0$.\nThen also $\\muterm`a.[`b]M_0' \\, \\Ssub \\rtcredxsub[\\nf] \\muterm`a.[`b]M_0$.\nLet $M' = M'' \\, \\Ssub'' $, then also $ M'' \\, \\Ssub'' \\rtcredwxh[\\nf] \\muterm`a.[`b]M_0 ~ \\Ssub'\\,\\Ssub'' $, where $\\Ssub = \\Ssub'\\,\\Ssub'' $.\nTake $M$ such that $M'' \\, \\Ssub'' \\rtcredxsub[\\nf] M $, then by Lem.~\\ref{redwh to redwxh lemma}, $M \\rtcredwh[\\nf] \\muterm`a.[`b]M_0$; likewise, we can show that $N \\rtcredwh[\\nf] \\muterm`a.[`b]N_0$.\nBut then $M \\equivwh N$.\n\n\n\n \\item $M' \\rtcredwxh[\\nf] xM'_1\\dots M'_n $ or $M' \\rtcredwxh[\\nf] \\muterm`a.[`b]M_0$: similar\n\n\n \\end {itemize}\n\nSo $M \\equivR N$ implies $M \\equivwh N$.\n\n \\item [$M,N \\ele \\lmu \\Then ( M \\equivwxh N \\Then M \\equivR N )$]\nTake $M' = M$, and $N' = 
N$.\n\\quad\\usebox{\\proofbox}\n \\end{description}\n \\end{Proof}\n\n\nWe can also show the converse of the previous result, which is:\n\n \\begin{lemma} \\label{equivwh implies equivwxh}\nFor $M,N \\ele \\lmu$: $M \\equivwh N \\Then M \\equivwxh N$.\n \\end{lemma}\n\n \\begin{Proof}\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\nWe can now easily show:\n\n \\begin{theorem} \\label{equivwh is equivwxh}\nLet $M,N \\ele \\lmu$, then $M \\equivwxh N$ if and only if $M \\equivwh N$.\n \\end{theorem}\n\n \\Paper{\\begin{Proof}\nBy Lem.~\\ref{equivwxh implies equivwh} and~\\ref{equivwh implies equivwxh}.\n\\quad\\usebox{\\proofbox}\n \\end{Proof}}\n\n\n\n\n\n\\CLAC{ \\section{Full abstraction for the logical interpretation} \\label{Full abstraction} \\label{approximation section}\n\nIn this section we will show our main result, that the logical encoding is fully abstract with respect to weak equivalence between pure $\\lmu$-terms.\nTo achieve this, we show in Thm.~\\ref{FA equivxh equivC} that $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $ \\emph{iff} $M \\equivwxh N$.\nWe are thus left with the obligation to show that $M \\equivwxh N$ \\emph{iff} $M \\equivwbmu N$.\nIn Thm.~\\ref{equivwh is equivwxh} we have shown that $M \\equivwxh N$ \\emph{iff} $M \\equivwh N$, for pure terms; to achieve $M \\equivwh N$ \\emph{iff} $M \\equivwbmu N$, we go through a notion of \\emph{weak approximation}; based on Wadsworth's approach \\cite{Wadsworth'76}, we define $\\equivwA$ that expresses that terms have the same weak approximants and show that $ M \\equivwh N $ \\emph{iff} $ M \\equivwA N $ \\emph{iff} $ M \\equivwbmu N $.\n\n\n}\n\nWe can \\Paper{also }show \\Paper{the following results that state }that if the interpretation of $M$ produces an output, then $M$ reduces by head reduction to an abstraction; similarly, if the interpretation of $M$ produces an input, then $M$ reduces by head reduction to a term with a head variable.\n\n\n \\begin{lemma} \\label{pi reduction characterisation}\n \\begin{enumerate} \\itemsep 0pt\n\n \\item\nIf $\\PilmuTerm [M] a \\Outson a $, then there exist $x, N$ and $\\Ssub$ such that $ \\PilmuTerm [M] a \\wbisim \\PilmuTerm [`lx.N\\, \\Ssub] a $, and $ M \\rtcredwxh[\\nf] `lx.N \\, \\Ssub $.\n\n \\item\nIf $\\PilmuTerm [M] a \\Outson c $, with $a \\not= c$, then there exist $`a, c, x, N$ and $\\Ssub$ such that $ \\PilmuTerm [M] a \\wbisim \\PilmuTerm [\\muterm`a.[c]`lx.N\\, \\Ssub] a $, and $ M \\rtcredwxh[\\nf] \\muterm`a.[c]`lx.N \\, \\Ssub $.\n\n\n \\item\nIf $\\PilmuTerm [M] a \\Inson x $, then there exist $\\Vect{z_j}, x, \\Vect{N_i}, c$ and $\\Ssub$ with $x \\notele \\Vect{z_j}$, $m \\geq 0$, and $n \\geq 0$ such that\n \\begin{itemize}\n \\item $\\PilmuTerm [M] a \\wbisim \\PilmuTerm [`lz_1\\dots z_m.xN_1\\dots N_n \\, \\Ssub] c $;\n \\item $ M \\rtcredwxh[\\nf] `lz_1\\dots z_m.xN_1\\dots N_n \\, \\Ssub $ if $a = c$;\n \\item $ M \\rtcredwxh[\\nf] \\muterm`a.[c]`lz_1\\dots z_m.xN_1\\dots N_n[a \\For `a] \\, \\Ssub $, if $a \\not= c$.\n \\end{itemize}\n\n \\end{enumerate}\n \\end{lemma}\n\n \\begin{Proof}\nStraightforward.\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\n\n\\Comment\n \\[ \\begin{array}{lcl}\n\\SPilmuTerm [(`lx.x)(`ly.z)] a\n\t&=& \\\\\n\\PilmuTerm [a {l x . x} `ly.z ] a\n\t&=& \\\\\n\\PilmuTerm [s {v x} x := `ly.z ] a\n\t&=& \\\\\n\\New w ( \\BEq w=a \\Par \\PilmuTerm [l y . z ] w )\n\t&=& \\\\\n\\PilmuTerm [l y . 
{v z} ] a\n \\end{array} \\]\n\n\nAs to the reverse, we can show:\n\n \\begin{lemma} \\label{bmu reduction characterisation}\n \\begin{enumerate} \\itemsep 0pt\n\n \\item\nIf $ M \\rtcredwxh[\\nf] `lx.N \\, \\Ssub $, then $ \\PilmuTerm [M] a \\Outson a $.\n\n \\item\nIf $ M \\rtcredwxh[\\nf] \\muterm`a.[`b]`lx.N \\, \\Ssub $, then $ \\PilmuTerm [M] a \\Outson `b $.\n\n \\item\n$\\PilmuTerm [M] a \\Inson x $ if $ M \\rtcredwxh[\\nf] xN_1\\dots N_n \\, \\Ssub $ or $ M \\rtcredwxh[\\nf] \\muterm`a.[`b]xN_1\\dots N_n \\, \\Ssub $.\n\n \\end{enumerate}\n \\end{lemma}\n\n \\begin{Proof}\nStraightforward.\n\\quad\\usebox{\\proofbox}\n \\end{Proof}\n\n\n}\n\n\\Paper{ \\section{Weak approximation for \\texorpdfstring{$\\lmu$}{}}\n\\label{approximation section}\n\nThe notions of \\emph{approximant} and \\emph{approximation} were first introduced by Wadsworth for the {\\LC}~\\cite{Wadsworth'76}, where they are used in order to better express the relation between equivalence of meaning in Scott's models and the usual notions of conversion and reduction.\nWadsworth defines approximation of terms through the replacement of any parts of a term remaining to be evaluated (\\emph{i.e.}~$`b$-redexes) by $\\bot$.\nRepeatedly applying this process over a reduction sequence starting with $M$ gives a set of approximants, each giving some - in general incomplete - information about the result of reducing $M$.\nOnce this reduction produces $ `lx.yN_1\\dots N_n $, all remaining redexes occur in $N_1, \\ldots, N_n$, which then in turn will be approximated.\n\nFollowing this approach, Wadsworth \\cite{Wadsworth'76} defines $\\SetAppr(M)$ (similar to Def.~\\ref{approximation} below) as the set of approximants of the $`l$-term $M$, which forms a meet semi-lattice.\nIn \\cite{Wadsworth'78}, the connection is established between approximation and semantics, by showing\n \\[ \\begin{array}{rcl}\n\\Sem{M}_{\\Dinfty} \\, p &=& \\bigsqcup \\, \\Set { \\Sem{A}_{\\Dinfty} \\, p \\mid A \\ele \\SetAppr(M) }.\n \\end{array} \\]\nSo, essentially, approximants are partially evaluated expressions in which the locations of incomplete evaluation (\\emph{i.e.}~where reduction \\emph{may} still take place) are explicitly marked by the element $\\bottom$; thus, they \\emph{approximate} the result of computations. 
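As a small concrete illustration (this example is ours, added here for exposition; it is not taken from \cite{Wadsworth'76} itself): for $M = (`lx.y(xx))(`lx.y(xx))$ we have $M \redbmu yM \redbmu y(yM) \redbmu \cdots$, so the approximants of $M$ are $\bot, y\bot, y(y\bot), \ldots$, each one marking with $\bot$ exactly the part of the result that still remains to be computed.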
Intuitively, an approximant can be seen as a `snapshot' of a computation, where we focus on that part of the resulting program which will no longer change, which corresponds to the (observable) \emph{output}.\n\n\n\CLAC{Essentially following \cite{Wadsworth'76}, w}\Paper{W}e now define a \emph{weak approximation semantics} for $\lmu$.\nApproximation for $\lmu$ has been studied by others as well \cite{Saurin'10,deLiguoro'13}; however, since we are mainly interested in \emph{weak} reduction here, we will define \emph{weak} approximants, which are normally not considered.\n\n \begin{definition}[Weak approximation for $\lmu$] \label{weak approximation lmu} \label{approximation}\n\n \begin{enumerate} \itemsep 0pt\n\n \item\nThe set of $\lmu$'s \emph{weak approximants} $\SetwApprlmu$ \Paper{with respect to $\redbmu$ }is defined through the grammar:\Paper{\footnote{For `normal' approximants, the case $`lx.\ftAppr$ normally demands that $\ftAppr \not= \bot$, as motivated by the relation with $\Dinfty$.}}\n\Paper{ \[ \begin{array}{rrl@{\quad}l}\n\wAppr & ::= & \bot \\\n\t&\mid & x\wAppr^1\dots \wAppr^n & (n \geq 0) \\\n\t&\mid & `lx.\wAppr \\\n\t&\mid & \muterm`a.[`b]\wAppr & (`a \not= `b \textrm{ or } `a \in \wAppr,\n\t\wAppr \not= \muterm `g.[`d] \wAppr', ~ \wAppr \not= \bot )\n \end{array} \]\n}\CLAC{\n \[ \begin{array}{rrl@{\quad}l}\n\wAppr & ::= &\n\bot \mid `lx.\wAppr \mid x\wAppr^1\dots \wAppr^n & (n \geq 0)\n\\\n\t&\mid & \muterm`a.[`b]\wAppr & (`a \not= `b \textrm{ or } `a \in \wAppr, ~\n\t\wAppr \not=\t\muterm `g.[`d] \wAppr', ~ \wAppr \not= \bot )\n \end{array} \]\n}\n\n \item\nThe relation ${\dirapp} \subseteq \SetwApprlmu \times \lmu $ is \Paper{defined as }the smallest preorder that is the compatible extension of $\bot \dirapp M$.\Paper{\footnote{Notice that if $\ftAppr_1 \dirapp M_1$, and $\ftAppr_2 \dirapp M_2$, then $\ftAppr_1\ftAppr_2$ need not be an approximant; it is one if $\ftAppr_1 = x\ftAppr_1^1\dots \ftAppr_1^n$, perhaps prefixed with a number of context switches of the shape $\muterm`a.[`b]$.}\n \[ \begin {array}{rcl}\n \bottom &\dirapp& M \\\nM \dirapp M' & \Then & `l x . M \dirapp `l x . M' \\\nM \dirapp M' & \Then & \muterm `g.[`d] M \dirapp \muterm `g.[`d] M' \\\nM_1 \dirapp M'_1 \And M_2 \dirapp M'_2 & \Then & M_1M_2 \dirapp M_1'M_2'\n \end {array} \] \n\n\n \item \label{appr def}\n\Paper{ The set of \emph{weak approximants} of $M$, $\SetwApprlmu(M)$, is defined as:\footnote{Notice that we use $\redbmu$ here, not $\redwbmu$; the approximants are weak, not the reduction.}%\n \[ \begin{array}{rcl}\n\Set{ \wAppr \ele \SetwApprlmu \mid \Exists N \ele \lmu \Pred[ M \rtcredbmu N \And \wAppr \dirapp N] } .\n \end{array} \]}\n\CLAC{ $ \begin{array}{@{}rcl}\n\SetwApprlmu(M) & \ByDef & \Set{ \wAppr \ele \SetwApprlmu \mid \Exists N \ele \lmu \Pred[ M \rtcredbmu N \And \wAppr \dirapp N] } .\n \end{array} $}\n\n \item\n\emph{Weak approximation equivalence} is defined through:\n $ \begin{array}{rcl@{}}\nM \equivwA N &\ByDef& \SetwApprlmu(M) = \SetwApprlmu(N)\n \end{array} $.\n\n \end{enumerate}\n \end{definition}\nNotice that, in part~\ref{appr def}, the approximants are weak, not the reduction.\n\n\Comment{ Notice that\n$ \begin{array}[t]{ccccc}\n\SetwApprlmu(`lz.`D`D)\n\t&=&\n\Set{\bot, `lz.\bot}\n\t&=&\n\SetwApprlmu(`lz.`W`W)\n\t\\\n\SetwApprlmu(\muterm `a . 
[`b] `D`D)\n\t&=&\n\\Set{\\bot}\n\t&=&\n\\SetwApprlmu(`D`D)\n \\end{array} $ \\\\ }\n\nThe relationship between the approximation relation and reduction is characterised by\\Paper{ the following result}:\n \\begin{lemma} \\label{approximation lemma redbmu}\n \\begin{enumerate}\n\n \\item If $\\wAppr \\dirapp M$ and $M \\rtcredbmu N$, then $\\wAppr \\dirapp N$.\n\n \\item If $ \\wAppr \\ele \\SetwApprlmu(N) $ and $ M \\rtcredbmu N $, then also $ \\wAppr \\ele \\SetwApprlmu(M) $.\n\n \\item If $\\wAppr \\ele \\SetwApprlmu(M)$ and $M \\redbmu N$, then there exists $L$ such that $N \\rtcredbmu L$ and $\\wAppr \\dirapp L$.\n\n \\item $M$ is a {\\WHNF} if and only if there exists $\\wAppr \\not= \\bot $ such that $ \\wAppr \\dirapp M $.\n\n \\end{enumerate}\n \\end{lemma}\n\n\\Paper{ \\begin{Proof}\nEasy. \\quad\\usebox{\\proofbox}\n \\end{Proof}}\n\n\n\\Paper\nWe could also have defined the set of approximants of a term coinductively:\n\n \\begin{definition} \\label{alternative approximation}\nWe define $\\SetwApprlmuAlt(M)$ coinductively by:\n\n \\begin {itemize}\n\n \\item If $ \\wAppr \\dirapp M $, then $ \\wAppr \\ele \\SetwApprlmuAlt(M)$.\n\n \\item\nif $M \\rtcredwh xM_1\\dots M_n$ $(n \\geq 0)$, then\n$ \\SetwApprlmuAlt(M) = \\Set { x\\wAppr^1\\dots \\wAppr^n \\mid \\wAppr^i \\ele \\SetwApprlmuAlt(M_i) , 1 \\seq i \\seq n } $.\n\n \\item if $M \\rtcredwh `lx.N$, then\n$ \\SetwApprlmuAlt(M) = \\Set { `lx.\\wAppr \\mid \\wAppr \\ele \\SetwApprlmuAlt(N) } $.\n\n \\item if $M \\rtcredwh \\muterm`a.[`b]N$, with $`a \\not= `b$ or $`a \\ele \\fn(N)$, $N \\not= \\muterm`g.[`d] L$, then\n$ \\SetwApprlmuAlt(M) = \\Set { \\muterm`a.[`b]\\wAppr \\mid \\wAppr \\ele \\SetwApprlmuAlt(N) } $.\n\n \\end {itemize}\n \\end{definition}\n\nWe can show that these definitions coincide:\n\n \\begin{lemma} \\label{coincide lemma}\n$\\SetwApprlmuAlt(M) = \\SetwApprlmu(M)$.\n \\end{lemma}\n\n \\begin{Proof}\n \\begin{description}\n \\item[$\\subseteq$]\nIf $\\wAppr \\ele \\SetwApprlmuAlt(M)$, then by Def.~\\ref{alternative approximation} either:\n\n \\begin{description}\n\n \\item [$ \\wAppr \\dirapp M $] Immediate.\n\n \\item [$ \\wAppr = x\\wAppr^1\\dots \\wAppr^n $]\nThen $ M \\rtcredwh xM_1\\dots M_n $, with $ \\wAppr^i \\ele \\SetwApprlmuAlt(M_i) $.\nBy coinduction, also $ \\wAppr^i \\ele \\SetwApprlmu(M_i) $; then, by Def.~\\ref{weak approximation lmu}, there exist $M'_i$ such that $ M_i \\rtcredbmu M'_i $ and $ \\wAppr^i \\dirapp M'_i $.\nThen, since ${\\rtcredwh} \\subseteq {\\rtcredbmu}$, in particular $ M \\rtcredbmu xM'_1\\dots M'_n$ and also $ \\wAppr \\dirapp xM'_1\\dots M'_n $, so $ \\wAppr \\ele \\SetwApprlmu(M) $.\n\n\n\n \\end{description}\nThe other cases are similar.\n\n \\item[$\\supseteq$]\nIf $ \\wAppr \\ele \\SetwApprlmu(M) $, then by Def.~\\ref{weak approximation lmu}, there exists $N$ such that $ M \\rtcredbmu N $ and $ \\wAppr^i \\dirapp N $.\nNow either:\n\n \\begin{description}\n\n \\item [$ \\wAppr \\dirapp M $] Trivial.\n\n \\item [$ \\wAppr = x\\wAppr^1\\dots \\wAppr^n $]\nSince $ x\\wAppr^1\\dots \\wAppr^n \\dirapp N $, $ N = xN_1\\dots N_n $, and $ \\wAppr^i \\dirapp N_i $.\nThen by Def.~\\ref{alternative approximation} $ \\wAppr^i \\ele \\SetwApprlmuAlt(N_i) $, and by induction, $ \\wAppr^i \\ele \\SetwApprlmu(N_i) $.\nBy Lem.~\\ref{weak head reduction}, there exist $M_i$ $(1 \\seq i \\seq n)$ such that $M \\rtcredwh xM_1 \\dots M_n \\rtcredbmu xN_1\\dots N_n $; since ${\\rtcredwh} \\subseteq {\\rtcredbmu}$, this implies that $M_i \\rtcredbmu N_i$; then by Lem.~\\ref{approximation 
lemma redbmu}, \n$ \\wAppr^i \\ele \\SetwApprlmu(M_i) $.\nThen, by Def.~\\ref{alternative approximation}, $ \\wAppr \\ele \\SetwApprlmuAlt(M) $.\n\n\n\n \\end{description}\nThe other cases are similar.\n \\end{description}\n \\end{Proof}\nAs a result, below we will use whichever definition is convenient.\n\nNotice that the first and latter cases in Def.~\\ref{alternative approximation} overlap for terms that are already in head-normal form. In fact, we can show:\n\n \\begin{lemma}\nIf $\\wAppr \\ele \\SetwApprlmu(M) $ and $ \\wAppr' \\dirapp \\wAppr $, then $ \\wAppr' \\ele \\SetwApprlmu(M) $.\n \\end{lemma}\n \\begin{Proof}\nBy Def.~\\ref{weak approximation lmu} and transitivity of $\\dirapp$.\\quad\\usebox{\\proofbox}\n \\end{Proof}\nBy Lem.~\\ref{coincide lemma}, this result also holds for $ \\SetwApprlmuAlt(`.) $.\n\n\n\n\nAs is standard in other settings, interpreting a $\\lmu$-term $M$ through its set of weak approximants $\\SetwApprlmu(M)$ gives a semantics.\n\n\n \\begin{theorem} [Weak approximation semantics] \\label{approx seman lmux}\nIf $M \\eqbmu N$, then $M \\equivwA N$%\n\\Comment{$\\SetApprlmu(M) = \\SetwApprlmu(N) $}.\n \\end{theorem}\n\n \\begin{Proof}\n\\CLAC{Using Prop.~\\ref{confluence eq} and Lem.~\\ref{approximation lemma redbmu}.\\quad\\usebox{\\proofbox}}\n\\Paper{\n\\setbox100=\\hbox{\\hspace*{\\leftmargini}\\textit{Proof. }}\n$\n\\begin {array}[t]{@{}lclcl}\nM \\eqbmu N \\And \\wAppr \\ele \\SetwApprlmu(M)\n\t& \\Then & \\\\\nM \\eqbmu N \\And \\Exists L \\Pred[ M \\rtcredbmu L \\And \\wAppr \\dirapp L ]\n\t& \\Then & (\\textit{\\ref{confluence eq}}) \\\\\n\\Exists L,K \\Pred[ L \\rtcredbmu K \\And N \\rtcredbmu K \\And \\wAppr \\dirapp L ]\n\t& \\Then & (\\textit{\\ref{approximation lemma redbmu}}) \\\\\n\\Exists K \\Pred[ N \\rtcredbmu K \\And \\wAppr \\dirapp K ]\n\t& \\Then &\n\\wAppr \\ele \\SetwApprlmu(N) \\quad\\usebox{\\proofbox}\n \\end{array} $\n}\n\\Comment{\n$ \\kern-6mm \\begin {array}[t]{@{}lllll} \\kern6mm\nM \\eqbmu N \\And \\wAppr \\ele \\SetwApprlmu(M)\n\t& \\Then &\n\\kern -10mm\nM \\eqbmu N \\And \\Exists L \\Pred[ M \\rtcredbmu L \\And \\wAppr \\dirapp L ]\n\t& \\Then (\\textit{\\ref{confluence eq}}) \\\\\n\\Exists L,K \\Pred[ L \\rtcredbmu K \\And N \\rtcredbmu K \\And \\wAppr \\dirapp L ]\n\t& \\Then (\\textit{\\ref{approximation lemma redbmu}}) &\n\\Exists K \\Pred[ N \\rtcredbmu K \\And \\wAppr \\dirapp L ]\n\t& \\Then & \\\\\n\\wAppr \\ele \\SetwApprlmu(N) \\quad\\usebox{\\proofbox}\n \\end{array} $\n}\n \\end{Proof}\n\nThe reverse implication of this result does not hold, since terms without {\\WHNF} (which have only $\\bot$ as approximant) are not all related by reduction.\nBut we can show the following full abstraction result:\n\n \\begin{theorem} [Full abstraction of $\\equivwbmu$ versus $\\equivwA$]\n\\label{full abstr approx seman Lmux}\n$M \\equivwbmu N$ if and only if $M \\equivwA N$.\n \\end{theorem}\n\n \\begin{Proof}\n\n\n \\begin {description}\n\n \\item[if]\nBy co-induction on the definition of the set of weak approximants.\n\\CLAC{ \\leftmarginii 0pt}\n\\Paper{%\nIf $\\SetwApprlmu(M) = \\Set{\\bottom} = \\SetwApprlmu(N)$, then both $M$ and $N$ have no {\\WHNF}, so $M \\equivwbmu N$.\nOtherwise, either:\n\n \\begin{description} \\itemindent -1em\n\n \\item[$x\\wAppr^1\\dots \\wAppr^n \\ele \\SetwApprlmu(M) \\And x\\wAppr^1\\dots \\wAppr^n \\ele \\SetwApprlmu(N)$] \\CLAC{ ~ }\n\nThen by Def.~\\ref{alternative approximation}\n$M \\rtcredwh xM_1\\dots M_n$ for some $M_i$ ($1 \\seq i \\seq n$) and $\\wAppr^i \\ele 
\\SetwApprlmu(M_i)$.\nLikewise, there exist $N_i$ ($1 \\seq i \\seq n$) such that $N \\rtcredwh xN_1\\dots N_n$ and $\\wAppr^i \\ele \\SetwApprlmu(N_i)$.\nSo $\\SetwApprlmu(M_i) = \\SetwApprlmu(N_i)$ and by induction $M_i \\equivwbmu N_i$, for $1 \\seq i \\seq n$.\nSince $\\equivwbmu$ is a congruence, also $xM_1\\dots M_n \\equivwbmu xN_1\\dots N_n$; since $\\equivwbmu$ is closed under reduction $\\redwbmu$, it is also under $\\redwh$, and we have $M \\equivwbmu N$.\n\n\n\\Comment{%\nAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n\n \\item[$x\\wAppr^1\\dots \\wAppr^n \\ele \\SetwApprlmu(M) \\And x\\wAppr^1\\dots \\wAppr^n \\ele \\SetwApprlmu(N)$] \\CLAC{ ~ }\n\nIf $x\\wAppr^1\\dots \\wAppr^n \\ele \\SetwApprlmu(M)$, then there exist $L$ such that $M \\rtcredbmu L$ and $x\\wAppr^1\\dots \\wAppr^n \\dirapp L$, so $L = xL_1\\dots L_n$ with $\\wAppr^i \\dirapp L_i$.\n\nThen, by Lem.~\\ref{redbmu to redwxh lemma}, $M \\rtcredwh xM_1\\dots M_n$ with, for $1 \\seq i \\seq n$, $M_i \\rtcredwbmu L_i$; then $\\wAppr^i \\ele \\SetwApprlmu(M_i)$.\n\nLikewise, there exist $L'$ such that $N \\rtcredbmu L'$ and $x\\wAppr^1\\dots \\wAppr^n \\dirapp L'$, so $L' = xL'_1\\dots L'_n$ with $\\wAppr^i \\dirapp L'_i$, and $N \\rtcredwh xN_1\\dots N_n$ with, for $1 \\seq i \\seq n$, $N_i \\rtcredwbmu L'_i$; then $\\wAppr^i \\ele \\SetwApprlmu(N_i)$.\n\nSo $\\SetwApprlmu(M_i) = \\SetwApprlmu(N_i)$, so by induction $M_i \\equivwbmu N_i$, for $1 \\seq i \\seq n$.\nBut then $xM_1\\dots M_n \\equivwbmu xN_1\\dots N_n$; since $\\equivwbmu$ is closed under reduction, also $M \\equivwbmu N$.\n\n\n\n\n \\end {description}\nThe other cases are similar.\n\n\n \\item[only if]\nAs the proof of Theorem~\\ref{approx seman lmux}, but using Proposition~\\ref{confluence equiv} rather than~\\ref{confluence eq}.\\quad\\usebox{\\proofbox}\n\n \\end {description}\n \\end{Proof}\n\nWe can also show that weak head equivalence and weak approximation equivalence coincide:\n \\begin{theorem} \\label{eqA is eqh}\n$M \\equivwh N $ if and only if $M \\equivwA N$.\n \\end{theorem}\n\n \\begin{Proof}\nStraightforward, by coinduction.\\quad\\usebox{\\proofbox}\n\n\\Comment\n \\begin{description} \\itemsep 4pt\n\n \\item[if : $ \\SetwApprlmu(M) = \\SetwApprlmu(N) \\Then M \\equivwh N $]\nBy co-induction on the definition of the set of weak approximants.\n\n \\begin{description}\n\n \\item [$ \\SetwApprlmu(M) = \\Set{\\bottom} = \\SetwApprlmu(N) $]\nThen both $M$ and $N$ have no \\WHNF, so $ M \\equivwh N $.\n\n \\item [$ \\wAppr = x\\wAppr^1\\dots \\wAppr^n $]\nThen $ M \\rtcredwh xM_1\\dots M_n $, and $ \\wAppr^i \\ele \\SetwApprlmu(M_i) $, for $1 \\seq i \\seq n$.\nSince $ \\SetwApprlmu(M) = \\SetwApprlmu(N) $, also $ N \\rtcredwh xN_1\\dots N_n $, with $ \\wAppr^i \\ele \\SetwApprlmu(N_i) $, so $ \\SetwApprlmu(M_i) = \\SetwApprlmu(N_i) $.\nThen, by coinduction, $ M_i \\equivwh N_i $, so $ M \\equivwh N $.\n\n\n]\n \\end{description}\nThe other cases are similar.\n\n \\item[only if : $M \\equivwh N \\Then M \\equivwA N $]\nBy co-induction on the definition of $\\equivwh$.\n \\begin{description}\n\n \\item [$M$ and $N$ have no {\\WHNF}]\nThen $ \\SetwApprlmu(M) = \\Set{\\bottom} = \\SetwApprlmu(N) $.\n\n \\item[$ M \\rtcredwh xM_1\\dots M_n $]\nThen also $N \\rtcredwh xN_1 \\dots N_n$, and $ M_i \\equivwh N_i $ for $1 \\seq i \\seq n$, and by coinduction, $ M \\equivwA N $, so $ \\SetwApprlmu(M_i) = \\SetwApprlmu(N_i)$.\nThen, by Def.~\\ref{alternative approximation}, we have $ \\SetwApprlmu(M) = \\SetwApprlmu(N)$.\n\n\n\n \\end{description}\nThe other 
cases are similar.\n\n \\end{description}\n\n \\end{Proof}\n\nWe can \\Paper{also }define $\\Sem{M}_{\\SetwApprlmu} = \\sqcup \\, \\Set{ \\wAppr \\mid \\wAppr \\ele \\SetwApprlmu(M) } $, with $\\sqcup$ the least-upper bound with respect to $\\dirapp$; then $\\Sem{`.}_{\\SetwApprlmu}$ corresponds to the ($\\lmu$ variant of) L\\'evy-Longo trees.\nCombined with the results shown in the previous section, we now also have the following result that states that all equivalences coincide:\n \\begin{corollary} \\label{all lmu weak equivalences}\nLet $M,N \\ele \\lmu$, then $M \\equivwxh N \\Iff M \\equivwh N \\Iff M \\equivwA N \\Iff M \\equivwbmu N $.\n \\end{corollary}\n\n\n\n\\Paper{ \\section{Full abstraction for the logical interpretation} \\label{Full abstraction} }\n\nWe now come to the main result of this paper, where we show a full abstraction result for our logical interpretation.\nFirst we show the relation between weak explicit head equivalence and weak bisimilarity.\n\n \\begin{theorem} [Full abstraction of $\\wbisim$ versus $\\equivwxh$]\n\\label{FA equivxh equivC}\nFor any $M,N \\ele \\lmux$:\n$\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $ if and only if $M \\equivwxh N$.\n \\end{theorem}\n\n{\n \\begin{Proof}\n\n\\CLAC{ \\leftmargini 0pt\n}\n\n \\begin{description} \\itemsep 4pt\n\n \\item[if]\nBy co-induction on the definition of $\\equivwxh$.\nLet $M \\equivwxh N$, then either $M$ and $N$ have both no $\\redwxh$-normal form, so, by Lem.~\\ref{zero lemma}, their interpretations are both weakly bisimilar to the process $\\Zero$; or\nboth $M \\rtcredwxh[\\nf] M' \\, \\Ssub $ and $ N \\rtcredwxh[\\nf] N' \\, \\Ssub' $ (let $\\Ssub = \\Vexsub y := P \\, \\Vexcontsub `a := Q . `b $, and $\\Ssub' = \\Vexsub y := P' \\, \\Vexcontsub `a := Q' . `b $), and either:\n\n \\begin{description}\n \\CLAC{ \\itemindent -5mm}\n\n \\item [$M' = xM_1\\dots M_n $ $(n \\geq 0)$, $ N = xN_1\\dots N_n $ and $M_i \\, \\Ssub \\equivwxh N_i \\, \\Ssub ' $, for all $1 \\seq i \\seq n$] ~\n\nWe have $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[xM_1\\dots M_n \\, \\Ssub] a $ and $\\PilmuTerm[N] a \\wbisim \\PilmuTerm[xN_1\\dots N_n \\, \\Ssub'] a $ by Corollary~\\ref{weak soundness}.\nNotice that\n \\[ \\begin{array}{lcl}\n\\PilmuTerm [xM_1\\dots M_n \\, \\Ssub] a &=&\n\\New \\Vect{cy`a} (\\PilmuTerm [v x] c_1 \\Par {\\VPiExContSub c_i := M_i . c_{i+1} } \\Par {} \\PiLmuSem[\\Ssub] )\n \\end{array} \\]\nwhere $c_n = a$ and\n \\[ \\begin{array}{rcl}\n\\PiLmuSem[\\Ssub] &=& {\\VPiExSub y := P } \\Par {\\VPiExContSub `a := Q . `b } \\\\\n\\PiExContSub c_i := M_i . c_{i+1} &=&\n{ \\PiExContFsub c_i := M_i . c_{i+1} } \\\\\n\\PiExSub y_j := P_j &=& \\PiExsub y_j := P_j \\\\\n\\PiExContSub `a_k := Q_k . `b_k &=&\n {\\PiExContFsub `a_k := Q_k . `b_k }\n \\end{array} \\]\nand similar for $\\PilmuTerm [xN_1\\dots N_n \\, \\Ssub'] a $.\nBy induction,\n \\[ \\begin{array}{rclccll}\n\\New \\Vect{y`a} (\\PilmuTerm[M_i] w \\Par {} \\PiLmuSem[\\Ssub]) & \\ByDef &\n\\PilmuTerm[M_i \\, \\Ssub] w & \\wbisim &\n\\PilmuTerm[N_i \\, \\Ssub'] w & \\ByDef &\n\\New \\Vect{y`a} (\\PilmuTerm[N_i] w \\Par {} \\PiLmuSem[\\Ssub'])\n \\end{array} \\]\nSince $\\wbisim$ is a congruence, also\n \\[ \\begin{array}{ccl}\n\\PiExContFsub c_i := M_i . c_{i+1} \\Par \\PiLmuSem[\\Ssub]\n\t& \\wbisim &\n\\PiExContFsub c_i := N_i . 
c_{i+1} \\Par \\PiLmuSem[\\Ssub']\n \\end{array} \\]\nfor all $1 \\seq i \\seq n$, so also\n $ \\PilmuTerm [xM_1\\dots M_n \\, \\Ssub] a \\wbisim \\PilmuTerm [xN_1\\dots N_n \\, \\Ssub'] a\n$ but then also $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $.\n\n\\Paper\n \\item [ $M' = `lx.M''$, $N' = `lx.N''$, and $M'' \\, \\Ssub \\equivwxh N'' \\, \\Ssub' $]\nBy Corollary~\\ref{weak soundness}, we have $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[`lx.M'' \\, \\Ssub ] a $ and $\\PilmuTerm[N] a \\wbisim \\PilmuTerm[`lx.N'' \\, \\Ssub' ] a $.\nNotice that\n \\[ \\begin{array}{rcl}\n\\PilmuTerm [`lx.M'' \\, \\Ssub ] a &\\ByDef&\n\\New \\Vect{y`a} ( \\PilmuTerm [l x . M''] a \\Par {} \\PiLmuSem[\\Ssub] )\n\t\\\\ [1mm]\n\\PilmuTerm [`lx.N'' \\, \\Ssub' ] a &\\ByDef&\n\\New \\Vect{y`a} ( \\PilmuTerm [l x . N''] a \\Par {} \\PiLmuSem[\\Ssub'] )\n \\end{array} \\]\nwith $\\Ssub$ and $\\Ssub'$ as in the previous part and $a$ not in $\\Ssub$ or $\\Ssub'$.\nBy induction,\n \\[ \\begin{array}{rclclcl}\n\\New \\Vect{y`a} ( \\PilmuTerm [M''] b \\Par {} \\PiLmuSem[\\Ssub] )\n\t& \\ByDef &\n\\PilmuTerm[M'' \\, \\Ssub] b\n\t& \\wbisim & \\CLAC{ \\\\ }\n\\PilmuTerm[N''\\, \\Ssub'] b\n\t& \\ByDef &\n\\New \\Vect{y`a} ( \\PilmuTerm [N''] b \\Par {} \\PiLmuSem[\\Ssub] )\n \\end{array} \\]\nSince $\\wbisim$ is a congruence, also\n$\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $.\n\n\n \\item [{$M' = \\muterm`g.[`d]M''$, $N' = \\muterm`g.[`d]N''$}]\nThen $M''$ and $N''$ themselves are in normal form and $M'' \\, \\Ssub \\equivwxh N'' \\, \\Ssub' $, with $\\Ssub$, $\\Ssub'$ as above.\nBy Corollary~\\ref{weak soundness}, $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[\\muterm`g.[`d]M'' \\, \\Ssub ] a $ and $\\PilmuTerm[N] a \\wbisim \\PilmuTerm[\\muterm`g.[`d]N'' \\, \\Ssub' ] a $.\nNotice that\n \\[ \\begin{array}{\\CLAC{l@{~}c@{~}l}\\Paper{rcl}cl}\n\\PilmuTerm [{\\muterm`g.[`d].M''} \\, \\Ssub ] a\n\t&\\ByDef& \\CLAC{ \\\\ }\n\\New \\Vect{y`a} ( \\PilmuTerm [{ M''[a \\For `g]}] `d \\Par {} \\PiLmuSem[\\Ssub] )\n\t&\\ByDef&\n\\PilmuTerm[{M''[a \\For `g]} \\, \\Ssub] `d\n\t\\\\ [2mm]\n\\PilmuTerm [{\\muterm`g.[`d].N''} \\, \\Ssub'] a\n\t&\\ByDef& \\CLAC{ \\\\ }\n\\New \\Vect{y`a} ( \\PilmuTerm [{ N''[a \\For `g]}] `d \\Par {} \\PiLmuSem[\\Ssub'] )\n\t&\\ByDef&\n\\PilmuTerm[{N''[a \\For `g]} \\, \\Ssub'] `d\n \\end{array} \\]\nBy induction, $\\PilmuTerm[{M''[a \\For `g]} \\, \\Ssub] `d \\wbisim \\PilmuTerm[{N''[a \\For `g]} \\, \\Ssub'] `d $;\nsince $\\wbisim$ is a congruence, also\n$\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $.\n\n\n\n\\CLAC{\\item [{$M' = `lx.M''$ or $M' = \\muterm`g.[`d]M''$}] Similar.}\n\n \\end{description}\n\n\n \\item[only if]\nWe distinguish the following cases.\n\n \\begin{enumerate}\n\n \\item $ \\PilmuTerm[M] a $ can never input nor output; then $ \\PilmuTerm[M] a \\wbisim \\Zero \\wbisim \\PilmuTerm[N] a $.\nAssume $M$ has a weak head-normal form, then by Lem.~\\ref{bmu reduction characterisation}, $ \\PilmuTerm[M] a $ is not weakly bisimilar to $\\Zero$; therefore, $M$ and $N$ both have no weak head-normal form.\n\n \\item $\\PilmuTerm[M] a \\Outson c $, then by Lem.~\\ref{pi reduction characterisation}, $\\PilmuTerm[M] a \\wbisim \\New xb ( \\PilmuTerm[M'] b \\Par \\Out c \\Par \\PiLmuSem[\\Ssub] ) $, and $ M \\rtcredwxh `lx.M' \\, \\Ssub $.\nSince $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $,\nalso $\\PilmuTerm[N] a \\Outson c $, so $ \\PilmuTerm[N] a \\wbisim \\New xb ( \\PilmuTerm[N'] b \\Par \\Out c \\Par \\PiLmuSem[\\Ssub'] ) $ and $ N \\rtcredwxh `lx.N' \\, \\Ssub' $.\nThen also $ \\PilmuTerm[M'] b 
\\Par \\PiLmuSem[\\Ssub] \\wbisim \\PilmuTerm[N'] b \\Par \\PiLmuSem[\\Ssub'] $, so $\\PilmuTerm[M' \\, \\Ssub] a \\wbisim \\PilmuTerm[N' \\, \\Ssub'] a $\nand by induction, $M' \\, \\Ssub \\equivwxh N' \\, \\Ssub'$; so also $M \\equivwxh N$ by definition.\n\n \\item\nIf $\\PilmuTerm [M] a \\not\\!\\Outson c $, but $\\PilmuTerm [M] a \\Inson x $, then by Lem.~\\ref{pi reduction characterisation}, $ \\PilmuTerm[M] a \\wbisim \\PilmuTerm [xM_1\\dots M_n \\, \\Ssub] a' $ and $ M \\rtcredwxh xM_1\\dots M_n \\, \\Ssub$.\nWe have\n \\[ \\begin{array}{lcl}\n\\PilmuTerm [xM_1\\dots M_n \\, \\Ssub] a' &=&\n\\New \\Vect{cy`a} (\\PilmuTerm [v x] c_1 \\Par {\\VPiExContSub c_i := M_i . c_{i+1} } \\Par \\PiLmuSem[\\Ssub])\n \\end{array} \\]\n\\Paper{where $c_n = a'$ and\n \\[ \\begin{array}{rcl}\n\\PiLmuSem[\\Ssub] &=& {\\VPiExSub y := P } \\Par {\\VPiExContSub `a := Q . `b } \\\\\n\\PiExContSub c_i := M_i . c_{i+1} &=& { \\PiExContFsub c_i := M_i . c_{i+1} } \\\\\n\\PiExSub y_j := P_j &=& \\PiExsub y_j := P_j \\\\\n\\PiExContSub `a_k := Q_k . `b_k &=& { \\PiExContFsub `a_k := Q_k . `b_k }\n \\end{array} \\]}\n\\CLAC{with $\\PiLmuSem[\\Ssub] $,\n$ \\PiExContSub c_i := M_i . c_{i+1} $,\n$ \\PiExSub y_j := P_j $, and\n$ \\PiExContSub `a_k := Q_k . `b_k $ are defined as above.}\n\nSince $\\PilmuTerm[M] a \\wbisim \\PilmuTerm[N] a $, again by Lem.~\\ref{pi reduction characterisation}, $\\PilmuTerm[N] a \\wbisim \\PilmuTerm [xN_1\\dots N_n \\, \\Ssub'] a'' $ and $ N \\rtcredwxh xN_1\\dots N_n \\, \\Ssub' $.\nNotice that\n \\[ \\begin{array}{lcl}\n\\PilmuTerm [xN_1\\dots N_n \\, \\Ssub'] a'' & = &\n\\New \\Vect{cy`a} (\\PilmuTerm [v x] c_1 \\Par {\\VPiExContSub c_i := N_i . c_{i+1} } \\Par {} \\PiLmuSem[\\Ssub'])\n \\end{array} \\]\n\\Paper\nwhere $c_n = a''$ and\n \\[ \\begin{array}{rcl}\n\\PiLmuSem[\\Ssub'] &=& {\\VPiExSub y := P' } \\Par {\\VPiExContSub `a := Q' . `b } \\\\\n\\PiExContSub c_i := N_i . c_{i+1} &=& \\PiExContFsub c_i := N_i . c_{i+1} \\\\\n\\PiExSub y_j := P'_j &=& \\PiExsub y_j := P'_j \\\\\n\\PiExContSub `a_k := Q'_k . `b_k &=& { \\PiExContFsub `a_k := Q'_k . `b_k }\n \\end{array} \\]\n\n\\CLAC{with $\\PiLmuSem[\\Ssub'] $, $\\PiExContSub c_i := N_i . c_{i+1} $, $ \\PiExSub y_j := P'_j $, and $ \\PiExContSub `a_k := Q'_k . 
`b_k $ similar to above.}\nThen we have\n \[ \begin{array}{rcl}\n\PilmuTerm [xM_1\dots M_n \, \Ssub] a' &\wbisim& \PilmuTerm [xN_1\dots N_n \, \Ssub'] a'' ,\n\end{array} \]\nso $a' = a''$ and $\PilmuTerm [M_i' \, \Ssub ] w \wbisim \PilmuTerm [N_i' \, \Ssub'] w $; then by induction, $M_i' \, \Ssub \equivwxh N_i' \, \Ssub' $, and \Paper{then also }$M\equivwxh N $.\n\quad\usebox{\proofbox}\n\n\n \end{enumerate}\n \end{description}\n \end{Proof}\n\n\nWe now obtain our main result:\n\n \begin{theorem} [Full abstraction] \label{main result}\nLet $M,N \ele \lmu$, then $ \PilmuTerm[M] a \wbisim \PilmuTerm[N] a $ if and only if\n$ M \equivwbmu N $.\n\n\Paper{\begin{Proof}\nBy Corollary~\ref{all lmu weak equivalences} and Theorem~\ref{FA equivxh equivC}.\n\quad\usebox{\proofbox}\n \end{Proof}\n}\n\n \end{theorem}\n\n\Paper\nSince ${\eqbmu} \subseteq {\equivwbmu}$, the following is immediate.\n\n \begin{corollary} \label{semantics}\nIf $M \eqbmu N$, then $ \PilmuTerm[M] a \wbisim \PilmuTerm[N] a $.\n \end{corollary}\nwhich states that our interpretation gives a semantics for $\lmu$.\n\n\n \subsection*{Conclusions and Future Work}\nWe have found a new, simple and intuitive interpretation of $\lmu$-terms in $`p$ that respects head reduction with explicit substitution.\nFor this interpretation, we have shown that termination is preserved, and that it is sound and complete.\nWe have shown that, for our context assignment system that uses the type constructor $\arrow$ for $`p$ and is based on classical logic, typeable $\lmu$-terms are interpreted by our interpretation as typeable $`p$-processes, preserving the types.\n\n\n\n\n\CLAC{ \section*{Conclusions and future work}\nWe have studied the output based, logic-inspired interpretation of untyped $\lmu$ with explicit substitution into the $`p$-calculus and shown that this interpretation is fully abstract with respect to weak equivalence between terms and weak bisimilarity between processes.\n\nWe have defined the\nweak equivalences $\equivwbmu$, $\equivwh$, $\equivwxh$, and $\equivwA$ on $\lmu$ terms, and shown that these all coincide.\nWe then proved that $M \equivwxh N \Iff \PilmuTerm[M] a \wbisim \PilmuTerm[N] a $, which, combined with our other results, essentially shows that $\PilmuTerm[`.] `. $ respects equality between L\'evy-Longo trees for $\lmu$.}\n\nWe will investigate the relation between our interpretation and the {\sc cps}-translation of Lafont, Reus, and Streicher \cite{Lafont-Reus-Streicher'93}.\n\n\Paper{ \n\bibliography {references} \n}\n \CLAC{ \input {biblio} }\n\n\n\n\n\n\n \end {document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA staged program of neutrino oscillation experiments is foreseen using a new beam line facility, the ``Neutrinos at the Main Injector'' (NuMI) \cite{numitdr}, at the Fermi National Accelerator Laboratory. The first experiment, MINOS \cite{minos}, will perform definitive spectrum measurements which demonstrate the effect of $\nu_\mu$ oscillations. The experiment has run since early 2005 and published a first result \cite{minos-prl} based upon $1.3\times10^{20}$~protons-on-target, approximately 1\/10$^{th}$ the eventual exposure of the experiment. The second, NO$\nu$A \cite{nova}, has been approved to explore the phenomenon of CP violation in the transition $\nu_\mu\rightarrow\nu_e$ and possibly resolve the mass hierarchy of neutrinos. 
A third, MINER$\\nu$A \\cite{minerva}, is an experiment 1 km from the NuMI target to perform neutrino cross section measurements essential for long-baseline neutrino oscillation searches. \n \n\n\n\n\\section{NuMI Intensity and Upgrades}\n\nNuMI is a tertiary beam resulting from the decays of pion and kaon secondaries produced in the NuMI target. Protons of 120 GeV are fast-extracted (spill duration $\\sim10\\mu$sec) from the Main Injector (MI) accelerator and bent downward by 58~mrad toward Soudan, MN (see Figure~\\ref{fig:numi}). The beam line is designed to accept $4\\times10^{13}$~protons-per-pulse~(ppp). The beam line can be flexibly configured to achieve a variety of neutrino energies \\cite{flexybeam}. \nTo date, the average (peak) beam power has been 230~kW (310~kW), and a three-phase plan has been approved to upgrade the accelerator complex and NuMI line to increase this intensity to 430~kW by 2009, then 700~kW by 2012, then 1200~kW by 2013. \n\n\\begin{figure*}[htb]\n\\vspace{9pt}\n \\includegraphics[width=5.8in]{numi-beam-figure.eps}\n\\caption{Plan and elevation view of the NuMI beam at FNAL. }\n\\label{fig:numi}\n\\end{figure*}\n\nThe MI is fed multiple batches from the 8~GeV Booster accelerator. The Booster can deliver $(4-5)\\times10^{12}~p$\/batch. During normal operations, the MI beam is shared: 2 of 7 accelerated Booster batches are extracted to make antiprotons for the Tevatron collider, while the remaining 5 are extracted to the NuMI target. \n\nThe first phase of the intensity upgrades takes advantage of the fast (67~ms) cycle time to the Booster to inject as many as 11 batches of protons into the MI.\n Pairs of batches will be coalesced by a process of ``slip-stacking'' \\cite{kiyomi} into 6 double-batches around the MI circumference, and this beam accelerated to 120~GeV. Already, MINOS has received a 20\\% increase in beam power by slip-stacking of 2 batches and accelerating these along with 5 other single batches, and tests of 11-batch stacking have achieved as much as 3$\\times10^{13}$~ppp, which will improve as the coalescing and capture process is better understood. This phase should be fully-implemented in 2007 and achieve 430~kW. \n\nThe second phase will use the 8~GeV Recycler ring in the MI tunnel as a pre-injector. The Recycler, which has the same circumference as the MI, will be used to pre-load and slip-stack 11 Booster batches, then perform a fast transfer to the MI for acceleration. This reduces the MI cycle time from 2.2~sec. to 1.3~sec., increasing the beam power to NuMI to 700~kW (see Table~\\ref{table:power}). This phase is to be implemented at the end-of-Tevatron shutdown in 2009, and has been approved by the Laboratory.\n\nThe third phase, which is still in its conceptual stages, calls for use of the Accumulator, currently utilized for antiproton beam accumulation, as a proton beam accumulator which will permit 3 Booster batches to be coalesced in one Booster-equivalent circumference. Six such Accumulator batches can be injected into the Recycler, then the Main Injector. The plan calls for new injection and extraction lines to be built for the Accumulator (see Figure~\\ref{fig:accum}). 
\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n \\includegraphics[width=3in]{accumulator.eps}\n\\vskip -1.cm\n\\caption{Plan view of the FNAL Accumulator ring, with new AP4 injection line from the Booster and AP5 extraction line to the MI.}\n\\label{fig:accum}\n\\vskip -1.cm\n\\end{figure}\n\nPhases II and III require upgrades to the NuMI line. These include replacing the graphite target with a design whose alternative water cooling scheme can withstand the higher beam power, as well as additional cooling in the target station, decay tunnel, and beam stop. The new target design is facilitated by the fact that in the future the NuMI beam will operate at a higher on-axis neutrino energy (which yields an ideal 1.8~GeV off-axis for the NO$\\nu$A experiment; see below) and the target will no longer be cantilevered inside the first focusing horn.\n\n\\begin{table*}[htb]\n\\caption{Present and projected NuMI beam power following 3 upgrades.}\n\\label{table:power}\n\\newcommand{\\m}{\\hphantom{$-$}}\n\\newcommand{\\cc}[1]{\\multicolumn{1}{c}{#1}}\n\\renewcommand{\\tabcolsep}{2pc}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{@{}llll}\n\\hline\n & Protons\/pulse & Cycle time & Power \\\\\n\\hline\n{Current Complex} & & & \\\\\n$\\cdot$ Shared Beam & 25$\\times10^{12}$ & 2.4 s & 200 kW \\\\\n$\\cdot$ NuMI Alone & 33$\\times10^{12}$ & 2.0 s & 310 kW \\\\\n{Phase I (2009)$^{(a)}$} & & & \\\\\n$\\cdot$ Shared Beam & 37$\\times10^{12}$ & 2.2 s & 320 kW \\\\\n$\\cdot$ NuMI Alone & 49$\\times10^{12}$ & 2.2 s & 430 kW \\\\\n{Phase II$-$Recycler (2012)$^{(a)}$} & 49$\\times10^{12}$ & 1.3 s & 700 kW \\\\\n{Phase III$-$Accumulator (2013)$^{(b)}$} & 83$\\times10^{12}$ & 1.3 s & 1200 kW \\\\\n\\hline\n & $^{(a)}$Approved &\\multicolumn{2}{l}{$^{(b)}$Still at conceptual stage.}\n\\end{tabular}\\\\[2pt]\n\\end{table*}\n\n\n\\section{MINOS}\n\nThe Main Injector Neutrino Oscillation Search (MINOS) \\cite{minos} began data-taking in 2005 and has accumulated $1.8\\times10^{20}$~POT. The first results \\cite{minos-prl}, based on 1.2$\\times10^{20}$~POT, were covered by B. Rebel at this workshop. MINOS is a two-detector experiment, with a 980~ton detector located 1~km from the NuMI target on site at FNAL (see Figure~\\ref{fig:numi}) to measure the neutrino flux directly, and a second, 5400~ton detector located in the Soudan mine in Minnesota, at a distance of 735~km. The spectrum of neutrinos arriving in the far detector is studied to investigate whether neutrinos oscillate, decay, or otherwise disappear in flight. \n\n\\begin{figure}[htb]\n \\includegraphics[width=3.1in]{minos-result.eps}\n\\vskip -1.cm\n\\caption{First result from the MINOS experiment \\cite{minos-prl}. A $>6\\sigma$ deficit of neutrinos is observed in the far detector in Minnesota. }\n\\label{fig:minos-result}\n\\end{figure}\n\nMINOS has investigated the rate of charged-current $\\nu_\\mu$ interactions in the near and far detectors. Oscillations would result in an energy-dependent depletion of these interactions. The results of the first 1.2$\\times10^{20}$~POT are shown in Figure~\\ref{fig:minos-result}. Following commissioning of the beam line, the experiment acquired several data samples at different neutrino energies to study the beam spectrum and neutrino interactions in the near detector. With the calculated spectrum of the neutrino flux thus tested, the Monte Carlo could be better trusted to describe any differences in the neutrino energy spectra in the near and far detectors. \nThe energy spectrum observed in the near detector was multiplied by an extrapolation factor \\cite{para} which corrects the near spectrum to the expected spectrum (in the absence of oscillations) at the far detector.
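 Schematically, the predicted far-detector spectrum in the absence of oscillations is\n\\begin{equation}\n\\Phi^{\\mathrm{far}}_{\\mathrm{pred}}(E_\\nu) = R(E_\\nu)\\, \\Phi^{\\mathrm{near}}_{\\mathrm{obs}}(E_\\nu),\n\\end{equation}\nwhere the far-to-near ratio $R(E_\\nu)$ encodes the beam-line geometry and meson decay kinematics relating the two detectors. 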
A $>6\\sigma$ deficit of $\\nu_\\mu$'s with $E<6$~GeV is observed. If interpreted as neutrino oscillations, the data indicate\n\n\\begin{equation}\n\\Delta m^2_{32}=(2.74^{+0.44}_{-0.26})\\times10^{-3}~\\mbox{eV}^2 \n\\end{equation}\nwith the mixing angle consistent with maximal mixing: $\\sin^2(2\\theta_{23})>0.87$. The expected resolution is $\\delta(\\Delta m_{32}^2)<10^{-4}~\\mbox{eV}^2$ at $16\\times10^{20}$~POT.\n\nWith further statistics, the experiment hopes to observe or place stringent limits on the transition $\\nu_\\mu\\rightarrow\\nu_e$. As shown in Figure~\\ref{fig:minos-nue}, the potential for 3$\\sigma$ discovery depends upon the values of the CP-violating phase $\\delta$ and the sign of $\\Delta m_{32}^2$. \n\n\\begin{figure}[tb]\n \\includegraphics[width=2.9in]{minos-nue.eps}\n\\vskip -1.cm\n\\caption{Expected sensitivity of the MINOS experiment to the appearance of $\\nu_e$'s in its $\\nu_\\mu$ beam. }\n\\label{fig:minos-nue}\n\\end{figure}\n\n\n\n\\section{MINER$\\nu$A}\n\nThe MINER$\\nu$A experiment is a fine-grained scintillator detector like the KEK SciBar detector, but located in the NuMI detector hall 1~km from the NuMI target. It features 196 alternating planes of $U$ and $V$ strips, with each strip consisting of $1.7\\times3.3~\\mbox{cm}^2$ triangular solid extruded scintillator read out into a multi-anode PMT via scintillating fiber. The fine pitch of the overlapping triangular strips gives excellent position resolution to reconstruct final states in neutrino-nucleus interactions. A coarse outer calorimeter is used for shower containment, and muon momentum analysis is performed in the downstream MINOS near detector. The ability of the detector to perform detailed analyses of neutrino final states may be seen in Figure~\\ref{fig:minerva-events}.\n\n\\begin{figure}[tb]\n\\vspace{9pt}\n \\includegraphics[width=3.in]{minerva-events.eps}\n\\vskip -0.5cm\n\\caption{Display of two simulated neutrino interactions in the MINER$\\nu$A detector: (left) quasielastic scattering, (right) resonant production, with the $\\Delta^+\\rightarrow p\\pi^0$ visible in the detector. }\n\\label{fig:minerva-events}\n\\end{figure}\n\nUncertainties in neutrino cross sections are substantial at low energy. Of particular note are the large uncertainties in the MINOS oscillation result due to such cross section uncertainties \\cite{minos-prl}: pion reabsorption by the struck nucleus can alter the apparent final state multiplicity and visible energy. Such nuclear effects will be studied using the 18 nuclear target sheets in MINER$\\nu$A.\n\n\\section{NO$\\nu$A}\n\nThe NuMI Off-Axis $\\nu_e$ Appearance (NO$\\nu$A) experiment has been approved by the Laboratory to search for and use the transition $\\nu_\\mu\\rightarrow\\nu_e$ to study CP violation in the neutrino sector. 
\n\n\\begin{figure}[t]\n\\vspace{9pt}\n\\vskip -0.5cm\n \\includegraphics[width=2.5in]{nue-matter.eps}\n\\vskip -1.cm\n\\caption{Probability for the transition $\\nu_\\mu\\rightarrow\\nu_e$ and $\\overline{\\nu}_\\mu\\rightarrow\\overline{\\nu}_e$ for 735~km baseline and $E_\\nu$=1.5~GeV as a function of the CP violating phase $\\delta$ under two assumptions for the mass ordering. Taken from \\cite{nova}.}\n\\label{fig:nue-matter}\n\\end{figure}\n\n\\begin{figure}[h]\n\\vspace{9pt}\n\\vskip -0.5cm\n \\includegraphics[width=2.6in]{nova-spectrum.eps}\n\\vskip -1.cm\n\\caption{The charged-current $\\nu_\\mu$ spectrum expected in NO$\\nu$A, along with the expected $\\nu_e$ signal from oscillations with $\\sin^2 2\\theta_{13}=0.1$ and $\\nu_e$ contamination from the beam. Also shown is the visible energy from neutral current $\\nu_\\mu$ interactions before any rejection criteria are applied. }\n\\label{fig:nova-spectrum}\n\\end{figure}\n\nThe NuMI long baseline and higher neutrino energy result in a splitting between the transition probabilities for $\\nu_\\mu\\rightarrow\\nu_e$ and $\\overline{\\nu}_\\mu\\rightarrow\\overline{\\nu}_e$, as shown in Figure~\\ref{fig:nue-matter}. This arises due to the matter effect, and results in an asymmetry even in the absence of CP violation. This fact makes the NO$\\nu$A experiment quite complementary to the J-PARC T2K experiment, which will operate at lower $E_\\nu$ and shorter baseline: T2K will measure the transition probability for $\\nu_\\mu\\rightarrow\\nu_e$, which measures the mixing angle $\\theta_{13}$ in the neutrino mixing matrix, while the NO$\\nu$A experiment could study both the $\\nu$ and $\\overline{\\nu}$ processes to understand the mass hierarchy of the neutrino states, and possibly also measure the CP phase $\\delta$.\n\n\\begin{figure}[tb]\n\\vspace{9pt}\n \\includegraphics[width=2.9in]{nova-sensitivity-1.eps}\n \\includegraphics[width=2.9in]{nova-sensitivity-2.eps}\n\\vskip -1.cm\n\\caption{Curves showing the $3\\sigma$ discovery potential of the NO$\\nu$A experiment to observe the transition $\\nu_\\mu\\rightarrow\\nu_e$ after an exposure of $60\\times10^{20}$~POT. The upper plot is for an all-neutrino exposure, the lower for an equal admixture of $\\nu$ and $\\overline{\\nu}$ running.}\n\\label{fig:nova-sensitivity}\n\\end{figure}\n\n\nNO$\\nu$A must look for the appearance of charged-current $\\nu_e$ interactions in a $\\nu_\\mu$ beam, a challenge because of the small transition probability and the prevalence of showers from neutral current $\\nu_\\mu$ interactions which can mimic the signal. As shown in Figure~\\ref{fig:nova-spectrum}, the $\\nu_\\mu$ off-axis spectrum can be centered at the oscillation maximum for $\\nu_\\mu\\rightarrow\\nu_e$, and the putative signal for $\\sin^2 2\\theta_{13}=0.1$ is significant over the intrinsic $\\nu_e$ in the beam, which arises primarily from muon decays in flight. The challenge for the experiment will be the reduction of neutral current $\\nu_\\mu$ interactions, which can mimic the showers of charged-current $\\nu_e$ events. \n\nTo reduce neutral current backgrounds, the NO$\\nu$A detector is a fine-grained totally-active segmented calorimeter, consisting of alternating horizontally- and vertically-oriented planes of scintillator strips, 15.7~m in length and 6~cm in depth by 3.9~cm transverse width. The scintillator strips are made using extruded PVC shells filled with liquid scintillator. 
The two ends of a looped scintillating fiber which runs the 15.7~m length of the strip bring the light out to an APD. The light output for a minimum ionizing particle is 20~photo-electrons, to be compared with 2~photo-electrons of noise from the APD. The detector will have 1654 planes of 384 strips, for a total mass of 25~kton. \n\n\nFigure~\\ref{fig:nova-sensitivity} shows the capability of the NO$\\nu$A experiment to observe the $\\nu_\\mu\\rightarrow\\nu_e$ transition with 3$\\sigma$ significance after an exposure of $60\\times10^{20}$~POT (equivalent to 7~years of Phase-III running). The curves are calculated using values derived from a full simulation of neutrino interactions in the NO$\\nu$A detector, namely 23\\% efficiency for identifying a $\\nu_e$ charged current interaction, as well as a rejection factor of 7$\\times10^{-4}$ (1.3$\\times10^{-3}$) for charged current (neutral current) $\\nu_\\mu$'s. \nRunning the NuMI beam in both $\\nu$ and $\\overline{\\nu}$ mode makes the discovery potential of the experiment less dependent upon the CP phase and mass hierarchy.\n\n\\section{Future Initiatives}\n\nTwo longer-term design studies are underway in the U.S. The first would consider the feasibility of building a high-resolution liquid Argon detector of order 50~kton to study $\\nu_\\mu\\rightarrow\\nu_e$ transitions in the NuMI beam \\cite{flare}. The design would build upon previous LAr experience, housing the detector in a massive cryogenic system as is used for liquefied natural gas.\n\nThe second is a BNL-FNAL study \\cite{diwan} to send a neutrino beam to DUSEL. If the beam were initiated at FNAL, the Main Injector complex could serve as its source, but a new neutrino line would have to be built which points in the direction of DUSEL. The proposal is for an on-axis wide-band neutrino beam. At the FNAL site, such a beam line could be at most 400~m in length.\n\n\\section{Summary}\n\nNeutrino physics in the U.S. has enjoyed a renaissance with the commissioning of the NuMI line at FNAL. The MINOS experiment has already published a competitive measurement of $\\Delta m^2_{32}$, and still looks forward to collecting 10 times more data. In the future, we may look forward to oscillation and scattering results from the MINOS and MINER$\\nu$A experiments.\n\nThe NO$\\nu$A experiment is an ambitious effort to use $\\nu_\\mu\\rightarrow\\nu_e$ to study the $\\nu$ mass hierarchy and search for CP violation in the lepton sector. It complements the shorter-baseline experiment at J-PARC or a reactor disappearance experiment. \n\nThe NuMI line is commissioned and performing well. As the Tevatron collider ramps down, Fermilab will become a dedicated neutrino facility, and the accelerator complex will be re-commissioned toward higher beam power. \n\nI wish to acknowledge my colleagues on the NuMI facility and the MINOS, NO$\\nu$A, and MINER$\\nu$A experiments. R. Zwaska, A. Marchionni, M. Martens, and N. Grossman kindly provided information on the proton source upgrades.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDeep learning has been making significant progress in many fields in recent years, especially in computer vision \\cite{krizhevsky2012imagenet} and speech recognition \\cite{hinton2012deep}. However, it remains a black box as of today. Hyper-parameter tuning is a hard and tedious process, which makes it more of an art than a science. Moreover, its lack of interpretability poses a major obstacle to its adoption in many practical scenarios. 
Some researchers try to combine deep learning with traditional machine learning methods such as Random Forest (RF). For example, RF has been used to enhance the interpretability and performance of deep learning models \\cite{frosst2017distilling, kontschieder2015deep}. There have also been advances in representation learning \\cite{feng2017autoencoder} and in making RF deep \\cite{zhou2017deep}.\n\nRF is the paradigm of the bagging approach in ensemble learning. In 2001, Breiman combined decision trees \\cite{breiman2017classification} into a forest \\cite{breiman2001random}. Features (columns) and data (rows) are randomly sampled to generate multiple decision trees, and their predictions are aggregated. RF improves prediction accuracy without a significant increase in computation, and it is insensitive to multicollinearity. Its performance is robust against missing and imbalanced data, and it can effectively measure the roles of thousands of variables \\cite{breiman2001statistical}. Compared with models in deep learning, it has the advantages of low computational cost, outstanding generalization ability and strong interpretability, which keep it appealing to researchers and practitioners in spite of its long history. \n\nHowever, RF also has limitations. First of all, the RF model can only be extended horizontally (more decision trees) but not vertically, since the decision trees exist in parallel and cannot be stacked in layers in the same fashion as neurons in neural networks. Secondly, these decision trees have the same weight in voting for the final prediction, even though some of these trees may perform poorly. Lastly, all points in the training data have the same weight and are treated equally in the sampling and training process, even though some of the data are easy to classify while others are hard.\n\nIn this paper, we propose a dynamic boosted ensemble learning method based on random forest (DBRF) to overcome these limitations. It is a novel ensemble algorithm in which the notion of hard example mining from boosting is incorporated into RF. Specifically, we propose to score the quality of each leaf node of every decision tree in the random forest and then vote to determine hard examples. By iteratively training and then removing easy examples from the training data to train again, we evolve the model to focus on hard examples dynamically so as to learn decision boundaries better. After training, the test data can be cascaded through the learned random forests of each iteration in sequence to generate predictions, thus achieving more depth with RF. In addition, we propose an evolution mechanism and a smart iteration mechanism to enhance the performance of DBRF, and we conduct an ablation test. We evaluate DBRF on public datasets: it outperforms RF and achieves state-of-the-art results compared to other deep models. Last but not least, we also perform visualizations to validate DBRF and analyze its effectiveness from a sampling point of view.\n\nThe contributions of this paper are listed as follows:\n\\begin{enumerate}\n\\item We propose a novel ensemble algorithm called DBRF to extend RF vertically by iteratively mining and training on hard examples, and thus incorporate boosting into the RF training process. 
It outperforms RF.\n\\item We propose a criterion to measure the quality of a leaf node in a decision tree and a voting mechanism to determine hard examples.\n\\item We propose an evolution mechanism which filters out poor decision trees in a random forest, together with a smart iteration mechanism, to enhance the performance of DBRF and achieve state-of-the-art results on three UCI datasets.\n\\item The hard example mining process proposed with DBRF can be seen as a novel way of sampling. We demonstrate the effectiveness of using DBRF to deal with imbalanced data.\n\\end{enumerate}\n\nThe rest of the paper is organized as follows. Section 2 explains our motivation. Section 3 presents our proposed model DBRF and the enhancement mechanisms. Section 4 reports the results of multiple evaluation experiments. Finally, Section 5 contains related work and Section 6 concludes the paper.\n\n\\section{Motivation}\nIn typical classification tasks, classifiers are trained to find the decision boundaries of labelled data using a set of features. The distribution of data near the decision boundaries largely affects the performance of the classifier and is generally where the overfitting and under-fitting trade-off is made. Those data whose predicted labels are not agreed upon across multiple classifiers, probably located near the decision boundaries, can be seen as hard examples. On the contrary, those data whose predicted labels are consistent and correct across all classifiers can be seen as easy examples. Naturally, the performance on hard examples determines how good a classifier is compared with others, and it is ideal to focus on these hard examples during training.\n\nHard example mining is a common notion in many machine learning algorithms. A typical example is AdaBoost, in which wrongly classified examples are deemed hard examples. The general idea is to assign more weight to those hard examples and train the model iteratively until convergence. This hard example mining notion has also been adopted in deep learning to successfully improve model performance, especially in object recognition tasks, such as OHEM \\cite{shrivastava2016training}, S-OHEM \\cite{li2017s} and A-Fast-RCNN \\cite{wang2017fast}.\n\nThus, we are motivated to incorporate this hard example mining notion, or boosting, into RF. It can be integrated smoothly within the RF training process since a random forest is an ensemble of decision trees trained on randomly sampled data and features, which are weak classifiers. If all decision trees make correct predictions on a part of the training data, these data can be considered easy examples. Rather than assign more weight to hard examples, we can simply remove these data from the training data to achieve the same effect. When training the RF model again on the new set of data, we essentially make the model focus on hard examples and learn the decision boundaries better without making the classifier more complex or sacrificing generalization ability. The idea is illustrated in Figure \\ref{f1} and sketched in code below. 
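\n\nAs a toy illustration (and not the full DBRF criterion, which scores leaf quality as described in Section \\ref{s3.2}), the following Python sketch marks as easy the training points that every tree of a fitted scikit-learn random forest classifies correctly, then retrains on the rest. Note that the sub-estimators return encoded class indices, which must be mapped back through \\texttt{classes\\_}:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.datasets import load_iris\nfrom sklearn.ensemble import RandomForestClassifier\n\nX, y = load_iris(return_X_y=True)\nforest = RandomForestClassifier(n_estimators=200,\n                                random_state=0).fit(X, y)\n\n# per-tree votes, mapped back to the original labels\nvotes = np.stack([forest.classes_[t.predict(X).astype(int)]\n                  for t in forest.estimators_])\neasy = (votes == y).all(axis=0)   # unanimously correct\n\nif not easy.all():                # retrain on hard examples only\n    forest2 = RandomForestClassifier(n_estimators=200,\n                                     random_state=0)\n    forest2.fit(X[~easy], y[~easy])\n\\end{verbatim}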
\n\n\\begin{figure}[htb]\n\\centering\n\\caption{Illustration of removing easy examples to improve learning. When we train a classifier, as shown on the left, one of the decision regions is not ideal. When we remove the easy examples from the good decision region and then train another classifier, as shown on the right, the ensemble of the two classifiers divides the training data perfectly.}\\label{f1}\n\\includegraphics[width=3in]{figure1}\n\\end{figure}\n\nOne point to clarify is that the aforementioned criterion to determine easy examples, i.e. data that are classified correctly across multiple classifiers, is not strict enough for removal, because in extreme cases where data are imbalanced, we may be left with data of only one label. We will explain our proposed criterion in detail in Section 3. But basically, with RF, we are seeking to find good rules and define easy examples as ones that fit the good rules of all classifiers. In a decision tree, each leaf node represents a rule; thus we need to come up with a criterion to determine whether a leaf node is good or not.\n\n\n\\section{The Proposed Approach: DBRF}\nIn this section, we present our proposed approach DBRF based on RF, which drives the evolution of the model by iteratively updating the training data. Compared to RF, DBRF can greatly improve the performance. We will first present the general framework and the basic algorithm of our model and then propose two mechanisms to enhance the basic model. \n\nIn the following, we consider a typical classification task where $\\mathcal{X}$ and $\\mathcal{Y}$ denote the input and output spaces, respectively. For a decision tree $T$, $N$ denotes a decision node and $L$ a leaf node. An RF $F$ is a set of trees $T$ trained on data $D = \\{X, Y\\}$, where $X$ is a set of points in $\\mathcal{X}$ and $Y$ is the corresponding set of labels in $\\mathcal{Y}$.\n\n\\subsection{The General Framework}\n\n\\begin{figure}[htb]\n\\centering\n\\caption{Illustration of DBRF with three iterations. During training (above the dotted line), firstly, a random forest is trained on the training set, and then the leaf nodes are evaluated through the HEM process (Section \\ref{s3.2}). The hard examples (with bold borders) can be identified accordingly (by removing easy examples) as a new training set for the next iteration. Similarly, at each iteration during test (below the dotted line), easy examples are identified (except in the last iteration) and their predictions are outputted.}\\label{f2}\n\\includegraphics[width=5in]{figure7}\n\\end{figure}\n\nFigure \\ref{f2} illustrates the basic structure of DBRF. We first split a dataset into a training set and a test set and then train a random forest of decision trees on the training set $D^{d}=\\{X, Y\\}$. Next, we use a criterion to measure the quality of each leaf node of each decision tree in the forest. In Figure \\ref{f2}, the leaf nodes circled in bold lines are good ones, which represent possible easy examples. By using the proposed hard example mining (HEM) method to be elaborated in Section \\ref{s3.2}, we can divide $D^{d}$ into two parts $\\{D^{d}_{e}, D^{d}_{h}\\}$, where $D^{d}_{e}$ denotes easy examples and $D^{d}_{h}$ hard examples. Only $D^{d}_{h}$ are preserved for the next iteration's training. This process keeps on iterating until a predetermined $n^{th}$ iteration is done.\n\nAt iteration $i$, we need to preserve the random forest model $F_{i}$ trained in the current iteration and the evaluation scores of all leaf nodes of all decision trees in $F_{i}$, denoted as $\\Pi_{i}$. For predicting the test set $D^{t}$, we first use $F_{1}$ to predict and then divide $D^{t}$ into $\\{D^{t}_{e}, D^{t}_{h}\\}$, according to $F_{1}$. 
For the easy data $D^{t}_{e}$, the predictions made by $F_{1}$ are output as the final prediction result, while the hard data $D^{t}_{h}$ are fed into $F_{2}$ for the next iteration's prediction. This process goes on until $D^{t}_{h}$ no longer contains data, or until the last iteration. In the last iteration $n$, the output of $F_{n}$ gives the final predicted labels of the corresponding data.\n\nThis training and test process is written as pseudocode in Algorithm \\ref{a1}; a Python sketch of the same loop is given after the algorithm.\n\n\\begin{algorithm}[htb]\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\t\\caption{Dynamic Boosted Random Forest}\\label{a1}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE Training set: $D^{d}$, Test set: $D^{t}$, Iterations $n$\n\t\t\\ENSURE Prediction of test set: $O$\n\t\t\n\t\t\\STATE \/\/ Training Procedure\n\t\t\\STATE $D \\gets D^{d}$\n\t\t\\FOR {$i = 1 \\to n$}\n\t\t\\STATE Train RF as $F_{i}$ on dataset $D$\n\t\t\\STATE Get scores of leaf nodes $\\Pi_{i}$ by HEM (Section \\ref{s3.2})\n\t\t\\STATE Split $D$ into easy data $D_{e}$ and hard data $D_{h}$ according to $F_{i}$ and $\\Pi_{i}$\n\t\t\\STATE $D \\gets D_{h}$\n\t\t\\STATE add $F_{i}$ to $F$-list \n\t\t\\STATE add $\\Pi_{i}$ to $\\Pi$-list \n\t\t\\ENDFOR\n\t\t\\STATE\n\n\t\t\\STATE \/\/ Test Procedure\n\t\t\\STATE $D \\gets D^{t}$\n\t\t\\STATE $O \\gets \\varnothing$\n\t\t\\FOR {$i = 1 \\to n-1$}\n\t\t\\STATE $F_{i} \\leftarrow F$-list at $i$\n\t\t\\STATE $\\Pi_{i} \\leftarrow \\Pi$-list at $i$\n\t\t\\STATE Split $D$ into easy data $D_{e}$ and hard data $D_{h}$ according to $F_{i}$ and $\\Pi_{i}$\n\t\t\\STATE Predict $D_{e}$ as $O'$ by $F_{i}$\n\t\t\\STATE $D \\gets D_{h}$\n\t\t\\STATE $O = O \\cup O'$\n\t\t\\ENDFOR\n\t\t\\STATE Predict $D_{h}$ as $O'$ by $F_{n}$\n\t\t\\STATE $O = O \\cup O'$\n\t\t\n\t\t\\STATE \\textbf{return} $O$\n\t\\end{algorithmic}\n\\end{algorithm}
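\n\nThe same loop can be sketched in Python. Here \\texttt{score\\_leaves} and \\texttt{split\\_easy} are assumed placeholders for the HEM step of Section \\ref{s3.2} (they are not library calls), integer class labels are assumed, and the evolution and smart iteration mechanisms of Section 3.3 are omitted:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef train_dbrf(X, y, n_iter=10):\n    forests, scores = [], []\n    for _ in range(n_iter):\n        F = RandomForestClassifier(n_estimators=200).fit(X, y)\n        pi = score_leaves(F, X, y)    # leaf-quality scores (HEM)\n        easy = split_easy(F, pi, X)   # boolean mask of easy data\n        forests.append(F)\n        scores.append(pi)\n        if easy.all():                # no hard examples left\n            break\n        X, y = X[~easy], y[~easy]\n    return forests, scores\n\ndef predict_dbrf(forests, scores, X):\n    out = np.empty(len(X), dtype=int)\n    todo = np.arange(len(X))\n    for F, pi in zip(forests[:-1], scores[:-1]):\n        easy = split_easy(F, pi, X[todo])\n        if easy.any():\n            out[todo[easy]] = F.predict(X[todo][easy])\n        todo = todo[~easy]\n        if todo.size == 0:\n            return out\n    out[todo] = forests[-1].predict(X[todo])\n    return out\n\\end{verbatim}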
\n\nWe propose two mechanisms to enhance the basic model of DBRF. To prevent the negative influence of poor decision trees in a random forest on the HEM process, we propose an evolution mechanism to eliminate them from the random forest, which will be further elaborated in Section 3.3. Furthermore, we propose a smart iteration mechanism to better guide the HEM process, also elaborated in Section 3.3. The model of DBRF with the two mechanisms is illustrated in Figure \\ref{f3}.\n\n\\begin{figure}[htb]\n\\centering\n\\caption{Illustration of the mechanisms of DBRF. The evolution mechanism is used to eliminate poor decision trees from the random forest and the smart iteration mechanism is used to better guide the HEM process.}\\label{f3}\n\\includegraphics[width=5in]{figure3}\n\\end{figure}\n\n\\subsection{Hard Example Mining (HEM)}\\label{s3.2}\nA random forest consists of multiple decision trees. We use $D^{e}_{F}$ and $D^{h}_{F}$ to denote the easy and hard examples of a random forest, and $D^{e}_{T}$ and $D^{h}_{T}$ to denote the easy and hard examples of each decision tree. We propose that $D^{e}_{F}$ and $D^{h}_{F}$ be generated as\n$$D^{e}_{F}= \\bigcap_{T\\in F} D^{e}_{T},\\ D^{h}_{F}=D - D^{e}_{F}$$\nmeaning that the intersection of the easy examples of all the decision trees is considered the set of easy examples.\n\nA leaf node $L$ in a decision tree corresponds to a rule $R_{L}$. Suppose the path to reach the leaf node $L$ from the root in a decision tree $T$ is $W_{L}=\\{n_{1},n_{2},n_{3},...,n_{i}\\}$, where $n_{1}$ to $n_{i}$ denote the decision nodes along the path. The probability distribution over $\\mathcal{Y}$ in node $L$ is $\\pi_{L}$ and $c$ is any label in $\\mathcal{Y}$; then we can define the rule $R_{L}$ corresponding to the leaf node $L$ as\n\n$$R_{L}: x\\ |\\ x \\in \\bigcap_{n_{i} \\in W_{L}} r(n_{i}) \\Rightarrow y = \\underset{c}{\\mathrm{\\textit{arg\\ max}}}\\ \\pi_{L}(c)$$\nwhere $r(\\cdot)$ is a function that represents the data that satisfy the rule of a decision node $n$. Then the easy examples of a decision tree can be defined as\n\n$$D^{e}_{T} = \\{x\\ |\\ \\exists L \\in T:\\ x \\in \\bigcap_{n_{i} \\in W_{L}} r(n_{i})\\ \\wedge\\ score(L) > \\sigma \\}$$\nwhere $score(\\cdot)$ is a leaf node evaluation metric and $\\sigma$ is the threshold, which is implemented as the average score of the leaf nodes in all decision trees in our model.\n\nNext, we propose several leaf node evaluation metrics $score(\\cdot)$. We may use support and confidence, as used to evaluate association rules \\cite{agrawal1993mining}, to score the leaf node. For the rule $R_{L}$ of a leaf node $L$, all data that satisfy the preconditions of the rule, i.e. $x \\in \\bigcap_{n_{i} \\in W_{L}} r(n_{i})$, compose the candidate set $C$. Then\n$$score_{supp}(L)=\\frac{|C|}{|X|}$$\n$$score_{conf}(L)=\\frac{|\\{x \\in C\\ |\\ y = c\\}|}{|C|}$$\nwhere $c$ is the label predicted by $R_{L}$. Since both support and confidence derive from association rules, we can merge them into an F1 score. \n$$score_{f1}(L)=2 \\cdot \\frac{score_{supp}(L) \\cdot score_{conf}(L)}{score_{supp}(L) + score_{conf}(L)}$$\n\nAlso, since Gini impurity (gini) and information gain (entropy) are the partitioning criteria used in decision trees, we can define the gini score and entropy score of a leaf node as\n\n$$score_{gini}(L)=-Gini(L)=\\sum^{c}_{j=1} p_{j}^{2}-1$$\n$$score_{entropy}(L)=-Entropy(L)=\\sum^{c}_{j=1} p_{j}\\ \\log\\ p_{j}$$\nwhere $c_{j}$ is the $j^{th}$ label in $\\mathcal{Y}$ and $p_{j}$ is the probability of $c_{j}$ in $L$.\n\nAll in all, we propose three leaf node evaluation metrics, namely \\textit{score-gini}, \\textit{score-entropy}, and \\textit{score-f1}. Their effects are explored in the experiments, as discussed in Section 4.1.\n\n\\subsection{Mechanisms}\n\\subsubsection{Evolution Mechanism}\nIn RF, data and features are randomly sampled to generate multiple decision trees. Chances are that some of them are of poor quality, which has a negative effect on the voting process in our proposed HEM method. To overcome this problem, we draw on the idea of genetic programming \\cite{banzhaf1998genetic} and use a fitness score to eliminate those decision trees with lower scores before determining easy and hard examples from the training data. In our implementation, we use the average evaluation metric score across all leaf nodes as the fitness score of each decision tree. The specific procedure is as follows; a code sketch is given after the list.\n\\begin{enumerate}\n\\item At each iteration, calculate the fitness score of each decision tree in a random forest after training.\n\\item Set an elimination ratio (20\\% by default), calculate the corresponding threshold, and then eliminate those decision trees whose fitness is lower than the threshold.\n\\item Determine the easy and hard examples by voting among the remaining decision trees, and remove the easy examples to generate a new training set for the next iteration.\n\\end{enumerate}
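\n\nOn top of fitted scikit-learn trees, the leaf metrics and the evolution step admit a compact realization. The sketch below is one plausible reading, not our exact implementation: it uses the tree internals \\texttt{children\\_left}, \\texttt{n\\_node\\_samples} and \\texttt{value}, approximates $|C|$ by the in-bag sample count of a leaf, and keeps the trees whose mean leaf score is at or above the 20th percentile:\n\\begin{verbatim}\nimport numpy as np\n\ndef leaf_f1_scores(tree, n_train):\n    t = tree.tree_\n    leaves = np.where(t.children_left == -1)[0]\n    scores = []\n    for i in leaves:\n        supp = t.n_node_samples[i] / n_train   # score_supp\n        dist = t.value[i].ravel()              # per-class weights\n        conf = dist.max() / dist.sum()         # score_conf\n        scores.append(2 * supp * conf / (supp + conf))\n    return np.array(scores)\n\ndef evolve(forest, n_train, ratio=0.2):\n    # fitness of a tree = mean score of its leaves;\n    # eliminate the worst `ratio` fraction of trees\n    fitness = np.array([leaf_f1_scores(t, n_train).mean()\n                        for t in forest.estimators_])\n    keep = fitness >= np.quantile(fitness, ratio)\n    return [t for t, k in zip(forest.estimators_, keep) if k]\n\\end{verbatim}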
\n\n\\subsubsection{Smart Iteration Mechanism}\nAt each iteration during training, the division of $D^{d}$ into $\\{D^{d}_{e}, D^{d}_{h}\\}$ might not be ideal in terms of validation accuracy. We propose to use the prediction accuracy on the training data $D^{d}$ as the validation accuracy (which can be generated using k-fold cross-validation, as in Stacking \\cite{wolpert1992stacked, breiman1996stacked}) to judge the quality of the division, and to annul the division or terminate training if certain rules are triggered. We propose two smart iteration rules as follows.\n\\begin{enumerate}\n\\item If the validation accuracy decreases \\textit{N} consecutive times (five by default), apply early termination.\n\\item If the validation accuracy on the easy examples $D^{e}$ is lower than that on the whole training data $D^{d}$, render this division invalid and continue to train the model on $D^{d}$ in the next iteration.\n\\end{enumerate}\n\n\n\\section{Experiments}\nTo evaluate DBRF, we compared its performance on public datasets against several popular or related methods, and performed an ablation test to determine the effects of the proposed enhancement mechanisms. We further applied visualization to demonstrate the effectiveness of DBRF. \n\nIn these experiments, we used the default parameters defined in our model and did not fine-tune them. The default number of decision trees in a random forest is 200; the default splitting criterion in a decision tree is Gini impurity; the default number of iterations is 10; the default quality evaluation criterion for leaf nodes is \\textit{score-f1}; the other default parameters have already been mentioned in Section 3.\n\n\\subsection{Comparison with Other Approaches}\nWe used three datasets from the UCI Machine Learning Repository\\footnote{\\url{http:\/\/archive.ics.uci.edu\/ml\/index.php}} in the comparison experiments, namely Adult, Letter and Yeast. We evaluated three versions of DBRF, namely DBRF-g, DBRF-e and DBRF-f, which use \\textit{score-gini}, \\textit{score-entropy}, and \\textit{score-f1} as the quality evaluation criterion of leaf nodes, respectively. For comparison, we chose two ensemble algorithms, GBDT \\cite{friedman2001greedy} and RF \\cite{breiman2001random}, two related state-of-the-art approaches, gcForest \\cite{zhou2017deep} and sNDF \\cite{kontschieder2015deep}, and the multilayer perceptron (MLP).\n\nFor a fairer comparison, we compiled the open-source code of gcForest\\footnote{\\url{https:\/\/github.com\/kingfengji\/gcForest}} published by Professor Zhou's team and used exactly the same method to split a dataset into training set and test set. The evaluation metric of the experiments is accuracy, as shown in Table 1. Numbers in italics are directly cited from either the gcForest paper \\cite{zhou2017deep} or the sNDF paper \\cite{kontschieder2015deep}. We used PyTorch to reproduce sNDF and recorded the results on both the Adult and Yeast datasets. 
We used the scikit-learn\\footnote{\\url{http:\/\/scikit-learn.org\/stable\/}} library to evaluate GBDT.\n\n\\begin{table}[htbp]\n\\centering\n\\caption{Comparison of test accuracy on UCI datasets.}\\label{t1}\n\\begin{tabular}{lccc}\n \\toprule\n & Adult & Letter & Yeast \\\\\n \\midrule\n sNDF & 85.58\\% & \\textit{97.08\\%} & 60.31\\% \\\\\n gcForest & \\textit{86.40\\%} & 97.12\\% & \\textit{63.45\\%}\\\\\n MLP & \\textit{85.25\\%} & \\textit{95.70\\%} & \\textit{55.60\\%} \\\\\n RF & \\textit{85.49\\%} & \\textit{96.50\\%} & \\textit{61.66\\%} \\\\\n GBDT & 86.34\\% & 96.32\\% & 60.98\\% \\\\\n \\midrule\n DBRF-g & 86.57\\% & 97.02\\% & 63.68\\% \\\\\n DBRF-e & 86.56\\% & 97.18\\% & \\textbf{64.13\\%} \\\\\n DBRF-f & \\textbf{86.62\\%} & \\textbf{97.25\\%} & 63.90\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nAs Table \\ref{t1} shows, DBRF performs better than RF on all three datasets, which indicates that DBRF is indeed superior to RF. All DBRF versions achieve state-of-the-art results, providing evidence of the effectiveness of DBRF. Last but not least, the results in the last three rows indicate that \\textit{score-f1} may be the best fit for the quality evaluation criterion of leaf nodes, though not by a large margin over the other two.\n\n\\subsection{Decision Regions Visualization}\nWe visualized the decision boundaries learned by DBRF to verify that our proposed hard example mining method indeed achieves our intended purpose. \n\n\\textbf{To facilitate visualization, we selected the second and third columns of the UCI Iris dataset}, and randomly split the dataset into 67\\% for training and 33\\% for testing. We conducted a comparison experiment between RF and DBRF using default parameters (for RF we used scikit-learn). RF achieves 98\\% accuracy on the training set but only 94\\% accuracy on the test set, which indicates overfitting. In contrast, though DBRF achieves 94\\% accuracy on the training set, it achieves a surprising 98\\% accuracy on the test set, suggesting that it has learned better decision boundaries. The decision boundaries learned by RF and DBRF are shown in Figure \\ref{f4}, which corroborates the above findings; a sketch of how such plots can be produced follows the figure.\n\n\\begin{figure}[htb]\n\\centering\n\\caption{Visualization of decision boundaries. The two pictures on the left show RF's decision boundaries on the training set and test set, while the two pictures on the right show DBRF's decision boundaries on the training set and test set, respectively.}\\label{f4}\n\\includegraphics[width=3in]{figure4}\n\\end{figure}
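\n\nFor reference, decision-region plots of this kind can be produced for any classifier fitted on two features with numeric class labels; the following matplotlib sketch is illustrative and not the exact code behind Figure \\ref{f4}:\n\\begin{verbatim}\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef plot_regions(clf, X, y, steps=300):\n    # evaluate the classifier on a dense grid over both features\n    gx = np.linspace(X[:, 0].min() - .5, X[:, 0].max() + .5, steps)\n    gy = np.linspace(X[:, 1].min() - .5, X[:, 1].max() + .5, steps)\n    xx, yy = np.meshgrid(gx, gy)\n    zz = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n    plt.contourf(xx, yy, zz.reshape(xx.shape), alpha=0.3)\n    plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k', s=20)\n    plt.show()\n\\end{verbatim}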
\n\n\\subsection{Imbalanced Data}\nWe can better understand how our proposed hard example mining method makes DBRF more effective from a sampling point of view. Suppose we have a large dataset and the decision boundaries are determined by only a small proportion of the dataset. These are important data that ideally need to be preserved for training. However, traditional sampling methods such as the random sampling used in RF have no notion of whether a data point is important or not. Since the important data account for only a small amount, chances are that after sampling there are simply not enough of them left for a classifier to learn from. This is how a model may fail to learn the boundaries in spite of seemingly abundant data and features. Removing easy examples makes the proportion of important data higher, and thus makes it more likely that a sufficient amount of important data will be left after sampling. To the best of our knowledge, our proposed hard example mining method is a new way of sampling.\n\nThis is extremely useful when dealing with imbalanced datasets. We applied DBRF to Kaggle's classic imbalanced dataset, the Credit Card Fraud dataset\\footnote{\\url{https:\/\/www.kaggle.com\/mlg-ulb\/creditcardfraud}}, to demonstrate this point. \n\nWe randomly selected 65\\% of the dataset as training data and 35\\% as test data. Figure \\ref{f5} shows how the proportions of positive and negative examples in the training data shifted after each iteration. We can see that in the initial training set (on the left of the x-axis), the data are extremely imbalanced, with a negative-to-positive ratio of 565.38:1. As the iterations go on, more data are removed from the training set, and at the end (on the right of the x-axis), the data in the remaining hard examples for training are more balanced, with a negative-to-positive ratio of 3.62:1. \n\n\\begin{figure}[htb]\n\\centering\n\\caption{Stacked training data distribution in each iteration. The blue area (neg-pass) and the yellow area (pos-pass) at the bottom represent the proportions of negative and positive examples that were removed after each iteration. The red area (pos) and green area (neg) at the top represent the proportions of positive and negative examples that were kept as hard examples after each iteration.}\\label{f5}\n\\includegraphics[width=3in]{figure5}\n\\end{figure}\n\nThe whole process achieves the purpose of hard example mining and sampling at the same time. Essentially, it uses a data point's importance rather than its label as a guideline for sampling. Without much fine-tuning, DBRF achieves an AUC of 97.55\\% on the test set, a very competitive score.\n\n\n\\subsection{Running time}\nThe three UCI datasets we used for evaluation differ in data size (1.5k, 20k, 50k), ranging from small to medium. To further consolidate our claim that the proposed model DBRF is superior to RF and can achieve state-of-the-art results while remaining computationally efficient, we conducted a series of further experiments on Kaggle's Credit Card Fraud dataset mentioned in Section 4.3. It has a considerably larger data size of 280k, and is extremely imbalanced, since only about 400 data points have positive labels. Thus, we have to use AUC instead of accuracy as the evaluation metric. We also report standard deviations and computation times where necessary.\n\nWe compared our DBRF model against several closely related methods in terms of AUC and training time. As the original data are given in time order, we performed shuffling before randomly splitting the data into training set and test set with a ratio of 2:1. To be more credible, we ran the experiments on three different data splits, and for each split we ran each method five times and recorded means, standard deviations, etc.\n\nFor RF-based models (gcForest, RF and DBRF), we grew 200 trees per random forest per iteration, considering there are 30 features in the dataset. For iterative models (sNDF, gcForest, DBRF and GBDT), we set the number of iterations to 10, except for GBDT: the validation AUC of the other three models changed little after roughly 10 iterations, whereas GBDT still did not perform well enough at that point, so we increased its number of iterations to 200, at which point its validation AUC stabilized. All the training times reported were recorded at the end of the maximum number of iterations. 
For the MLP (deep neural network) configuration, we used ReLU as the activation function, cross-entropy as the loss function, and Adam for optimization. Via three-fold cross-validation, we set the MLP with 3 hidden layers: 200 neurons in the first layer, 100 in the second and 10 in the third. The other hyper-parameters were set to their default values.\n\n\\begin{table}[htb]\n\\centering\n\\caption{Comparison of AUC and running time on the Credit Card Fraud dataset on three data splits.}\\label{t3}\n\\begin{tabular}{lcccc}\n \\toprule\n & \\multicolumn{2}{c}{AUC(Test)} & \\multicolumn{2}{c}{Time(s)} \\\\\n & Avg. & Std.($\\times 10^{-2}$) & Avg. & Std. \\\\\n \\midrule\nsNDF & 0.9348\/0.9290\/0.9264 & 1.2849\/1.3254\/1.1094 & 712.4\/717\/724.4 & 20.9\/34.0\/24.5 \\\\\ngcForest & 0.9580\/0.9702\/0.9548 & 0.2208\/0.3660\/0.1673 & 161\/152.2\/148.4 & 8.06\/8.11\/4.50 \\\\\nMLP & 0.9172\/0.8423\/0.9112 & 1.9070\/4.4489\/1.9399 & 51.7\/53.8\/62.2 & 9.98\/8.73\/8.89 \\\\\nRF & 0.9508\/0.9598\/0.9478 & 0.3050\/0.5299\/0.2218 & 61.2\/61.6\/61.54 & 1.42\/1.84\/1.43 \\\\\nGBDT & 0.9434\/0.8829\/0.7259 & 0.0009\/0.0032\/0.0063 & 147.4\/141.6\/135.2 & 8.57\/6.71\/3.06 \\\\\nDBRF & \\textbf{0.9762}\/\\textbf{0.9848}\/\\textbf{0.9730} & 0.0293\/0.0242\/0.0117 & 97.4\/81\/77.80 & 5.73\/2.83\/0.75 \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nFrom Table \\ref{t3} one can see that DBRF performs consistently better than all other methods in terms of AUC on the test set on all three data splits. Moreover, the training time of DBRF is quite competitive. In particular, it is more computationally efficient than sNDF and gcForest.\n\nWe also want to know how the computation time and the size of the training data change over the iterations. We recorded the computation time and the proportion of easy examples that are removed at each iteration during both the training and testing processes on the first data split. The results are shown in Table \\ref{t4}. For comparison, RF trains in 58.2s and takes 0.33s in testing on average.\n\n\\begin{table}[htb]\n\\centering\n\\caption{Computation time and the proportion of easy examples in the first three iterations of DBRF on the first data split.}\\label{t4}\n\\begin{tabular}{llllll}\n \\toprule\n & & Iter.1 & Iter.2 & Iter.3 & All\\\\\n \\midrule\n \\multirow{2}{2cm}{Time} & Train & 59.18s & 1.05s & 0.91s & 61.14s \\\\\n & Test & 2.86s & 0.16s & 0.16s & 3.18s \\\\\n \\multirow{2}{2cm}{Pass rate} & Train & 98.08\\% & 0.16\\% & 0.12\\% & 98.36\\% \\\\\n & Test & 98.05\\% & 0.15\\% & 0.11\\% & 98.31\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable \\ref{t5} shows that after the first iteration of DBRF, if we train the other methods on the remaining training data, their overall test AUC all increases, which indicates the effectiveness of DBRF. Table \\ref{t6} records the test AUC of RF and DBRF after three iterations when the number of trees in a random forest is fixed. 
It shows that, just as with neural nets, depth is also beneficial to RF.\n\n\\begin{table}[htb]\n\\centering\n\\caption{Comparison of AUC of other methods trained alone and on top of the first iteration of DBRF.}\\label{t5}\n\\begin{tabular}{llllll}\n \\toprule\n & MLP & RF & GBDT & sNDF & gcForest\\\\\n \\midrule\n alone & 91.72\\% & 95.08\\% & 94.34\\% & 93.48\\% & 95.80\\% \\\\\n +DBRF & 95.98\\% & 97.40\\% & 97.38\\% & 96.69\\% & 97.37\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htb]\n\\centering\n\\caption{Comparison of AUC between RF and DBRF after three iterations when the number of trees per forest is fixed.}\\label{t6}\n\\begin{tabular}{lllllll}\n \\toprule\n & 17 & 50 & 150 & 450 & 1350 & 4050\\\\\n \\midrule\n RF & - & 93.86\\% & 94.58\\% & 96.59\\% & 96.60\\% & 96.67\\% \\\\\n DBRF & 97.21\\% & 97.25\\% & 97.57\\% & 97.62\\% & 97.83\\% & - \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\section{Related Work}\nEnsemble learning \\cite{zhou2012ensemble} is a powerful machine learning technique in which multiple learners are trained to solve the same problem. RF \\cite{breiman2001random}, GBDT \\cite{friedman2001greedy} and XGBoost \\cite{chen2016xgboost} are paradigms of ensemble learning, all of which are tree-based learning algorithms. DBRF is a novel ensemble algorithm in that it incorporates boosting into the training process of RF. AdaBoost \\cite{freund1997decision} iteratively adds weight to wrongly classified examples so as to focus training on these hard examples. In contrast, DBRF removes easy examples in each iteration to train on hard examples, which effectively avoids the influence of irrelevant data while achieving a similar boosting effect.\n\nThe deep learning research community is also drawing on the strength of tree models. Kontschieder et al. proposed deep neural decision forests \\cite{kontschieder2015deep}, a novel approach that unifies decision trees with the representation learning known from deep convolutional networks by training them in an end-to-end manner. In contrast, we attempt to make RF deep without back-propagation training or hyper-parameter tuning.\n\nDBRF uses RF as the base learner. The capacity of RF is extended vertically by iteratively mining and training on hard examples. The gcForest model proposed by Zhou \\cite{zhou2017deep, feng2017autoencoder} has a cascade procedure similar to DBRF's, but the specific approach is different. GcForest cascades the base learner by passing the output of one level of learners as input to the next level, which is similar to Stacking \\cite{wolpert1992stacked, breiman1996stacked}. DBRF cascades the base learner by using the quality of the leaf nodes at one level as a criterion to dynamically evolve the training data used to train the next level (the model of the next iteration). \n\n\\section{Conclusion}\nIn this paper, to overcome some of the limitations of Random Forest (RF), we incorporate boosting into the training process of RF and propose a novel dynamic boosted ensemble learning method based on random forest (DBRF). Specifically, we propose a criterion to measure the quality of a leaf node in a decision tree and then vote to remove easy examples. By iteratively mining and training on hard examples, we evolve the model to learn decision boundaries better and thus extend RF vertically. We also propose an evolution mechanism and a smart iteration mechanism to enhance DBRF. Experiments show that DBRF outperforms RF and achieves state-of-the-art results. 
We also provide an explanation of its effectiveness from a sampling point of view.\n\nWe believe that DBRF is a very practical approach and is particularly useful in learning from imbalanced data. In future work, we plan to extend DBRF to learn from raw feature data such as images and sequences. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}