diff --git "a/data_all_eng_slimpj/shuffled/split2/finalztti" "b/data_all_eng_slimpj/shuffled/split2/finalztti" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalztti" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec-introduction}\n\nRobot autonomy offers great promise as a tool by which we can enhance, or restore, the natural abilities of a human partner. For example, in the fields of assistive and rehabilitative medicine, devices such as exoskeletons and powered wheelchairs can be used to assist a human who has severely diminished motor capabilities. However, many assistive devices can be difficult to control. This can be due to the inherent complexity of the system, the required fidelity in the control signal, or the physical limitations of the human partner. We can, therefore, further improve the efficacy of these devices by offloading challenging aspects of the control problem to an autonomous partner. In doing so, the human operator is freed to focus their mental and physical capacities on important high-level tasks like path planning and interaction with the environment. This idea forms the basis of \\textit{shared control} (see Figure~\\ref{fig-shared-control}), a paradigm that aims to produce joint human-machine systems that are more capable than either the human or machine on their own. \n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.95\\hsize]{shared_control_new.png}\n\t\\caption{Pictorial representation of a shared control paradigm. Both the human and autonomy are capable of controlling the mechanical system, and a dynamic control allocation algorithm selects which agent is in control at any given moment.}\n\t\\label{fig-shared-control}\n\\end{figure}\n\nA primary challenge that researchers and engineers face when developing shared control paradigms for generic human-machine systems is a lack of \\textit{a priori} knowledge of the human and robot partners. This issue is compounded by the fact that, in the real world, many users may operate the same mechanical device. It is therefore necessary to consider solutions that generalize to a variety of potential human and machine partners. In this work, we propose a data-driven methodology that learns all relevant information about how a given human and machine pair interact directly from observation. We then integrate the learned model of the joint system into a single shared control paradigm. We refer to this idea as \\textit{model-based shared control}.\n\nIn this work, we learn a model of the joint human-machine system through an approximation to the Koopman operator (\\cite{koopman1931hamiltonian}), though any machine learning approach could be used. However, the Koopman operator is chosen specifically for this work as it has previously proven useful in human-in-the-loop systems~(\\cite{broad2017learning}) and can be computed efficiently~(\\cite{williams2015kernel}). This model is trained on observation data collected during demonstration of the human and machine interacting and therefore describes both the human's input to the system, and the robot's response to the human input and system state. We can then integrate the portion of the learned model that specifically describes the system and control dynamics of the mechanical device into an optimal control algorithm to produce autonomous policies. 
Finally, the inputs provided by the human and autonomous partners are integrated via a geometric signal filter to provide real-time, dynamic shared control of unknown systems.

We validate our thesis that modeling the joint human-machine system is sufficient for the purpose of automating assistance with two human subjects studies consisting of 32 total participants. The first study imposes a linear constraint on the modeling and control algorithms, while the second study relaxes these constraints to evaluate the more general, nonlinear case. The linear variant of our proposed algorithm is used to validate the efficacy of our shared control paradigm and was first presented in~\cite{broad2017learning}. The nonlinear variant extends these results to a wider class of human-machine systems. The results of the two studies demonstrate that the nonlinear variant has a greater impact on overall task performance than the linear variant. We also find that our modeling technique generalizes across users: our results suggest that individualizing the model offline, based on a user's own data, does not affect the ability to learn a useful representation of the dynamical system. Finally, we evaluate the efficacy of our shared control paradigm in an online learning scenario, demonstrating the sample efficiency of the model-based shared control paradigm.

We provide background and related work in Section~\ref{sec-background-and-related-work}. We then define model-based shared control in Section~\ref{sec-model-based-shared-control}. In Section~\ref{sec-experimental-validation} we describe the human subjects studies we perform, and we detail the results in Section~\ref{sec-results}. We describe important takeaways in Section~\ref{sec-discussion} and conclude in Section~\ref{sec-conclusion}.

\section{Background and Related Work}
\label{sec-background-and-related-work}

This section presents background and related work in the shared control literature for human-machine systems. We also identify alternative methods of autonomous policy generation for shared control, and provide a detailed background on the Koopman operator (\cite{koopman1931hamiltonian}) with a particular focus on its use in learning system dynamics.

\subsection{Shared Control}
\label{sec-background-shared-control}

In this work, we explore the question of how automation can be used to adjust to, and account for, the specific capabilities of a human partner. In particular, we aim to develop a methodology that allows us to \textit{dynamically adjust} the amount of control authority given to the robot and human partners~(\cite{hoeniger1998dynamically, hoffman2004collaboration}). If done intelligently, and with appropriate knowledge of the individual capabilities of each team member, we can improve the overall efficiency, stability and safety of the joint system~(\cite{lasota2017survey}). Approaches to shared control range from pre-defined, discretely adjustable methods~(\cite{kortenkamp2000adjustable}) to probabilistic models~(\cite{javdani2015shared}) to policy blending~(\cite{dragan2013policy}). In addition to blending in the original control signal space, shared control has been researched through haptic control~(\cite{nudehi2005shared}) and compliant control~(\cite{kim1992force}).

In this work, we allocate control using a filter~(\cite{tzorakoleftherakis2015controllers}) described more thoroughly in Section~\ref{sub-sec-control-allocation}.
Our control allocation strategy is similar in practice to \textit{virtual fixtures} and \textit{virtual guides}, techniques that are common in the haptics literature~(\cite{forsyth2005predictive, griffiths2005sharing}). In particular, virtual fixtures and guides are techniques by which autonomously generated forces are \textit{added to the control of a system} to limit movement into undesirable areas and/or influence motion towards an optimal strategy~(\cite{abbink2012haptic}). These ideas have been explored most commonly in association with robotic telemanipulation~(\cite{abbott2007haptic}), including applications like robotic surgery~(\cite{marayong2004speed}) and robot-assisted therapy~(\cite{noohi2016model}). A key difference between these approaches and our own is that our control allocation method does not incorporate additional information from the autonomous partner into the control loop. Instead, the autonomous partner simply rejects input from the operator that does not meet the proposed criteria. Our approach therefore requires no \textit{a priori} information about (or ability to sense) the environment, and no information about the system dynamics. In contrast, virtual fixtures/guides require information about (or the ability to detect) hard constraints in the environment, and knowledge of the system dynamics. This information is then used to compute forces---the virtual fixtures---that counteract user-generated forces that are defined as dangerous. The approach in this paper does not have similar \textit{a priori} information requirements, suggesting our approach can more easily be incorporated into novel human-machine systems. An important benefit of the methods proposed in the virtual fixtures/guides literature is that the techniques often provide an explicit guarantee of safety for the joint human-machine system. Our approach can be extended to provide the same guarantees by incorporating information about (or the ability to sense) the environment and using control barrier functions to implement safety requirements~(\cite{broad2018operation}).

The effects of shared control (SC) have been explored in numerous fields in which the addition of a robot partner could benefit a human operator. For example, in assistive and rehabilitation robotics, researchers have explored the effects of shared control on teleoperation of smart wheelchairs~(\cite{erdogan2017effect, trieu2008shared}) and robotic manipulators~(\cite{kim2006continuous}). Similarly, researchers have explored shared control as it applies to the teleoperation of larger mobile robots and human-machine systems, such as cars~(\cite{dewinter11smc}) and aircraft~(\cite{matni08acc}). When dealing with systems of this size, safety is often a primary concern.

The above works are conceptually similar to our own as they use automation to facilitate control of a robot by a human partner. However, in this work, we do not augment the user's control based on an explicit model of the user. Instead, we use observations of the user demonstrations to build a \textit{model of the joint human-robot system}. The effect of the human partner on the shared control system is implicitly encoded in the model learned from their interactions.

\subsection{Model-Based Reinforcement Learning}
\label{sec-background-model-based}

Model-based shared control (MbSC) is a paradigm that generalizes shared control to generic human-machine partners~(\cite{broad2017learning}).
That is, MbSC assumes no \textit{a priori} knowledge of either partner and instead uses data-driven techniques to learn models of the human and/or robot partner(s) from observation. In addition to providing a quantitative understanding of each partner, these models can be used to generate autonomous control policies by integrating the learned system and control dynamics into an optimal control framework.

Model-based shared control is therefore highly related to model-based reinforcement learning (MbRL), a paradigm that explicitly learns a model of the system dynamics in addition to learning an effective control policy. MbSC extends MbRL to systems that integrate control from various sources. Early work in model-based reinforcement learning includes~(\cite{barto1995learning}) and~(\cite{kaelbling1996reinforcement}). More recently, researchers have considered integrating learned system models with optimal control algorithms to produce control trajectories in a more data-efficient manner~(\cite{mitrovic2010adaptive}). These algorithms compute control through an online optimization process, instead of through further data collection~(\cite{barto1995learning}). There are, of course, many viable model learning techniques that can be used to describe the system and control dynamics. For example, Neural Networks~(\cite{williams2017information}), Gaussian Processes~(\cite{nguyen2009local}), and Gaussian Mixture Models~(\cite{khansari2011learning}) have all shown great promise in this area. Often the best choice of modeling algorithm is related specifically to the application domain. For example, Gaussian Processes perform well in low-data regimes but scale poorly with the size of the dataset, whereas Neural Networks naturally accommodate large datasets. In this work we explore a modeling technique that easily integrates with derivative-based optimal control algorithms. A survey of learning for control can be found in~(\cite{schaal2010learning}).

From a motivational standpoint, related work also includes methods that learn models not only of the dynamics of a robotic system, but also of combined human-machine systems, from data. For example, researchers have explored learning control policies from user demonstrations, thereby incorporating both system dynamics and the user's desires~(\cite{argall2009survey, celemin2019fast}). Building on these ideas, researchers have proposed learning shared control policies directly from demonstration data using deep reinforcement learning~(\cite{reddy2018shared}). To improve the human partner's intuition for the interaction paradigm, researchers have also proposed learning latent spaces to allow users to control complex robots with low dimensional input devices~(\cite{losey2019controlling}). Relatedly, researchers have also proposed techniques for modeling both the dynamics of a system, and a policy for deciding when a human or autonomous partner should be in control. One such method is to learn local approximations to the system's dynamics and only provide autonomous assistance when the system is near a state it has previously observed~(\cite{peternel2016shared}). Our approach instead utilizes a higher-capacity linear representation of the nonlinear human-robot dynamics that globally represents the complex system, thereby avoiding the use of local models.
This is also distinct from the virtual fixtures/guides literature, where system models are known \textit{a priori} and are frequently nonlinear.

From a methodological standpoint, the most closely related research is recent work that computes control trajectories by integrating learned dynamics models with model predictive control (MPC) algorithms~(\cite{williams2017information, drews2017aggressive}). These algorithms are defined by an iterative, receding horizon optimization process instead of an infinite-horizon formulation. Similar to our own work, these researchers first collect observations from live demonstrations of the mechanical device to learn a model of the system dynamics. They then integrate the model with an MPC algorithm to develop control policies. Beyond methodological differences (e.g., choice of machine learning and optimal control algorithms), the key theoretical distinction between these works and our own is our focus on shared control of joint human-machine systems, instead of developing fully autonomous systems. In particular, we learn a model of the joint system that is integrated into a shared control system to improve a human operator's control of a dynamic system. We therefore consider the influence of the human operator both during the data-collection process and at run-time in the control of the dynamic system.

In this work, we learn a model of the system and control dynamics through an approximation to the Koopman operator~(\cite{koopman1931hamiltonian}). As the Koopman operator is a relatively new concept in robot learning for control, we provide additional background in the following section.

\subsection{The Koopman Operator}
\label{sec-background-koopman}

\begin{figure*}[!th]
	\centering
	\includegraphics[width=0.8\hsize]{pipeline.png}
	\caption{Pictorial depiction of our model-based shared control paradigm. (a) Collect observations from user interaction and learn a model of the joint human-machine system through an approximation to the Koopman operator. This can be computed offline or online. (b) Compute the control policy of the autonomous agent by solving an optimal control problem using the learned model. (c) Allocate control to integrate autonomy (gray) and user input (green/red).}
	\label{fig-pipeline}
\end{figure*}

The Koopman operator is an infinite-dimensional linear operator that can capture all information about the evolution of nonlinear dynamical systems. This is possible because the operator describes a linear mapping between sequential \textit{functions of states} instead of the state itself. In particular, the Koopman operator acts on an infinite dimensional Hilbert space representation of the state. To define the Koopman operator, let us consider a discrete time dynamic system $(\mathcal{X}, t, F)$:
\begin{equation}
x_{t+1} = F(x_t)
\label{eq-gen-dynamics}
\end{equation}

\noindent where $\mathcal{X} \subseteq \mathbb{R}^N$ is the state space, $t \in \mathbb{R}$ is time and $F : \mathcal{X} \rightarrow \mathcal{X}$ is the state evolution operator. We also define $\phi$, a nominally infinite dimensional observation function
\begin{equation}
y_t = \phi(x_t)
\label{fn-obs}
\end{equation}

\noindent where $\phi : \mathcal{X} \rightarrow \mathbb{C}$ defines the transformation from the original state space into the Hilbert space representation that the Koopman operator acts on.
The Koopman operator $\mathcal{K}$ is defined as the composition of $\phi$ with $F$, such that

\begin{equation}
\mathcal{K} \phi = \phi \circ F.
\label{fn-koopman-eq}
\end{equation}

\noindent By acting on the Hilbert space representation, the \textit{linear} Koopman operator is able to capture the complex, nonlinear dynamics described by the state evolution operator.
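To make the lifting concrete, consider a standard illustrative example from the Koopman literature (and not the lander model studied in this work): the nonlinear system
\begin{equation*}
x_{1,t+1} = \lambda x_{1,t}, \qquad x_{2,t+1} = \mu x_{2,t} + c \, x_{1,t}^2,
\end{equation*}
with arbitrary constants $\lambda, \mu, c$. Lifting the state with the observables $\phi(x) = [x_1, x_2, x_1^2]^T$ yields the \textit{exactly} linear evolution
\begin{equation*}
\phi(x_{t+1}) =
\begin{bmatrix}
\lambda & 0 & 0 \\
0 & \mu & c \\
0 & 0 & \lambda^2
\end{bmatrix}
\phi(x_t),
\end{equation*}
so the nonlinearity is captured without error in the lifted space. For general systems, including the one studied in this work, a finite set of observables yields only an approximation, which motivates the data-driven techniques described next.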
While the Koopman operator is nominally infinite dimensional, recent work has demonstrated the ability to approximate a finite dimensional representation using data-driven techniques~(\cite{rowley2009spectral, budivsic2012applied}). In the limit of collected observation data, the approximation to the Koopman operator becomes exact~(\cite{williams2015data}). These data-driven methods have renewed interest in using the Koopman operator in applied engineering fields. In contemporary work, the Koopman operator has been successfully used to learn the dynamics of numerous challenging systems. This includes demonstrations that show the Koopman operator can differentiate between cyclic and non-cyclic stochastic signals in stock market data~(\cite{hua2016using}) and that it can detect specific signals in neural data that signify non-rapid eye movement (NREM) sleep~(\cite{brunton2016extracting}). More recently, these systems have included physical robotic systems~(\cite{abraham2019active, bruder2019modeling}).

\section{Model-based Shared Control}
\label{sec-model-based-shared-control}

Our primary goal is to develop a shared control methodology that improves the skill of human-machine systems without relying on \textit{a priori} knowledge of the relationship between the human and the machine. To define our model-based shared control algorithm we now describe the (1) model learning process, (2) method for computing the policy of the autonomous agent (\textit{autonomy input} in Figure~\ref{fig-shared-control}) and (3) control allocation method (the \textit{green box} in Figure~\ref{fig-shared-control}). A pictorial depiction of our model-based shared control paradigm can be found in Figure~\ref{fig-pipeline}. Our learning-based approach develops a model of the joint human-machine system solely from observation, and this model can be used by the policy generation method to develop autonomous control trajectories. The control allocation method then describes how we integrate the input provided by the human partner and the autonomous agent into a single command that is sent to the dynamic system.

\subsection{Model Learning via the Koopman Operator}

When designing assistive shared control systems, it is important to consider both the human and autonomous partners. To ensure that our paradigm is valid for generic human-machine systems, we learn both the \textit{system dynamics} and information about the \textit{user interaction} directly from data. In this work, we develop a model of the joint human-machine system through an approximation to the Koopman operator, which can be computed offline or online (discussed further in Section~\ref{sec-study-two-results-online}). The model learning process is depicted in Figure~\ref{fig-pipeline}\textcolor{red}{(a)}. As mentioned previously, there are of course a variety of other machine learning algorithms and representations one could choose to learn the system dynamics. In this work, we use the Koopman operator, which is particularly well suited to model-based shared control of human-machine systems for two main reasons. First, it is possible to approximate the Koopman operator in low-data regimes (see Section~\ref{sec-study-two-results-online}), which allows us to quickly expand the set of human-machine systems we can control under the general MbSC paradigm. Second, there are a variety of highly efficient learning algorithms~(\cite{williams2015data, klus2015numerical, rowley2009spectral}) that make the Koopman operator well suited to an online learning paradigm, an important feature in shared control where it is unlikely that we have \textit{a priori} knowledge of the joint human-machine system.

We use Extended Dynamic Mode Decomposition (EDMD) to approximate the Koopman operator~(\cite{williams2015data}). EDMD belongs to a class of data-driven techniques known as Dynamic Mode Decomposition (DMD)~(\cite{rowley2009spectral, schmid2010dynamic, tu2013dynamic}). These algorithms use snapshots of observation data to approximate the Koopman modes that describe the dynamics of the observed quantities. We now provide a mathematical treatment of the EDMD algorithm. We start by redefining the observation function $\phi$ from Equation~\eqref{fn-obs} as a vector valued set of basis functions chosen to compute a finite approximation to the Hilbert space representation. We can then define the following approximation to the Koopman operator

\begin{equation}
\phi(x_{t+1}) = \mathcal{K}^T\phi(x_t) + r(x_t)
\end{equation}

\noindent where $r(x_t)$ is a residual term that represents the error in the model. The Koopman operator is therefore the solution to the optimization problem that minimizes this residual error term
\begin{align}
\begin{split}
J & = \frac{1}{2} \sum_{t=1}^{T} |r(x_t)|^2\\
& = \frac{1}{2} \sum_{t=1}^{T} |\phi(x_{t+1}) - \mathcal{K}^T\phi(x_t)|^2
\end{split}
\label{eq-ls}
\end{align}

\noindent where $T$ is the time horizon of the optimization procedure, and $|\cdot|$ is the Euclidean norm. The solution to the least squares problem presented in Equation~\eqref{eq-ls} is
\begin{equation*}
\mathcal{K} = G^\dagger A
\end{equation*}

\noindent where $\dagger$ denotes the Moore-Penrose pseudoinverse and
\begin{align*}
G & = \frac{1}{T} \sum_{t=1}^{T} \phi(x_t)^T\phi(x_t) \\
A & = \frac{1}{T} \sum_{t=1}^{T} \phi(x_t)^T\phi(x_{t+1})
\end{align*}

\subsubsection{Basis} In this work, we require that the finite basis $\phi$ \textit{includes both the state and control variables}~(\cite{proctor2016generalizing}). This ensures that the Koopman operator models both the natural dynamics of the mechanical system and the control dynamics as provided by the user demonstration. In this work we empirically select a fixed set of basis functions to ensure that all models (across the different users in our validation study) are learned using the same basis. Here we choose $\phi$ such that
\begin{align}
\begin{split}
\phi = &[1, x_1, x_2, x_3, x_4, x_5, x_6, u_1, u_2, u_1 x_1, u_1 x_2, \\
& u_1 x_3, u_1 x_4, u_1 x_5, u_1 x_6, u_2 x_1, u_2 x_2, \\
& u_2 x_3, u_2 x_4, u_2 x_5, u_2 x_6, u_1 \cos(x_3), \\
& u_1 \sin(x_3), u_2 \cos(x_3), u_2 \sin(x_3)].
\end{split}
\label{eq-basis}
\end{align}
These 25 basis functions were chosen to combine information about the geometry of the task (e.g., the trigonometric functions capture specific nonlinearities present in the system dynamics, see Section~\ref{sec-experimental-environment}) with information related to how the user responds to system state. For this reason, we include terms that mix state information with control information. To evaluate the accuracy of the learned approximation to the Koopman operator, we compute the H-step prediction accuracy (see Figure~\ref{fig-h-step-accuracy}).
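To make the learning procedure concrete, the following minimal sketch (in Python, using NumPy) approximates the Koopman operator from demonstration data via the least squares solution above. The snapshot pairing and the treatment of the control terms in the lifted next state (here, a zero-order hold on $u_t$) are implementation assumptions, not prescriptions from the EDMD literature.
\begin{verbatim}
import numpy as np

def basis(x, u):
    # Lifting function phi(x, u) from Equation (eq-basis): constant,
    # state, control, control-state products, and trigonometric terms.
    phi = [1.0]
    phi += list(x)                                 # x_1 ... x_6
    phi += list(u)                                 # u_1, u_2
    phi += [ui * xi for ui in u for xi in x]       # u_i x_j products
    phi += [u[0] * np.cos(x[2]), u[0] * np.sin(x[2]),
            u[1] * np.cos(x[2]), u[1] * np.sin(x[2])]
    return np.array(phi)                           # 25 basis functions

def edmd(states, controls):
    # states: (T+1, 6) array; controls: (T, 2) array of demonstrations.
    T = len(controls)
    Phi = np.array([basis(states[t], controls[t]) for t in range(T)])
    # Lift of the next state; u_t is held over the step (assumption).
    Phi_next = np.array([basis(states[t + 1], controls[t])
                         for t in range(T)])
    G = Phi.T @ Phi / T
    A = Phi.T @ Phi_next / T
    return np.linalg.pinv(G) @ A                   # K = G^+ A

# Toy usage with random stand-in data:
rng = np.random.default_rng(0)
K = edmd(rng.normal(size=(101, 6)), rng.normal(size=(100, 2)))  # (25, 25)
\end{verbatim}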
There are, of course, a variety of methods that one could use to select an appropriate basis for a given dynamical system. This step is particularly important, as selecting a poor basis will quickly degrade the validity of the learned model~(\cite{berrueta2018dynamical}). One such method is to integrate known information about the system dynamics into the chosen basis functions, such as the relationship between the heading of the lander and the motion generated by the main thruster. This approach works well when the system dynamics are easy to understand; however, it can prove challenging when the dynamics are more complex. For this reason, one could also choose the set of basis functions through purely data-driven techniques. Sparsity Promoting DMD (SP-DMD)~(\cite{jovanovic2014sparsity}) is one such algorithm. SP-DMD takes a large initial set of randomly generated basis functions and imposes an $\ell_1$ penalty during the learning process to algorithmically decide which basis functions are most relevant to the observable dynamics~(\cite{tibshirani1996regression}). An example of this purely data-driven approach being applied to human-machine systems can be found in~\cite{broad2019highly}.

\subsection{Autonomous Policy Generation}

To generate an autonomous control policy, we can integrate the portion of the learned model that relates to the system and control dynamics into a model predictive control (MPC) algorithm. In particular, we use Koopman operator model-based control~(\cite{broad2017learning, abraham2017model}), which we now detail in full. To compute the optimal control sequence, $u$, we must solve the following MPC problem
\begin{equation}
\begin{aligned}
& \underset{u}{\text{minimize}}
& & J = \sum_{t=0}^{T-1} l(x_t,u_t) + l_T(x_T) \\
& \text{subject to}
& & x_{t+1} = f(x_t, u_t), \\
& & & u_t \in U, x_t \in X, \forall t
\end{aligned}
\label{eqn-mpc}
\end{equation}

\noindent where $f(x_t,u_t)$ is the system dynamics, $l$ and $l_T$ are the running and terminal costs, and $U$ and $X$ are the sets of valid control and state values, respectively.

In this work, we define
\begin{align*}
l(x_t, u_t) &= \frac{1}{2} (x_t-x_d)^T Q_t (x_t-x_d) + \frac{1}{2} u_t^T R_t u_t \\
\textnormal{where } Q_t &= \mathrm{Diag}[6.0, 10.0, 20.0, 2.0, 2.0, 3.0],
\end{align*}
$x_t$ is the current state and $x_d$ is the desired goal state. Additionally,
\begin{align*}
l_T(x_T) &= \frac{1}{2} (x_T-x_d)^T Q_T (x_T-x_d) \\
\textnormal{where } Q_T &= \mathrm{Diag}[3.0, 3.0, 5.0, 1.0, 1.0, 1.0].
\end{align*}
These values were chosen empirically based on results observed from the system operating fully autonomously.

To integrate our learned system model, we rewrite the system dynamics as
\begin{equation}
\phi(x_{t+1}) = f_\mathcal{K}(x_t, u_t)
\label{eqn-koopman-dyn}
\end{equation}

\noindent where $f_\mathcal{K}(x_t, u_t) = \mathcal{K}^T\phi(x_t, u_t)$ is the learned system dynamics parameterized by a Koopman operator $\mathcal{K}$. This equation demonstrates the fact that the Koopman operator does not map directly from state to state, but rather operates on functions of state. We can then evaluate the evolved state by recovering the portion of the basis that represents the system's state
\begin{equation}
x_{t+1} = \phi(x_{t+1})_{1:N}
\label{eqn-koopman-recover}
\end{equation}
\noindent where the subscript $1\!:\!N$ selects the $N$ state variables within the basis vector (the entries immediately following the constant term in Equation~\eqref{eq-basis}), and $N$ is the dimension of the state space. The policy generation process is depicted in Figure~\ref{fig-pipeline}\textcolor{red}{(b)}.
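In code, the prediction and recovery steps of Equations~\eqref{eqn-koopman-dyn} and~\eqref{eqn-koopman-recover} amount to a matrix-vector product followed by a slice. The sketch below (reusing the \texttt{basis} function from the earlier sketch) also includes the H-step open-loop rollout used to evaluate prediction accuracy; the index set for the state entries follows the ordering of Equation~\eqref{eq-basis} and is an implementation assumption.
\begin{verbatim}
import numpy as np

STATE_IDX = slice(1, 7)   # x_1..x_6 follow the constant in Eq. (eq-basis)

def predict_next_state(K, x, u, basis):
    # phi(x_{t+1}) ~ K^T phi(x_t, u_t); read off the state entries.
    return (K.T @ basis(x, u))[STATE_IDX]

def h_step_rollout(K, x0, controls, basis):
    # Open-loop H-step prediction, as used for the accuracy evaluation.
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:
        xs.append(predict_next_state(K, xs[-1], u, basis))
    return np.array(xs)
\end{verbatim}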
\subsubsection{Nonlinear Model Predictive Control Algorithm}

We solve Equation~\eqref{eqn-mpc} with Sequential Action Control (SAC)~(\cite{ansari2016sequential}). SAC is a real-time, model-based, nonlinear optimal control algorithm that is designed to iteratively find a single control action, a time at which to act, and a duration that maximally improve performance. Other viable nonlinear optimal control algorithms include iLQR~(\cite{li2004iterative}) and DDP~(\cite{mayne1966second, tassa2014control}). SAC is particularly well suited for our shared control algorithm because it searches for single, short-burst actions, which aligns well with our control allocation algorithm (described in detail in Section~\ref{sub-sec-control-allocation}). Additionally, SAC can compute closed-loop trajectories very quickly (at 1 kHz), an important feature for interactive human-machine systems such as the one presented in this work.

\subsubsection{Integrating the Koopman model and SAC}
Sequential Action Control is a gradient-based optimization technique, and it is therefore necessary to compute derivatives of the system during the optimization process. The linearization of the discrete time system is defined by the following equation
\begin{equation*}
x_{t+1} = A x_t + B u_t.
\end{equation*}

\noindent By selecting a differentiable $\phi$, one can compute $A$ and $B$
\begin{equation}
\begin{aligned}
A &= \mathcal{K}_{1:N}^T \frac{\partial \phi}{\partial x}\\
B &= \mathcal{K}_{N:N+P}^T \frac{\partial \phi}{\partial u}
\label{eqn-linear-model-a_and-b}
\end{aligned}
\end{equation}

\noindent where $N$ is again the dimension of the state space, and $P$ is the dimension of the control space.
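As one concrete (if simplified) realization of Equation~\eqref{eqn-linear-model-a_and-b}, the sketch below computes the local linearization of the learned model; for clarity, the Jacobians of the basis are computed by finite differences rather than analytically, and the row indexing again assumes the basis ordering of Equation~\eqref{eq-basis}.
\begin{verbatim}
import numpy as np

def linearize(K, x, u, basis, eps=1e-6):
    # Local model x_{t+1} ~ A x_t + B u_t around (x, u).
    x, u = np.asarray(x, float), np.asarray(u, float)
    n, m = len(x), len(u)
    phi0 = basis(x, u)
    dphi_dx = np.zeros((len(phi0), n))
    dphi_du = np.zeros((len(phi0), m))
    for i in range(n):                   # finite-difference Jacobians
        dx = np.zeros(n); dx[i] = eps
        dphi_dx[:, i] = (basis(x + dx, u) - phi0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        dphi_du[:, j] = (basis(x, u + du) - phi0) / eps
    Kx = K.T[1:1 + n, :]                 # rows of K^T that predict the state
    return Kx @ dphi_dx, Kx @ dphi_du    # A (n x n), B (n x m)
\end{verbatim}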
\subsection{Control Allocation Method}
\label{sub-sec-control-allocation}

To close the loop on our shared control paradigm, we define a control allocation method that uses the solution from the optimal control algorithm to provide outer-loop stabilization. We use a geometric signal filter that is capable of dynamically shifting which partner is in control at any given instant based on optimality criteria. This technique is known as Maxwell's Demon Algorithm (MDA)~(\cite{tzorakoleftherakis2015controllers}). Our specific implementation of MDA is detailed in Algorithm~\ref{mda-algorithm}, where $u_h$ is the control input from the human operator, $u_a$ is the control produced by the autonomy, and $u$ is applied to the dynamic system.

\begin{algorithm}[!h]
\caption{Maxwell's Demon Algorithm (MDA)}
\begin{algorithmic}
 \If {$\langle u_h, u_a \rangle \geq 0$}
 \State $u = u_h$;
 \Else
 \State $u = 0$;
 \EndIf
\end{algorithmic}
\label{mda-algorithm}
\end{algorithm}

We also provide a pictorial representation of the algorithm in Figure~\ref{fig-mda}.

\begin{figure}[!h]
	\centering
	\includegraphics[width=\hsize]{mda.png}
	\caption{Maxwell's Demon Algorithm (MDA)}
	\label{fig-mda}
\end{figure}

This control allocation method restricts the user's input to the system to be in the same half-plane as the optimal control solution, and places no other limitations on the human-machine interaction. If the user's input is in the opposite half-plane, no input is provided to the system. This control allocation method is lenient toward the human partner as, notably, \textit{the autonomous agent does not add any information into the system} and instead only blocks particularly bad input from the user. Therefore, \textit{any signal sent to the system originates from the human partner}. We use this filter because we are motivated by assistive robotics, in which prior research has shown that there is no consensus across users on desired assistance level~(\cite{erdogan2017effect}). By allowing the user a high level of control freedom, the system encourages input from the human operator and restricts how much authority is granted to the autonomous partner. This method is depicted in Figure~\ref{fig-pipeline}\textcolor{red}{(c)}. We use MDA in this work primarily because it has been experimentally validated in prior studies on human-machine systems for assistive robotics~(\cite{fitzsimons2016optimal}). Notably, this method does not guarantee optimal (or even ``good'') performance, as a human operator could theoretically always provide input orthogonal to the autonomous solution, resulting in no control ever being applied to the system. However, this technique does allocate a large amount of control authority to the human-in-the-loop, a desirable feature in our motivating application domains. There are also alternative methods that can be used for similar purposes, including extensions to MDA that incorporate additional information from the autonomous partner, which can be used to improve performance or safety~(\cite{broad2018operation}). A review of alternative control allocation techniques can be found in~\cite{losey2018review}.
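In code, the filter is a single inner product; a minimal Python sketch:
\begin{verbatim}
import numpy as np

def mda_filter(u_h, u_a):
    # Pass the human's input through only when it lies in the same
    # half-plane as the autonomy's solution (Algorithm 1).
    u_h, u_a = np.asarray(u_h, float), np.asarray(u_a, float)
    return u_h if np.dot(u_h, u_a) >= 0.0 else np.zeros_like(u_h)

u = mda_filter([0.4, -0.1], [0.5, 0.2])   # admitted: returns [0.4, -0.1]
\end{verbatim}
Note that the returned signal is either the human's own input or zero; the autonomy's solution $u_a$ serves only as a reference direction.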
\section{Human Subjects Study}
\label{sec-experimental-validation}

Here, we detail the experimental setup that we use to study three main aspects of the described system.
\begin{itemize}
\item First, we aim to evaluate the efficacy of model-based shared control as it relates to task success and control skill. Concurrently, we aim to evaluate the generalizability of the learned system models with respect to a wide range of human operators.
\item Second, we aim to evaluate the efficacy of model-based shared control under an online learning paradigm---specifically, the sample-efficiency of the Koopman operator representation.
\item Finally, we aim to evaluate the impact of nonlinear modeling and policy generation techniques through a comparison to a second human-subjects study that enforces linear constraints on our model-based shared control algorithm.
\end{itemize}

\subsection{Experimental Environment}
\label{sec-experimental-environment}

The proposed shared control framework is evaluated using a simulated lunar lander (see Figure~\ref{fig-ll-env}).

\begin{figure}[!h]
	\centering
	\fbox{\includegraphics[width=0.85\hsize]{environment.png}}
	\caption{Simulated lunar lander system. The green circle is the goal location. The red dots represent an engine firing.}
	\label{fig-ll-env}
\end{figure}

\noindent We use a simulated lunar lander (rocket) as our experimental environment for a number of reasons. This environment is challenging for a novice user, but performance can improve substantially (and the task can sometimes be mastered) given enough time and experience. Similar to a real rocket, one of the main control challenges is the stability of the system. As the rocket rotates about its yaw axis, firing the main thruster can produce nonintuitive dynamics for a novice. Furthermore, once the rocket has begun to rotate, momentum can easily overwhelm a user who is unfamiliar with such systems. Therefore, it is often imperative---particularly for non-expert users---to maintain a high degree of stability at all times in order to successfully complete the task. In addition to the control challenges, we choose this environment because the simulator abstracts the system dynamics through calls to the Box2D physics engine; therefore, we do not have an exact model and thus have an \textit{explicit need to learn one}.

\subsection{System Description}
\label{sub-sec-system-description}

The dynamic system is a modified version of an open-source environment implemented in the Box2D physics engine and released by OpenAI~(\cite{brockman2016gym}). Our modifications (1) allow for continuous-valued, multi-dimensional user control via a joystick, and (2) incorporate the codebase into the open-source ROS framework. We have made our code available online at \url{https://github.com/asbroad/model_based_shared_control}.

The lunar lander is defined by a 6D state space made up of the position ($x,y$), heading ($\theta$), and their rates of change ($\dot{x}, \dot{y}, \dot{\theta}$). The control input to the system is a continuous two-dimensional vector ($u_1, u_2$) that represents the throttle of the main and rotational thrusters. The main engine can only apply positive force. The left engine fires when the second input is negative, while the right engine fires when the second input is positive. The main engine applies an impulse that acts on the center of mass of the lunar lander, while the left and right engines apply impulses that act on either side of the rocket. We remind the reader that our goal is to learn both the system dynamics and the user interaction. For this reason, we must collect data on both the system state and the control input.
Together, this defines an eight dimensional system:
\begin{equation*}
\mathcal{X} = [x, y, \theta, \dot{x}, \dot{y}, \dot{\theta}, u_1, u_2]
\end{equation*}

\noindent where the first six terms define the lunar lander state and $u_1, u_2$ are the main and rotational thruster values, through which the user interacts with the system.

\subsection{Trial Description}
\label{sec-experiment-trial-description}

The task in this environment requires the user to navigate the lander from its initial location to the goal location (represented by the green circle in Figure~\ref{fig-ll-env}) and to arrive with a heading nearly perpendicular to the ground plane and with linear and rotational velocities near zero. A trial is considered complete either (1) when the center of an upright lunar lander is fully contained within the goal circle (i.e., the Euclidean distance between the center of the lander and the center of the goal is less than $0.9$ m) and the linear and angular velocities are near zero (i.e., the linear velocities must be less than $1.0$ m/s and the angular velocity must be less than $0.3$ rad/s), or (2) when the lander moves outside the bounded environment (i.e., when the lander moves off the screen to the left or right) or crashes into the ground.

\begin{figure}[!h]
	\centering
	\includegraphics[width=0.85\hsize]{control_interface.jpeg}
	\caption{The PS3 gamepad used by participants to control the simulated system.}
	\label{fig-control-interface}
\end{figure}
In each trial, the lunar lander is initialized to the same $x, y$ position ($10.0$ m, $13.3$ m), to which we add a small amount of Gaussian noise ($\mu = 0.2$ m). Additionally, a random force is applied at the start of each trial (uniform($-1000$ N, $1000$ N)). The goal location ($10.0$ m, $6.0$ m) is constant throughout all trials and is displayed to the user as a green circle (see Figure~\ref{fig-ll-env}).

The operator uses a PS3 controller (Figure~\ref{fig-control-interface}) to interact with the system. The joystick controlled by the participant's dominant hand fires the main thruster, and the opposing joystick fires the side thrusters. As the user moves through the environment, we record the full state space at each timestep (10 Hz). We provide a video of the system, task and user interaction under shared control as part of the supplementary material.
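For concreteness, the trial-termination conditions above can be encoded as follows. The environment bounds and the crash check are illustrative placeholders (in the actual system both are handled by the simulator), and the upright-heading requirement is omitted for brevity.
\begin{verbatim}
import numpy as np

GOAL = np.array([10.0, 6.0])   # goal center (m)

def trial_status(state, x_bounds=(0.0, 20.0)):
    # state = [x, y, theta, xdot, ydot, thetadot]
    x, y, theta, xd, yd, thd = state
    at_goal = np.hypot(x - GOAL[0], y - GOAL[1]) < 0.9
    settled = abs(xd) < 1.0 and abs(yd) < 1.0 and abs(thd) < 0.3
    if at_goal and settled:
        return "success"
    if x < x_bounds[0] or x > x_bounds[1] or y <= 0.0:
        return "failure"       # off screen, or ground-contact proxy
    return "ongoing"
\end{verbatim}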
\subsection{Analysis I : Efficacy and Generalizability of Model-based Shared Control}
\label{sub-sec-analysis-i}

\subsubsection{Control Conditions}

To study the efficacy of our shared control system and the generalizability of the learned system dynamics, we compare four distinct control conditions.

\begin{itemize}
\item In the first condition, the user is in full control of the lander and is not assisted by the autonomy in any way; we call this approach \textit{User Only} control. As each user undergoes repeated trials with the same goal, this can also be considered a natural learning paradigm.
\end{itemize}

In the remaining three conditions an autonomous agent provides outer-loop stabilization on the user's input as described in Section~\ref{sec-model-based-shared-control}. The main distinction between these three control conditions is the source of the data used to compute the model of the joint system.
\begin{itemize}
\item In the second condition, the model is defined by a Koopman operator learned on data captured from earlier observations of the current user; we call this approach \textit{Individual Koopman}.
\item In the third condition, the model is defined by a Koopman operator learned on data captured from observations of three novice participants prior to the experiment (who were not included in our analysis); we call this approach \textit{General Koopman}.
\item In the fourth condition, the model is defined by a Koopman operator learned on data captured from observations of an expert user (the first author of the paper, who has significant practice controlling the simulated system); we call this approach \textit{Expert Koopman}.
\end{itemize}

We analyze the viability of model-based shared control by comparing the \textit{User Only} condition to each of the shared control conditions. We analyze the generalizability of the learned models by comparing the results from the \textit{Individual Koopman}, \textit{General Koopman} and \textit{Expert Koopman} conditions. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.

\subsubsection{Protocol and Participants}

Each experiment begins with a training period for the user to become accustomed to the dynamics of the system and the interface. This training period continues until the user is able to successfully achieve the task three times in a row or until 15 minutes have elapsed. During the next phase of the experiment, we collect data from 10 user-controlled trials, which we then use to develop a model. Finally, each user performs the task under the four conditions detailed above (10 trials each). The order in which the control paradigms are presented to the user is randomized and counter-balanced to reduce the effect of experience.

The study consisted of 16 participants (11 female, 5 male). All subjects gave their informed consent and the experiment was approved by Northwestern University's Institutional Review Board.

\subsection{Analysis II : Online Model-based Shared Control}
\label{sub-sec-analysis-ii}

To study the efficacy of our model-based shared control algorithm in an online learning paradigm, we collect data from a fifth experimental condition, which we call \textit{Online Koopman}.

\subsubsection{Control Condition}

\begin{itemize}
\item The main difference between the \textit{Online Koopman} paradigm and the three previously described shared control conditions is that the model of the joint human-machine system is learned online in real-time. In the other shared control conditions, the models were trained offline from observations gathered during a data collection phase. In the online paradigm, the model is updated continuously starting with the first collected set of observations.
\end{itemize}

In addition to the lack of a separate data collection phase, the online learning paradigm is distinct from the other shared control conditions because of the data that we use to learn the model. In the shared control conditions that use a model learned offline, we use all of the observations collected from the user demonstrations to learn the model. In the online learning paradigm, we only update the model when the user input is admitted by the MDA controller (a sketch of this update appears below). We choose this learning paradigm because it fits well conceptually with our long-term goal of using the outer-loop algorithm to provide stability and safety constraints on the shared control system. It is important to note that at the beginning of the online learning paradigm, the MDA controller relies on randomly initialized control and system dynamics models. As a result, the control passed through to the system will be very noisy during the first few moments of the experiment, making the system difficult to control successfully for any human-in-the-loop. It is therefore important that the system dynamics and control models can be learned quickly, something we evaluate in Section~\ref{sec-study-two-results-online}. As soon as the learning process produces a model of any kind, the policy is computed using MPC techniques.
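A minimal sketch of this online variant is given below: the matrices $G$ and $A$ from the EDMD solution are accumulated one snapshot at a time, gated on whether the MDA filter admitted the input, and the operator is refit at each step. The running-average update and the initialization scheme are implementation assumptions consistent with the description above.
\begin{verbatim}
import numpy as np

class OnlineKoopman:
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng()
        self.G = rng.uniform(0.0, 1.0, (dim, dim))   # naive initialization
        self.A = rng.uniform(0.0, 1.0, (dim, dim))
        self.K = np.linalg.pinv(self.G) @ self.A
        self.T = 0

    def update(self, phi_t, phi_t1, admitted):
        # Learn only from snapshots whose input the MDA filter admitted.
        if admitted:
            self.T += 1
            self.G += (np.outer(phi_t, phi_t) - self.G) / self.T
            self.A += (np.outer(phi_t, phi_t1) - self.A) / self.T
            self.K = np.linalg.pinv(self.G) @ self.A
        return self.K
\end{verbatim}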
\subsubsection{Protocol and Participants}
\label{sec-sub-sub-protocol}

The online learning paradigm consists of 15 trials per user to allow us to evaluate possible learning curves. The model is updated at the same rate as the simulator (10 Hz) and is initialized naively (i.e., all values are sampled from a uniform distribution $[0,1)$). This paradigm is presented as the final experimental condition to all subjects. The subjects are the same 16 participants as in Section~\ref{sub-sec-analysis-i}. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.

\subsection{Analysis III : Comparison of Linear and Nonlinear Model-based Shared Control}
\label{sub-sec-analysis-iii}

To study the impact of nonlinear modeling and policy generation techniques on our model-based shared control paradigm, we compare results from the above study to a second study (consisting of a separate group of 16 participants) that enforces linear constraints on these parts of the system.

\subsubsection{Control Conditions}

The same four control conditions from Analysis I are evaluated. The differences lie in (1) the choice of basis functions used to approximate the Koopman operator and (2) the choice of optimal control algorithm used to generate the autonomous policy. In this study, we use a linear basis, instead of a nonlinear basis, to approximate the Koopman operator; it consists of the first nine terms in Equation~\eqref{eq-basis}. We furthermore use a Linear Quadratic Regulator (LQR), instead of a nonlinear model predictive control algorithm (Sequential Action Control), to compute the autonomous policy (a sketch of this linear variant follows the protocol description below).

\subsubsection{Protocol and Participants}

The same experimental protocol described in Section~\ref{sub-sec-analysis-i} was used, allowing us to perform a direct comparison between the two studies. Unlike the prior sections, the data we analyze to evaluate the ideas presented in this section comes from both the linear and nonlinear MbSC studies. The data from the linear MbSC study was previously analyzed in~(\cite{broad2017learning}) and was collected from a different set of 16 subjects, resulting in 32 total participants.
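To make the linear variant concrete, the following sketch computes an infinite-horizon discrete-time LQR gain from state-space matrices $A$ and $B$ extracted from the linear Koopman model as in Equation~\eqref{eqn-linear-model-a_and-b}. The toy dynamics, and the value of $R$ (which is not reported above), are illustrative assumptions; $Q$ reuses the running-cost weights.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    # u_t = -K_lqr (x_t - x_d) for the discrete-time system (A, B).
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

n, m = 6, 2
A = 0.9 * np.eye(n)                   # stable toy dynamics, for illustration
A[0, 3] = A[1, 4] = A[2, 5] = 0.1     # position-velocity coupling
B = np.zeros((n, m))
B[3, 0] = B[4, 0] = B[5, 1] = 0.1
Q = np.diag([6.0, 10.0, 20.0, 2.0, 2.0, 3.0])   # weights from Q_t above
R = 0.1 * np.eye(m)                             # assumed control penalty
K_lqr = lqr_gain(A, B, Q, R)                    # (2, 6) feedback gain
\end{verbatim}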
\subsection{Statistical Analysis}
\label{sec-statistical-analysis}

We analyze the results of the human subjects studies using statistical tests to compare the performance of participants along a set of pertinent metrics under the control conditions described in Section~\ref{sub-sec-analysis-i}. Our analysis consists of one-way ANOVA tests conducted to evaluate the effect of the shared control paradigm on each of the dependent variables in the study. These tests allow us to statistically analyze the effect of each condition while controlling for the overinflated type I errors that are common with repeated t-tests. Each test is computed at a significance value of 0.05. When the omnibus F-test produces significant results, we conduct post-hoc pair-wise Student's t-tests using Holm-Bonferroni adjusted alpha values~(\cite{wright1992adjusted}). The post-hoc t-tests allow us to further evaluate the cause of the significance demonstrated by the ANOVA by comparing each pair of control paradigms separately. Similar to the ANOVA test, the Holm-Bonferroni correction is used to reduce the likelihood of type I errors in the post-hoc t-tests.

In addition to reporting the results of the statistical tests, we also use box-and-whisker diagrams to display specific metrics. In these plots, the box represents the \textit{interquartile range (IQR)}, which refers to the data that lies between the first and third quartiles. This area contains 50$\%$ of the data. The line inside the box represents the median value, and the whiskers above and below the box extend to the minimum and maximum values inside 1.5 times the interquartile range. The small circles are outliers. The plots also depict the results of the reported statistical tests. That is, if a post-hoc t-test finds statistically significant differences between two conditions, we depict these results on the box-and-whisker diagrams using asterisks to represent the significance level ($*: p < 0.05, **: p < 0.01, ***: p < 0.005$).

We note that this analysis is used for \textit{all reported results}. Therefore, if we present the results of a t-test, it signifies that we have previously run an ANOVA and found a statistically significant difference. The reader can also assume that any unreported post-hoc t-tests failed to reject the null hypothesis.
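The analysis pipeline described above can be sketched as follows (using SciPy and statsmodels); paired t-tests are shown since the same participants completed each condition, though the exact post-hoc test variant is an assumption.
\begin{verbatim}
from itertools import combinations
from scipy.stats import f_oneway, ttest_rel
from statsmodels.stats.multitest import multipletests

def analyze(conditions, alpha=0.05):
    # conditions: dict mapping condition name -> per-participant scores.
    names = list(conditions)
    F, p = f_oneway(*(conditions[n] for n in names))
    posthoc = {}
    if p < alpha:   # omnibus test significant: run corrected pairwise tests
        pairs = list(combinations(names, 2))
        raw = [ttest_rel(conditions[a], conditions[b]).pvalue
               for a, b in pairs]
        reject, p_adj, _, _ = multipletests(raw, alpha=alpha, method="holm")
        posthoc = dict(zip(pairs, zip(p_adj, reject)))
    return (F, p), posthoc
\end{verbatim}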
\section{Results}
\label{sec-results}

We now present the results of the analyses described in Sections~\ref{sub-sec-analysis-i}, \ref{sub-sec-analysis-ii}, and \ref{sub-sec-analysis-iii}. Our analyses support the premise that model-based shared control is a valid and effective data-driven method for improving a human operator's control of an \textit{a priori} unknown dynamic system. We also find the learned system models are generalizable across a population of users. Finally, we find that these models can be learned online in a fast, data-efficient manner.

\subsection{Efficacy of Model-based Shared Control}
\label{sec-results-performance-metrics}

To evaluate the efficacy of our model-based shared control algorithm, we compute the average success rate under each control paradigm and examine the distribution of executed trajectories. Our analysis compares the User Only control condition to each of the shared control conditions (Individual Koopman, General Koopman and Expert Koopman). All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study.

\subsubsection{Task Success and User Skill}

A trial is considered a success when the user is able to meet the conditions defined in Section~\ref{sec-experiment-trial-description}. We can interpret the success rate of a user, or shared control system, on a set of trials as a measure of skill. The greater the skill, the higher the success rate. By comparing the average success rate under the User Only control paradigm with the average success rate under the shared control paradigms, we can analyze the impact of the assistance provided by the autonomous partner.

\begin{figure}[h]
 \centering
 \includegraphics[width=0.8\hsize]{results_success_nonlinear.png}
 \caption{Number of successful trials under each control condition.}
 \label{fig-results-success-nonlinear}
\end{figure}

\begin{figure*}[t]
	\centering
	\includegraphics[width=0.8\hsize]{trajectories.png}
	\caption{Trajectory plots which visualize the most frequently visited parts of the state space. The data is broken down by control condition (columns) and whether the trial was successful (rows). The plots are generated by overlaying each trajectory with a low opacity; the intensity of the plots therefore represents more frequently visited portions of the state space.}
	\label{fig-heatmaps}
\end{figure*}

The average number of successful trials produced in each control condition is displayed in Figure~\ref{fig-results-success-nonlinear}. An analysis of variance shows that the choice of control paradigm has a significant effect on the success rate ($F(3, 59) = 4.58, p < 0.01$). Post-hoc t-tests find that users under the shared control conditions show statistically significant improvements in the success rate when compared to their performance under the User Only control condition ($p < 0.01$ for all cases). No other pairings are found to be statistically distinct. This result demonstrates that the assistance provided by the autonomous agent significantly improves the skill of the joint human-machine system, thereby validating the efficacy of model-based shared control. This result is in line with related work that aims to provide similar assistance, such as can be found in the virtual fixtures/guides literature~(\cite{forsyth2005predictive, griffiths2005sharing, abbink2012haptic}). Importantly, however, unlike these prior methods, model-based shared control does not require \textit{a priori} knowledge of the system dynamics or the human operator.

The fact that there are no observed differences in task performance between the Individual, General and Expert cases suggests that the source of the data used to learn the model may not be important in developing helpful autonomous assistance in the shared control of dynamic systems (discussed further in Section~\ref{sec-results-generalizability}). An alternative interpretation of this data could be that the discrepancy in skill demonstrated by the participants in the individual, general and expert cases was not large enough to produce any potential difference in performance. This, however, is not likely, as the expert demonstrator (the lead author) was able to easily achieve the desired goal state during every demonstration (i.e., 10 out of 10 trials). In contrast, the average subject who provided data in the individual and general cases performed similarly to how participants performed under the User Only condition in Figure~\ref{fig-results-success-nonlinear} (i.e., about 1 in 10 successful demonstrations).

\subsubsection{Distribution of Trajectories---Qualitative}

We further analyze the different control conditions through a comparison of the distribution of trajectories we observe in each condition. Unlike the success metric, this analysis is not based on task performance, and is instead performed to evaluate the control skill exhibited by either the human operator alone or the joint human-machine system. Figure~\ref{fig-heatmaps} depicts trajectory plots which represent the most frequently occupied sections of the state space.
The plots are generated using data separated based on the control condition (columns) and whether the user was able to complete the task on a given trial (rows).

The first distinction we draw is between the User Only control condition and the three shared control conditions. In particular, the distribution of trajectories in the User Only condition depicts both larger excursions away from the target and lower levels of similarity between individual executions. When we focus specifically on which parts of the state space users spend the most time in (as represented by the intensity of the plots), we see two main clusters of high intensity (around the start and goal locations) in the shared control conditions, whereas we see a wider spread of high-intensity values in the User Only control condition. This suggests more purposeful motions under the shared control conditions.

The second distinction we draw focuses on a comparison between the successful and unsuccessful trials. Specifically, we note that trajectory plots computed from the failed trials under the shared control conditions demonstrate similar properties (e.g., the extent of the excursions away from the target, as well as two main clusters of intensity) to the trajectory plots computed from successful trials under the shared control conditions. This suggests that users may have been closer to succeeding in these tasks than the binary success metric gives them credit for. By comparison, the trajectory plot computed from the failed trials under the User Only control condition depicts a significantly different distribution of trajectories with less structure. Specifically, we observe numerous clusters of intensity that represent time spent far away from the start and goal locations. This suggests that users were particularly struggling to control the system in these cases.

\subsubsection{Distribution of Trajectories---Quantitative}

These observations are supported by an evaluation of the ergodicity~(\cite{mathew2011metrics}) of the distributions of trajectories described above. We find that users under the shared control paradigm are able to produce trajectories that are more ergodic with respect to the goal location than users under User Only control, which means that they spend a significantly larger proportion of their time navigating near the goal location under shared control.

To perform this comparison, we compute the ergodicity of each trajectory with respect to a probability distribution defined by a Gaussian centered at the goal location (which represents highly desirable states). This metric can be calculated as the weighted Euclidean distance between the Fourier coefficients of the spatial distribution and those of the trajectory~(\cite{miller2016ergodic}).
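A minimal sketch of this computation is given below: the trajectory's time-averaged Fourier (cosine) statistics are compared against those of the goal-centered Gaussian, with Sobolev-style weights on the modes~(\cite{mathew2011metrics}). The domain size, number of modes, Gaussian width, and Monte Carlo evaluation of the target coefficients are illustrative assumptions.
\begin{verbatim}
import numpy as np

def ergodic_metric(traj, goal, domain=(20.0, 20.0), n_k=10,
                   sigma=1.0, n_mc=20000, seed=0):
    # traj: (T, 2) array of x, y positions; goal: length-2 array.
    L1, L2 = domain
    ks = np.array([(k1, k2) for k1 in range(n_k) for k2 in range(n_k)])

    def coeffs(points):
        # Time/sample average of the cosine basis on [0,L1] x [0,L2].
        c1 = np.cos(np.pi * np.outer(ks[:, 0], points[:, 0]) / L1)
        c2 = np.cos(np.pi * np.outer(ks[:, 1], points[:, 1]) / L2)
        return (c1 * c2).mean(axis=1)

    c_traj = coeffs(np.asarray(traj, dtype=float))
    rng = np.random.default_rng(seed)
    c_goal = coeffs(rng.normal(goal, sigma, size=(n_mc, 2)))
    lam = (1.0 + np.sum(ks ** 2, axis=1)) ** -1.5    # mode weights
    return float(np.sum(lam * (c_traj - c_goal) ** 2))
\end{verbatim}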
We interpret this result as additional evidence that model-based shared control improves the skill of the human partner in controlling the dynamic system.\n\nWe further analyze the ergodicity results by separating the trajectories based on whether they come from an unsuccessful or successful trial. An analysis of variance computed over all control conditions showed that the effect of the shared control paradigm on trajectory ergodicity is significant for both unsuccessful ($F(3, 310) = 6.60, p < 0.0005$) and successful ($F(3, 325) = 7.20, p < 0.0005$) trials. Post-hoc t-tests find statistically significant differences between the performance of the users in the User Only control condition and users in the shared control conditions ($p < 0.005$ in all unsuccessful cases, $p < 0.05$ in all successful cases). No other pairings reject the null hypothesis. These results suggest that the shared control paradigm is helpful in improving the user's skills even when they provide input that is ultimately unsuccessful in achieving the task. Furthermore, our shared control paradigm is helpful, even when users are performing at their best. Thus, for both failed and successful trials, users exhibit a greater amount of control skill than when there is no assistance. \n\n\\subsection{Generalizability of Shared Control Paradigm}\n\\label{sec-results-generalizability}\n\nWe continue the evaluation of our human subjects study with an analysis of the generalizability of the learned system models and our model-based shared control algorithm. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study. As reported in Section~\\ref{sec-results-performance-metrics}, we find no statistical evidence that the source of the data used to train the model impacts the efficacy of the shared control paradigm. This test was again conducted using an ANOVA which can be used to evaluate differences between groups by comparing the mean and variance computed from the data collected during the experimental trials. When we compare the success rate of users in each shared control condition, we find no statistically significant difference. However, we do find a significant difference between the user's performance under each shared control condition and the User Only condition. The same result holds when we compare each control condition along the ergodic metric described in Section ~\\ref{sec-results-performance-metrics} and visualized by trajectory plots in Figure~\\ref{fig-heatmaps}. Taken together, these results suggest that the efficacy of the assistance provided by the autonomous agent is \\textit{independent of the source of the data used to learn a model of the joint system}. That is, models trained on data collected from an individual user generalize to a larger population of human partners. 
\n\n\n\n\\begin{figure}[h]\n \\centering\n \\captionsetup[subfigure]{justification=centering}\n \\begin{subfigure}[t]{0.49\\hsize}\n \t\\centering\n \t\\includegraphics[width=\\hsize]{results_agreement_2_main_by_user.png} \n \t\\caption{}\n \t\\label{fig-results-agreement-main}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\hsize}\n \\centering\n \\includegraphics[width=\\hsize]{results_agreement_2_side_by_user.png} \n \\caption{}\n \\label{fig-results-agreement-side}\n \\end{subfigure}\n \\caption{Average agreement between user and optimal control algorithm as defined by the Maxwell's Demon Algorithm (Equation~\\eqref{mda-algorithm}) along the (\\subref{fig-results-agreement-main}) main and (\\subref{fig-results-agreement-side}) side thrusters.}\n \\label{fig-results-agreement}\n\\end{figure} \n\nTo further analyze the generalizability of the model-based shared control paradigm, we evaluate the participants' interactions with the outer-loop autonomous control. We are interested in whether or not users agree more often with the autonomy when control signals are produced based on models learned from their personal demonstration data. To evaluate this idea, we look at the percentage of user inputs that are let through as control to the dynamic system based on our control allocation method (MDA). The average agreement metric is broken down by control condition and presented in Figure~\\ref{fig-results-agreement}. \n\nAn analysis of variance shows that the effect of the source of the model data on the average agreement is not significant in either the main thruster ($F(2, 44) = 0.87, p = 0.43$) nor the side thruster ($F(2, 44) = 0.38, p = 0.69$). These results show a uniformity in the response to system state across users and suggest that the system is able to adapt to the person, instead of requiring a personalized notion of the user and system. \n\nWe interpret this finding as further evidence of the generalizability of our model-based shared control paradigm. In particular, we find that it is not necessary to incorporate demonstration data from individual users when developing model-based shared control. This result replicates findings from our analysis of data collected under a shared control paradigm that enforced a linear constraint on the model learning and policy generation techniques~(\\cite{broad2017learning}).\n\n\\subsection{Online Learning Shared Control}\n\\label{sec-study-two-results-online}\n\nWe next evaluate our model-based shared control algorithm in an online learning paradigm. Our evaluation considers the sample complexity of our model-based learning algorithm through a comparison of the \\textit{impact each shared control paradigm has on the skill of the joint system over time}. All of the data we analyze to evaluate the ideas presented in this section comes from the nonlinear MbSC study. Our statistical analysis is a comparison of the percent of participants who succeed under each paradigm \\textit{by trial number}, shown in Figure~\\ref{fig-res-success-by-trial}. We remind the reader that users participate in 15 trials of the Online Koopman condition while they participate in 10 trials of the four other experimental conditions. For comparison we only plot the first 10 trials of the Online Koopman data, though we note that the improved success rate is sustained over the final five trials. 
From this plot, we can see that users in the Online Koopman shared control condition start off performing poorly, but by around trial 7 start performing on par with the other shared control conditions. \n\nHere, we also note the number of trials used to train the model of the system and control dynamics in each condition. In the Individual and Expert conditions, data is collected from 10 trials to train the model. In the General condition, data is collected from three different users who each control the system for 10 trials each, which means the model is trained from a total of 30 trials. Finally, as discussed above, in the Online condition, the model is learned continuously over the course of 15 trials.\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=\\hsize]{success_by_trial_by_paradigm_dark.png} \n \\caption{Average percentage success by trial for the first 10 trials by control condition. Users under all shared control conditions using models learned offline (Individual, General, Expert) outperform the User Only control condition across all trials. Users under the shared control condition using models learned online (Online) start off performing poorly, but quickly begin to outperform the User Only control condition and, in the end, achieve the same level of success as those under the offline shared control conditions.}\n \\label{fig-res-success-by-trial}\n\\end{figure}\n\nTo provide quantitative evidence of this visual trend, we perform the same types of statistical analyses as in previous sections, but now include data from the \\textit{Online Koopman} as a fifth experimental condition. For ease of discussion we refer to the \\textit{Individual}, \\textit{General} and \\textit{Expert Koopman} model-based shared control conditions as the offline learning conditions, and the \\textit{Online Koopman} model-based shared control as the online learning condition. As users provide more data in the Online Koopman condition than in all other conditions, we perform two sets of analyses. First, we compare the data from the first ten trials from the Online Koopman condition to all other control conditions. We then re-perform the same tests, but use the final ten trials from the Online Koopman condition. By comparing these results, we can evaluate the efficacy of the online learning paradigm, and also analyze the effect of the amount of data used during the learning process. \n\n\\subsubsection{Statistical Analysis of the First Ten Trials}\n\\label{sec-study-two-results-online-first-ten}\n\nAn analysis of variance finds a statistically significant difference between the various control conditions along the primary success metric ($F(4, 74) = 5.35, p < 0.001$). Post-hoc t-tests find that all offline learning conditions significantly outperform the Online Koopman and User Only control conditions ($p < 0.05$ for all cases). We do not find the same statistically significant difference between the User Only and Online Koopman conditions. These results suggest that users under the online learning paradigm initially perform on par with how they perform under the user only control paradigm, but worse than under the offline control conditions. This analysis is consistent with our expectations since, in the online condition, the model of the joint system is initialized randomly and therefore does a poor job of assisting the user. 
However, it is also important that this online shared control does not degrade performance in comparison to the User Only paradigm, suggesting that there is little downside to employing the online learning paradigm during learning. \n\n\\subsubsection{Statistical Analysis of the Final Ten Trials}\n\\label{sec-study-two-results-online-last-ten}\n\nAs a point of comparison, we now re-run the same statistical tests using the final ten trials from the Online Koopman condition. An analysis of variance finds a statistically significant difference between the various control conditions along the primary success metric ($F(4, 74) = 3.55, p < 0.05$) (see Figure~\\ref{fig-avg-metrics-second-online-study}). Post-hoc t-tests find that all shared control conditions (using models learned offline and online) significantly out perform the User Only control paradigm ($p < 0.01$ for all conditions). This result is different from our analysis of the first ten trials and suggests that the learned model improves significantly with more data and now is on par with the models learned in the offline conditions. No other pairings show statistically significant differences. \n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.8\\hsize]{results_success_online.png} \n \\caption{Number of successful trials under each control condition (including an online learning paradigm). We find statistically significant differences between the User Only condition and each shared control condition ($p < 0.01$).}\n \\label{fig-avg-metrics-second-online-study}\n\\end{figure}\n\nThe visual trend present in Figure~\\ref{fig-res-success-by-trial} and the statistical analysis demonstrated in Figure~\\ref{fig-avg-metrics-second-online-study} suggest that the Koooman operator is able to quickly learn an actionable representation of the joint human-machine system. These results also demonstrate the efficacy of our model-based shared control algorithm in an online learning scenario and in limited data regimes. Here we note that follow-up studies are required to tease apart the impact of the model learning process and the user's experience controlling the dynamic system when comparing the offline paradigms to the online paradigm. Notably, in the offline learning paradigm, users undergo 10 trials of training at the start of the experiment. In contrast, and as stated in Section~\\ref{sec-sub-sub-protocol}, all users operate the system under the online learning paradigm as the final condition. For this reason, we do not account for user experience in this condition and therefore highlight the data-efficiency of the model learning process instead of the overall task performance in this condition. The main takeaway from this portion of the analysis is therefore that an actionable Koopman operator can be learned \\textit{quickly}, from significantly less data than alternative approaches commonly found in the literature, like neural networks~(\\cite{nagabandi2018neural}).\n\n\\subsection{Linear and Nonlinear Model-based Shared Control}\n\nThe final piece of analysis we perform in this work is related to the impact that nonlinear modeling and policy generation techniques have on our model-based shared control paradigm. For this analysis we compare the User Only control condition to the three offline shared control conditions. 
Unlike the prior sections, the data we analyze to evaluate the ideas presented in this section comes from both the linear and nonlinear MbSC studies.\n\n\\begin{figure}[h]\n \\centering\n \\captionsetup[subfigure]{justification=centering}\n \\begin{subfigure}[t]{0.49\\hsize}\n \t\\centering\n \t\\includegraphics[width=\\hsize]{results_success_linear.png} \n \t\\caption{Linear MbSC.}\n \t\\label{fig-results-comparison-success-linear}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.49\\hsize}\n \\centering\n \\includegraphics[width=\\hsize]{results_success_nonlinear.png} \n \\caption{Nonlinear MbSC.}\n \\label{fig-results-comparison-success-nonlinear}\n \\end{subfigure}\n \\caption{A comparison of the average success rate under (a) linear and (b) nonlinear model-based shared control to user only control.}\n \\label{fig-results-comparison-success}\n\\end{figure} \n\n\nThe average success rate of users under each control paradigm for both studies is presented in Figure~\\ref{fig-results-comparison-success}. In the linear study~(\\cite{broad2017learning}) we observe a trend (see Figure~\\ref{fig-results-comparison-success-linear}) that suggests users perform better under the shared control paradigm, but we do not find statistically significant evidence of this observation. In contrast, we find that model-based shared control using nonlinear modeling and policy generation techniques does statistically improve the success rate when compared to a User Only control paradigm.\n\n\nOne potential explanation for the difference we find in the results of the two studies is that the nonlinear basis produces more accurate models of the system dynamics then the linear basis. To explore this explanation, we evaluate the predictive capabilities of a Koopman operator learned with a linear basis to one learned with a nonlinear basis. This analysis is performed by comparing the predicted system states with ground truth data. We evaluate the error (mean and variance) as a function of prediction horizon (a.k.a. the H-step error). Figure~\\ref{fig-h-step-accuracy} depicts the raw error (in meters) of Koopman operators trained using linear and nonlinear bases. \n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.9\\hsize]{h_step_model_prediction_accuracy_joint.png}\n\t\\caption{H-step Prediction Accuracy of Koopman operator models based on linear and nonlinear bases. Error is computed as a combination of the Euclidean distance between the predicted $(x, y)$ values and the ground truth $(x, y)$.}\n\t\\label{fig-h-step-accuracy}\n\\end{figure}\n\nOur analysis of the predictive capabilities of the Koopman operator models demonstrates that each is highly accurate. The nonlinear model does slightly outperform the linear model as the prediction horizon grows; however, we find that both models are able to produce single-step predictions with error on the scale of $10^{-3}$ meters. As a reminder to the reader, the state space is bounded with $X \\in (-10, 10), Y \\in (0, 16)$. This analysis suggests that the choice of basis function does not cause the observed difference in average success rate between the two studies. Instead, the important design decision may be the choice of model predictive control algorithm. In the linear study we use an infinite horizon LQR to produce autonomous control policies, whereas in the nonlinear study we use a receding-horizon Model Predictive Control (MPC) to produce autonomous control. 
Our interpretation of these results is that the receding horizon nature of MPC is better suited to the visual planning approach that human operators use when solving the lunar lander task.\n\n\\section{Discussion}\n\\label{sec-discussion}\n\nIn this section, we highlight a number of main takeaways that stem from our analysis. To begin, the results of our human-subjects studies demonstrate that our model-based shared control paradigm is able to (1) successfully learn a model of the joint human-machine system from observation data, and (2) use the learned system model to generate autonomous policies that can help assist a human partner achieve a desired goal. We evaluate the predictive capabilities of the learned system models through a comparison to ground truth trajectory data (see Figure~\\ref{fig-h-step-accuracy}) and evaluate the impact of the assistive shared control system through a comparison of performance (success rate, see Figure~\\ref{fig-results-success-nonlinear}) with a User Only (or natural learning) control paradigm. All analyses support the idea that MbSC can help improve the control skill of a human operator both when they are able to achieve a task on their own and when they are not.\n\nAdditional evaluations demonstrate that the learned system and control dynamics generalize across users, and suggests that, unlike in other human-machine interaction paradigms~(\\cite{sadigh2016information, macindoe2012pomcop, javdani2015shared}), personalization is not required for defining shared control paradigms of generic human-machine systems. Specifically, we find that the demonstration data used to learn the system and control models does not need to come from an optimal, or expert, controller, and can instead come from \\textit{any} human operator. Therefore, at a base level, the controller does not need to be personalized to each individual user as the learned model captures all necessary information. This idea is important for application in real-world scenarios where personalization of control paradigms can be time-consuming, costly, and challenging to appropriately define, often due to the variety in preferences described by human operators~(\\cite{gopinath2016human, erdogan2017effect}).\n\nWe also demonstrate that our approach can be used in an online learning setting. Importantly, we find that the model is able to learn very quickly, from limited amounts of data. In the Online Koopman condition, each trial took an average of 18 seconds, and therefore provided 180 data points. From our analysis in Section~\\ref{sec-study-two-results-online-last-ten}, we find that we are able to learn an effective model of the joint system after only 5 trials (or about 900 data points). Our model learning technique is also well suited for an online learning paradigm as it is not computationally intensive and can easily run at 50Hz on a Core i7 laptop with 8 GB of RAM. Additionally, we find that, even during the learning process, the application of the online model-based shared control algorithm does not significantly degrade the performance of the human operator.\n\nFinally, we also evaluate the impact that nonlinear modeling and policy generation techniques have on our model-based shared control algorithm~(\\cite{broad2017learning}). In particular, we replace the nonlinear modeling and policy generation techniques with linear counterparts and compare how they impact the ability of a human operator to achieve a desired task. 
This requires using a nonlinear basis when computing the approximation to the Koopman operator and using nonlinear model predictive control (SAC) to generate the autonomous policy. We find that the nonlinear model-based shared control paradigm produces a joint human-machine system that is significantly better along the primary performance metric (task success) then users under a user only control paradigm. The same result is not found from the data collected under a shared control paradigm that enforced linear constraints (see Figure~\\ref{fig-results-comparison-success}).\n\n\\section{Conclusion}\n\\label{sec-conclusion}\n\nIn this work, we introduce model-based shared control (MbSC). A particularly important aspect of this work is that \\textit{we do not rely on a priori knowledge, or a high-fidelity model, of the system dynamics}. Instead, we learn the system dynamics \\textit{and} information about the user interaction with the system directly from data. We learn this model through an approximation to the Koopman operator, an infinite dimensional linear operator that can exactly model non-linear dynamics. By learning the joint system dynamics through user interaction, the robot's understanding of the human is implicit to the system definition.\n\nResults from two human subjects studies (consisting of 32 total participants) demonstrate that incorporating the learned models into our shared control framework statistically improves the performance of the operator along a number of pertinent metrics. Furthermore, an analysis of trajectory ergodicity demonstrates that our shared control framework is able encourage the human-machine system to spend a significantly greater percentage of time in desirable states. We also find that the learned system models are able to be used in shared control systems that generalize across a population of users. Finally, we find that, using this approach, models can be efficiently learned online. In conclusion, we believe that our approach is an effective step towards shared control of human-machine systems with unknown dynamics. This framework is sufficiently general that it could be applied to any robotic system with a human in the loop. Additionally, we have made our code available online at \\url{https:\/\/github.com\/asbroad\/model_based_shared_control}, and include a video depicting a user's control of the dynamic system and the impact of model-based shared control in the supplementary material.\n\n\\begin{acks}\nThis material is based upon work supported by the National Science Foundation under Grant CNS 1329891. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the aforementioned institutions.\n\\end{acks}\n\n\\bibliographystyle{SageH}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nAerial radiometric survey is a mature field. Successful prospecting for\neconomically viable ore deposits using the radiation signal from naturally occuring\nrocks stretches back decades~\\cite{Grasty_1975}.\nSurvey systems composed of large volumes of NaI(Tl) scintillator gamma-ray detectors, as much as 20~L,\ncoupled with georeferenced position sensors (now making use of the global\nnavigation satellite system (GNSS)), record gamma energy spectra versus\nposition. This information is later processed to produce maps of natural\npotassium, uranium and thorium concentrations. 
Standards exist to guide\npractitioners in this area~\\cite{IAEA_1991,IAEA_1995} and vast regions of the\nearth have been covered~\\cite{IAEA_2010}.\nPractitioners have also developed methods to correct for terrain variation in\naerial survey~\\cite{Schwarz1992,ISHIZAKI201782}.\n\nThe emphasis in aerial radiometric survey methods until recently has been on\ndevelopment of techniques suitable for geologic sources, for which the\nsimplification of the source as an infinite and uniform sheet is reasonable in\ncomparison with the distance scales of the survey parameters (altitude, line\nspacing). The higher an aircraft flies, the more that far-away locations\ncontribute to the detected signal, relative to locations directly underneath\nthe aircraft~\\cite{King_1912}. This can have the\nadvantage of allowing for complete coverage in a more economical survey with\nwider line spacing. However, detection systems at higher altitude see a signal\nwhich is effectively averaged over a larger area of the ground. Anthropogenic\nsignals such as those resulting in case of a\nreactor accident or malicious radiological dispersal could result in hot spots\nthe concentration of which would be underestimated if averaged over a larger area.\n\nIn this paper, we present a method to deconvolve an aerial radiometric survey\nfor spatial smearing. This kind of problem, requiring inversion of a spatial\ndistribution, is encountered frequently in geophysical surveying. \nGeophysical spatial\ninversion problems are typically underdetermined, and one way of dealing\nwith this has been to select only those solutions which are close to some\npreconceived model~\\cite{Parker_2015}. An approach to spatial\ndeconvolution of airborne spectrometric data which relies on an analytical\nmodel for the response function, and allows underdetermined problems, has been\npublished previously~\\cite{Minty_2016}. A related method for spatial\ninversion to the approach presented here, but using an iterative inversion and\nneglecting uncertainties, was published recently~\\cite{Penny_2015}. Other\ngroups are taking a similar approach to that advocated here, but applied to\nthe inversion of spectra rather than spatial\nmaps~\\cite{Mattingly_2010, Hussein_2012}.\n\nThe method which will be presented here was applied to data obtained using\ndetectors aboard a manned helicopter. Nevertheless, it is prudent to mention\nthe proliferation of work ongoing currently in aerial radiation detection\nfrom unmanned aerial vehicle (UAV) systems. The use of a UAV platform for \naerial survey has facilitated the advance of \naerial survey methodology.\nA good review of recent publications can be found here~\\cite{Connor_2016}.\n\nThe method which will be presented here involves simply a) determining the\ninfluence of each independent region of the earth's surface on the measured spatial\ndistribution and then b) optimizing the weight coefficients of each region of\nthe surface to obtain the best reproduction of the measured map. The number of\npixels of the solution can be chosen such that the problem is not underdetermined. No\npotentially biasing prior assumption about the underlying distribution is\nnecessary. The method can handle complicated detector geometries as the\nresponse functions are determined in simulation. The method could easily be\nextended to allow it to be applied when there is significant\nterrain variation in the source such as would be the case in an urban\nenvironment. 
The method is stable under different starting\nconditions, and naturally allows for propagation of uncertainties from the\nmeasurement to the unfolded result. \n\nWe demonstrate the application of the unfolding method by applying it first to\na synthetic data set. This is compared with the known underlying distribution. \nWe proceed to apply\nthe unfolding method to a real aerial survey measurement acquired following\ndetonation of a radiological dispersal device~\\cite{Sinclair_RDD_2015}.\n\nThis spatial deconvolution method has been\npresented previously at conferences by this\ngroup~\\cite{RDD_CTBT_2015,NSSMIC_RDD_2015}, however, this is the first full write-up.\n\n\\section{Methods}\n\\subsection{Unfolding method}\n\\label{sec:unfolding_method}\nA measurement of surface activity concentration under the uniform infinite\nplane assumption may be denoted $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ where $x$\nand $y$ represent easting and northing in geographic Cartesian coordinates.\nWe seek to determine the true underlying surface radioactivity concentration, $f(x,y)$. $f(x,y)$ is related to $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ through\n\\begin{equation} \ng^{\\mbox{\\scriptsize MEAS}}(x,y) = S [f(x,y)],\n\\end{equation}\nwhere $S$ represents the effect of the measurement system on $f(x,y)$.\n\nWe divide space into $N^{\\mbox{\\scriptsize PAR}}$ pixels $i$, and using Monte Carlo simulation, generate uniform radioactivity distributions in each pixel, $f_i(x,y)$.\n\nThe measurement system $S$ consists of the detection system as well as the air\nand all other absorbing and scattering materials between the source and the\nproducts of scattering, and the detectors. It is represented in the Monte\nCarlo simulation, and the emissions from the radioactive sources $f_i(x,y)$ are transported\nthrough the system $S$ to obtain the template responses\n$g_i(x,y)$, where\n\\begin{equation}\ng_i(x,y) = S[f_i(x,y)].\n\\end{equation}\n\nWe let \n\\begin{equation}\ng(x,y) = \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i g_i(x,y)\n\\end{equation}\nand fit $g(x,y)$ to $g^{\\mbox{\\scriptsize MEAS}}(x,y)$ using a $\\chi^2$ minimization~\\cite{Minuit} to extract the weighting coefficients $w_i$.\n\nTo examine this $\\chi^2$ function, let $g_j^{\\mbox{\\scriptsize MEAS}}(x,y)$ represent the $j$th measurement of the activity concentration $g^{\\mbox{\\scriptsize MEAS}}(x,y)$. Then\n\\begin{equation}\n\\chi^2 = \\sum_{j=1}^{N^{\\mbox{\\tiny MEAS}}}\\frac{g_j^{\\mbox{\\scriptsize MEAS}}(x,y) - \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i g_i(x,y)}{e_j^2}\n\\end{equation}\nwhere there are $N^{\\mbox{\\scriptsize MEAS}}$ measurements $g_j^{\\mbox{\\scriptsize MEAS}}(x,y)$ in the problem each with uncertainty $e_j$.\n\nThe estimator of $f(x,y)$ is then the reconstructed radioactivity\nconcentration distribution $f^{\\mbox{\\scriptsize REC}}(x,y)$, where\n\\begin{equation}\nf^{\\mbox{\\scriptsize REC}}(x,y) = \\sum_{i=1}^{N^{\\mbox{\\tiny PAR}}} w_i f_i(x,y).\n\\end{equation}\n\nWe choose to require the problem to be oversampled. That is, there is\neverywhere a greater spatial density of measurements than of fit parameters\nand $N^{\\mbox{\\scriptsize MEAS}} > N^{\\mbox{\\scriptsize PAR}}$.\nThen, provided that the uncertainties $e_j$ in the denominator of the\n$\\chi^{2}$ function encompass all of the uncertainties of the problem, the\nminimum $\\chi^2$ value will be approximately equal to the number of degrees\nof freedom of the problem. 
\nAnd the MINOS algorithm~\\cite{Minuit} can be used to propagate the\n$N^{\\mbox{\\scriptsize MEAS}}$ measurement uncertainties $e_j$ through the fit procedure\nto calculate\nthe $N^{\\mbox{\\scriptsize PAR }}$ uncertainties \n$\\delta w_i$ on the weighting parameters $w_i$.\nIn practice, there are irreconcilable nonstochastic uncertainties affecting the problem which\nmust be included in the\n$e_j$ by application of a constant scaling factor\nto bring $\\chi^{2}$ per degree of freedom to one before the fit uncertainties\ncan be utilized. These are due to the statistical uncertainties\nin the template responses $g_i(x,y)$, and the finite pixellization of the problem.\n\nUncertainties for spatial deconvolution of fallout surveys can be expected to be asymmetric owing to the\nboundary condition that the amount of deposition can not physically be less\nthan zero. MINOS works by setting the positive and negative uncertainty for\neach parameter to the amount the parameter has to vary in each direction such\nthat $\\chi^{2}$ increases by one. Thus MINOS naturally allows for assymetric\nuncertainties and is particularly suited to uncertainty analysis in the\nmeasurement of amount of radioactivity.\n\n\\subsection{Experimental method -- aerial survey}\n\\label{sec:exp_method}\nThe experimental methods to obtain the data to which we will apply the unfolding method have been described previously~\\cite{Sinclair_RDD_2015}. We will repeat only the most essential points here.\nThree RDD detonations were conducted during the trial. In the first, $\\sim$31~GBq of \\mbox{La-140} was dispersed explosively, with radioactive debris subsequently carried by wind as far as $\\sim$~2~km from the site of the detonation.\nAerial gamma-ray spectrometric surveys were conducted using two\n10.2~x~10.2~x~40.6~cm$^{3}$ NaI(Tl) crystals mounted exterior to a helicopter\nin a skid-mounted cargo expansion basket. GNSS antennae and inertial navigation and altimetry systems\nwere also installed in the basket, to determine location.\nThe system recorded a linearized 1024-channel gamma-ray energy spectrum over the domain 0~MeV to 3~MeV, tagged with the location information, once per second.\nPost-acquisition, counts were selected from the spectra in an energy window of approximately four sigma in width around the 1.6~MeV \\mbox{La-140} photopeak. \nThese count rates were corrected for lag and dead time. Count rates were\nalso corrected for variations in\nflight altitude to the nominal flying height making use of the Shuttle Radar\nTopography Mission~\\cite{g.2007shuttle} digital terrain model\nadjusted to the local elevation using a GNSS base station.\nBackgrounds due to the radioactivity of the earth, the\natmosphere, the aircraft and its occupants, and cosmic rays, were all\nsubtracted. The count rates were all corrected for radioactive decay, back to\nthe time of the blast. A coefficient to convert the measurements from counts\nper second to kBq\/m$^2$, assuming an infinite and uniform source, was obtained\nfrom experimentally validated Monte Carlo simulation. 
\nFinally, measurements of radioactivity concentration in kBq\/m$^2$ for four\naerial surveys, two conducted after the first blast and one conducted after\neach of the subsequent two blasts, were presented.\n\nIn this paper, we will discuss only the data recorded during the first aerial\nsurvey after the first blast.\nThis survey was flown at a nominal 40~m flying height, with speed of\n25~m\/s and flight-line spacing of 50~m.\n\n\\subsection{Experimental method -- truckborne survey}\n\\label{sec:method_truck}\n\n\\subsubsection{Data collection -- truckborne survey}\nTruckborne surveys were driven by criss-crossing the deposited fallout in an\nextemporaneous pattern following\nthe first and third RDD blasts, restricting to the part of the fallout outside\nof a 500~m x 500~m fenced hot zone~\\cite{Green_RDD_2015,Marshall_thesis_2014}. \nThe detection system was mounted in the bed of a pickup truck and consisted of four 10.2~x~10.2~x~40.6~cm$^{3}$\nNaI(Tl) crystals oriented vertically in a self-shielding arrangement for\nazimuthal direction measurement.\nTruckborne data following the first RDD blast will be presented here for\ncomparison with the aerial survey data. The truckborne data has not undergone\nsufficient analysis for a full quantitative evaluation, but the shape will\nnevertheless provide an interesting comparison for interpretation of the aerial survey data.\n\n\\subsubsection{Sensitivity calculation -- truckborne survey}\n\nThe sensitivity of the truckborne system to a \\mbox{La-140} contamination on\nthe ground was determined using experimentally validated Monte Carlo\nsimulation.\nIn the simulations, the detector was placed with the centre of its sensitive\nvolume at a height of 1.2~m above ground. \nThe NaI(Tl) crystals and their housing were represented in the simulation in \ntheir vertical arrangement. 
\nThe $\\sim 5000$~kg of mass of the pick-up\ntruck carrying the detector was represented by simple blocks of steel.\n\nSensitivity validation data was collected by placing a \\mbox{La-140} source of\nknown emission rate at fixed locations around the truck and detector system.\nThe dead material was adjusted in size and position in the model until an acceptable agreement with the\nsensitivity validation data was obtained.\nThe engine and other materials at the front of the truck effectively block\nradiation coming from that direction, and most of the detected counts arise\nfrom radiation originating to the side and rear of the truck where there is\ncomparatively little material.\nThe uncertainty in the estimation of the sensitivity of the truckborne\ndetector obtained in this manner is large, about 20\\% to 30\\%.\nThis level of accuracy is sufficient for illumination of the value of the spatial deconvolution applied to the aerial\ndata by comparison with data collected from a ground-based system.\n\nFig.~\\ref{fig:truck_sensitivity}~a) shows the number of energy deposits\nregistered in the detector per second as a function of the radius $R$ of a\ndisc-shaped source centered beneath the truckborne detector on the ground as\ndetermined by the simulation.\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=4.7cm]{2crystal_sensivity_25.eps}\\put(18,52){\\textcolor{black}{a)}}\\end{overpic}\n \\hspace*{-.1cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=4.7cm]{trk_scaled_25.eps}\\put(18,52){\\textcolor{black}{b)}}\\end{overpic}\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:truck_sensitivity} \na) Sensitivity of the truckborne survey system to a disc-shaped distribution of isotropic emitters of the \\mbox{La-140} gamma spectrum, as a function of disk radius. Black dots show EGSnrc prediction. Solid curve shows fit of the expression $C(R,H)$ to the synthetic data. The dashed line shows the asymptote of the fit curve and represents the sensitivity to a uniform and infinite sheet source.\nb) Comparison of the shapes of the sensitivity curves for the aerial and truckborne survey systems. The dashed line shows the sensitivity to a disc source relative to the sensitivity to an infinite sheet as a function of disc radius for an aerial survey system at an altitude of 40~m~\\cite{Sinclair_RDD_2015}. 
The solid line shows the equivalent relative sensitivity curve for the truckborne survey system.\n}\n\\end{figure} \nThe expression for the flux, $\\Phi(R,H)$, due to a surface activity concentration $S_0$ at a point an elevation $H$ above a disc-shaped source of radius $R$ can be readily calculated~\\cite{King_1912},\n\\begin{equation}\n \\Phi(R,H) = \\frac{S_0}{2} (E_1(\\mu_{\\mbox{\\scriptsize a}}H) - E_1(\\mu_{\\mbox{\\scriptsize a}}\\sqrt{H^2+R^2})),\n\\label{eqn:flux_vs_R}\n\\end{equation}\nwhere E$_1$ is the exponential integral and $\\mu_{\\mbox{\\scriptsize a}}$ is the linear attenuation coefficient for gamma rays in \nair.\nTo determine the asymptotic sensitivity, we formed a function for the detected\ncounts as a function of the source radius, \n\\begin{equation}\n C(R,H) = \\epsilon \\Phi(R,H),\n\\label{eqn:count_rate}\n\\end{equation}\nand fit the expression for $C(R,H)$ to the synthetic data to obtain the\ndetection efficiency $\\epsilon$.\nThe fit result is shown as the solid curve in\nFig.~\\ref{fig:truck_sensitivity}a) and the asymptotic sensitivity, shown by\nthe dashed line, is $\\sim 71$~s$^{-1}$\/(kBq\/m$^2$).\nAs mentioned, the uncertainty on this sensitivity is large due to the lack of\ndetailed representation of the shielding material of the truck in the\nsimulation.\nNevertheless, the shape of the sensitivity curve is of value, as is the shape\nof the profile\nof counts measured with the truckborne system as it traversed the deposited plume.\n\n\\subsubsection{Comparison of aerial and truckborne sensitivity curves}\nFig.~\\ref{fig:truck_sensitivity}b) shows a comparison of the shapes of the\nsensitivity curves of the ground-based and aerial systems. Despite the tall\nnarrow shape of its detectors, which would tend to increase sensitivity to incoming radiation\nfrom the sides, for the truckborne system at ``altitude'' $H=1.2$~m, a greater\npercentage of detected gamma rays originate close to the point directly\nunderneath the detector as compared to the airborne\nsystem at $H=40$~m.\nThis leads to superior spatial precision in the results of truckborne survey.\n\n\\subsection{Aerial survey template response determination through Monte Carlo simulation}\n\\label{sec:simulation}\nThe radiation transport model EGSnrc~\\cite{EGSnrc1,EGSnrc2} was used to\ngenerate the individual uniform pixel sources $f_i(x,y)$ and to propagate the\ngenerated gamma rays and their interaction products through air and into the detection volume to create the\ntemplate responses $g_i(x,y)$. For the solutions presented herein, the\nsimulation geometry represented the experimental setup during the first RDD\ntrial~\\cite{Sinclair_RDD_2015}. The actual aerial survey system was described and shown\nin photographs in the previous publication and briefly reiterated in \nSect.~\\ref{sec:exp_method}.\nThe model of that system in EGSnrc is shown in\nFig.~\\ref{fig:egspp}~a). The simulated gamma detection system included the\ntwo NaI(Tl) crystals, as well as their aluminum cladding, and felt, foam and\ncarbon fibre enclosure. The exterior-mounted basket containing the detectors\nwas modelled in a simplified manner with 51 3~mm~x~1.5~mm bars of aluminum,\nrepresenting the basket strands, running the length of the basket in a\nsemicircle around its long axis. Dead materials due to the photo-multiplier\ntube readout of the crystals and associated electronics, as well as the\naltimeter and GNSS receivers, were modelled as simple blocks of metal of the\nappropriate overall mass. 
The ground was represented as perfectly flat with\nthe detection system at a height of 40~m. \nThe model was validated experimentally using data from point sources of known\nemission rate placed at\nknown locations around the actual detector. The uncertainty associated with approximation in the representation of the\nmeasurement system in the model was evaluated by variation of the arrangement\nof dead material in the model, by variation of the detector's position,\naltitude and\nattitude and by variation of the detector's energy resolution within\nreasonable limits. This uncertainty was evaluated to be about 12\\% on the \nactivity\n concentration and it was included the overall systematic\nuncertainty quoted in the publication~\\cite{Sinclair_RDD_2015}.\n\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim=0cm -5cm 0cm 3cm, clip = true, height=5.1cm]{basket_model_4.eps}\\put(5,89){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{1.4cm}\n \\begin{overpic}[height=5.5cm]{K_53_resp.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:egspp} \na) The aerial survey system as modelled in EGSnrc. The two\n10.2~x~10.2~x~40.6~cm$^{3}$ NaI(Tl) crystals\nare represented in their housing, shown in purple, with steel blocks at the\nends representing the dead material of the PMTs and readout electronics. The\ntwo crystals are mounted lengthwise with their centres 152.4~cm apart on an aluminium plate which is \nrepresented with mass and dimensions according to the engineering drawing.\nAuxilliary instrumentation is represented by an\naluminium block in the centre of the basket, of the summed instrument mass.\nThis dead material is shown in green in the figure. The aluminum plate on\nwhich the detectors and auxiliary equipment is mounted is itself attached to a\nbasket which is mounted to\nthe skids on the outside of the helicopter. Dead material of the basket is represented by 51 thin aluminum bars running the length of the basket, in a\nsemicircle around the basket long axis, shown in gold. Uncertainty on the\nsensitivities due to misrepresentation of the system in the model has been\nestimated to be approximately 12\\%.\nb) Response function, $g_{53}(x,y)$ to the pixel source $f_{53}(x,y)$,\nnormalized to the number of generated events, where the $x$-axis shows Easting\nand the $y$-axis shows Northing. The true spatial extent of pixel 53 is indicated by the black square.}\n\\end{figure} \n\nThe simulated pixel sources, $f_i(x,y)$, consisted of uniform distributions of\nisotropic emitters of the \\mbox{La-140} spectrum of gamma rays, including all\nemission energies above 0.05\\% emission probability~\\cite{TabRad_v1}. Gammas\nwere emitted into the full 4$\\pi$ solid angle. Each pixel source $f_i(x,y)$\nwas square, and 50~m on a side. Note that with the survey parameters\nmentioned previously, we have one spectrum recorded approximately every\n25~m~x~50~m in the real data. Thus, the number of measurements in the data is\nhigher than the number of fit parameters in the simulation, the problem is over-determined, and we can expect reasonably rapid convergence of the method to a solution which is stable under different starting conditions.\n\nThe entire region to be unfolded measured 1.5~km~x~1.5~km. To speed up\nconvergence of the fitting, we chose to parallelize the computation, breaking\nthe problem up into individual tiles, each 500~m~x~500~m. 
Only eight of these\ntiles were necessary to cover the area over which the radioactivity actually\nsettled. To knit the tiles together, we extended the fit into a border of\npixel sources 50~m wide, surrounding each tile. Thus the fit area\ncorresponding to each tile was actually 600~m~x~600~m. If, after background\nsubtractions, fewer than one count corresponding to a \\mbox{La-140} energy\ndeposit was measured in the detection system in the region of space\ncorresponding to one fit parameter $w_i$, then that fit parameter was assigned\na value of zero and left out of the minimization. Thus, 144 or fewer\nfit parameters $w_i$ were determined from the inversion of each tile.\nTo merge the tiles after the individual inversions, the weighting parameters of the border pixels were simply ignored, and the central 500~m~x~500~m areas of the tiles were placed next to each other.\n\nThe simulation included one detection system for each of the 144 pixel sources, centered\nlaterally within the pixel, at 40~m height. \nHere we have used multiple detection systems at different places at one time to\nrepresent the real world in which one detection system was in different places\nat different times. Given the 40~m height of the detection systems above the\nsource, the 50~m spacing between them and the requirement that the deposited\nenergy lie in the highest-energy photopeak of the source, the error in this\napproximation, which would come from a single emitted gamma ray depositing energy in two\ndifferent detection systems, is negligible.\n\nThe volume of the air in which the gamma rays and electrons were tracked in each\ntile extended 1.2~km in easting (or $x$) and northing (or $y$), and 600~m up.\nA 5~m-thick concrete slab underneath the emitters and filling the lateral\ndimensions of the simulated volume, represented the earth.\n\nThe simulation included all physical interactions of the emitted gammas and of\nthe gammas, electrons and positrons resulting from those interactions. Scattering\nand absorption in the air and ``earth'' of the simulated volume, and in the dead material\nof the basket system and housing of the NaI(Tl) crystals, was included. All\nprocesses leading to energy deposit within the NaI(Tl) crystal from either the\ndirect gamma rays, or the products of scattering, were included. Energy\ndeposits in the NaI(Tl) were then smeared to create simulated spectra,\naccording to the energy resolutions of the crystals determined during the experiment.\n\nFig.~\\ref{fig:egspp}~b) shows the response, $g_{53}(x,y)$, of the 144\ndetection systems of one tile, as a percentage of the number of events\ngenerated in a single one of the pixel sources, $f_{53}(x,y)$, where this\nhappens to correspond to the pixel numbered ``53''. The extent of the source\nis indicated by the black square. Note how the measured response extends much\nmore broadly in space. This is the spatial smearing which will be corrected\nby the unfolding.\n\n\\section{Results}\n\n\\subsection{Results obtained by application of the method to synthetic data}\nWe begin by applying the spatial inversion method to a synthetic data set for\nwhich we know the underlying distribution $f(x,y)$. \nAn annular region consisting of the area between two circles of radius 100~m\nand 250~m, centered at (550~m, 550~m) was uniformly populated with\n10~$\\cdot$~10$^{9}$ emitters of the \\mbox{La-140} gamma sectrum. Considering the\nbranching ratios for the \\mbox{La-140} gamma emissions, this amounts to a source\nconcentration of 28.4~kBq\/m$^2$. 
The annular region\nis indicated by the black outlines in Fig.'s~\\ref{fig:annulus_data_and_fit}~a)\nand~b).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{gdconc_K_giant_fast_99.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hDFIT_GIANT_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:annulus_data_and_fit} \nSynthetic aerial survey and fit results, where the $x$\naxis is Easting, and the $y$ axis is Northing. The source was of concentration \n28.4~kBq\/m$^2$ and\nannular with inner radius 100~m and outer radius 250~m, centered at\n(550~m, 550~m), as indicated by the area between the black circles.\na) Synthetic aerial survey measurement result.\nb) Result of fit of template responses $g_i(x,y)$ to the synthetic aerial survey.}\n\\end{figure} \n\nGeneration of the synthetic dataset makes use of the\nidentical detector simulations as used in generating the template \nresponses $g_i(x,y)$ as described in Section~\\ref{sec:simulation}, however the detection systems were more narrowly spaced\nin the synthetic dataset.\nDetection systems for the synthetic dataset were located every 20~m in $x$ and\n$y$ such that there were 225 detection systems in total in each 600~m~x~600~m\ntile.\n\nThe template sources $f_i(x,y)$ and the template responses $g_i(x,y)$ utilized\nin the inversion are the same as will be used for the real data and are as\ndescribed in Section~\\ref{sec:simulation}.\nThus, the synthetic data measurement density of 20~m~x~20~m exceeds the\ndensity of the parametrization, 25~m~x~25~m, and the problem\nis overdetermined, as desired.\n\nFig.~\\ref{fig:annulus_data_and_fit}~a) shows the synthetic aerial survey\nmeasurement. The result is broader than the underlying true source\ndistribution. Contamination appears to extend outside of the known true\nunderlying borders. The central area of the annulus appears to be filled with\nsignificant contamination. The average concentration of contamination\nreconstructed within the annular region is lower than the known true concentration. \n\nFig.~\\ref{fig:annulus_data_and_fit}~b) shows the result of the fit of the\ntemplate measured activity distributions to the synthetic aerial survey\nmeasurement. The colour scale used in Fig.~\\ref{fig:annulus_data_and_fit}~b) is\nthe same as that of Fig.~\\ref{fig:annulus_data_and_fit}~a).\nThe first observation to note is that the tile knitting procedure\napparently works well. The synthetic data stretches over six of the\noverlapping 600~m x\n600~m simulation tiles. After knitting of the inverted result into adjacent \nnon-overlapping\n500~m~x~500~m tiles, there is seamless matching of the reconstructed activity\nconcentration at the tile borders.\nAlso to note is that within the limitations of the somewhat coarse pixellization of\nthe problem, the survey is well reproduced by the fit. 
\nThe tendency of the\nmeasurement to extend outside of the bounds of the true source is reproduced,\nas is the tendency of the measurement to underestimate the magnitude of the\nconcentration within the source boundary.\n\nThe reconstruction of the true underlying source distribution for the\nsynthetic data is presented in Fig.~\\ref{fig:annulus_inverted}.\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hCONC_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{a)}}\\end{overpic}\\\\\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hpERR_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{b)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hmERR_GIANT_ALL.eps}\\put(18,65){\\textcolor{white}{c)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:annulus_inverted} \nSpatially deconvolved synthetic aerial survey result, $x$ axis is Easting and\n$y$ axis is Northing. Black circles indicate\nthe true annular source distribution.\na) The spatially deconvolved measurement.\nb) Positive statistical uncertainty on the spatially deconvolved measurement.\nc) Negative statistical uncertainty on the spatially deconvolved measurement.}\n\\end{figure} \nAs shown in Fig.~\\ref{fig:annulus_inverted}~a), the inversion process results\nin a reconstructed source distribution which is higher in magnitude and closer\nto the true activity concentration than the\ninitial survey measurement. The boundaries of the actual source are \nbetter reproduced after inversion, in particular the absence of radioactivity\nin the centre of the annulus is recovered.\n\nA major advantage of the spatial\ndeconvolution method presented here is that statistical uncertainties affecting\nthe measurement may be propagated through the minimization procedure by the MINOS algorithm as described\nin Sect.~\\ref{sec:unfolding_method}.\n A map\nshowing the one-sigma positive statistical uncertainty on the reconstructed surface activity\nconcentration is shown in\nFig.~\\ref{fig:annulus_inverted}~b) and the corresponding negative statistical uncertainty\nis shown in Fig.~\\ref{fig:annulus_inverted}~c) where the same colour scale is\nused for the uncertainties as for the measurement shown in \nFig.~\\ref{fig:annulus_inverted}~a).\nConsidering the uncertainties, the ability of the method to reconstruct the\ntrue underlying activity concentration is good. The reconstructed activity\nconcentration magnitude is in agreement with the known true activity\nconcentration within uncertainties in most places. For example, near the\ninner boundary of the annulus where the reconstructed concentration is low\ncompared to the known true concentration, further negative movement of the\nresult is not allowed by the negative uncertainty. 
The positive uncertainty exceeds the\nnegative uncertainty in magnitude, and does allow for a positive movement of the\nreconstructed value.\n\n\\subsection{Real data collected following detonation of a radiological\n dispersal device}\n\\subsubsection{Spatial deconvolution of aerial survey data}\n\nThe aerial survey measurement from the first RDD trial is shown in Fig.~\\ref{fig:data_and_fit}~a).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{gdconc_99_1.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hDFIT_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:data_and_fit} \na) Aerial survey measurement of the distribution of fallout following detonation\n of the radiological dispersal device.\nb) Result of fit of template histograms to the aerial survey measurement.}\n\\end{figure}\nThis result has been published previously~\\cite{Sinclair_RDD_2015} and the methods to\narrive at the result were recapitulated here in Sect.~\\ref{sec:exp_method}.\nWe observe an area of activity concentration exceeding 100~kBq\/m$^2$ near\nground zero of the detonation, with a long deposited plume of maximum width\nof 300~m to 400~m, and significant measured radioactivity extending over a\ndistance of about 2~km.\nThe total systematic uncertainty affecting this measurement was determined to\nbe around 12\\% and the statistical uncertainty peaks at 6~kBq\/m$^2$.\n\nFig.~\\ref{fig:data_and_fit}~b) shows the result of the $\\chi^2$ fit of\nthe weighting coefficients of the template response functions to the\nmeasurement of Fig.~\\ref{fig:data_and_fit}~a). The colour scales of \nFig.'s~\\ref{fig:data_and_fit}~a) and~b) have been chosen to be equal. \n(This colour\nscale has in fact been set to the optimal colour scale for the data from a\ntruckborne survey which will be presented in the upcoming \nSect.~\\ref{sec:truckborne}.)\nThe features of the measurement are broadly well-reproduced by the fit,\nconsidering the pixellization of the reconstruction. 
In particular, the\nmagnitude, width and extent of the contamination are well represented in the\nresult of the fit.\n\nThe underlying distribution of pixel sources which gives rise to the fit\nresult of Fig.~\\ref{fig:data_and_fit}~b) is presented in\nFig.~\\ref{fig:data_inverted}~a).\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hCONC_ALL.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\\\\\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hpERR_ALL.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{hmERR_ALL.eps}\\put(18,70){\\textcolor{white}{c)}}\\end{overpic}\\\\\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:data_inverted} \na) Spatially deconvolved aerial survey measurement of fallout following\nradiological dispersal device blast.\nb) Positive statistical uncertainty on spatially deconvolved measurement.\nc) Negative statistical uncertainty on spatially deconvolved measurement.}\n\\end{figure}\n This is the reconstructed distribution of\nthe surface activity concentration following spatial deconvolution.\nWe find that the width of the deposited plume is now much smaller than the\noriginal undeconvolved measurement, around\n50~m. Correspondingly, the peak concentration is higher, over 400~kBq\/m$^2$.\nNote that the width of the deposited plume is small with respect to the altitude and line spacing of the\nsurvey. This accounts for its significant overestimation when the ``infinite and\nuniform'' approximation was used to obtain a concentration measurement from the\nmeasured counts as shown in Fig.~\\ref{fig:data_and_fit}~a) and~\\cite{Sinclair_RDD_2015}.\nThe length of the deposition is\nhowever much larger than the survey parameters, so in this dimension the\n``infinite and uniform'' sheet approximation is not so bad and the original\nlength measurement is not much altered by the spatial deconvolution.\n\nThe positive and negative statistical uncertainties on the spatially\ndeconvolved deposited fallout map are shown in Fig.'s~\\ref{fig:data_inverted}~b)\nand~c) respectively. Note that the statistical uncertainties affecting the\nmeasurement are very small, and on the colour scale of\nFig.~\\ref{fig:data_inverted}~a) would be difficult to see, so the colour scale in\nFig.'s~\\ref{fig:data_inverted}~b) and~c) is chosen to optimize the\nrepresentation of the information in Fig.~\\ref{fig:data_inverted}~c). The\nuncertainties reveal important features of the measurement and its inversion.\nThe positive uncertainty indicates a region extending away from the measured deposited\nplume axis in which a positive quantity of activity is permitted, however at a\nvery low amount of between 5~kBq\/m$^2$ and 10~kBq\/m$^2$. The negative\nuncertainty shows that the measurement of the presence and distribution of\nradioactivity is significantly above zero.\n\nThe MINOS error propagation includes only the stochastic uncertainties on the\nmeasurement. There are additional uncertainties which are systematic and arise from\napproximations in the representation of the system in the simulation. 
These include mis-representation of the position, particularly the altitude; the attitude (yaw, pitch and roll); the amount of shielding material in the basket containing the detectors; the energy resolution and the air density.\nThe systematic uncertainty on the (undeconvolved) radioactivity concentration distribution was \ndetermined to be about 12\% by variation of these parameters within reasonable limits~\cite{Sinclair_RDD_2015}.\n\nFor the spatially deconvolved measurement, some of the systematic\nuncertainties must be re-examined as they can be expected to have an effect on the shape of the reconstructed fallout distribution, as well as its overall magnitude. These are the systematic uncertainties associated with the measurement of altitude and pitch angle. \nIt is also interesting to examine the effect of the yaw-angle measurement\non the spatially inverted result: it can have no effect at all on the\noriginal undeconvolved measurement, which used the infinite sheet approximation\nfor the source, so that the detector response was invariant under changes of yaw.\n \nThe inertial navigation system\ndetermined the yaw angle \nduring the measurement to be around -30$^\circ$. The spatial inversion was conducted\ntwice. For the central value of the inversion as presented in \nFig.~\ref{fig:data_inverted}~a) the helicopter systems in the template\nhistograms were assigned a yaw of -30$^\circ$ to match the data. To allow for\nchanges in yaw during flight, the regression was repeated with yaw set\nmaximally different at 60$^\circ$. Pitch was varied from the nominal\n0$^\circ$ to -10$^\circ$ according to the maximum deviation of pitch recorded\nby the inertial navigation system while on line during one sortie.\nAltitude was varied from nominal by 1~m to account for approximately one sigma\nof variability in height determination.\nThese variations did not significantly alter the measurement of length and\nwidth of the deposited fallout.\nAdded in quadrature, and considering that some of the variation was already\nincluded in the original systematic uncertainty associated with the\nsensitivity, the deviations do not yield a significant additional systematic\nuncertainty.\nAlthough not significant for the measurements presented here, these sources\nof uncertainty are worth discussing for the benefit of researchers following\nthis approach under different operating conditions.\n\n\subsubsection{Comparison of spatially deconvolved aerial survey data\n with truckborne survey data}\n\label{sec:truckborne}\n\nIn Fig.~\ref{fig:cftruck}a) the truckborne survey result which followed the\nfirst blast is shown overlaid on\nthe undeconvolved aerial survey result from the same blast\non the same colour scale. 
\n\\begin{figure}[h]\n \\begin{center}\n \\begin{tabular}{c}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{truck_on_data_1.eps}\\put(18,70){\\textcolor{white}{a)}}\\end{overpic}\n \\hspace{.4cm}\n \\begin{overpic}[trim = .1cm .1cm .1cm .1cm, clip = true, height=6.5cm, angle=270]{truck_on_unfold_1.eps}\\put(18,70){\\textcolor{white}{b)}}\\end{overpic}\\\\\n \\vspace*{.5cm}\\\\ \n \\begin{overpic}[height=6.5cm, angle=0]{RMSD_RDD.eps}\\put(95,45){\\textcolor{black}{c)}}\\end{overpic}\n \\end{tabular}\n \\end{center}\n \\caption\n \n { \\label{fig:cftruck} \na) Aerial radiation survey following detonation of radiological dispersal\ndevice, with results of radiation survey from a truck-based system overlaid.\nb) Spatially deconvolved aerial radiation survey of the RDD fallout, with\nresults of radiation survey from a truck-based system. Colour scale is the\nsame in a) and b) and is optimized to show the range of values of the result from the truckborne survey.\nc) (top) Transects of the deposited RDD plume following the path of the truck-based survey\nsystem. The\nsolid line shows the aerial survey result sampled at the truck location. The dot-dashed line shows the\ntruckborne survey result and the dashed line shows the aerial survey result\nafter spatial deconvolution sampled at the truck location. (bottom)\nRoot-mean-square deviation of the seven transects versus the location of maximum\nconcentration of each transect according to the truckborne survey. Circles\nshow the aerial survey result, squares show the truckborne survey and\ntriangles show the spatially deconvolved aerial survey result.}\n\\end{figure} \n(The colour\nscale is optimized to show the variation in the truckborne survey result.)\nThe truckborne survey reports much higher surface activity oncentrations than\nthe aerial survey and the fallout appears to be narrower in width.\n\nFig.~\\ref{fig:cftruck}b) shows the same truckborne survey result this time\noverlaid on the spatially deconvolved aerial survey measurement. (The colour\nscale is the same as that used in Fig.~\\ref{fig:cftruck}a).)\nHere, both the width and magnitude of the concentration are in better\nagreement.\n\nThe aerial survey maps were sampled at the locations of the truck and these\nsampled activity concentration values are presented in\nFig.~\\ref{fig:cftruck}c) (top) as a function of the distance from the start of the\ntruck-driven sortie.\nAgain, it is clear that the spatially deconvolved result is generally higher\nat maximum\nmagnitude and more narrow than the undeconvolved aerial survey result. \nThe bottom part of Fig.~\\ref{fig:cftruck}c) shows the root-mean-square\ndeviation (RMSD)\nof the curves calculated by sampling each profile at regular intervals\nfrom the maximum concentration down to the point at which the concentration\nfalls below 10\\% of the maximum. This plot shows that the RMSD width\nof the deposition according to the undeconvolved survey is typically around\n90~m while the width of the deposition according to the\ntruckborne survey and the spatially deconvolved aerial survey tends to be\nsignificantly narrower, closer to 30~m.\nSpatially deconvolved aerial survey thus recovers the\nnarrowness of the fallout to approximately the same spectral\nprecision as the truckborne survey.\n\nTruckborne data is shown for the purpose of shape comparison only and does not\ninclude detailed error analysis. 
In any event, the truckborne system has its\nown finite area of sensitivity largely caused by its ``altitude'' of just over one\nmetre, causing smearing of the measured spatial distribution. \nA contact measurement of the deposited radioactivity can be expected to be even more\nconcentrated in places~\cite{Erhardt_deposition_RDD_2015}.\n\n\section{Discussion and Conclusions}\n\nA radiometric survey would be performed to map fallout following a reactor\naccident or following a malicious\nrelease. To cover a large area quickly, the surveys are initially performed\nusing manned aircraft at some significant altitude $H$. If there is spatial\nvariability in the fallout at distance scales much less than $H$, then\nthe map result of the survey can underestimate the quantity of\nradioactivity on the ground in places.\n\nWe have presented here a method to deconvolve an aerial survey map for the\nspatial smearing caused by measurement at altitude, at least to the extent\npermitted by the sampling density as determined by the aircraft speed and line\nspacing.\nPerformed on synthetic data, the deconvolution method returns a distribution\nwhich is consistent with the true underlying distribution within\nmeasurement uncertainty.\nThe deconvolved distribution is narrower, and shows regions\nof locally higher radioactivity concentration than the initial undeconvolved\nmeasurement.\nPerformed on real data acquired following detonation of a radiological\ndispersal device, the method produces a distribution which is narrower and\nshows radioactivity concentrations as much as four times those of the original\nmeasurement.\n\nThe method can unfold a distribution for smearing effects up to a resolution\npermissible by the sampling frequency of the original measurement. The\nmethod allows for propagation of stochastic measurement\nuncertainties through the unfolding to obtain the measurement uncertainties on\nthe fit parameters.\nThe method relies on application of the MINUIT and MINOS algorithms well known\nin the field of particle physics.\nWhat is perhaps not well known is that these algorithms can tolerate\noperating with hundreds of independent fit parameters, converging to a stable solution in\na reasonable amount of time from an arbitrary starting distribution.\nOur current need was to develop a method to extract the greatest possible\ninformation from a set of aerial surveys performed to improve scientific\nunderstanding of the behaviour of radiological dispersal devices.\nThe method, however, is also applicable to unfolding of any smeared distribution\nof any dimensionality. It could find application in other fields.\n\nThe result of the unfolding is limited in spatial resolution by the requirement\nthat the density of pixellization of the answer not exceed the density of\nmeasurements as determined by the aircraft speed and line spacing. 
\nNevertheless, aerial survey practitioners should be aware that there is \nimproved information about the spatial distribution of the radioactivity \ncontained in their aerial survey map that can be extracted provided good \nknowledge of the response function of the system is available.\nThe achieved spatial resolution for the particular aerial survey following RDD \ndetonation presented here approximately matched that obtained during the\ntruckborne survey over the same deposition (while providing complete coverage).\nContact measurements ($H=0$~m) can be expected to reveal even greater\nlocal spatial variations than the truckborne data.\nStill, the truckborne survey ``height'' of about 1.2~m provides a salient\nbenchmark for spatial resolution as this is close to the average height of an\nadult human.\nShould humans be required to enter a possibly contaminated area guided by the\nresults of aerial survey alone, a spatially deconvolved aerial survey map could\nprovide a better predictor of the activity concentrations they will encounter\nthan the undeconvolved measurement.\n\n\section*{Acknowledgements}\n\nThe authors gratefully acknowledge the leadership of the RDD field trials\nunder L. Erhardt, and helpful comments on the analysis from H.C.J.~Seywerd,\nP.R.B.~Saull and F.A.~Marshall. Funding for this project was provided through\nCanada's Chemical, Biological, Radiological-Nuclear and Explosives Research\nand Technology Initiative. This report is NRCan Contribution 20180112.\n\n\n\n\n\n\n\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction \label{s.intro}}\n\nHigh-mass stars are influential in galactic evolution by dynamically affecting and ionizing the interstellar medium, and also by chemically enriching heavy elements via supernova explosions. It is of fundamental importance to understand the physical processes in the evolution of molecular clouds where high-mass stars are forming. There have been numerous works on high-mass star formation in the literature \citep[for reviews see e.g.,][]{Zinnecker2007,Tan2014}. In spite of these works we have not yet understood how high-mass star formation takes place. One of the promising candidates for the sites where young high-mass stars are forming is the population of very dense and massive cores such as infrared dark clouds in the Milky Way \citep{Peretto2013}. Another possible candidate is the compressed layer formed in cloud-cloud collisions. Observations of a few super star clusters and smaller H$\,${\sc ii} regions in the Milky Way have shown signs of triggered formation of high-mass stars in the collision-compressed layers \citep[e.g.,][]{Furukawa2009,Torii2011,Torii2015,Fukui2014,Fukui2015}. Magneto-hydro-dynamical numerical simulations of two colliding molecular clouds by \citet{Inoue2013} have shown that turbulence is excited and the magnetic field is amplified in the collision-shocked layer between the clouds. The enhanced turbulence and magnetic field increase the mass accretion rate, favoring high-mass star formation.\n\nThe difficulty in studying young high-mass stars lies in the considerably small number of young high-mass stars as compared with low-mass stars in the solar vicinity; this is in part due to the lower frequency of high-mass stars and the heavy sightline contamination in the Galactic disk. 
ALMA is now opening a new possibility to explore high-mass star formation in external galaxies with its unprecedented sensitivity and resolution, and has the potential to revolutionize our view of high-mass star formation. The Large and Small Magellanic Clouds (LMC and SMC), at distances of 50\,kpc \citep{Schaefer2008} and 61\,kpc \citep{Szewczyk2009}, are actively forming high-mass stars. The LMC is an ideal laboratory to see the evolution of stars and clouds thanks to the non-obscured face-on view \citep{Subramanian2010} of all the GMCs in a single galaxy \citep[for a review][]{Fukui2010}. A $^{12}$CO($J$=1--0) survey for GMCs with the NANTEN 4m telescope \citep{Fukui1999,Mizuno2001,Yamaguchi2001,Fukui2008} provided a sample of nearly 300 GMCs at 40\,pc resolution and led to an evolutionary scheme from starless GMCs (Type I) to active star-forming GMCs (Type III) over a timescale of 20 Myrs \citep{Fukui1999,Kawamura2009}. Aiming at revealing the finer-scale details of the molecular gas in the LMC, we have commenced systematic CO observations by using ALMA at sub-pc resolution.\n\nAmong the nearly 300 GMCs over the LMC obtained with NANTEN, N159 is the brightest one with H$\,${\sc ii} regions. Infrared studies have revealed nearly twenty young high-mass stars in N159 with Spitzer and Herschel (\citealt{Chen2010}, \citealt{Wong2011}, \citealt{Carlson2012} and references therein; \citealt{Seale2014}), where two clumps N159 East and West are active in star formation. \citet{Mizuno2010} showed that the CO $J$=4--3\/$J$=1--0 ratio shows enhancement toward the molecular peak without a well-developed H$\,${\sc ii} region in N159 West (N159W). This high excitation condition suggests that N159W is possibly on the verge of high-mass star formation, and thus the initial condition of high-mass star formation may still hold. The preceding observations with the Australia Telescope Compact Array (ATCA), while low in resolution (HPBW$\sim6\arcsec$), presented some hints of small-scale clumps and filaments in N159W~\citep{Seale2012}. N159W is therefore the most suitable target for the purpose of witnessing the onset of high-mass star formation. \n\nWe present the first results of the ALMA observations of N159W in this Letter mainly based on the $^{13}$CO($J$=2--1) data.\n\n\section{Observations \label{s.obs}}\n\nWe carried out ALMA Cycle 1 Band 3 (86--116\,GHz) and Band 6 (211--275\,GHz) observations toward N159W both with the main array 12m antennas and the Atacama Compact Array (ACA) 7m antennas. The observations, centered at ($\alpha_{J2000.0}$, $\delta_{J2000.0}$) = (5$^{\rm h}$39$^{\rm m}$35\fs34, -69\arcdeg45\arcmin33\farcs2), were carried out between October 2013 and May 2014. The target molecular lines were $^{13}$CO($J$=1--0), C$^{18}$O($J$=1--0), CS($J$=2--1), $^{12}$CO($J$=2--1), $^{13}$CO($J$=2--1) and C$^{18}$O($J$=2--1) with a bandwidth of 58.6 MHz (15.3 kHz $\times$ 3840 channels). Among the four spectral windows, one with a bandwidth of 1875.0 MHz (488.3 kHz $\times$ 3840 channels) was used for observations of the continuum emission. The radio recombination lines of H30$\alpha$ and H40$\alpha$ were also included in the windows. The projected baseline length of the 12m array ranges from 16\,m to 395\,m. The ACA covers 9\,m to 37\,m baselines. The calibration of the complex gains was carried out through observations of seven quasars, phase calibration on four quasars, and flux calibration on five solar system objects. 
For the flux calibration of the solar system objects, we used the Butler-JPL-Horizons 2012 model (https:\/\/science.nrao.edu{\slash}facilities{\slash}alma{\slash}aboutALMA{\slash}Technology{\slash}ALMA\_Memo\_Series{\slash}alma594{\slash}abs594). The data were reduced using the Common Astronomy Software Application (CASA) package (http:\/\/casa.nrao.edu), and the visibilities were imaged. \nWe used natural weighting for both the Band 3 and Band 6 data, providing synthesized beam sizes of $\sim$2\farcs5 $\times$ 1\farcs8 (0.6 pc $\times$ 0.4 pc at 50 kpc) and $\sim$1\farcs3 $\times$ 0\farcs8 (0.3 $\times$ 0.2 pc), respectively. \nThe rms noise levels of the molecular line data in Band 3 and Band 6 are $\sim$40 mJy beam$^{-1}$ and $\sim$20 mJy beam$^{-1}$, respectively, in emission-free channels.\nThe comparison of the cloud mass derived from the ALMA observation with that of a single dish observation described in Section \ref{s.r.filaments} suggests that the missing flux of the present ALMA observation is not significant.\n\n\section{Results \label{results}}\n\subsection{A complex filamentary structure \label{s.r.filaments}}\n\nFigure \ref{fig1} shows the $^{13}$CO velocity integrated intensity image of the $J$=2--1 transition. \nThe distribution of the $^{13}$CO emission is highly filamentary. \nThe filaments, often straight or gently curved, have a typical length of 5--10\,pc and a width of 0.5--1.0\,pc defined as a full-width of the emission area at the 3$\sigma$ level of the intensity integrated over a range of 234 to 240\,km\,s$^{-1}$, which may be analogous to the dominance of filaments in the interstellar medium of the solar vicinity \citep[e.g.,][]{Andre2013,Molinari2010}, possibly suggesting that filaments are ubiquitous in other galaxies as well. \nMore details of the filaments will be published separately.\nThe most active star formation is found in two regions as denoted by N159W-N and N159W-S in Figure \ref{fig1}, both of which are associated with enhanced $^{13}$CO emission. \n\nThe cloud mass is estimated from the $^{12}$CO($J$=2--1) intensity by assuming a conversion factor from the $^{12}$CO($J$=1--0) intensity to the column density of X(CO)=$7\times10^{20}$\,cm$^{-2}$\,(K\,km\,s$^{-1}$)$^{-1}$ \citep{Fukui2008} and the typical $^{12}$CO($J$=2--1)\/$^{12}$CO($J$=1--0) ratio toward H{\sc ii} regions of 0.85 (the ratio in the Orion-KL region of \citealt{Nishimura2015}).\n We also assumed the absorption coefficient per unit dust mass at 1.2\,mm and the dust-to-gas mass ratio to be 0.77\,cm$^2$\,g$^{-1}$ and $3.5\times10^{-3}$, respectively, to derive the gas mass from the dust emission (Herrera et al. 2013).\nIn total, the filaments have a molecular mass of 2.4 $\times$ 10$^{5}$ $M_{\odot}$ in N159W, corresponding to 35 \% of the total mass estimated by the lower resolution studies \citep{Minamidani2008,Mizuno2010}. \nWe define the N159W-N and N159W-S clumps at the 5\,$\sigma$ level of Band 6 continuum (white contours in Figure 1), and the masses of these clumps are estimated to be $2.9\times10^4$\,$M_\odot$ and $4.1\times10^3$\,$M_\odot$, respectively, by assuming a dust temperature of 20\,K. Their masses derived from the CO emission are $1.5\times10^4$\,$M_\odot$ and $4.2\times10^3$\,$M_\odot$, respectively, which is consistent with the dust-emission estimate.
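\nAs a cross-check of the conversion chain just described, the arithmetic can be reproduced in a few lines. The sketch below is our own illustration (the integrated intensity and emitting area are placeholders rather than measured values from this work, and the helium correction factor of 1.36 is our assumption):\n\\begin{verbatim}\n# W(12CO 2-1) -> W(1-0) via the 0.85 ratio -> N(H2) via X(CO)\n# -> mass via the emitting area and the mass per H2 molecule.\nX_CO  = 7.0e20    # cm^-2 (K km\/s)^-1, Fukui et al. (2008)\nR21   = 0.85      # 12CO(2-1)\/(1-0) ratio (Nishimura et al. 2015)\nm_H   = 1.67e-24  # g\nM_sun = 1.99e33   # g\npc_cm = 3.086e18  # cm\nmu_H2 = 2.72 * m_H  # mass per H2 incl. He (1.36; our assumption)\n\nW21  = 20.0       # K km\/s, illustrative integrated intensity\narea = 10.0       # pc^2, illustrative emitting area\n\nN_H2 = X_CO * W21 \/ R21                 # H2 column density, cm^-2\nmass = mu_H2 * N_H2 * area * pc_cm**2   # g\nprint('mass ~ %.1e Msun' % (mass \/ M_sun))\n\\end{verbatim}\n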
\n\subsection{Outflows \label{s.r.outflows}}\n\nWe have discovered two molecular outflows with velocity spans of 10--20\,km\,s$^{-1}$ in $^{12}$CO($J$=2--1). Figure \ref{fig2} shows the distribution of the outflow wings. One of them corresponds to N159W-N and the other to N159W-S. The N159W-S outflow has red-shifted and blue-shifted lobes which show offsets of 0.1--0.15\,pc from the peak of the continuum emission. The outflow axis is along the east-west direction. The N159W-N outflow has only the blue-shifted lobe which shows an offset of 0.2 pc from the $^{13}$CO peak. It is possible that the complicated gas distribution around N159W-N may mask a red lobe. The size of the red-shifted and blue-shifted lobes is less than the beam size 0.2\,pc $\times$ 0.3\,pc and the upper-limit timescale of the outflow is roughly estimated to be 10$^{4}$ yrs (e.g., $\sim$0.2\,pc\,\/\,20\,km\,s$^{-1}\approx10^{4}$\,yrs). This is the first discovery of extragalactic outflows associated with a single protostar. The positions of the outflows in N159W-N and N159W-S coincide with YSOs identified based on the {\it Spitzer} data: 053937.56-694525.4 (hereafter YSO-N; Chen et al. 2010) and 053941.89-694612.0 (hereafter YSO-S; \citealt{Chen2010}; P2 in \citealt{Jones2005}), respectively. \n\n\subsection{YSO characteristics \label{s.r.yso}}\nTwo YSOs associated with outflows, YSO-N and YSO-S, have been studied extensively at near- to far-infrared, submillimeter, and radio wavelengths (\citealt{Carlson2012} and references therein; \citealt{Seale2014}; \citealt{Indebetouw2004}). Using the \citet{Robitaille2006,Robitaille2007} YSO model grid and spectral energy distribution (SED) fitter, we model all the available data including the {\it Spitzer} and {\it Herschel} fluxes (1.2--500 $\mu$m), as well as photometry we extracted from {\it Spitzer}\/IRS spectra (5--37\,$\mu$m; \citealt{Seale2009}). The fits of the SEDs indicate that both YSO-N and YSO-S are Stage 0\/I YSOs. \nThe mass and luminosity are estimated to be $31\pm8$\,$M_\odot$ and $(1.4\pm0.4)\times10^5$\,$L_\odot$ for YSO-N, and $37\pm2$\,$M_\odot$ and $(2.0\pm0.3)\times10^5$\,$L_\odot$ for YSO-S. These results are consistent with those from Chen et al. (2010), who also used the Robitaille fitter but without the Herschel constraints. \nThe dynamical ages of the two outflows are consistent with the ages output from the SED fitter \citep{Robitaille2006}, $\sim$10$^{4}$ yrs.\n\nAccording to 3\,cm radio continuum measurements YSO-N is determined to be an O5.5V star, whereas YSO-S is not detected \citep{Indebetouw2004}, suggesting that YSO-S is in an earlier evolutionary state than YSO-N.\nThis is consistent with the non-detection of the He 2.113\,$\mu$m line and with the weak Br$\gamma$ emission in YSO-S \citep{Testor2006}.\nThe \citet{Testor2006} near-IR VLT data revealed that YSO-S consists of at least two sources, whereas their detailed physical properties and relation with the mid-\/far-infrared source are as yet unknown.\n\n\subsection{Filamentary collision in N159W-S \label{s.r.col}}\n\nIn Figure \ref{fig1}, N159W-N shows a complicated $^{13}$CO distribution, whereas the source N159W-S shows a relatively simple $^{13}$CO morphology. The $^{13}$CO distribution in N159W-N consists of several filaments which are elongated generally in the direction from the northeast to southwest, and N159W-S is located at the tip of a V-shaped distribution of two filaments. We shall focus on N159W-S in the following to describe the filament distribution and the high-mass young star, because the simple morphology allows us to understand the physical process unambiguously. 
\n\nFigure \ref{fig3} shows the two filaments toward N159W-S (Figure \ref{fig3}(a) for the whole velocity range, (b) for the red-shifted filament, and (c) for the blue-shifted filament). The two filaments overlap toward N159W-S, where the $^{13}$CO intensity and linewidth are significantly enhanced. Figures \ref{fig3}(d-h) show position-velocity diagrams taken along the two filaments. We see that the filaments have a small velocity span of 3\,km\,s$^{-1}$ north of N159W-S, whereas toward N159W-S the velocity span is significantly enhanced to 8\,km\,s$^{-1}$ at the 15\,\% level of the $^{13}$CO peak in Figure\,\ref{fig3}(g). \nA near-infrared HST image (Carlson et al. 2015, in preparation) indicates that the red-shifted filament is extended toward the south beyond N159W-S, while no CO emission is detected there in our $^{12}$CO or $^{13}$CO observations with ALMA. We also find that the blue-shifted filament has its extension beyond N159W-S in $^{13}$CO. So, although the filaments are apparently terminated toward N159W-S, they are actually more extended, placing N159W-S at the intersection of the two filaments. \n\nIn N159W-S the longer red-shifted filament in the east is highly elongated and mildly curved, having a length of 10\,pc, while the other blue-shifted filament in the west is straight and extends over 5\,pc. N159W-S clearly demonstrates that a high-mass YSO with a bipolar outflow is formed toward the intersection between the two thin filaments, and the velocity dispersion is significantly enhanced at the intersection. \n\nBased on these results we set up a hypothesis that formation of N159W-S was triggered by the collision between the two filaments. We first describe a possible scenario for N159W-S and then discuss the observational constraints on the collision and high-mass star formation. The two crossing filaments overlapping toward N159W-S give direct support for the present scenario. The lower limit for the relative velocity in the collision is given by the velocity difference of the two filaments, 2--3\,km\,s$^{-1}$. The actual collision velocity should be higher than 2--3 km s$^{-1}$ because of the projection effect. According to the magneto-hydro-dynamical numerical simulations of two colliding molecular flows by \citet{Inoue2013}, the collision-shocked layer enhances isotropic turbulence, independent of the direction of the collision, and the velocity span in the shocked layer is similar to the relative collision velocity. The simulations by \citet{Inoue2013} for a velocity difference of 20 km s$^{-1}$ allow us to scale the relative velocity to $\sim$10 km s$^{-1}$ with basically the same physical process. We therefore assume the velocity span in N159W-S, 8 km s$^{-1}$, as the actual collision velocity. This implies that the relative motion of the two filaments is nearly perpendicular to the line of sight, at an angle of roughly 70$^\circ$ ($\cos^{-1}(2.5\/8)\approx72^\circ$).\n\n\section{Discussion on the high-mass star formation processes \label{s.discussion}}\n\nSince the rest of the filaments show no sign of velocity dispersion enhancement with high-mass star formation, we assume that the non-interacting filaments retain the initial condition prior to the collision. \nThe line-mass, i.e., the mass per unit length, of the filaments changes from region to region by an order of magnitude. 
In order to estimate the typical mass of the filaments associated with the N159W-S clump for the following discussion, we select two segments of 1.5\,pc and 1.8\,pc in length and 0.7\,pc and 0.6\,pc in full width at a 35\,\% level of the $^{13}$CO peak for the red-shifted and blue-shifted filaments, respectively, as indicated in Figures 3(b) and 3(c). \nBelow the 35\,\% level, it is hard to estimate the line-mass of the individual filaments separately due to overlapping. Above this level, the mass sampled becomes underestimated.\nWe estimate the total mass of these two segments to be $2.9\times10^3$\,$M_\odot$ from $^{12}$CO($J$=2--1) for a velocity range of 234\,--\,240\,km\,s$^{-1}$. \nWe then estimate the average line-mass of these two filaments to be $8.9\times10^2$\,$M_\odot$\,pc$^{-1}$. \nThe filaments are not detected in the Band 6 continuum at the 3\,$\sigma$ noise level, corresponding to a line-mass of $1.6\times10^3$\,$M_\odot$\,pc$^{-1}$, which is higher than the above CO-based line-mass of the filaments.\n \nThis suggests that the collision took place on a timescale given by $\sim$0.5\,pc divided by 8\,km\,s$^{-1}$, i.e., $\sim$6 $\times$ 10$^{4}$ years ago. We assume that formation of the high-mass star was initiated at the same time. By using the stellar mass 37 $M_{\odot}$, the average mass accretion rate of the star formation is given as 37\,$M_{\odot}$\/$6 \times 10^{4}$\,yrs $\sim$6 $\times$ 10$^{-4}$ $M_{\odot}$ yr$^{-1}$. This rate is well in accord with the theoretical estimate around 10$^{-3}$ $M_{\odot}$ yr$^{-1}$ and satisfies the criterion to form high-mass stars by overcoming the stellar feedback \citep[e.g.,][]{Wolfire1986}. The small outflow timescale 10$^{4}$\,yrs is consistent with this picture involving rapid high-mass star formation.
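\nThe order-of-magnitude chain used above is compact enough to spell out explicitly; the following sketch simply recomputes the collision age and the implied mean accretion rate from the quantities quoted in this section (no new inputs are assumed):\n\\begin{verbatim}\n# Collision age ~ filament width \/ collision velocity;\n# mean accretion rate ~ stellar mass \/ collision age.\nkm_per_pc = 3.086e13\ns_per_yr  = 3.156e7\n\nwidth_pc = 0.5      # pc, characteristic filament width\nv_coll   = 8.0      # km\/s, velocity span toward N159W-S\nM_star   = 37.0     # Msun, SED-fit mass of YSO-S\n\nt_coll = width_pc * km_per_pc \/ v_coll \/ s_per_yr\nprint('t_coll ~ %.0e yr' % t_coll)                  # ~6e4 yr\nprint('Mdot   ~ %.0e Msun\/yr' % (M_star \/ t_coll))  # ~6e-4\n\\end{verbatim}\n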
\n\nThe present case of N159W-S has shown that the high-mass star having 37\,$M_{\odot}$ is formed in a turbulent condition created by the collisional shock. \nThe mass of the N159W-S clump is estimated to be $4\times10^3$\,$M_\odot$ toward its CO peak. \nThere is no sign of such dense clumps over the rest of the filament according to our ALMA data, either in CS($J$=2--1) data, whose line-mass detection limit is about 150\,$M_\odot$\,pc$^{-1}$, or in dust emission data, whose line-mass detection limit is about $1.6\times10^3$\,$M_\odot$\,pc$^{-1}$ by assuming a filament width of 0.6\,pc. \nThis offers an interesting possibility that high-mass stars do not necessarily require dense cloud cores as the initial condition. Instead, high-velocity colliding molecular flows are able to efficiently collect mass into a cloud core non-gravitationally. \citet{Inoue2013} discuss that the mass flow in the collision can be efficiently converged into a shock-induced core due to the oblique shock effect, and that self-gravity is not important in the beginning of the high-mass star formation, while soon after, in the shock-collected core, self-gravity will play a role to form the stellar core \citep[see also][]{Vaidya2013}. \n\nIn the Milky Way we see increasing observational evidence for cloud-cloud collisions which trigger high-mass star formation. Four super star clusters, Westerlund2, NGC3603, RCW38 and DSB[2003]179, are found to be formed by collisions between two clouds \citep{Furukawa2009,Ohama2010,Fukui2014,Fukui2015}. Isolated O stars with H$\,${\sc ii} regions, such as M20 and RCW120, are also suggested to be triggered by cloud-cloud collisions \citep{Torii2011, Torii2015}. N159W-S is in the very early stage of star formation as indicated by the non-detection of ionized gas, as well as by the collision scenario and SED models. Therefore N159W-S is an optimal source to study filamentary collision leading to star formation. \nIt has been shown that the youngest O stars are formed coevally in a duration of $\lesssim10^{5}$\,yrs in NGC3603 and Westerlund1 by careful measurements of stellar ages with HST and VLT \citep{Kudryavtseva2012}. We have here an independent estimate of the stellar age by taking advantage of the simple cloud morphology in N159W-S, and the present time-scale estimate is consistent with that of \citet{Kudryavtseva2012}.\n\n\section{Conclusions \label{s.conclusion}}\n\nIn this Letter we presented the $^{13}$CO ($J$ = 2--1) observations with ALMA of the active star-forming region N159 West in the LMC. We have found the first two extragalactic protostellar molecular outflows toward young high-mass stars, whose dynamical timescale is $\sim$10$^{4}$ yrs. One of the two stars, N159W-S, is clearly located toward the intersection of two filamentary clouds. We set up a hypothesis that two filaments collided with each other $\sim$10$^{5}$ yrs ago and triggered the formation of the high-mass star. The results demonstrate the unprecedented power of ALMA to resolve extragalactic star formation.\n\n\acknowledgments\n\nThe authors thank the anonymous referee for his\/her helpful comments. This paper makes use of the following ALMA data: ADS\/JAO.ALMA\#2012.1.00554.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI\/NRAO and NAOJ. This work was supported by JSPS KAKENHI grant numbers 22244014, 22540250, 22740127, 23403001, 24224005, 26247026; by JSPS and by the Mitsubishi Foundation. MM and ON are grateful for support from NSF grant \#1312902.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section*{Self-injection locking with microresonators and microcombs}\nThe solution proposed in\cite{Maleki2015} for a compact microresonator-based Kerr comb used a relatively broadband (10-1000 times wider than the microresonator's resonance) single-frequency distributed feedback (DFB) laser. In this case, the role of the microresonator was twofold:\n1) acting as an external cavity it narrowed the linewidth of the laser via the self-injection locking effect\cite{Vasil'ev1996,Yarovitsky1998,Donvalkar2018}, and 2) it provided a nonlinear low-threshold Kerr medium where a frequency comb appeared if the laser was appropriately detuned. Laser linewidth narrowing in this approach exploits the coupling of a free-running laser diode with the WGM microresonator\cite{Braginsky1989} having internal and surface Rayleigh backscattering\cite{Ilchenko1992,Ilchenko2000}. These scattering effects resonantly couple an excited traveling wave WGM with the identical but counter-propagating mode, which returns to the laser, locking it to the cavity resonance. This technique was used for stabilisation of a DFB laser down to a record sub--Hz level\cite{Liang2015}. The analytical theory of self-injection locking by a WGM cavity initially proposed in\cite{Oraevsky} was recently revised and extended in\cite{Kondratiev2017}. 
\n\nIt was previously taken for granted that `only single-mode DFB lasers characterized with comparably high internal Q's are suitable for stable self-injection locking using multimode optical cavities'\cite{Xie:15}. Similarly, earlier only single-frequency pre-stabilized external cavity diode lasers (ECDLs) with diffraction gratings\cite{Yarovitsky1998, Velichansky2003} were considered for self-injection locking to a WGM cavity\cite{Maleki2015, Maleki2010}. Any external cavity pre-stabilisation complicates devices and their integration. DFB lasers, however, are not available for many wavelengths and have a limited power that in turn restricts the power of single-mode operation and of the generated frequency combs to the milliwatt level. Meanwhile many applications require higher comb power. For example, absorption dual-comb spectroscopy using surface diffuse scattering has a light collection efficiency of less than $1$\%, so that high incident comb power must be provided\cite{Hensley}. Evaluations in\cite{Shchekin:18} show that dual-comb CARS spectroscopy of blood glucose provides a measurable glucose signal at about 100~mW of frequency comb power with $10$~GHz mode spacing. \n\n\n\section*{Single-frequency lasing with a multi-frequency laser}\n\nWe revealed that the initial mode pre-selection and pre-stabilisation in laser diodes are not required to obtain stable narrow-linewidth single-frequency lasing, and that a WGM microresonator can efficiently serve all of these purposes as well. Consequently, simpler and cheaper FP laser diodes with higher power may be used. \n\nWe demonstrate for the first time an efficient (up to 50\%) conversion of a broadband multifrequency FP laser diode, coupled to a high-Q WGM microresonator, into a narrow linewidth single-frequency light source in the 100 mW power range at optical telecom wavelength, with its subsequent transformation to a single-soliton Kerr comb oscillator. FP laser diode spectrum narrowing occurs in a regime of competition among many longitudinal modes. Self-injection locking solves two critical technical problems of soliton Kerr combs: (1) thermal instability and (2) preferential excitation of multiple-soliton regimes. The soliton states are only possible at strong red detunings from the cavity resonance where the CW internal circulating power is small. That is why the transition from a chaotic comb to a multiple-soliton comb and finally to the single-soliton state via slow laser tuning leads to a fast drop of the internal power, resulting in cooling of the resonator and finally, due to thermo-refractive and thermal expansion effects, in a large detuning from the required regime, causing the loss of the soliton state. Detuning can be compensated by electronic feedback, which requires fast tuning of the laser frequency (on the characteristic timescale of thermal relaxation of the resonator) and is difficult to achieve. That is why different additional nonstationary `power kicking' methods\cite{Kippenberg2016, Brasch2016, Yi2016} were proposed to reach thermal equilibrium. However, the optical feedback in self-injection locking is fast enough to compensate thermal effects in real time. 
Additionally, slower tuning from a chaotic to soliton state, only possible with the supported thermal equilibrium, allows a transition to smaller initial numbers of circulating solitons and finally to a single-soliton state\cite{Lobanov2016}.\n\n\n\section*{Experimental setup} \n\n\begin{figure*}[ht]\n\t\centering\n\t\includegraphics[width=1\textwidth]{Spectra.pdf}\n\t\caption{\textbf{Self-injection locking and spectral narrowing of a multi-frequency diode laser coupled to a MgF$_2$ ultra-high-Q whispering gallery microresonator.} \textbf{(a)} The spectrum of the free-running diode laser. \textbf{(b)} The spectrum of the diode laser stabilised by a high-Q microresonator. \textbf{(c)} Soliton generation in the self-injection locking regime. \textbf{(d)} The FSR beat note signal of the free-running multi-frequency laser; the beat note frequency corresponds to the diode chip length of 2500 $\mu$m. \textbf{(e)} Heterodyne signal between self-injection locked diode laser and narrow linewidth fiber laser -- blue curve, Voigt fit -- red curve. \textbf{(f)} Repetition rate signal of a single-soliton state; the central frequency corresponds to a WGM cavity with 5.5 mm diameter (ESA RBW is $1$~kHz).}\n\t\label{ris:image2}\n\end{figure*}\n\nA schematic view and a picture of the experimental setup are presented in Fig.\ref{ris:image1}. \nThe laser beam from a free-space multi-frequency InP diode is collimated and coupled to a MgF$_2$ WGM resonator with a glass (BK-7) prism \cite{Gorodetsky1999}. Resonantly backscattered Rayleigh radiation returns to the diode laser and forces self-injection locking of the laser frequency to the microresonator's WGM mode. The output beam is coupled into a single-mode fiber and analyzed with an optical spectrum analyzer (OSA), on a photodiode (PD) with an oscilloscope (OSC), and an electrical spectrum analyzer (ESA). The repetition rate of the soliton pulses is monitored by a fast photodiode and ESA. The detuning of the laser frequency from an optical resonance is monitored on a PD with an oscilloscope. A narrow linewidth tunable fiber laser is used for the heterodyne linewidth measurements.\n\nFor pumping millimeter-sized MgF$_2$ resonators, ordinary packaged uncapped free-space multi-frequency laser diodes were used (Seminex, chip length $L=2500$~$\mu$m, central wavelengths 1535~nm, 1550~nm and 1650~nm covering spectral intervals of $\Delta \lambda \sim 10$~nm and a total power of $P\sim$200~mW). Generation of the self-injection locked soliton combs with a repetition rate signal linewidth $\sim1$~kHz was observed when the laser diode driving current was manually adjusted to red detune the pump frequency in the self-injection locked regime within a soliton-supporting high-Q cavity resonance.\n\nThe experimental results demonstrated in Fig.\ref{ris:image2} were obtained with a MgF$_2$ resonator with a diameter of $5.5$~mm and edge curvature radius of $500~\mu$m, corresponding to a free spectral range (FSR) of $\sim 12.5$~GHz (the inverse of the pulse round-trip time in the microresonator). The group velocity dispersion (GVD) for all tested laser frequencies is anomalous, allowing for the generation of dissipative Kerr solitons (DKS). The microresonator was manufactured by precise single-point diamond turning (DAC ALM lathe, see SI)\cite{Tanabe2016}. The ultra-high intrinsic Q-factor exceeding $10^9$ was achieved by polishing with diamond slurries\cite{Maleki2001}. 
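\nFor orientation, the two mode spacings that matter below follow from textbook formulas; in this minimal estimate the refractive indices are nominal literature values that we assume for illustration, not fitted quantities.\n\\begin{verbatim}\n# FP diode chip: FSR = c\/(2 L n); WGM resonator: FSR = c\/(pi D n).\nimport math\nc = 2.998e8                       # m\/s\nL_diode, n_diode = 2.5e-3, 3.4    # chip length; InP group index (assumed)\nD_res,   n_res   = 5.5e-3, 1.37   # resonator diameter; MgF2 index (assumed)\nprint('diode FSR ~ %.1f GHz' % (c \/ (2 * L_diode * n_diode) \/ 1e9))  # ~17.6\nprint('WGM   FSR ~ %.1f GHz' % (c \/ (math.pi * D_res * n_res) \/ 1e9))  # ~12.7\n\\end{verbatim}\n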
Experimental results with other microresonators pumped at different wavelengths are presented in SI.\n\nThe laser diode used to obtain results in Fig.\ref{ris:image2}(a) has an optical spectrum consisting of tens of incoherent lines covering $\sim 10$~nm with mode spacing $\Delta f=\frac{c}{2Ln} \approx 17.68$~GHz around the central wavelength $1535$~nm and $200$~mW maximum output power. The intensity in the laser gain region is approximately uniformly distributed among the lines. Fig.\ref{ris:image2}(d) shows the noisy beatnote signal between adjacent lines of the free-running laser diode at $17.68$~GHz with $\sim 1$~MHz full width at half maximum (FWHM) linewidth. The light back-reflected from the microresonator due to resonant Rayleigh backscattering, provided in the case of crystalline materials mostly by surface inhomogeneities, provides natural feedback for the laser diode. Back reflection is measured in our case at the $10^{-3}$ intensity level (see SI for details). The backscattering intensity depends on the degree of loading (coupling efficiency) of the resonator, and thus can be regulated by changing the gap between the prism and the resonator\cite{Yarovitsky1998}. \n\n\section*{Self-injection locking with a multi-frequency laser diode}\n\nFig.\ref{ris:image2}(b) demonstrates a collapse of the wide spectrum of the multi-frequency laser diode to a single-frequency line. In the case of a multi-frequency diode in the self-injection locking regime, the power from multiple modes due to mode competition is transferred into a single narrow line, and its output power increases. In this way, the microresonator behaves not like a simple filter cavity but plays an active role in lasing. Fig.\ref{ris:image2}(b) illustrates that in the case of self-injection locking to the WGM microresonator, the power in the dominant line increases by $\sim 7$~dB. This effect gives a significant additional advantage of using longer multi-frequency FP diode chips with higher power as compared to DFB lasers (with maximal power $\sim 40$~mW at telecom wavelengths) \cite{Maleki2010,Maleki2015}. The asymmetry of residual laser modes in the self-injection locking regime in Fig.\ref{ris:image2}(b), with the dip on the high-frequency wing, is associated with the anomalous interaction of spectral modes in a semiconductor laser\cite{Bogatov1975, ahmed2002,AhmedYamada2010}. Note that the contrast between the dominant line and residual lines of order 35~dB may in our case be significantly improved with an additional coupler prism as a drop port. In this case, the residual modes of the laser will be filtered out by the resonator. Recently a 446.5 nm self-injection locked laser \cite{Donvalkar2018} was demonstrated with sub-MHz linewidth by using a high-${Q}$ (${Q>10^9}$) WGM ${\rm MgF_2}$ microresonator in conjunction with a multi-longitudinal-mode laser diode. The presented blue FP laser had a two-peak spectrum at low driving current in the free-running regime. Longitudinal mode competition and conversion efficiency of the laser power to the power of a single mode were not considered. \n\nWe used the heterodyne technique to measure the instantaneous laser linewidth in the self-injection locking regime operating at $1550$~nm. The beatnote of the self-injection locked diode laser with the narrow linewidth tunable fiber laser (Koheras Adjustik) was analyzed on a fast PD and is shown in Fig.\ref{ris:image2}(e). 
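\nA standard way to separate white- and flicker-noise contributions in such a beat note is a Voigt fit. The sketch below is a minimal illustration on synthetic data (the actual fitting procedure behind Fig.\ref{ris:image2}(e) may differ in detail; all parameter values here are toy choices):\n\\begin{verbatim}\n# Voigt line shape: the Lorentzian part (gamma) tracks white\n# frequency noise, the Gaussian part (sigma) tracks flicker noise.\nimport numpy as np\nfrom scipy.special import voigt_profile\nfrom scipy.optimize import curve_fit\n\ndef voigt(f, f0, sigma, gamma, amp):\n    return amp * voigt_profile(f - f0, sigma, gamma)\n\nf = np.linspace(-20e3, 20e3, 801)            # Hz around the beat note\ntruth = voigt(f, 0.0, 0.7e3, 0.2e3, 1.0)\ndata = truth + np.random.default_rng(1).normal(0.0, 2e-6, f.size)\n\np, _ = curve_fit(voigt, f, data, p0=(0.0, 1e3, 1e3, 1.0))\nprint('sigma ~ %.0f Hz, gamma ~ %.0f Hz' % (p[1], p[2]))\n\\end{verbatim}\n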
The Voigt profile\cite{Stephan2005} fit provides a Lorentzian width contribution of $370$~Hz due to laser white frequency noise, and a Gaussian contribution of $1.7$ kHz due to flicker frequency noise. The theory of self-injection locking developed in\cite{Kondratiev2017} allows this linewidth to be estimated (see the Supplementary Information (SI) for details) with good agreement. In this way, an efficient and compact single-mode diode laser with a narrow linewidth in the kHz range is demonstrated.\n\n\section*{Soliton microcomb with a multifrequency laser}\n\n\begin{figure*}[ht]\n\t\centering\n\t\includegraphics[width=1\textwidth]{Pulse1.pdf}\n\t\caption{\textbf{Soliton generation with a laser diode.} \textbf{(a)} Cavity mode response on oscilloscope when the frequency of the laser is swept through the WGM cavity resonance, characteristic of self-injection locking, with a step-like transition to a soliton state. \textbf{(b)} The autocorrelation of a soliton waveform obtained from the spectrum in Fig.\ref{ris:image2}c (blue) and the theoretical fit (red). $\tau_R = 80$~ps is the roundtrip time.} \n\t\label{ris:image3}\n\end{figure*}
\n\nThe soliton repetition rate signal in the microwave range is demonstrated in Fig.\\ref{ris:image2}(f). Fig.\\ref{ris:image3}(b) shows the result of the inverse Fourier transform of the spectrum from \\ref{ris:image2}(c) and a fit $2A^2t\/\\sinh(Bt)$ which corresponds to the soliton waveform $A{\\,\\rm sech}(Bt)$, which reveals the soliton duration of $220$~fs. The residual fringes are caused by a dispersive wave formed due to parasitic mode-crossings which one can see in Fig. \\ref{ris:image2}(c). \n\nNote, that no particular technique with amplitude and frequency manipulations and hence no additional equipment was used to control the soliton generation in the presence of thermal nonlinearities\\cite{Kippenberg2014}. This is possible due to the very fast optical feedback which compensates thermal cavity detuning upon switching. Single-soliton states lived several hours in laboratory conditions without any additional stabilisation techniques -- another convenient consequence of self-injection locking. In our experiment coupling rate to soliton resonances was around 15--25\\% due to the non-optimised geometry of the microresonator and prism coupling\\cite{bilenko2017optimisation68043753}, so we did not efficiently use available laser power to generate a wide frequency comb. Nevertheless, our result shows that this proposed method is applicable due to pump power overhead even when coupling conditions are not optimal or the Q-factor is not ultra high, e.g. for integrated microresonators.\n\nWe have checked that the Kerr soliton comb is generated inside the microresonator and is not generated or amplified in the FP laser chip by adding a beam splitter between the laser chip and the microresonator and by observing a spectrum of the light immediately after the gain chip. In this way we confirm that only single frequency lasing (corresponding to Fig.\\ref{ris:image2}(b)) is observed at the output facet of the laser. It should be noted that in our work the FSRs of the FP diode laser and microresonator did not match.\n\nWe observe that single soliton generation is preferable although multi-soliton states are also possible. This results from a relatively slow transition from CW to soliton regime \\cite{Lobanov2016}, due to low speed of pump frequency tuning in the locked regime. Comparing to previous realisations of tuning methods to obtain soliton states (fast forward scan\\cite{Kippenberg2014}, slow backward tuning\\cite{Karpov16}, pump power kicking\\cite{Brasch2016}), in self-injection locking the frequency tuning is orders of magnitude slower. Considering realistic parameters of our system we estimate the tuning speed is $10^4$ times slower than without self-injection locking\\cite{Kondratiev2017} (see SI for details)\n\nWe have also checked and confirmed the generation of the soliton comb in the same microresonator using the traditional technique with a CW narrow linewidth tunable fiber laser (see SI for details). \n\n\\section*{Discussion and conclusion}\n\nIn conclusion, we have demonstrated a new efficient method for achieving a single-frequency narrow linewidth lasing and independently a method for generating stable Kerr soliton combs directly from multi-frequency high-gain laser diode chips using the self-injection locking effect. This result paves a way to compact, low-noise photonic microwave sources and for generating stable powerful frequency combs, which are important for spectroscopy, LIDAR application, astronomy, metrology and telecommunications. 
\n\n\section*{Methods}\n\nIn the experiments we used Indium Phosphide multi-frequency, single-spatial-mode laser diode chips for self-injection locking with high-Q MgF$_2$ WGM resonators. The length of the chips was 1.5--2.5 mm and the free-running spectrum consisted of approximately 50 FP lines, with a beatnote linewidth between adjacent diode FP lines of 1--3~MHz. The output power was 100--500~mW depending on the diode length and applied current. Such chips are commercially available in a wide wavelength range.\n\n\n\n\n\section*{Data availability statement}\nAll data used in this study are available from the corresponding authors upon reasonable request.\n\n\section*{Authors contributions}\nExperiments were conceived by N.G.P., S.K., and M.L.G. Analysis of results was conducted by N.G.P., A.S.V., S.K., G.V.L. and M.L.G. N.G.P., A.S.V., S.K. and A.S.G. performed measurements with diode lasers, and G.V.L. with a fiber laser. G.V.L., N.G.P. and A.S.V. fabricated devices. S.V.P. and M.R. set the research direction relevant for industrial needs -- a comb source for a wearable spectrometer (Samsung Gear). S.V.P. supervised the project from the Samsung side. M.L.G. supervised the project. All authors participated in writing the manuscript.\n\n\begin{acknowledgments}\nThis publication was supported by the Russian Science Foundation (17-12-01413). G.V.L., N.G.P. and A.S.V. were partially supported by the Samsung Research center in Moscow. The authors gratefully acknowledge valuable discussions with Tobias Kippenberg, Kerry Vahala and Vitaly Vassiliev. The authors are grateful to Hong-Seok Lee and Young-Geun Roh from Samsung Advanced Institute of Technologies for help in establishing the project and its further support.\n\end{acknowledgments}\n\n\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction} \la{intro}\nFigure \ref{figConvergenceintro} is extracted from \cite{Paper3} and \cite{Andrea}. \n\begin{figure}[h]\n\t\centering \n\t\includegraphics[width=\linewidth]{FigConvergence.pdf}\n\t\vspace{-6.2cm}\n\t\caption{{\textbf a)} Maximal cubic coupling showing up in the scattering of the lightest particle in a gapped theory with a single bound-state (in this channel at least)~\cite{Paper3}. Convergence is perfect when the bound-state mass (measured in units of the lightest mass) is bigger than $\sqrt{2}$ and quite painful otherwise. {\textbf b)} The allowed chiral zeroes space of putative pion S-matrices associated to an $SU(2)$ chiral symmetry breaking pattern draws a beautiful peninsula-like object with a sharp tip~\cite{Andrea}.\protect\footnotemark \,Convergence is great almost everywhere except close to the tip where numerics struggle. In those cases where the primal problem struggles, having a dual rigorous bound would be a blessing. This paper is about such dual bounds. }\n\t\label{figConvergenceintro}\n\end{figure}\n\footnotetext{There are, at least, two other structures that would benefit from a dual description. One is the ``pion lake''~\cite{Andrea}, found by imposing the presence of the physical $\rho$ resonance only. Another interesting and recent structure is the ``pion river''~\cite{river}, found by imposing additional constraints on the scattering lengths arising from $\chi$PT and monotonicity of the relative entropy. The dual formulation would allow one to rigorously define these structures by excluding theories not compatible with the assumed low-energy QCD behavior.}\n\nThese works explore the allowed space of physical 4D S-matrices. 
One parametrizes a vast family of S-matrices compatible with given physical and mathematical assumptions and maximizes or minimizes quantities within this ansatz to find the boundaries of what is possible. The more parameters the ansatz has, the better the exploration. As the number of parameters becomes very large, one hopes that these boundaries converge towards the true boundaries of the S-matrix space. \n\nSometimes this works beautifully as illustrated in the figure; sometimes convergence is painful, to say the least, as also illustrated in the figure.\nIn those cases where convergence is a struggle, what can we do? Sometimes, it is a simple matter of improving the ansatz; sometimes it is not clear what exactly is missing. And in either case, how can we ever tell how close to convergence we are anyway? \n\nA solution would be to develop a dual numerical procedure -- called the \textit{dual} problem -- where instead of constructing viable S-matrices we would rule out unphysical S-matrix space.\footnote{Such dual bounds were attempted more than 50 years ago already in \cite{Archeo1, Archeo2, Archeo3, Archeo4}. It would be very important to do some archeology work and revive\/translate\/re-discover\/improve those old explorations in a modern computer-friendly era. A beautiful first step is currently being pursued by Martin Kruczenski and Yifei He \cite{MartinTalkBootstrap}. The conformal bootstrap bounds are also exclusion analyses of this sort \cite{bootstrapReview}.} Then we would approach the boundaries of the S-matrix space from two sides, dual and primal, and in this way rigorously bracket the true boundaries of the sought-after S-matrix space. \nThis was recently achieved in two dimensions for simple models with a single type of particle transforming in some non-trivial global symmetry group \cite{Monolith}.\footnote{The primal version of these single particle studies with global symmetry was the subject of \cite{Martin,Miguel,Lucia}; the case without global symmetry was considered in \cite{Creutz,Paper2}.} \n\nThis paper concerns two-dimensional multi-particle systems with arbitrary mass spectra from this dual perspective, clearly one step further up the complexity ladder, closer to the full higher dimensional problem.\footnote{Multi-particle primal problems of this kind were pioneered in \cite{Paper4,ToAppearSUSY}.} We will also consider a different technical approach, complementary to \cite{Monolith}, with some aspects which we hope are more directly transposable to higher dimensions. \n\n\n\section{Dual optimization and the S-matrix bootstrap}\n\label{sec2}\n\n\qquad To achieve the desired dual formulation, it is useful to revisit the S-matrix bootstrap with a slightly different perspective.\n\nIn the \textit{primal} S-matrix bootstrap formulation\n one constructs scattering amplitudes consistent with a set of axioms, or constraints. Such amplitudes are said to be \textit{feasible}, that is, they belong to the allowed space of theories. \nOne then optimizes physical observables, such as the interaction strength between stable particles, in the space of feasible amplitudes. The prototypical example is \cite{Paper2,Creutz}: in a 2D theory with a single stable particle of mass $m$, what is the maximum cubic coupling $g$ consistent with a $2 \rightarrow 2$ scattering amplitude $M$ satisfying the constraints of unitarity, extended analyticity, and crossing? 
\n\nIn other words, we would like to solve the optimization problem\n \\begin{mdframed}[frametitle={Primal problem},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false]\n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } M(s)\\text{, }g^2}{\\text{maximize}} && g^2 \\label{primal}\\\\\n & \\text{constrained by} && \\mathcal{A}(s) \\equiv M(s) - \\Big(M_\\infty -\\frac{g^2}{s-m^2} +\\!\\!\\! \\int\\limits_{4m^2}^\\infty \\! \\frac{dz}{\\pi} \\frac{\\text{Im}M(z)}{s-z {+}i0} {+} \\left(s \\leftrightarrow 4m^2-s \\right)\\Big) =0 \\nonumber\\\\\n & &&\\text{for } s>4m^2, \\label{analandcrossing}\\\\\n & \\text{} && \\mathcal{U}(s) \\equiv 2\\text{ Im}M(s) - \\frac{\\lvert M(s)\\rvert^2}{2\\sqrt{s-4m^2}\\sqrt{s}} \\geq 0 \\qquad \\text{for } s>4m^2. \\label{unitarity}\n \\end{align}\n \\end{mdframed}\nwhere we maximize over the space of analytic functions $M$, and emphasize that one parameter in this infinite dimensional space is the residue of such functions at $s=m^2$, which is equal to~$-g^2$. \nThe first constraint (\\ref{analandcrossing}), an exact equality, imposes that feasible scattering amplitudes must respect crossing, real analyticity, and have singularities determined by physical processes: poles corresponding to one particle states, and cuts corresponding to multi-particle states.\\footnote{It turns out that there is no loss of generality in omitting subtractions from (\\ref{analandcrossing}), since a more careful analysis shows that the inclusion of those leads to the same result (\\ref{dual bootstrap}). We opt for not including subtractions in the main text for the sake of clarity -- see appendix~\\ref{analyticstuff} for a more detailed discussion.}\n We choose to impose this condition for $s > 4m^2$, but because we maximize over analytic functions, feasible amplitudes will have this property for all $s$ in the physical sheet.\\footnote{The physical sheet is defined as the first Riemann sheet encountered after analytically continuing from physical kinematics, $s>4m^2$, using the $+ i \\epsilon$ prescription.\n } The convenience of imposing this condition for $s > 4m^2$ will become clear in time. The second constraint (\\ref{unitarity}) is the physical unitarity condition, equivalent to~$\\lvert S(s)\\rvert \\leq 1$. \n\nSince the quantity we are maximizing, the objective, is a linear map in the space of analytic functions, the map that evaluates the residue at a point, and since the constraints~(\\ref{analandcrossing}),~(\\ref{unitarity}) are affine and convex respectively, the optimization problem we aim to solve is an infinite dimensional convex optimization problem. For such a simple problem, there are now two directions that can be taken. The first option is to solve the infinite dimensional problem analytically. As is well known by now, this follows from a simple application of the maximum modulus principle~\\cite{Paper2, Creutz}. The second option, available in more complicated situations, is to bring the problem to the realm of computers by maximizing our objective in some finite dimensional subspace of analytic functions. For example, one can consider analytic functions that are, up to poles, polynomials of degree at most $N_\\text{max}$ in some foliation variable $\\rho$ that trivializes the constraint (\\ref{analandcrossing}), as done in~\\cite{Paper3}. 
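\n\nTo make this truncation strategy concrete, the following minimal \\texttt{Python} sketch sets up such a finite dimensional version of (\\ref{primal}). It is only an illustration, not the actual setup of~\\cite{Paper3}: the unitarity grid, the cutoff, $N_\\text{max}$ and the crossing-symmetric basis are all illustrative choices, and \\texttt{cvxpy} is just one convenient frontend for the resulting small second-order cone program.\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\nm, Nmax = 1.0, 10\ns = np.linspace(4*m**2 + 1e-3, 200.0, 400)     # unitarity grid (illustrative)\n\n# rho(s) on the upper lip of the cut, and rho(4m^2 - s), which is real there\nrho_s = (np.sqrt(2)*m + 1j*np.sqrt(s - 4*m**2))\/(np.sqrt(2)*m - 1j*np.sqrt(s - 4*m**2))\nrho_t = (np.sqrt(2)*m - np.sqrt(s))\/(np.sqrt(2)*m + np.sqrt(s))\nrho2 = 1.0\/(2*np.sqrt(s - 4*m**2)*np.sqrt(s))  # phase space factor\n\ng2 = cp.Variable(nonneg=True)                  # coupling squared, the objective\na = cp.Variable(Nmax)                          # real ansatz coefficients\n\n# crossing-symmetric ansatz: s- and t-channel poles plus a truncated rho series\n# (the constant M_infinity is set to zero here for simplicity)\nbasis = np.stack([rho_s**n + rho_t**n for n in range(1, Nmax + 1)], axis=1)\nReM = g2*(-1.0\/(s - m**2) - 1.0\/(3*m**2 - s)) + basis.real @ a\nImM = basis.imag @ a\n\n# unitarity 2 Im M >= rho2 |M|^2 is a convex constraint at each grid point\ncons = [cp.square(ReM) + cp.square(ImM) <= cp.multiply(2.0\/rho2, ImM)]\nprob = cp.Problem(cp.Maximize(g2), cons)\nprob.solve()\nprint(\"primal lower bound on g^2:\", g2.value)\n\end{verbatim}\nAny feasible point of this truncated problem is an analytic, crossing symmetric amplitude obeying unitarity on the grid, so the optimum is a lower bound that should creep up towards the true answer as $N_\\text{max}$ grows.\n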
This truncated problem can be efficiently solved with convex optimization software, for example SDPB \\cite{SDPB, scaling}. By choosing and increasing the finite dimensional subspace smartly, one obtains lower bounds to the solution of the primal problem that should converge to the correct bound with more expensive numerics.\n\nThe primal formulation suffers from two important shortcomings. First, for some problems it is hard to identify a simple ansatz, or truncation scheme, that allows for fast convergence. This is often the case in higher dimensional S-matrix bootstrap applications, or when scattering heavy particles in 2D. Second, and perhaps more importantly, one may want to add extra variables and constraints to the primal problem. In the previous example, those variables and constraints could be, respectively, higher point amplitudes and higher point unitarity equations. A feasible $2 \\rightarrow 2$ amplitude in the original primal problem may no longer be feasible in the enlarged space with extra constraints. In those cases, a point in theory space previously said to be allowed becomes forbidden. It would be more satisfying if bounds on the space of theories obtained by studying some scattering subsector remained true once the full set of QFT constraints were imposed.\\footnote{Much in the same way that CFT data excluded by the numerical conformal bootstrap remains excluded once more crossing equations are included into the system.} To overcome both of these shortcomings, we introduce the dual formulation. We use the coupling maximization problem as a guiding example, before generalizing.\n\nConsider the Lagrangian\\footnote{Note $\\mathcal{A}(s)$ is actually real. }\n\\begBvR{equation}\n\\mathcal{L}(M,w,\\lambda) = g^2 + \\int_{4 m^2}^\\infty ds \\left[ w(s) \\mathcal{A}(s) + \\lambda(s) \\mathcal{U}(s) \\right] \\label{lagrangian}\n\\end{equation}\nwith $\\lambda(s) \\geq 0$ and define the dual functional\n\\begBvR{equation}\nd(w,\\lambda) = \\underset{\\{M, g\\}}{\\text{sup}} \\mathcal{L}(M,w,\\lambda).\\label{dualfunctional}\n\\end{equation}\nNotice that the supremum is taken over unconstrained analytic functions $M$.\\footnote{It is useful to think of analytic functions as being defined through their independent real and imaginary parts along a line. Of course, if the dispersion (\\ref{analandcrossing}) were to hold, then those would not be independent. However, since we maximize over generic analytic functions, we are free to treat $\\text{Re }M$ and $\\text{Im } M$ for $s>4m^2$ as independent.} The dual functional $d$ is the central object in the dual formulation due to the following property: \n\n \\begin{mdframed}[frametitle={Weak Duality},frametitlealignment=\\centering,backgroundcolor=black!10, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \\label{duality}\n\t\\begBvR{equation}\n\\text{Let the solution of the primal problem be $g_*^2$. Then }\nd(w,\\lambda) \\geq g_*^2. \\label{weak}\n\\end{equation}\n \\end{mdframed}\nWeak duality holds due to two observations. 
First, note that since\n\\begBvR{equation}\n\\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda) = \n\\begin{cases}\n g^2& \\text{if } M \\text{ is feasible}\\\\\n -\\infty & \\text{otherwise},\n\\end{cases}\\label{since} \n\\end{equation}\nwe have that \n\\begin{equation*}\n\\normalfont g_*^2 = \\underset{\\{M, g\\}}{\\text{sup}} \\left[ \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda)\\right].\n\\end{equation*}\nWeak duality then follows from the max-min inequality\n\\begBvR{equation}\nd(w,\\lambda) \\geq \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\left[ \\underset{\\{M, g\\}}{\\text{sup}} \\mathcal{L}(M,w,\\lambda) \\right] \\geq \\underset{\\{M, g\\}}{\\text{sup}} \\left[ \\underset{\\{\\lambda \\geq 0, w\\}}{\\text{inf }} \\mathcal{L}(M,w,\\lambda)\\right] = g_*^2. \\label{maxmin}\n\\end{equation}\n\nExploring the $\\{w,\\lambda\\}$ space, the space of dual variables, we therefore obtain upper bounds on the values of $g$ allowed by the axioms and exclude regions in theory space. This, in turn, partially solves the first shortcoming of the primal formulation: by providing upper limits on the coupling, it bounds how far from converging an ineffective primal truncation scheme may be. To find the best possible upper bound, we solve the\n \\begin{mdframed}[frametitle={Dual problem (generic)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } w(s)\\text{, }\\lambda(s)}{\\text{minimize}} && d(w,\\lambda) \\label{dual generic}\\\\\n & \\text{constrained by} && \\lambda(s)\\geq 0\\nonumber\n \\end{align}\n \\end{mdframed}\n\nThe construction of dual functionals from a primal optimization problem is standard in optimization theory, but the particularities of the problems encountered in the S-matrix bootstrap lead to important simplifications. One of these is that the analyticity of the scattering amplitude is inherited by the dual variable $w(s)$, conjugate to the analyticity constraint. In fact, let's define a ``dual scattering function\", $W(s)$\\footnote{It is worth stressing that the introduction of an analytic function $W(s)$ is not mandatory. It is possible to work with real densities $w(s)$ and follow the argument presented in this section using the same logic. This possibility is particularly useful in higher dimensions if one wants to assume no more than the proven analyticity domains~\\cite{Archeo4}.}, odd under crossing and whose absorptive part is $w(s)$: \\begBvR{equation}\nW(s) \\equiv \\frac{1}{\\pi}\\int_{4m^2}^\\infty dz \\frac{w(z)}{s-z {+}i0} - \\left(s \\leftrightarrow 4m^2-s \\right). \\label{disp}\n\\end{equation}\n\nThen, swapping a few integrals in (\\ref{lagrangian}) and using $\\frac{1}{\\left(s-z{\\pm}i0\\right)} = \\mp i \\pi \\delta(s-z) + \\mathcal{P}\\frac{1}{(s-z)}$ leads to a very simple representation for the Lagrangian:\n\\begBvR{equation}\n\\mathcal{L}(M,W,\\lambda) = g^2 \\left(1 + \\pi W(m^2)\\right) + \\int_{4 m^2}^\\infty ds \\left[ \\text{Im}\\left(W(s) M(s)\\right) + \\lambda(s)\\, \\mathcal{U}(s) \\right]. \n\\label{eq13}\n\\end{equation}\nNote that the Lagrangian density is now manifestly local in $M$ as the Cauchy kernel from~(\\ref{analandcrossing}) has been nicely absorbed into $W$. 
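\n\nTo see the upcoming extremizations more explicitly, it is useful to spell out the integrand of (\\ref{eq13}) pointwise on the cut. Writing $M(s) = x + i y$ and $W(s) = a + i b$, a one-line computation gives\n\\begBvR{equation}\n\\text{Im}\\left(W M\\right) + \\lambda\\, \\mathcal{U} = b\\, x + a\\, y + \\lambda \\left(2 y - \\frac{x^2+y^2}{2\\sqrt{s-4m^2}\\sqrt{s}}\\right) ,\\nonumber\n\\end{equation}\nwhich, for $\\lambda(s)>0$, is a concave quadratic in $(x,y)$ at each energy and can therefore be maximized pointwise.\n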
This locality, together with the quadratic nature of the constraint equations\\footnote{Dispersions for higher point amplitudes are no longer expected to be quadratic in lower point functions due to the presence of Landau singularities.} leads to the next simplification over generic dual optimization problems: we can perform both the maximization over $M$ in (\\ref{dualfunctional}) and the minimization over~$\\lambda$ in~\\eqref{dual generic} exactly. We now analyze those in sequence. \n\nBefore doing that, first notice that linearity of $\\mathcal{L}$ in $g^2$ implies that \n\\begBvR{equation}\nd(W, \\lambda) = + \\infty \\,\\,\\,\\, \\text{ unless } \\,\\,\\,\\, \\pi W(m^2)=-1. \\label{normunless}\n\\end{equation}\nThis means that unless $W$ is properly normalized at $m^2$, the bounds obtained from the dual functional are vacuous. Hence, in solving the dual problem, there is no loss of generality in restricting ourselves to the space of $W$ satisfying the constraint in (\\ref{normunless}).\n \n The linear Lagrange equations with respect to variations of $M(s)$ for $s>4m^2$ result in\n \\begBvR{equation}\n M_\\text{critical}(s) = \\left[\\text{Im}\\,W(s) + i \\left(2\\lambda(s) + \\text{Re}\\,W(s)\\right)\\right] \\/(2\\, \\lambda(s)\\, \\rho^2_{11}) ,\n\\nonumber\n \\end{equation}\nwhere $ \\rho^2_{11} = 1\\/(2 \\sqrt{s-4m^2}\\sqrt{s})$. Second order variations show that, indeed, this is a local maximum provided $\\lambda(s)>0$. It follows from the definition (\\ref{dualfunctional}) that, provided $\\pi W(m^2)=-1$,\n\n\\begBvR{equation}\nd(W, \\lambda) = \\int_{4m^2}^\\infty ds \\left( \\frac{\\lvert W(s)\\rvert^2}{4 \\lambda(s)} + \\lambda(s) + \\text{Re}\\,W(s)\\right)\\/ \\rho^2_{11} \\,. \\la{dWL}\n\\end{equation}\n\nNext, we minimize over $\\lambda$, leading to $\\lambda=|W(s)|\\/2$. The result is $D(W) \\equiv \\underset{\\lambda \\geq 0}{\\text{inf }} d(W,\\lambda)$ given by \n\\begBvR{equation}\nD(W) = \\int_{4m^2}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\\/ \\rho^2_{11}\\,, \\label{bfunc}\n\\end{equation}\nin which case\\footnote{Note that unitarity is automatically saturated once we minimize in $\\lambda$.} \n\\begBvR{equation}\n M_{\\text{critical}}(s) = \\frac{i}{\\rho^2_{11}}\\left(1 + \\frac{W^*}{\\lvert W\\rvert} \\right).\\nonumber\n\\end{equation}\n\nIn sum, the dual of (\\ref{primal}) simplifies to\n\n \\begin{mdframed}[frametitle={Dual problem (S-matrix bootstrap)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\t\\vspace{-0.4cm}\n \\begin{align} &\\underset{\\text{in } W(s)}{\\text{minimize}} && D(W)=\\int_{4m^2}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\\/ \\rho^2_{11} \\label{dual bootstrap}\\\\\n & \\text{constrained by} && \\pi W(m^2)=-1. \\label{norm}\n \\end{align}\n \\end{mdframed}\n\nThe dual problem can be tackled numerically through the same strategy used for the primal problem, that is, restricting our search to a finite dimensional subspace of analytic $W$s. For example, one could use the $\\rho$ foliation variables to write the ansatz\\footnote{The Ansatz (\\ref{wansatz}) is consistent with the dispersion (\\ref{disp}). 
In particular, the poles in (\\ref{wansatz}) correspond to a delta function contribution in $w(s)$.}\n\\begBvR{equation}\nW_{\\text{ansatz}}(s) = \\frac{1}{s (4m^2-s)}\\sum_{n=1}^{N_\\text{max}}a_n (\\rho(s)^n - \\rho(t)^n), \\label{wansatz}\n\\end{equation}\nwhere \n\\begBvR{equation}\n\\rho(s) = \\frac{\\sqrt{2m^2 } - \\sqrt{4m^2 -s}}{\\sqrt{2m^2} + \\sqrt{4m^2 -s}},\n\\label{rhovariabledef}\n\\end{equation}\nand minimize the functional (\\ref{dual bootstrap}) in the finite dimensional space parametrized by the $a_n$'s. Note that the constraint (\\ref{norm}) is a linear constraint in this space. The functional (\\ref{bfunc}) is nonlinear, but it is convex in $W$. Performing such minimization, say, in \\texttt{Mathematica} shows that, as one increases $N_\\text{max}$, the result of the problem (\\ref{dual bootstrap}) converges to the result of the primal problem (\\ref{primal}); a minimal numerical sketch of such a minimization is given below. This is expected if our optimization problem satisfies\n\n\\begin{mdframed}[frametitle={Strong Duality},frametitlealignment=\\centering,backgroundcolor=black!10, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false]\nThe solutions to the primal (\\ref{primal}) and dual problem (\\ref{dual bootstrap}) are identical, i.e. $g_*^2 = \\underset{\\text{in } W}{\\text{min}} \\text{ } D(W).$ In other words, the $\\ge$ symbol in (\\ref{weak}) is actually an $=$ sign.\n \\end{mdframed}\nThis property is argued for in appendix \\ref{strong}.\n \nTo explain how the dual formulation solves the second shortcoming of the primal optimization, and in view of the applications in section \\ref{application}, let's consider a slightly different class of S-matrix bootstrap problems. Consider a gapped theory with two real stable particles of masses $m_1$ and $m_2$ respectively, with $m_1 \\leq m_2$. Packing the various $2\\to 2$ amplitudes $M_{a \\to b}$ into a matrix $\\mathbb{M}$, the natural matrix generalization of (\\ref{primal}) reads\n \\begin{mdframed}[frametitle={Primal problem (matrix)},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false]\n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } \\mathbb{M}(s)\\text{, }g^2}{\\text{maximize}} && g^2 \\label{primal2}\\\\\n & \\text{constrained by} && \\mathbb{A}(s) = 0 \\qquad \\qquad &&&\\text{for } s>4m_1^2, \\label{analandcrossing2}\\\\\n & \\text{} && \\mathbb{U}(s) \\equiv 2\\text{ Im}\\,\\mathbb{M}(s) - \\mathbb{M}^\\dagger \\text{\\outline{$\\rho$}}\\, \\mathbb{M} \\succeq 0 \\qquad \\qquad &&&\\text{for } s>4m_1^2. \\label{unitarity2}\n \\end{align}\n \\end{mdframed}\nwhere $\\mathbb{A}_{a b} \\equiv \\mathcal{A}_{a \\to b}$ are analogous to (\\ref{analandcrossing}) and impose the correct dispersion relations for the amplitudes $M_{a \\to b}$ (see e.g. (\\ref{AA}) in the next section). Here $\\text{\\outline{$\\rho$}}$ are the phase space factors for the intermediate states (see e.g. (\\ref{rhoMatrix}) in the next section). \nTo obtain the dual problem, we introduce the Lagrangian\n\\begBvR{equation}\n\\mathcal{L}(\\mathbb{M}, \\text{\\outline{w}} ,\\text{\\outline{$\\Lambda$}}) = g^2 + \\int_{4 m_1^2}^\\infty ds\\text{ } \\text{Tr}\\left( \\text{\\outline{w}} \\cdot \\mathbb{A} (s) + \\text{\\outline{$\\Lambda$}} \\cdot \\mathbb{U} (s) \\right), \\label{lagrangian2}\n\\end{equation}\nwhere $\\text{\\outline{w}}$ and $\\text{\\outline{$\\Lambda$}}$ are respectively symmetric and hermitian matrices of dual variables with~$\\text{\\outline{$\\Lambda$}}$ positive semi-definite. The new dual functional\n\\begBvR{equation}\nd(\\text{\\outline{w}}, \\text{\\outline{$\\Lambda$}}) = \\underset{\\mathbb{M}}{\\text{sup }} \\mathcal{L}(\\mathbb{M}, \\text{\\outline{w}} ,{\\text{\\outline{$\\Lambda$}}}) \\label{dualfunctional2}\n\\end{equation}\nsatisfies weak duality by similar arguments as those in equations (\\ref{since}-\\ref{maxmin}). 
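\n\nBefore analyzing this matrix dual problem, let us pause for the promised sketch of the single-component minimization. The snippet below is only an illustration of how (\\ref{dual bootstrap}) can be minimized over the ansatz (\\ref{wansatz}) -- a plain \\texttt{Python}\/\\texttt{scipy} stand-in for the \\texttt{Mathematica} minimization mentioned above; the grid, the integration cutoff and $N_\\text{max}$ are illustrative choices.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\nm, Nmax = 1.0, 8\ns = np.linspace(4*m**2 + 1e-4, 200.0, 4000)    # truncated integration grid\nds = s[1] - s[0]\n\ndef rho(z):                                    # the foliation variable\n    z = np.asarray(z, dtype=complex)\n    return (np.sqrt(2*m**2) - np.sqrt(4*m**2 - z))\/(np.sqrt(2*m**2) + np.sqrt(4*m**2 - z))\n\ndef W(a, z):                                   # the truncated ansatz for W\n    pref = 1.0\/(z*(4*m**2 - z))\n    return pref*sum(a[n]*(rho(z)**(n+1) - rho(4*m**2 - z)**(n+1)) for n in range(len(a)))\n\ndef D(a):                                      # the dual objective D(W)\n    Ws = W(a, s + 1e-12j)                      # evaluate just above the cut\n    return np.sum((Ws.real + np.abs(Ws))*2.0*np.sqrt(s - 4*m**2)*np.sqrt(s))*ds\n\nnorm = {'type': 'eq', 'fun': lambda a: np.pi*W(a, m**2).real + 1.0}  # pi W(m^2) = -1\nres = minimize(D, x0=0.1*np.ones(Nmax), method='SLSQP', constraints=[norm])\nprint(\"dual upper bound on g^2:\", res.fun)\n\end{verbatim}\nBy weak duality, any $W$ obeying the normalization (\\ref{norm}) -- however coarse the truncation -- already yields a rigorous upper bound on $g_*^2$; increasing $N_\\text{max}$ can only improve it.\n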
Returning to the matrix system, the dual optimization problem is \n \\begin{mdframed}[frametitle={Dual problem (matrix)},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,\n\tbottomline=false, leftline=false, rightline=false] \n\t\\vspace{-0.6cm}\n \\begin{align} &\\underset{\\text{in } \\text{\\outline{w}}(s)\\text{, }\\text{\\outline{$\\Lambda$}}(s)}{\\text{minimize}} && d(\\text{\\outline{w}}, \\text{\\outline{$\\Lambda$}}) \\label{dual 2}\\\\\n & \\text{constrained by} && \\text{\\outline{$\\Lambda$}}(s)\\succeq 0. \\nonumber\n \\end{align}\n \\end{mdframed}\n Note that an upper bound on the solution of the primal problem (\\ref{primal}) is obtained by minimizing $d$ in the subspace $\\text{\\outline{w}}_{ab}(s) = \\delta_a^{11} \\delta_b^{11} w(s)$, $\\text{\\outline{$\\Lambda$}}_{ab} = \\delta_a^{11} \\delta_b^{11} \\lambda(s)$, $\\lambda\\geq0$. This is equivalent to the dual problem obtained by including only the amplitude $M_{11\\to11}$ in the bootstrap system, or primal problem. Restricting to a scattering subsector in the dual formulation provides true bounds to the more complete optimization problem. Conversely, bounds obtained by studying some restricted space of amplitudes and constraints remain valid once extra axioms and degrees of freedom are considered. We hope it is clear that the argument provided by means of an example is generic. This solves the second shortcoming of the primal formulation.\n \n\n\\section{An application}\n\\label{application}\n\\subsection{The setup}\nWe now turn our attention to a much richer S-matrix bootstrap problem. We consider a theory with two particles of mass $m_1$ and $m_2>m_1$. We will \\textit{not} assume any global symmetry. For concreteness, we will take\\footnote{Setting $m_1=1$ simply sets our units. Any $m_2> \\sqrt{2}$ would then give very similar plots\/conclusions. We could also consider $m_2<\\sqrt{2}$; the plots are a little bit less eye-pleasing in that case. The significance of the transition point $m_2^*=\\sqrt{2}$ is that this is the crossing invariant point for the $11\\to 11$ process; on either side of this point residues have different signs, leading to quite different optimization results.}\n\\begBvR{equation}\nm_1=1\\,, \\qquad m_2=3\/2 \\,.\\nonumber\n\\end{equation}\nThere are a priori four couplings involving these two particles: $g_{111},g_{112},g_{122},g_{222}$. They would show up as $s$-channel residues in the various scattering amplitudes:\n\\begBvR{equation}\n\\begin{array}{c|c|c}\n\\text{Amplitude} & \\text{Exchange of particle } 1 & \\text{Exchange of particle } 2 \\\\ \\hline\n11\\to 11 & {\\color{red}g_{111}^2} & {\\color{blue} g_{112}^2} \\\\ \\hline\n11\\to 12 &{\\color{red}g_{111}}{\\color{blue} g_{112}} &{\\color{blue} g_{112}}{\\color{cadmiumgreen}g_{122}} \\\\ \\hline\n12\\to 12 & {\\color{blue} g_{112}^2} & {\\color{cadmiumgreen} g_{122}^2} \\\\ \\hline\n11\\to 22 & {\\color{red} g_{111}} {\\color{cadmiumgreen} g_{122}} & {\\color{blue} g_{112}} {\\color{magenta}g_{222}} \\\\ \\hline\n12\\to 22 & {\\color{blue} g_{112}}{\\color{cadmiumgreen} g_{122}} & {\\color{cadmiumgreen} g_{122}} {\\color{magenta} g_{222}} \\\\ \\hline\n22\\to 22 & {\\color{cadmiumgreen} g_{122}^2} & {\\color{magenta} g_{222}^2 }\n\\end{array} \\nonumber\n\\end{equation}\nWe will not consider the full coupled system of six amplitudes. Instead we will consider a nice closed subset involving the $11\\to 11$, $11\\to 12$ and (the forward) $12\\to 12$ processes only (that is, the first three lines in the table). 
As such we will be insensitive to $g_{222}$. We will furthermore consider a section of the remaining three-dimensional space where $g_{122}=0$ so that the problem simplifies slightly to\\footnote{The analysis for any other fixed value of $g_{122}$ follows identically; see more at the end of this section. }\n\\begBvR{equation}\n\\begin{array}{c|c|c}\n\\text{Amplitude} & \\text{Exchange of particle } 1 & \\text{Exchange of particle } 2 \\\\ \\hline\n11\\to 11 & {\\color{red}g_{111}^2} & {\\color{blue} g_{112}^2} \\\\ \\hline\n11\\to 12 &{\\color{red}g_{111}}{\\color{blue} g_{112}} & 0 \\\\ \\hline\n12\\to 12 & {\\color{blue} g_{112}^2} & 0 \n\\end{array} \\nonumber\n\\end{equation}\nand our main goal here is to explore the allowed two dimensional $(g_{112},g_{111})$ space. A convenient way to find the boundary of this space is by shooting radially. We fix an angle $\\beta$ and define a radius $R$ as\n\\begBvR{equation}\n(g_{112},g_{111}) = R(\\cos\\beta,\\sin\\beta) \\,. \\nonumber\n\\end{equation}\nThen we find the maximum value of $R$ for each choice of $\\beta$ to map out the full two-dimensional space. \n\nIn the primal language we will get larger and larger $R$'s as our ansatz becomes more and more complete. In the dual language we will rule out smaller and smaller $R$'s as we improve our ansatz. Sandwiched between the two will be the true (two dimensional section of the) boundary of the S-matrix space. \n\nIt is equally straightforward to fix $g_{122}$ to any other value and analyze another 2d section in this way, or even to collect various values of $g_{122}$ to construct the full $3D$ space. We leave such detailed scans for the future, when we have more realistic setups designed to bootstrap particular relevant physical theories such as the (regular and tricritical) Ising model (perturbed by thermal and magnetic deformations), as discussed in the conclusions. \n\n\\subsection{Single Component Horn}\n\\label{Horn}\nLet us start our search for the two dimensional section of the allowed S-matrix space by focusing on the constraints arising from the single $M=M_{11\\to 11}$ component alone. \n\nThis is a warm-up section and many of the results here are not new: indeed, the primal formulation of single component scattering has been the subject of \\cite{Paper2}; a minor new ingredient we will consider here is the radial search element. (The radial problem for the space of S-matrices with $O(N)$ symmetry and no bound states was introduced in~\\cite{Monolith}.) In appendix H of \\cite{Paper4} an almost identical primal problem was solved analytically; the analytic curves in figure \\ref{figHorn} are obtained by trivially adapting the arguments therein. The dual formulation for these single component cases with several exchanged masses, however, will be novel and provide very useful intuition for the most general case. \n\nThe primal radial problem can be compactly formulated as \n\n\\begin{mdframed}[frametitle={Primal Radial Problem for Single Component},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\\vspace{-0.6cm}\n\\begin{align} &\\underset{\\text{in } {M, R^2}}{\\text{maximize}} && R^2\\nonumber\\\\\n& \\text{constr. by} && \\text{Res}_{m_1^2}(M)=R^2\\sin^2\\beta, \\quad \\text{Res}_{m_2^2}(M)=R^2\\cos^2\\beta \\label{radialcondition}\\\\ \n& s\\geq4m_1^2 && \\mathcal{A}(s)=M(s){-}M_\\infty+\\left(\\frac{g_{111}^2}{s-m_1^2}{+}\\frac{g_{112}^2}{s-m_2^2}{-}\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\,\\frac{\\IM M(z)}{z-s} +(s\\leftrightarrow t)\\right){=}0\\nonumber\\\\\n& s\\geq 4m_1^2&& \\mathcal{U}(s)=2\\IM M(s) -\\rho_{11}^2 |M(s)|^2 \\geq 0. \n \\label{primal bootstrap 11to11}\n\\end{align}\n\\end{mdframed}\n\nWe will now construct the dual problem. If it were not for the additional radial equality constraints~\\eqref{radialcondition} the corresponding dual problem would be given already in eq.~\\eqref{dual bootstrap}.\nIn this case we need to introduce additional Lagrange multipliers $\\nu_1$ and $\\nu_2$\nin the Lagrangian~\\eqref{lagrangian}\n\\begBvR{equation}\n\\mathcal{L}=R^2+\\nu_1 (\\text{Res}_{m_1^2}(M)-R^2\\sin^2\\beta)+\\nu_2 (\\text{Res}_{m_2^2}(M)-R^2\\cos^2\\beta)+\\int_{4m_1^2}^\\infty ds\\,\\left[\\mathcal{A}(s)w(s)+\\mathcal{U}(s)\\lambda(s)\\right].\n\\label{lagrangianhorn}\n\\end{equation}\nNow we follow the logic of section \\ref{sec2} verbatim, modulo a few small differences inherent to the radial nature of the primal problem, which we will highlight. First of all, note that the maximum of the Lagrangian with respect to $R^2$ yields a bounded result only when\n\\begBvR{equation}\n1-\\nu_1 \\sin^2\\beta-\\nu_2 \\cos^2\\beta=0.\\nonumber\n\\end{equation}\nNext, identifying $w(s)=\\IM W(s)$ with $W(s)$ given by (\\ref{disp}) as before will lead to a beautiful dual problem formulation with a totally local optimization target. Importantly, \n\\begBvR{equation}\n\\int_{4m_1^2}^\\infty ds\\,\\mathcal{A}(s)w(s)= \\int_{4m_1^2}^\\infty ds\\,\\text{Im}(M(s)W(s))+ \\pi \\text{Res}_{m_1^2}(M) W(m_1^2)+\\pi \\text{Res}_{m_2^2}(M) W(m_2^2)\\nonumber\n\\end{equation}\nso we see that the optimization with respect to the parameters $\\text{Res}_{m_i^2}(M)$ identifies the Lagrange multipliers $\\nu_i$ with the values of the dual scattering function at the stable masses, $W(m_i^2)$. All in all, we therefore obtain the simple radial generalization of the dual problem (\\ref{dual bootstrap}): \n\\begin{mdframed}[frametitle={Dual Radial Problem for Single Component},frametitlealignment=\\centering,backgroundcolor=red!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\\vspace{-0.3cm}\n\\begin{align} &\\underset{\\text{in } {W}}{\\text{minimize}} && D(W)=\\int_{4m^2_1}^\\infty ds \\left(\\text{Re}(W(s)) + \\lvert W(s)\\rvert \\right)\\/ \\rho^2_{11}\\nonumber\\\\\n& \\text{constrained by} && 1+\\pi\\, W(m_1^2)\\sin^2\\beta+\\pi\\, W(m_2^2)\\cos^2\\beta=0.\n\\label{dual bootstrap 11to11}\n\\end{align}\n\\end{mdframed}\n\n\nNotice again the nice complementarity between the pole singularities associated to bound states in the physical amplitude and the absence of poles in the ``dual scattering function\" $W$ given by (\\ref{disp}), replaced instead by the simple normalization condition (\\ref{dual bootstrap 11to11}). Conversely, when we maximize effective couplings in theories without bound states, the primal S-matrices have no bound-state poles and the dual functionals have poles \\cite{Monolith}. \n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{Hornv1-crop.pdf}\n\t\\caption{Numerical bounds on the coupling space $\\{g_{111},g_{112}\\}$. The blue shaded regions enclose the allowed points for different $N_{\\text{max}}$ in our primal ansatz. 
The red shaded regions mark the points that are rigorously excluded. The thin black analytic curve is the boundary of the allowed region \\cite{Paper4}. \n\t As we increase $N_{\\max}$ from 1 to 5 in the primal problem, the blue regions enlarge, allowing for more and more points and eventually converging to touch the boundary of the permitted space (this is more evident in the ``horn'' region). In the dual strategy, as we increase $N_{\\max}$ from 1 to 5, we exclude more and more points. At convergence the excluded region touches the boundary of the allowed space. We restrict the plot to the first quadrant since it is symmetric under $g \\leftrightarrow -g$.}\n\t\\label{figHorn}\n\\end{figure}\n\n\nIn figure~\\ref{figHorn} we show the numerical results for both the primal (inner blue shaded regions) and the dual (outer red shaded regions) problems.\n\n\n\\subsection{Multiple Component Kinematics}\n\nNext we consider the full system with $11\\to 11$, $11\\to 12$ and \\textit{forward} $12\\to 12$ amplitudes.\\footnote{As reviewed in detail in \\cite{Paper4}, when a particle of type $1$ scatters with a particle of type $2$ it can either continue straight (\\textit{forward amplitude}) or bounce back (\\textit{backward amplitude}). Here we consider the forward process only. This process is nicely crossing symmetric. (The backward process is not; instead it is related by crossing to $11\\to 22$ scattering, so considering this backward process would require more scattering processes to close the system of unitarity equations.)} The two dimensional kinematics of the $11\\to 11$ process and of the \\textit{forward} $12\\to 12$ process are reviewed in great detail in section 2 of \\cite{Paper4}, so here we will mostly focus on the new $11\\to 12$ process.\\footnote{This process was not considered in \\cite{Paper4} because it violates $\\mathbb{Z}_2$ symmetry. Here we don't have $\\mathbb{Z}_2$ symmetry, so it is the next most natural process to consider after the lightest $11\\to 11$ scattering amplitude.} \nThis is a nice, fully symmetric scattering process. No matter which channel we look at, it always describes two particles of type $1$ (in the infinite future or past) scattering into a particle of type $1$ and another of type $2$. As such \n\\begBvR{equation}\nM_{11\\to 12}(s,t,u)\\nonumber\n\\end{equation}\nis fully symmetric under any permutation of the three Mandelstam variables $s,t,u$. Of course, they are not independent. Besides \n\\begBvR{equation}\ns+t+u=3m_1^2+m_2^2 \\la{Plus}\n\\end{equation} \nwhich holds in any dimension, we have the two dimensional constraint \n\\begBvR{equation}\ns t u=m_1^2\\left(m_1^2- m_2^2\\right){}^2\\,. \\la{Times}\n\\end{equation}\n\nEquations (\\ref{Plus}) and (\\ref{Times}) describe a curve. Its projection onto real $s,t,u$ is given by the solid curved blue lines in figure \\ref{triangle}. \n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{1112Triangle.pdf}\n\t\\caption{Mandelstam Triangle for $11\\to 12$ scattering. The x-axis is given by $x=(s+2 t-3 m_1^2-m_2^2)\\/\\sqrt{3}$. The $11\\to 12$ scattering is fully crossing invariant and indeed so is this picture. Physical processes in 2D lie on top of the blue solid lines and outside the red lines; in higher dimensions they fill in the interior of the regions delimited by the blue solid lines as one scans over physical scattering angles. 
A similar triangle for $12\\to 12$ scattering can be found in \\cite{Paper4}.}\n\t\\label{triangle}\n\\end{figure}\nThere, we see four disconnected regions: three non-compact parabola-like curves related by a rotation symmetry and a round triangle in the middle. The three outer curves are the three physical regions associated to the three scattering channels. The one at the top, for instance, corresponds to the $s$-channel. (Each outer curve has left and right components which are equivalent; they are related by a simple parity transformation.) \nThe $s$-channel outer curve starts at $s=(m_1+m_2)^2$, as indicated by the red solid line. That corresponds to the minimal energy necessary to produce a particle of type $1$ and a particle of type $2$ at rest. (Recall that $2$ is heavier than $1$.) Another important energy, marked by the blue dashed line in the figure, occurs at $s=(2m_1)^2$, which would correspond to the minimal energy necessary to produce two particles of type $1$ at rest. This is however \\textit{not} a physical energy for this process, since physical energies are those for which we can produce \\textit{both} the initial \\textit{and} the final state. Nonetheless, the region between $s=4m_1^2$ and $s=(m_1+m_2)^2$ is very interesting because we know precisely what the only possible physical states in that energy range are: they can only be two particle states involving two particles of type $1$~\\cite{Landau}. The equation which reflects this is the so-called \\textit{extended} unitarity relation, which in this case reads\n\\begBvR{equation}\n2\\IM M_{11\\to 12}=\\rho_{11}^2 M_{11\\to 11} M_{11\\to 12}^*, \\qquad 4 m_1^2< s < (m_1+m_2)^2\\,. \\la{extUnit1112}\n\\end{equation}\n\nHere, since we are focusing on the top curve (which is crossing equivalent to any of the other two), we can think of $M$ as a single function of $s$ with \n\\begin{eqnarray}\n&&t(s)=\\frac{1}{2} \\left(3\n m_1^2+m_2^2-s-\\sqrt{\\frac{\\left(s-4 m_1^2\\right) \\left(-2 m_2^2\n \\left(m_1^2+s\\right)+\\left(s-m_1^2\\right){}^2+m_2^4\\right)}{s}}\\right)\\label{t11to12}\\\\\n&&u(s)=\\frac{1}{2} \\left(3\n m_1^2+m_2^2-s+\\sqrt{\\frac{\\left(s-4 m_1^2\\right) \\left(-2 m_2^2\n \\left(m_1^2+s\\right)+\\left(s-m_1^2\\right){}^2+m_2^4\\right)}{s}}\\right)\\,.\\label{u11to12}\n\\end{eqnarray}\nAs a check, note that as $m_2 \\to m_1$ we find $u \\to 0$ and $t\\to 4m_1^2-s$, as expected for the two dimensional elastic scattering of particles of equal mass. \n\nThe extended unitarity relation (\\ref{extUnit1112}) is of course part of a coupled system of equations when we consider all components at once. They can all be nicely packed into matrix form by defining \n\\begBvR{equation}\n\\mathbb{U}\\equiv 2\\IM \\mathbb{M}-\\mathbb{M}^\\dagger \\text{\\outline{$\\rho$}}\\, \\mathbb{M} \\,,\n\\label{unitarityfullsystem}\n\\end{equation}\nwhere\n\\begBvR{equation}\n\\!\\!\\!\\! \\mathbb{M}\\equiv \\begin{pmatrix}\nM_{11\\to 11} & M_{11\\to 12} \\\\\nM_{11\\to 12} & M_{12\\to 12}\n\\end{pmatrix}, \\,\\,\\,\\,\\,\n\\text{\\outline{$\\rho$}} \\equiv \\begin{pmatrix}\n\\rho_{11}^2=\\frac{\\theta\\left(s-4m_1^2\\right)}{2\\sqrt{s-4m_1^2}\\sqrt{s}} & 0 \\\\\n0 & \\rho_{12}^2=\\frac{\\theta\\left(s-(m_1 + m_2)^2\\right)}{2\\sqrt{s-(m_1 + m_2)^2}\\sqrt{s-(m_1 - m_2)^2}} \n\\end{pmatrix} \\la{rhoMatrix}\n\\end{equation}\nThen extended unitarity is the statement that $\\mathbb{U}=\\mathbf{0}$ for $s\\in [4m_1^2,(m_1+m_2)^2]$. 
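\n\nAs a small sanity check of these kinematics, the following minimal \\texttt{Python} sketch (with the masses chosen above, $m_1=1$, $m_2=3\/2$) verifies numerically that the branches (\\ref{t11to12}) and (\\ref{u11to12}) indeed solve the two constraints (\\ref{Plus}) and (\\ref{Times}) in the physical $s$-channel region:\n\\begin{verbatim}\nimport numpy as np\n\nm1, m2 = 1.0, 1.5\ns = np.linspace((m1 + m2)**2, 50.0, 500)   # physical s-channel energies\n\ndisc = (s - 4*m1**2)*((s - m1**2)**2 + m2**4 - 2*m2**2*(m1**2 + s))\/s\nt = 0.5*(3*m1**2 + m2**2 - s - np.sqrt(disc))\nu = 0.5*(3*m1**2 + m2**2 - s + np.sqrt(disc))\n\nassert np.allclose(s + t + u, 3*m1**2 + m2**2)       # s + t + u constraint\nassert np.allclose(s*t*u, m1**2*(m1**2 - m2**2)**2)  # s t u constraint\n\end{verbatim}\nIn the extended unitarity window $4m_1^2<s<(m_1+m_2)^2$ the argument of the square root becomes negative and $t(s)$, $u(s)$ turn complex, as shown in figure \\ref{poles1112} below.\n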
Above $s=(m_1+m_2)^2$ we are at physical energies and the extended unitarity relation is replaced by regular unitarity, which is now nothing but the statement that $\\mathbb{U}$ is a positive semi-definite matrix $\\mathbb{U} \\succeq 0$ for $s>(m_1+m_2)^2$.\\footnote{Strictly speaking we can impose $\\mathbb{U}=\\mathbf{0}$ for a while longer in the unitarity region, more precisely until the energy where we can produce two particles of type $2$ or three particles of type $1$. In practice, the bounds we find will saturate unitarity, so this will be automatic. Because of this, in all implementations, we will actually impose $\\mathbb{U} \\succeq 0$ even in the extended unitarity region, that is for any $s>4m_1^2$. This is very convenient as it renders the problem convex. }\n\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.5]{1112Poles.pdf}\n\t\\vspace{-1.5cm}\n\t\\caption{$t(s)$ (blue) and $u(s)$ (yellow) for $11\\to 12$ scattering and $m_2=\\tfrac{3}{2} m_1$. $u(s)$ and $t(s)$ are two branches of the same analytic function. In the extended unitarity region they are complex. As a function of $s$, all poles are located before the extended unitarity region. The grey horizontal dashed lines are equal to $m_1^2$ and $m_2^2$ and fix the position of the $t$- and $u$-channel poles.}\n\t\\label{poles1112}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.5]{1212Poles.pdf}\n\t\\vspace{-1.5cm}\n\t\\caption{$t(s)$ (blue) and $u(s)=0$ (yellow) for $12\\to 12$ forward scattering and $m_2=\\tfrac{3}{2} m_1$. In the $s$-channel extended unitarity region sit the $t$-channel poles (and vice-versa). The $s$-channel poles lie before the $s$-channel extended unitarity region. As in the previous figure, the grey horizontal dashed lines are equal to $m_1^2$ and $m_2^2$ and determine the position of the $t$-channel poles. }\n\t\\label{poles1212}\n\\end{figure}\n\nFinally we have poles. These correspond to the single particle exchanges when $s$ or $t$ or $u$ are equal to either $m_1^2$ or $m_2^2$. For the $11\\to 12$ process the poles show up in the (rounded) triangle region of the Mandelstam picture in figure \\ref{triangle}, as depicted in figure \\ref{poles1112}. For $12\\to 12$, we have $u=0$ and the two $t$-channel poles lie in the extended unitarity region. Note here the important difference between unitarity and extended unitarity. In the unitarity region the amplitudes describe physical probability amplitudes, are bounded and can thus never have poles. In the extended unitarity region they can in principle, and here they do, as we see in the figure. \n\nAll in all, we can summarize the analytic structure of our amplitudes with their cuts and poles by dispersion relations as usual. 
\nThese can be conveniently packaged into a simple matrix statement $\\mathbb{A}=\\mathbf{0}$ with\n\\begBvR{equation}\n\\mathbb{A}\\equiv \\begin{pmatrix}\n\\mathcal{A}_{11\\to11} & \\mathcal{A}_{11\\to 12} \\\\\n\\mathcal{A}_{11\\to 12} & \\mathcal{A}_{12\\to12}\n\\end{pmatrix} \\, \\la{AA}\n\\end{equation}\nand \n\\begin{align}\n\\mathcal{A}_{11\\to11}(s)\\equiv&M_{11\\to11}(s)-M_{11\\to11}^\\infty+{\\color{red}g_{111}^2}\\left(\\frac{1}{s{-}m_1^2}+\\frac{1}{t(s){-}m_1^2}\\right)+{\\color{blue}g_{112}^2}\\left(\\frac{1}{s{-}m_2^2}+\\frac{1}{t(s){-}m_2^2}\\right)\\nonumber\\\\\n&- \\frac{1}{\\pi}\\int_{4m_1^2}^\\infty \\IM M_{11\\to11}(z)\\left(\\frac{1}{z-s}+\\frac{1}{z-t(s)}\\right)dz\\,,\\label{m11to11disp} \\\\\n\\mathcal{A}_{11\\to12}(s)\\equiv&\\,M_{11\\to 12}(s)-M_{11\\to12}^\\infty+{\\color{red}g_{111}}{\\color{blue}g_{112}}\\left(\\frac{1}{s-m_1^2}+\\frac{1}{t(s)-m_1^2}+\\frac{1}{u(s)-m_1^2}\\right)\\nonumber\\\\\n&-\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty \\IM M_{11\\to12}(z)\\left(\\frac{1}{z-s}+\\frac{1}{z-t(s)}+\\frac{1}{z-u(s)}\\right)dz\\,,\\label{m11to12disp}\n\\end{align}\n\\begin{align}\n\\mathcal{A}_{12\\to12}(s)\\equiv&\\,M_{12\\to12}(s)-M_{12\\to12}^\\infty+{\\color{blue}g_{112}^2}\\left(\\frac{1}{s-m_1^2}+\\frac{1}{t(s)-m_1^2}\\right)\\nonumber\\\\\n&-\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty \\IM M_{12\\to12}(z)\\left(\\frac{1}{z-s}+\\frac{1}{z-t(s)}\\right)dz\\,.\n\\label{m12to12disp}\n\\end{align}\nWe hope there will be no confusion created by the fact that $t(s)$ signifies different things depending on which equation we are in, since crossing is implemented differently for different components. In~(\\ref{m11to11disp}) it is $t(s)=4m_1^2-s$; in (\\ref{m11to12disp}) it is given by (\\ref{t11to12}); and in (\\ref{m12to12disp}) it is given by~$t(s)=2m_1^2+2m_2^2-s$. In what follows, it should always be clear from the context which~$t(s)$ we are talking about. \n\n\n\\subsection{Multiple Component Dual Problem} \\la{mDual}\n\nThe formulation of the dual problem for the multiple component scenario can be derived following the steps outlined in Sec.~\\ref{sec2}.\nThere are, however, two practical obstacles: one is the complicated analytic structure of the $11\\to12$ component, the other is the presence of the \\emph{extended} unitarity region. \nIn this section we shall solve both problems and arrive at an elegant and efficient dual numerical setup.\n\nAs always, we start from the primal radial problem\n\\begin{mdframed}[frametitle={Primal Radial Problem for Multiple Component},frametitlealignment=\\centering,backgroundcolor=blue!6, leftmargin=0cm, rightmargin=0cm, topline=false,bottomline=false, leftline=false, rightline=false] \n\\vspace{-0.4cm}\n\\begin{align} &\\underset{\\text{in } {R^2,\\mathbb{M}}}{\\text{maximize}} && R^2\\nonumber\\\\\n& \\text{constr. by} && \n0=c_1 \\equiv \\text{Res}_{m_1^2}(M_{11\\to11})- R^2\\sin^2\\beta \\,, \\nonumber\\\\\n& && 0=c_2\\equiv \\text{Res}_{m_2^2}(M_{11\\to11})-R^2\\cos^2\\beta \\,,\\nonumber \\\\\n& && 0=c_3 \\equiv \\text{Res}_{m_1^2}(M_{11\\to12})-R^2\\sin\\beta\\cos\\beta\\,, \\nonumber\\\\ \n& && 0=c_4 \\equiv \\text{Res}_{m_1^2}(M_{12\\to12})-R^2\\cos^2\\beta \\,,\\nonumber\\\\\n& s > 4m_1^2 && \\mathbb{A}=0 \\qquad \\text{where $\\mathbb{A}$ is given in (\\ref{AA})} \\nonumber \\,, \\\\\n& s > 4m_1^2&& \\mathbb{U} \\succeq 0\\qquad \\text{where $\\mathbb{U}$ is given in (\\ref{unitarityfullsystem})}\\,.\n \\label{primal bootstrap 11to12}\n\\end{align}\n\\end{mdframed}\nWere it not for the $c_i=0$ equality constraints related to the radial problem, this setup would fit~(\\ref{primal2}). \nNote also that the last constraint automatically incorporates both unitarity and extended unitarity. Sometimes it is convenient to analyze it separately in the extended and regular unitarity regions, corresponding to $s$ smaller\/bigger than $(m_1+m_2)^2$ respectively. \n\nWe start our path towards the dual problem with the usual Lagrangian starting point \n\\begBvR{equation}\n\\mathcal{L}{=}R^2 + \\sum_{i=1}^4 c_i \\nu _i +\n\\int_{4m_1^2}^\\infty {\\rm tr~}{(\\text{\\outline{w}} \\mathbb{A})}\\,ds\n+\\int_{4m_1^2}^\\infty{\\rm tr~}{(\\text{\\outline{$\\Lambda$}} \\mathbb{U})}\\,ds,\n\\label{fullsystlag}\n\\end{equation}\nwith $$\\text{\\outline{w}} =\\begin{pmatrix}\nw_1 & \\tfrac{1}{2}w_2\\\\\n\\tfrac{1}{2}w_2 & w_3\n\\end{pmatrix}$$ \nand $\\text{\\outline{$\\Lambda$}}$ positive semi-definite. Next we want to identify $\\text{\\outline{w}}$ as the discontinuities of full analytic functions $\\mathbb{W}$ such that the resulting Lagrangian becomes manifestly local. This is still possible here but turns out to be more interesting than before because of the richer $11\\to 12$ kinematics reviewed in the previous section. The final result is \n\\begBvR{equation}\\mathbb{W} =\\begin{pmatrix}\nW_1 & \\tfrac{1}{2}W_2\\\\\n\\tfrac{1}{2}W_2 & W_3\n\\end{pmatrix} \\label{Wmat}\n\\end{equation}\nwith the dispersive representations of the three \\emph{dual scattering functions}\n\\begin{align}\n\\label{analyticWcitable}\nW_1(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\, \\IM W_1(z)\\left(\\frac{1}{z-s}-\\frac{1}{z-4m_1^2+s}\\right),\\\\\nW_2(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\,\\IM W_2(z)\\left(\\frac{1}{z-s}+\\frac{J_t(s)}{z-t(s)}+\\frac{J_u(s)}{z-u(s)}\\right), \\la{W2disp}\\\\\nW_3(s)&=\\frac{1}{\\pi}\\int_{4m_1^2}^\\infty dz\\, \\IM W_3(z)\\left(\\frac{1}{z-s}-\\frac{1}{z-(m_1+m_2)^2+s}\\right).\n\\end{align}\nNote that the first and last lines here are pretty much as before: they correspond to anti-crossing symmetric functionals $W_1$ and $W_3$. The middle line -- with its Jacobians $J_t=dt\/ds$ and $J_u=du\/ds$ from (\\ref{t11to12},\\ref{u11to12}) -- is more interesting and more subtle. We explain its origin in full detail in appendix \\ref{W2explanation}. 
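\n\nThe Jacobians can be evaluated in closed form by differentiating (\\ref{t11to12}) and (\\ref{u11to12}); as a quick numerical check, one can also differentiate the branches by finite differences (a \\texttt{Python} sketch, repeating the kinematics of the earlier snippet so that it is self-contained):\n\\begin{verbatim}\nimport numpy as np\n\nm1, m2 = 1.0, 1.5\ns = np.linspace((m1 + m2)**2, 50.0, 500)\ndisc = (s - 4*m1**2)*((s - m1**2)**2 + m2**4 - 2*m2**2*(m1**2 + s))\/s\nt = 0.5*(3*m1**2 + m2**2 - s - np.sqrt(disc))\nu = 0.5*(3*m1**2 + m2**2 - s + np.sqrt(disc))\n\n# J_t = dt\/ds and J_u = du\/ds by finite differences; the exact derivatives\n# would of course be used in a real implementation\nJt, Ju = np.gradient(t, s), np.gradient(u, s)\nassert np.allclose(Jt + Ju, -1.0, atol=1e-3)   # since s + t + u is constant\n\end{verbatim}\nThe sum rule $J_t+J_u=-1$ is simply the derivative of the constraint (\\ref{Plus}).\n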
\n\n\nThen we have the crucial relation required to render the Lagrangian local:\n\begin{eqnarray}\n\\int_{4m_1^2}^\\infty {\\rm tr~}{(\\text{\\outline{w}}\\,\\mathbb{A})}\\,ds &=&\\int_{4m_1^2}^\\infty \\IM {\\rm tr~}{(\\mathbb{W}\\, \\mathbb{M})}\\,ds+\\pi\\big(\\underset{m_1^2}{\\text{Res}}(M_{11\\to11}) W_1(m_1^2)+\\underset{m_2^2}{\\text{Res}}(M_{11\\to11}) W_1(m_2^2) \\nonumber\\\\\n&&+\\underset{m_1^2}{\\text{Res}}(M_{11\\to12}) W_2(m_1^2)+\\underset{m_1^2}{\\text{Res}}(M_{12\\to12}) W_3(m_1^2) \\big)\\,.\\nonumber\n\\end{eqnarray}\nOnce we plug this relation into our Lagrangian (\\ref{fullsystlag}), the last line nicely combines with the first two terms there; these terms are the only terms where $R$, $\\nu_i$ and the various residues appear.\\footnote{{Recall that $R$, the residues and $M(s)$ for $s>4m_1^2$ are our primal variables, while $\\nu_i$ and $W_i(s)$ are our dual variables.}} Maximization with respect to the residues will relate the various functionals $W$ evaluated at the stable particle masses to the Lagrange multipliers $\\nu_i$ as before, while maximization with respect to $R$ will lead to a linear constraint involving all these functionals, which plays the important role of our normalization condition. It reads: \n\\begBvR{equation}\n1+\\pi(W_1(m_1^2) \\sin^2\\beta+W_1(m_2^2)\\cos^2\\beta+W_2(m_1^2)\\sin\\beta\\cos\\beta+W_3(m_1^2)\\cos^2\\beta)=0 \\,.\n\\label{RadialCondFull}\n\\end{equation}\nAt this point we have already gotten rid of the Lagrange multipliers, the radius and the residues; our (partially extremized) Lagrangian is now a functional of the real and imaginary parts of the amplitudes $\\mathbb{M}$ above $4m_1^2$ and of the functionals $W_i$, also for $s>4m_1^2$. Our dual functional $d$ is therefore obtained by maximizing over the amplitudes $\\mathbb{M}$, \n\\begBvR{equation}\nd( \\mathbb{W},\\text{\\outline{$\\Lambda$}})= \\sup_{\\mathbb{M}} \\int_{4m_1^2}^\\infty ds \\Big( {\\rm tr~}\\!(\\IM \\mathbb{W}\\, \\mathbb{M})+{\\rm tr~}{(\\text{\\outline{$\\Lambda$}} \\mathbb{U}(\\mathbb{M}))}\\Big)\\,.\n\\label{LindaLagrangia}\n\\end{equation}\nSince we are dealing with small $2\\times 2$ matrices, we found it convenient to go to components at this point and to separate the last integral into its extended and regular unitarity contributions. \n\nFor example, using \n\\begBvR{equation}\n\\text{\\outline{$\\Lambda$}}=\\begin{pmatrix}\n\\lambda_1 & \\tfrac{1}{2} \\lambda_2\\\\\n\\tfrac{1}{2}\\lambda_2^* & \\lambda_3\n\\end{pmatrix}\\succeq \\mathbf{0}, \\label{Lmat} \n\\end{equation}\nand evaluating the equations of motion for $\\RE M_{12\\to12}$ and $\\IM M_{12\\to12}$ in the extended unitarity region we get\n\\begBvR{equation}\n\\RE W_3+2\\lambda_3=0,\\qquad \\IM W_3=0.\\nonumber\n\\end{equation}\nThese two equations constrain the dual scattering function associated to the $12\\to 12$ process to have a discontinuity starting only at $(m_1+m_2)^2$. Moreover, the positive semi-definiteness condition on $\\text{\\outline{$\\Lambda$}}$ \nimplies\\footnote{Second order variations show that the full positive semidefiniteness of $\\text{\\outline{$\\Lambda$}}$ is required for the critical $\\mathbb{M}_c$ to be a maximum.} that \n\\begBvR{equation}\n\\lambda_3(s)\\geq 0 \\qquad \\implies \\qquad \\RE W_3(s)\\leq0, \\qquad \\text{for } 4m_1^2<s<(m_1+m_2)^2\\,.\\nonumber\n\\end{equation}\nNote also that, for the forward $12\\to12$ kinematics, $t(s)=2m_1^2+2m_2^2-s$ maps this region onto $(m_1-m_2)^2<t<2(m_2^2-m_1^2)$, with $t((m_1+m_2)^2)=(m_1-m_2)^2$