\\section{I. Introduction} \nMachine-learning has progressed significantly over the last decade\nin various areas of the physical sciences~\\cite{Snyder_2012,Gabbard_2018, Mills_2018},\nfollowing theoretical works in the area of neural networks (see~\\cite{Hornik_1989,Cybenko_1989} for examples).\\\\\n\\indent\nIn the area of fluid dynamics, \\textcite{Ling_2016} presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data (see also~\\cite{Kutz_2018}). \n\\textcite{Gamahara_2017} uses an artificial neural network to find a new model of the \nsubgrid-scale stress in large-eddy simulation.\nBy using ``Long Short-Term Memory~(LSTM)''~\\cite{Hochreiter_1997}, \\textcite{Wan_2018} studies data-assisted reduced-order modeling of extreme events in various dynamics, including the Kolmogorov flow of the two-dimensional incompressible Navier--Stokes equation.\nSee also~\\textcite{Vlachas_2018} for the result on the barotropic climate model.\\\\\n\\indent \nIt has recently been reported that reservoir computing, a brain-inspired machine-learning framework that employs a data-driven dynamical system,\n is effective in the inference of future behavior such as time-series, frequency spectra \nand Lyapunov spectra~\\cite{Verstraeten_2007,Inubushi_2017, Zhixin_2017, Pathak_2017,Ibanez_2018,Pathak_2018,Antonik_2018}.\n\\textcite{Pathak_2017} exemplifies, using the Lorenz system and the Kuramoto--Sivashinsky system, that the model obtained by reservoir computing can generate an arbitrarily long time-series whose Lyapunov exponents approximate those of the input signal. 
\\\n\\indent A reservoir is a recurrent neural network whose internal parameters are not adjusted to fit the data in the training process. \nInstead, the reservoir is trained by feeding it an input time-series and fitting a linear function of the reservoir state variables \nto a desired output time-series. \nThis approach allows reservoir computing to save a great amount of computational cost, \nwhich enables us to deal with complex deterministic behavior. \nThe framework was proposed as Echo-State Networks~\\cite{Jaeger_2001,Jaeger_2004} and Liquid-State Machines~\\cite{Maass_2002}.\\\\\n\\indent \nInference of a fluid flow is known to be difficult but important in both physical and industrial aspects. \nIn this paper, we infer variables of a chaotic fluid flow by applying the method of reservoir computing without prior knowledge of the physical process. \\\\\n\\indent\nAfter introducing the method of reservoir computing in Section II and a fluid flow in Section III, \nwe explain how to apply the method to the inference of fluid variables, \nand show that inferences of both microscopic and macroscopic behaviors are successful in Sections IV and V, respectively. \nIn Section VI, we exemplify that a time-series inference of high-dimensional dynamics is possible by using delay coordinates, even when the number of measurements is smaller than the Lyapunov dimension of the attractor. \nDiscussions and remarks are given in Section VII.\n\\section{II. Reservoir computing} \nReservoir computing has recently been used in the inference of complex dynamics~\\cite{Zhixin_2017, Pathak_2017,Pathak_2018,Ibanez_2018,Lu_2018}. \nReservoir computing focuses on the determination of a translation matrix from reservoir state variables to the variables to be inferred \n(see eq.~(\\ref{eq:output2})).\nHere we review the outline of the method~\\cite{Jaeger_2004,Zhixin_2017}. 
We consider a dynamical system \n$$\\frac{d\\mathbf{\\phi}}{dt}=\\mathbf{f}(\\mathbf{\\phi}),$$\ntogether with a pair of $\\phi$-dependent, vector-valued variables \n\\begin{equation*}\n\\mathbf{u}=\\mathbf{h}_1(\\mathbf{\\phi})\\in \\mathbb{R}^{M} \n~~\\text{and}~~\n\\mathbf{s}=\\mathbf{h}_2(\\mathbf{\\phi})\\in \\mathbb{R}^{P}.\\label{eq:input}\n\\end{equation*} \nWe seek a method for using the continued knowledge of \n$\\mathbf{u}$ to determine an estimate of $\\mathbf{s}$ as a function of time when direct measurement of $\\mathbf{s}$ \nis not available, which we call the {\\bf partial-inference}. \nWe also consider the {\\bf full-inference}, for which we have knowledge of $\\mathbf{u}$ only for $t\\le T$.\nConcerning the algorithm, this is just a variant of the partial-inference~\\cite{Pathak_2017,Pathak_2018}, and will be explained later.\\\\\n\\indent \nThe dynamics of the reservoir state vector \n$$\\mathbf{r}\\in \\mathbb{R}^{N}~(N \\gg M),$$\n is defined by\n\\begin{equation}\n\t\\mathbf{r}(t+\\Delta t)=(1-\\alpha)\\mathbf{r}(t)+\\alpha \\tanh(\\mathbf{A}\\mathbf{r}(t)+\\mathbf{W}_{\\text{in}}\\mathbf{u}(t)\n\t),\\label{eq:reservoir}\n\\end{equation}\nwhere $\\Delta t$ is a relatively short time step.\nThe matrix $\\mathbf{A}$ is a weighted adjacency matrix of the reservoir layer, and the $M$-dimensional \ninput $\\mathbf{u}(t)$ is fed into the $N$ reservoir nodes via a linear input weight matrix denoted by $\\mathbf{W}_{\\text{in}}$.\nThe parameter $\\alpha$ ($0<\\alpha\\le 1$) in eq.~(\\ref{eq:reservoir}) adjusts the nonlinearity of the dynamics of $\\mathbf{r}$, \nand is chosen depending upon the complexity of the dynamics of the measurements and the time step $\\Delta t$. 
\\\n\\indent Each row of $\\mathbf{W}_{\\text{in}}$ has one nonzero element, chosen from a uniform distribution on $[-\\sigma,\\sigma]$.\nThe matrix $\\mathbf{A}$ is a sparse random \nmatrix in which the fraction of nonzero matrix elements is $(D_1+D_2)\/N$, \nso that the average degree of a reservoir node is $D_1+D_2$. \nThe $D_1$ non-zero components are chosen from a uniform distribution on $[-1, 1]$, and the $D_2$ components from that on $[-\\gamma, \\gamma]$ with $\\gamma \\ll 1$, \nwhere the $D_2$ non-zero components are introduced to reflect weak couplings among the components of $\\mathbf{r}$. \nThen we uniformly rescale all the elements of $\\mathbf{A}$ so that the largest magnitude of its eigenvalues becomes $\\rho$. \\\\\n\\indent The output, which is a $P$-dimensional vector, is taken to be a linear function of the reservoir state $\\mathbf{r}$:\n\\begin{equation}\n\t\\Hat{\\mathbf{s}}(t)=\\mathbf{W}_\\text{out}\\mathbf{r}(t)+\\mathbf{c}.\\label{eq:output}\n\\end{equation}\nThe reservoir state $\\mathbf{r}$ evolves following eq.~(\\ref{eq:reservoir}) with input \n$\\mathbf{u}(t)$, \nstarting from a random initial state $\\mathbf{r}(-\\tau)$ whose elements are chosen from $(0, 1]$ so that the dynamics do not diverge, \nwhere \n$\\tau\/\\Delta t~(\\gg 1)$ is the number of transient steps.\nWe obtain $L=T\/\\Delta t$ steps of reservoir states $\\{\\mathbf{r}(l\\Delta t)\\}_{l=1}^{L}$ by eq.~(\\ref{eq:reservoir}). \nMoreover, we record the actual measurements of the state variables $\\{\\mathbf{s}(l\\Delta t)\\}_{l=1}^{L}$. 
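As a minimal illustration of this construction (a sketch in Python with hypothetical helper names and a dense-array representation, not the authors' code), $\mathbf{W}_{\text{in}}$ and $\mathbf{A}$ can be generated and one step of eq.~(\ref{eq:reservoir}) applied as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(N, M, D1, D2, gamma, sigma, rho):
    """Build W_in and A as described in the text.

    Each row of W_in has one nonzero entry drawn from U[-sigma, sigma].
    A is sparse: on average D1 entries per row are drawn from U[-1, 1]
    and D2 entries from U[-gamma, gamma] (the two masks may occasionally
    overlap in this sketch); A is then rescaled so that the largest
    magnitude of its eigenvalues (spectral radius) equals rho.
    """
    W_in = np.zeros((N, M))
    W_in[np.arange(N), rng.integers(0, M, size=N)] = rng.uniform(-sigma, sigma, size=N)

    A = np.zeros((N, N))
    strong = rng.random((N, N)) < D1 / N      # fraction D1/N of entries
    weak = rng.random((N, N)) < D2 / N        # fraction D2/N of entries (weak couplings)
    A[strong] = rng.uniform(-1.0, 1.0, size=strong.sum())
    A[weak] = rng.uniform(-gamma, gamma, size=weak.sum())
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))  # set spectral radius to rho
    return W_in, A

def step(r, u, A, W_in, alpha):
    """One step of eq. (1): r(t+dt) = (1-alpha) r + alpha tanh(A r + W_in u)."""
    return (1.0 - alpha) * r + alpha * np.tanh(A @ r + W_in @ u)
```

Iterating `step` over the input time-series, after discarding the transient, yields the states $\{\mathbf{r}(l\Delta t)\}_{l=1}^{L}$ used in the training below.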
\\\n\\indent We train the network by determining $\\mathbf{W}_\\text{out}$ and $\\mathbf{c}$ \nso that the reservoir output approximates the measurement for $0< t \\le T$~(training phase), which is the main part of this computation.\nWe do this by minimizing the following quadratic form with respect to $\\mathbf{W}_\\text{out}$ and $\\mathbf{c}$:\n\\begin{equation}\n\\displaystyle\\sum^{L}_{l=1} \\|(\\mathbf{W}_\\text{out}\\mathbf{r}(l\\Delta t)+\\mathbf{c})-\\mathbf{s}(l\\Delta t)\\|^2\n+\\beta[Tr(\\mathbf{W}_\\text{out}\\mathbf{W}^{T}_\\text{out})],\\label{eq:minimize}\n\\end{equation}\nwhere $\\|\\mathbf{q}\\|^2=\\mathbf{q}^T \\mathbf{q}$ for a vector $\\mathbf{q}$,\nand the second term is a regularization term with $\\beta \\ge 0$, introduced to avoid overfitting \n$\\mathbf{W}_\\text{out}$.\nWhen the training is successful, $\\Hat{\\mathbf{s}}(t)$ should approximate the desired unmeasured quantity $\\mathbf{s}(t)$ for $t>T$~(inference phase). \nFollowing eq.~(\\ref{eq:output}), we obtain \n\\begin{equation}\n\t\\Hat{\\mathbf{s}}(t)=\\mathbf{W}^*_\\text{out}\\mathbf{r}(t)+c^*, \\label{eq:output2}\n\\end{equation}\nwhere $\\mathbf{W}^*_\\text{out}$ and $c^*$ denote the minimizers \nof the quadratic form~(\\ref{eq:minimize})~(see~\\cite{Lukosevicius_2009}~P.140 for details):\n\\begin{align*}\n\t\\mathbf{W}^*_\\text{out}&=\\delta\\mathbf{S}\\delta\\mathbf{R}^{T}(\\delta\\mathbf{R}\\delta\\mathbf{R}^{T}+\\beta\\mathbf{I})^{-1},\\\\\n\tc^*&=\\overline{\\mathbf{s}}-\\mathbf{W}^*_\\text{out}\\overline{\\mathbf{r}},\n\\end{align*}\t\nwhere $\\overline{\\mathbf{r}}=\\sum^{L}_{l=1} \\mathbf{r}(l \\Delta t)\/L$, \n$\\overline{\\mathbf{s}}=\\sum^{L}_{l=1} \\mathbf{s}(l \\Delta t)\/L$,\nand $\\mathbf{I}$ is the $N \\times N$ identity matrix; $\\delta\\mathbf{R}$ (respectively, $\\delta\\mathbf{S}$) is the matrix \nwhose $l$-th column is $\\mathbf{r}(l\\Delta t)-\\overline{\\mathbf{r}}$ (respectively, $\\mathbf{s}(l\\Delta 
t)-\\overline{\\mathbf{s}}$).\\\n\\begin{table*}[htb]\n\\normalsize\n\t\t\\begin{tabular}{|l|l|r|r|r|} \n\t\t\t\\hline \t\n \t\t \\multicolumn{2}{|c|}{parameter}& (a) & (b) & (c)\\\\ \\hline\n \t\t\t~$\\tau$&transient time& 1000 &2500 &2350 \\\\ \\hline\n \t\t\t~$T$&training time&10000 &20000 &20000\\\\ \\hline\n \t\t\t~$M$&dimension of measurements&270 &9 &36 \\\\ \\hline\n \t\t\t~$P$&dimension of inferred variables &2 &9 &36 \\\\ \\hline\n\t \t\t~$N$&number of reservoir nodes&6400 &3200 &3200\\\\ \\hline\n \t \t\t~$D_1$&parameter determining elements of $\\mathbf{A}$&60 &320 &120 \\\\ \\hline\n \t\t\t~$D_2$&parameter determining elements of $\\mathbf{A}$&60 &0 &0\\\\ \\hline\n \t\t\t~$\\gamma$&scale of weak couplings in $\\mathbf{A}$ &0.1 &0 &0\\\\ \\hline\n\t \t \t~$\\rho$&maximal eigenvalue of $\\mathbf{A}$ &1.0 &0.5 &0.5 \\\\ \\hline\n \t\t\t~$\\sigma$&scale of input weights in $\\mathbf{W}_{in}$&0.4 &0.3 &0.5 \\\\ \\hline\n\t \t\t~$\\alpha$&nonlinearity degree of reservoir dynamics&0.7 &0.3 &0.4 \\\\ \\hline\n \t\t\t~$\\Delta t$&time step for reservoir dynamics &0.1 &0.25 &0.5 \\\\ \\hline\n \t\t\t~$\\beta$&regularization parameter &0 &0.01 &0.1\\\\ \\hline\n \\end{tabular}\n \t\t\\caption{{\\bf Sets of parameters for our reservoir computing.} The set (a) is used for the partial-inference of microscopic Fourier variables, whereas the set (b) is for the full-inference of macroscopic variables of energy functions and the energy spectrum, \n \t\tand the set (c) is for the full-inference from only one measurement.\n \t\t}\n \t\t\\label{tab:parameter}\n\\end{table*}\n\\indent \nIn order to consider the effect of all the variables equally, we take the normalized value $\\tilde{X}(t)$ for each variable $X(t)$, which will be used throughout the whole procedure of our reservoir computing:\n $$\\tilde{X}(t)=[X(t)-X_1]\/X_2,$$\nwhere $X_1$ is the mean value and $X_2$ is the variance. 
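The closed-form minimizers $\mathbf{W}^*_\text{out}$ and $c^*$ given above can be sketched as follows (Python; the column-wise array layout and the function name are our assumptions, not the authors' code):

```python
import numpy as np

def train_readout(R, S, beta):
    """Minimize sum_l ||W_out r_l + c - s_l||^2 + beta Tr(W_out W_out^T).

    R: N x L array whose l-th column is r(l*dt);
    S: P x L array whose l-th column is s(l*dt).
    Returns the closed-form minimizers W_out* and c*.
    """
    r_bar = R.mean(axis=1, keepdims=True)           # mean reservoir state
    s_bar = S.mean(axis=1, keepdims=True)           # mean target
    dR, dS = R - r_bar, S - s_bar                   # centered data matrices
    W_out = dS @ dR.T @ np.linalg.inv(dR @ dR.T + beta * np.eye(R.shape[0]))
    c = (s_bar - W_out @ r_bar).ravel()             # c* = s_bar - W_out* r_bar
    return W_out, c
```

With $\beta=0$ and targets that are exactly linear in the reservoir state, these formulas recover the underlying linear map; in practice $\beta>0$ regularizes the matrix inversion.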
\nWhen we reconstruct $X(t)$ in the inference phase from $\\tilde{X}(t)$, \nwe employ $X_1$ and $X_2$ obtained in the training phase.\n Due to the normalization we can avoid adjustments of $\\sigma$. \\\\\n\\begin{figure}[]\n \\includegraphics[width=1.\\columnwidth,height=0.705\\columnwidth]{fig1a.eps}\n \\includegraphics[width=1.\\columnwidth,height=0.705\\columnwidth]{fig1b.eps}\n\\caption{ {\\bf \nPartial-inference of time-series of microscopic variables in Fourier space of a fluid flow.}\nFourier variables ${\\tilde{a}}_{\\eta_1=(1, 3,3,3)}$~(top) and ${\\tilde{a}}_{\\eta_2=(1, 2,3,4)}$~(bottom) are inferred \nby using measured variables $\\tilde{a}_{\\eta}$ for $\\eta \\in S$ as well as the \npast time-series data for all the measured variables $\\tilde{a}_{\\eta}$ for $\\eta \\in S\\cup\\{\\eta_1, \\eta_2\\}$.\nWe can observe that the inferred time-series almost coincide with the actual ones obtained by the direct numerical simulation of the Navier--Stokes equation even after sufficiently large time has passed since the training phase finished.\nThe inference errors in $l^1$-norm averaged over $t-T\\in[0,2000]$ are $1.8 \\%$ and $3.5 \\%$ \nfor $\\tilde{a}_{\\eta_1}$ and $\\tilde{a}_{\\eta_2}$, respectively.\n}\\label{fig:partial-micro}\n\\end{figure}\n\\section{III. 
Fluid flow} \nIn order to generate measurements for the reservoir computing, \nwe employ the direct numerical simulation of the incompressible three-dimensional Navier--Stokes equation \nunder periodic boundary conditions:\n\t\\begin{align*}\n \\begin{cases}\n\t\t\\partial_t v -\\nu \\Delta v\n\t\t\t+(v \\cdot \\nabla) v+\\nabla \\pi=f,~\\nabla \\cdot v=0, ~\\mathbb{T}^3\\times(0,\\infty),\\\\\nv\\big| _{t=0}=v_0\\quad \\text{with $\\nabla \\cdot v_0=0$}, ~~~~~~~~~~~~~~~~~~~~\\mathbb{T}^3,\n \\end{cases}\n\t\\end{align*}\nwhere ${\\mathbb{T}}=[0,2\\pi)$, $\\nu>0$ is the viscosity parameter, $\\pi (x,t)$ is the pressure, and $v(x,t)= (v_1(x,t),v_2(x,t),v_3(x,t))$ is the velocity.\nWe use the Fourier spectral method~\\cite{ishioka_1999} with $N_0(=9)$ modes in each direction, meaning that the system is approximated by \n$2(2 N_0+1)^3~(=13718)$-dimensional ordinary differential equations (ODEs).\nThe ODEs are integrated by the 4th-order Runge--Kutta method, and forcing is applied to the low-frequency variables at each time step so as to preserve the energy of the low-frequency part.\nThat is, both the real and the imaginary parts of the Fourier coefficient of the vorticity $\\omega~(=\\text{rot}~v)$, \n$$\n\t\\mathcal{F}_{[\\omega_{\\zeta}]}(\\kappa,t):= \\dfrac{1}{(2\\pi)^3} \\displaystyle\\int_{\\mathbb{T}^3}\n\t\t\t\\omega_{\\zeta}(x, t)\n\t\te^{-i(\\kappa\\cdot x)}dx, \n$$\n are kept constant for $\\zeta=1,2$, $\\kappa = (1,0,0), (0,1,0)$. \n We use an initial condition that has energy only in the low-frequency variables. \nSee~\\cite{ishioka_1999} for the details. \n\\section{IV. 
Partial-inference of microscopic variables: Fourier variables of velocity}\nWe consider the absolute values of the Fourier variables of velocity $\\mathcal{F}_{[v_{\\zeta}]}(\\kappa,t)$ as the representative microscopic variables:\n\t\\begin{align}\n\t\ta_{\\eta} (t)= \\left| \\mathcal{F}_{[v_{\\zeta}]}(\\kappa,t) \\right|\n\t\t:= \\left| \\dfrac{1}{(2\\pi)^3} \\int_{\\mathbb{T}^3}\n\t\t\tv_{\\zeta}(x, t)\n\t\te^{-i(\\kappa\\cdot x)}dx \\right|, \\label{eq:fourier-coefficient}\n\t\\end{align}\n\twhere \n\t\t$\\eta = (\\zeta, \\kappa)\\in S_0:= \\{(\\zeta, \\kappa_1, \\kappa_2, \\kappa_3) \\in \\mathbb{Z}^4 |~ \\zeta\\in \\{1,2,3\\}, \\kappa_1, \\kappa_2, \\kappa_3 \\in [-N_0,N_0] \\}$.\nSince $v$ is real, $a_{(\\cdot, \\kappa_1, \\kappa_2, \\kappa_3)}=a_{(\\cdot, -\\kappa_1, -\\kappa_2, -\\kappa_3)}$. \nWe take the absolute value in eq.~(\\ref{eq:fourier-coefficient}) in order to remove the rotational invariance of a complex variable and to make the inference possible.\nWe choose $\\nu=0.05862$, for which the flow is chaotic, \nand set $\\mathbf{u}(t)$ as the time-series of the $M=270$ Fourier variables $\\tilde{a}_{\\eta}$, \nwhere $\\eta \\in S:= \\{(2, \\pm \\kappa_1, \\kappa_2, \\kappa_3) \\in \\mathbb{Z}^4 |~1 \\leq \\kappa_1 \\leq N_0, \\kappa_1 \\leq \\kappa_2 \\leq \\kappa_3 \\leq \\kappa_1+4 \\}$ and each component is taken $\\bmod~N_0$, \n that is, \n\t\\begin{align*}\n\t\\mathbf{u}(t)&=(\\{ \\tilde{a}_{\\eta} \\}_{\\eta \\in S})^{t}.\n\t\\end{align*}\nWe also set \n\t\\begin{align*}\n\t\\mathbf{s}(t)&=(\\tilde{a}_{(1, 3,3,3)}, \\tilde{a}_{(1, 2,3,4)})^{t}, \n\t\\end{align*}\nwhere $(1, 3,3,3), (1, 2,3,4) \\notin S$. 
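For concreteness, the amplitudes $a_\eta$ of eq.~(\ref{eq:fourier-coefficient}) can be approximated from one velocity component sampled on a uniform grid by a discrete Fourier transform; a sketch in Python (the grid layout and normalization are our assumptions, not the authors' code):

```python
import numpy as np

def fourier_amplitudes(v, kappa_list):
    """Absolute Fourier amplitudes a_eta = |F[v_zeta](kappa)|.

    v: real array of shape (n, n, n) sampling one velocity component
    v_zeta on a uniform grid over [0, 2*pi)^3.  np.fft.fftn(v) / n**3
    approximates (2*pi)^{-3} * integral of v * exp(-i kappa . x);
    negative wavenumbers are reached by modular indexing.
    """
    n = v.shape[0]
    v_hat = np.fft.fftn(v) / n**3
    return np.array([np.abs(v_hat[k1 % n, k2 % n, k3 % n])
                     for (k1, k2, k3) in kappa_list])
```

Since the field is real, the amplitude at $\kappa$ equals that at $-\kappa$, consistent with the symmetry $a_{(\cdot,\kappa)}=a_{(\cdot,-\kappa)}$ noted above.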
\nUnder the set of parameters in TABLE \\ref{tab:parameter}~(a), \nwe infer the time-series $\\mathbf{s}(t)$, \nwhich is successful for quite a long time (see Fig.~\\ref{fig:partial-micro}).\\\\\n\\indent \nThe choice of variables to be trained is not very significant in this study, because the attractor does not show homogeneous \nisotropic turbulence and has few symmetries.\nWe can see from the Poincar\\'e section of the microscopic variables that the flow is not isotropic and that \nindeterminacy in the inference due to continuous symmetry does not appear. \nHowever, by training variables with different types of behaviors, we can construct a reservoir model \nat lower computational cost with a lower dimension $N$ of the reservoir system. \nIn fact, we confirmed that we can infer some other fluid variables, including both low-frequency and high-frequency variables, \nfrom some other training variables.\nWe found that an inference of a high-frequency variable tends to be more difficult, possibly because of its stronger intermittency.\nNote that $D_2$ is useful to represent non-local, relatively weak interactions among microscopic variables in the partial inference. 
\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth,height=0.705\\columnwidth]{fig2a.eps}\n\\includegraphics[width=1.0\\columnwidth,height=0.705\\columnwidth]{fig2b.eps}\n\\includegraphics[width=1.0\\columnwidth,height=0.705\\columnwidth]{fig2c.eps}\n\\caption{ {\\bf \nFull-inference of time-series of macroscopic variables of a fluid flow.}\nTime-series of the energy function ${\\tilde{E}}(k,t)$ for $k=4$ (top) and $9$ (middle) are inferred from the reservoir system, \nin comparison with those of a reference data obtained by the direct numerical simulation of the Navier--Stokes equation.\nThe inference error defined by $\\varepsilon(t)=\\sum^{N_0}_{k=1}|\\tilde{E}(k,t)-\\hat{\\tilde{E}}(k,t)|\/N_0$ $(N_0=9)$ is shown to grow exponentially with time up to $t-T=100$ (bottom), which is inevitable for the chaotic behavior of a fluid flow.\nThe growth of the error within a short time depends strongly on the direction of the perturbation vector\n $\\{\\tilde{E}(\\cdot,T+\\Delta t)-\\hat{\\tilde{E}}(\\cdot,T+\\Delta t)\\}$, \n and its slope can vary in different settings.\n}\\label{fig:full-macro}\n\\end{figure}\n\\begin{figure}\n \\includegraphics[width=1.0\\columnwidth, height=0.705\\columnwidth]{fig3.eps}\t\n\\caption{\n{\\bf \nEnergy spectrum $\\overline{E}(k)$ reproduced from the reservoir computing.}\nThe spectrum is obtained from the full-inference of an energy function $E(k,t)$, and is compared with that for a reference data obtained by the direct numerical simulation of the Navier--Stokes equation.\nThe coincidence of the two energy spectra implies that the reservoir system captures the dynamics of a fluid flow in a statistical sense, \neven after the time-series inference has failed due to the chaotic property (see Fig.~\\ref{fig:full-macro}). \nThe Kolmogorov $-5\/3$ law of the energy spectrum is shown as a reference.\nThe relative error of the inferred variable $\\overline{\\hat{E}}(k)$ from $\\overline{E}(k)$ $(k=1,\\cdots,9)$ is up to $1.3\\%$. 
\n}\\label{fig:full-macro-average}\n\\end{figure}\n\\section{V. Full-inference of macroscopic variables: Energy function and energy spectrum} \nWe study an energy function as a representative macroscopic variable. \nWe set $\\nu=0.058$, for which the flow is more turbulent than in the previous case.\nHowever, the complexity of the dynamics is much lower \nthan that of a microscopic variable at the same viscosity. \nThis is because the energy function can be thought of as an averaged quantity of many microscopic variables. \nThe energy function $E_0(k, t)$ for wavenumber $k \\in \\mathbb{N}$ is defined by \n\t\\begin{align*}\n\t\tE_0(k, t):= \\dfrac{1}{2} \\int_{D_k} {\\sum_{\\zeta=1}^{3}}\n\t\t\t\\left| \n\t\t\t\t\\mathcal{F}_{[v_{\\zeta}]}(\\kappa, t)\n\t\t\t\\right|^2 d\\kappa, \n\t\\end{align*}\nwhere $D_k:=\\{ \\kappa \\in \\mathbb{Z}^3| k-0.5 \\leq |\\kappa| < k+0.5 \\}$. \nSee eq.~(\\ref{eq:fourier-coefficient}) for the expression of $\\mathcal{F}_{[v_{\\zeta}]}(\\kappa, t)$.\nIn order to remove the high-frequency fluctuations, we take the\nshort-time average \n$$E(k,t)=\\sum_{s=t-99\\Delta s}^{t}E_0(k,s)\/100,$$ \nwhere $\\Delta s=0.05$ is the time step of the integration of the Navier--Stokes equation. \nThis helps us to obtain the essential low-frequency dynamics of an energy function and to infer its time-series \nat lower computational cost with a lower dimension $N$ of the reservoir vector. 
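The shell sum defining $E_0(k,t)$ and the short-time average can be sketched as follows (Python; the array layout and helper names are our assumptions, not the authors' code):

```python
import numpy as np

def energy_function(v_hat_list, k_max):
    """Shell-summed energy E_0(k) = (1/2) sum_{kappa in D_k} sum_zeta |F[v_zeta](kappa)|^2.

    v_hat_list: three (n, n, n) complex arrays holding F[v_zeta](kappa);
    the shell D_k collects integer wave vectors whose modulus rounds to k.
    Returns E_0(1), ..., E_0(k_max).
    """
    n = v_hat_list[0].shape[0]
    k_axis = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers 0, 1, ..., -1
    kx, ky, kz = np.meshgrid(k_axis, k_axis, k_axis, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int).ravel()
    E0 = np.zeros(k_max + 1)
    for v_hat in v_hat_list:
        e = 0.5 * np.abs(v_hat).ravel() ** 2
        mask = shell <= k_max
        np.add.at(E0, shell[mask], e[mask])           # accumulate energy into shells
    return E0[1:]

def short_time_average(E0_history):
    """E(k, t): mean of the last 100 snapshots of E_0(., t)."""
    return np.mean(E0_history[-100:], axis=0)
```

The short-time average smooths the high-frequency fluctuations before the time-series is fed to the reservoir.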
\nThe averaged energy function $E(k,t)$ will be called an energy function hereafter.\\\\\n\\indent In the training phase for $t \\in (0,T]$, $\\mathbf{W}^{*}_\\text{out}$ and $c^{*}$ are determined by setting\n\t\\begin{align*}\n\t\\mathbf{u}(t)&=(\\tilde{E}(1,t), \\tilde{E}(2,t),\\cdots,\\tilde{E}(9,t))^{t}, \\\\\n\t\\mathbf{s}(t)&=(\\tilde{E}(1,t), \\tilde{E}(2,t),\\cdots,\\tilde{E}(9,t))^{t}, \n\t\\end{align*}\nand by following the same procedure as in the partial-inference.\nIn the inference phase for $t>T$, eq.~(\\ref{eq:reservoir}) is written as \n\\begin{equation*}\n\\mathbf{r}(t+\\Delta t)=(1-\\alpha)\\mathbf{r}(t)+\\alpha \\tanh(\\mathbf{A}\\mathbf{r}(t)+\\mathbf{W}_{\\text{in}}\\Hat{\\mathbf{s}}(t)),\\label{eq:full-reservoir}\n\\end{equation*}\nby setting $\\mathbf{u}(t)$ to \n\t\\begin{align*}\n\t\\Hat{\\mathbf{s}}(t)=(\\hat{\\tilde{E}}(1,t), \\hat{\\tilde{E}}(2,t),\\cdots,\\hat{\\tilde{E}}(9,t))^{t} \n\t\\end{align*}\n\t obtained from eq.~(\\ref{eq:output2}).\nThe set of parameters employed here is shown in TABLE \\ref{tab:parameter}~(b). \\\\\n\\indent We found that an inference of the energy functions is successful \nfor some time after training on the $9$-dimensional time-series data of the energy functions. \nThe two cases $\\tilde{E}(4,t)$ and $\\tilde{E}(9,t)$ are shown in Fig.~\\ref{fig:full-macro}~(top)(middle). \nThe failure of the long-term time-series inference is inevitable due to the sensitive dependence on initial conditions, a chaotic property of the fluid flow. 
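The closed-loop update used in this full-inference phase, in which the readout is fed back as the input, can be sketched as follows (Python; function and variable names are our assumptions, not the authors' code):

```python
import numpy as np

def closed_loop_predict(r, A, W_in, W_out, c, alpha, n_steps):
    """Full-inference: run the trained reservoir autonomously.

    The measured input u(t) is replaced by the reservoir's own output
    s_hat(t) = W_out r(t) + c, so the model evolves in closed loop
    for n_steps steps.
    """
    outputs = []
    for _ in range(n_steps):
        s_hat = W_out @ r + c                 # readout, eq. of the form s_hat = W_out r + c
        outputs.append(s_hat)
        r = (1.0 - alpha) * r + alpha * np.tanh(A @ r + W_in @ s_hat)
    return np.array(outputs), r
```

Because the loop is autonomous, any readout error is reinjected at the next step, which is why the inference error grows with the chaotic dynamics.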
\nIn fact, the growth rate of error in the energy functions is shown to be exponential for $t-T\\lesssim 100$ in Fig.~\\ref{fig:full-macro}~(bottom).\nHowever, the energy spectrum $\\overline{E}(k)=\\langle E(k,t) \\rangle$, the time average of an energy function $E(k,t)$, can be reproduced from the inferred time-series data for $1000j}^n e_{ij},\n\\end{gather}\nand $D$ is a~diagonal matrix\n\\begin{gather*}\nD=\\operatorname{diag}(q_1,q_2,\\ldots,q_n)=\\sum_{i=1}^n q_i e_{ii}.\n\\end{gather*}\nThe matrices $e_{ij}$ are $n\\times n$ matrices with only one non-zero $(ij)$ entry, which equals unity.\n\nSubstituting this tensor f\\/ield $A$ and~a~natural Hamilton function\n\\begin{gather*}\nH=\\sum_{i=1}^n p_i^2+V(q)\n\\end{gather*}\ninto the def\\/inition of $P'$ one gets a~system of equations~\\eqref{sch-eq} on $V(q)$.\nOne of the partial solutions of this system is the Hamilton function for the open Toda lattice\nassociated with the $\\mathcal A_n$ root system\n\\begin{gather*}\nH=\\sum_{i=1}^n p_i^2+a\\sum_{i=1}^{n-1}e^{q_i-q_{i+1}},\\qquad a\\in\\mathbb R.\n\\end{gather*}\nTraces of powers of the corresponding recursion operator $N$~\\eqref{rec-op}\n\\begin{gather}\n\\label{tr-ham}\nH_k=\\operatorname{tr} N^k, \\qquad k=1,\\ldots,n,\n\\end{gather}\nare functionally independent constants of motion in bi-involution\nwith respect to both Poisson brackets\n\\begin{gather*}\n\\{H_i,H_j\\}=\\{H_i,H_j\\}'=0.\n\\end{gather*}\nThis Poisson bivector $P'$ was found by Das, Okubo and~Fernandes~\\cite{das89,fern93}.\n\nIn the generic case we can use a~more complicated tensor f\\/ield\n\\begin{gather*}\n\\widetilde{A}=\\left(\n\\begin{matrix}\n\\widetilde{B}+\\widetilde{D}&0\n\\\\\n0&\\widetilde{C}\n\\\\\n\\end{matrix}\n\\right)P,\n\\end{gather*}\nwhere entries of $\\widetilde{D}$ are linear in $q_i$ and~$\\widetilde{B}$ and~$\\widetilde{C}$ are\nnumerical matrices.\nHere $\\widetilde{B}$ is an arbitrary matrix, whereas $\\widetilde{C}$ and~$\\widetilde{D}$ satisfy\nalgebraic equations 
which may be obtained from~\\eqref{sch-eq} at $V(q)=0$.\n\nUsing this tensor f\\/ield $\\widetilde{A}$ we can get the recursion operators which produce either\nintegrals of motion for the periodic Toda lattice~\\cite{gts06} or variables of separation for the\nToda lattice~\\cite{ts07}.\nIn a~similar manner we can consider the Toda lattices associated with other classical root\nsystems~\\cite{ts10}.\n\n\\subsection{Relativistic Toda lattice}\n\\label{section2.2}\n\nIf we substitute the Hamilton function $H(q,p)$ and~the following tensor f\\/ield $A$\n\\begin{gather*}\nA=\\left(\n\\begin{matrix}\n-B^\\top&0\n\\\\\n-E&0\n\\end{matrix}\n\\right)\nP=\\left(\\begin{matrix}\n0&-B^\\top\n\\\\\n0&-E\n\\end{matrix}\n\\right),\n\\end{gather*}\nwhere $E$ is a~unit matrix and~$B$ is given by~\\eqref{b-mat},\ninto the def\\/inition of $P'$~\\eqref{poi-2}, we obtain a~system of equations on $H$.\nOne of the solutions is the Hamiltonian of the open discrete Toda lattice associated with the $\\mathcal\nA_n$ root system\n\\begin{gather}\n\\label{rtl-ham}\nH=\\sum_{i=1}^n \\bigl( c_i+d_i \\bigr),\n\\end{gather}\nwhere $c_i$ and~$d_i$ are the so-called Suris variables\n\\begin{gather*}\nc_i=\\exp(p_i-q_i+q_{i+1}),\\qquad d_i=\\exp(p_i),\\qquad\nq_0=-\\infty,\\qquad q_{n+1}=+\\infty.\n\\end{gather*}\nTraces of powers of the corresponding recursion operator $N$~\\eqref{tr-ham} are integrals of\nmotion in bi-involution with respect to both Poisson brackets.\nNamely, this Poisson bivector~$P'$~\\eqref{poi-2} is discussed in~\\cite{rag89,sur93}.\n\nRecall that, according to~\\cite{sur93}, there is an equivalence between the relativistic Toda lattice\nand the discrete-time Toda lattice.\nNamely, substituting\n\\begin{gather*}\np_j =\\theta_j +\\frac12 \\ln\\left(\n{\\frac{1+\\exp({q}_j-{q}_{j-1} )}{1+\\exp({q}_{j+1} -{q}_j )}}\\right)\n\\end{gather*}\nin~\\eqref{rtl-ham} one gets the standard Hamiltonian for the\nrelativistic Toda lattice\n\\begin{gather*}\nH=\\sum_{j=1}^{n-1} \\exp (\\theta_j 
)\n\\Bigl[\\bigl(1+\\exp({q}_j -{q}_{j-1} ) \\bigr)\n\\bigl( 1+\\exp({q}_{j+1} -{q}_j ) \\bigr)\\Bigr]^{1\/2}.\n\\end{gather*}\nThe transformation $(\\theta_j , q_j )\\rightarrow\n(p_j , q_j )$ is canonical.\n\nAs above, two numerical matrices $\\widetilde{B}$ and~$\\widetilde{C}$ in the tensor f\\/ield\n\\begin{gather*}\nA=\\left(\n\\begin{matrix}\n0&\\widetilde{B}\n\\\\\n0&\\widetilde{C}\n\\\\\n\\end{matrix}\n\\right)\n\\end{gather*}\nallow us to get recursion operators $N=P'P^{-1}$ which generate either integrals of motion\nfor the periodic relativistic Toda lattice or variables of separation~\\cite{kuz94}.\n\n\\subsection[Henon-Heiles system]{Henon--Heiles system}\n\\label{section2.3}\n\nAt $n=2$ we can introduce the following tensor f\\/ield $A$, linear in momenta,\n\\begin{gather}\n\\label{a-hh}\nA=\\left(\n\\begin{matrix}\nB & 0\n\\\\\n0 & C\n\\\\\n\\end{matrix}\n\\right)P=\\left(\n\\begin{matrix}\n0 & B\n\\\\\n-C & 0\n\\\\\n\\end{matrix}\n\\right),\n\\end{gather}\nwhere\n\\begin{gather*}\nB=\\left(\n\\begin{matrix}\n2q_1p_1&q_1p_2\n\\\\\nq_1p_2&q_2p_2\n\\\\\n\\end{matrix}\n\\right),\\qquad C=\\left(\n\\begin{matrix}\nf_1(q)p_1+f_2(q)p_2&0\n\\\\\n0&f_3(q)p_1+f_4(q)p_2\n\\\\\n\\end{matrix}\n\\right).\n\\end{gather*}\nSubstituting this tensor f\\/ield $A$ and~a~natural Hamilton function\n\\begin{gather*}\nH_1=p_1^2+p_2^2+V(q)\n\\end{gather*}\ninto the def\\/inition of $P'$ one gets a~system of equations~\\eqref{sch-eq} on $V(q)$ and~the functions\n$f_k(q)$.\nThe resulting system of PDEs has two partial polynomial solutions\n\\begin{gather*}\nV(q)={c_1} q_2\\big(3q_1^2+16q_2^2\\big)+c_2 \\left(2q_2^2\n+\\dfrac{q_1^2}{8}\\right)+c_3q_2,\\qquad c_k\\in\\mathbb R,\n\\end{gather*}\nand\n\\begin{gather*}\nV(q)={c_1}\\big(q_1^4+6q_1^2q_2^2+8q_2^4\\big)\n+{c_2}\\big(q_1^2+4q_2^2\\big)+\\dfrac{c_3}{q_2^2}.\n\\end{gather*}\nThe second integrals of motion $H_2 =\\operatorname{tr} N^2$ are fourth-order polynomials in momenta.\n\nSo, one gets the Henon--Heiles 
potential and~the fourth-order potential~\\cite{gdr84} as\nparticular polynomial solutions of the equations~\\eqref{sch-eq} associated with the tensor\nf\\/ield~\\eqref{a-hh}.\n\nUsing a~slightly deformed tensor f\\/ield $A$ we can get the same systems with singular\nterms~\\cite{gts11} and\ntheir three-dimensional counterparts~\\cite{ts10}.\n\n\\subsection[Rational Calogero-Moser model]{Rational Calogero--Moser model}\n\\label{section2.4}\n\nFollowing~\\cite{mpt11}, let us consider a~tensor f\\/ield $A$ which is proportional to $P$,\n\\begin{gather*}\nA=\\rho(q,p)P,\n\\end{gather*}\nwhere $\\rho(q,p)$ is a~function on $M$.\nIf\n\\begin{gather}\n\\label{a-cal}\nA=(p_1q_1+\\cdots+p_nq_n)P, \\qquad \\rho=p_1q_1+\\cdots+p_nq_n,\n\\end{gather}\nthen equations~\\eqref{sch-eq} have the following partial solution\n\\begin{gather}\n\\label{ham-cal}\nH=\\dfrac{1}{2}\\sum_{i=1}^{n}p_{i}^{\\;2}+\\dfrac{g^2}{2}\\sum_{i\\neq\nj}^n\\frac{1}{(q_{i}-q_{j})^{2}},\n\\end{gather}\nwhere $g$ is a~coupling constant.\nIt is the Hamilton function of the $n$-particle rational Calogero--Moser model associated with the\nroot system $\\mathcal A_n$.\n\nThe corresponding recursion operator $N$~\\eqref{rec-op} generates only the Hamilton function,\n\\begin{gather*}\n\\operatorname{tr}N^k=2H^{k},\\qquad k=1,\\ldots,n,\n\\end{gather*}\nwhich allows us to identify our phase space $M=\\mathbb R^{2n}$ with the irregular bi-Hamiltonian\nmanifold.\n\nIn this case~\\cite{mpt11,ts10} integrals of motion are polynomial solutions of the equations\n\\begin{gather}\n\\label{cal-int}\nP dH=-\\frac{1}{k} P' d\\ln H_k,\\qquad k=1,\\ldots,n,\n\\end{gather}\nwhich have two functionally independent solutions for any $k\\geq2$.\nIt is easy to see that the functions\n\\begin{gather}\n\\label{caz-cal}\nC_{km}=\\frac{H_m^{-1\/m}}{H_k^{-1\/k}}\n\\end{gather}\nare Casimir functions of $P'$, i.e.~$P'dC_{km}=0$.\n\nSome solutions of equations~\\eqref{cal-int} coincide with the well-known integrals of 
motion\n\\begin{gather*}\nJ_{n-m}\\equiv\\frac{1}{m!}\\underbrace{\\bigg\\{\\sum_{i=1}^n q_{i}\\cdots\\bigg\\{\n}_{m~\\text{times}}\\sum_{i=1}^n q_{i},J_{m}\\bigg\\}\\cdots\\bigg\\},\\qquad m=1,\\dots,n-1,\n\\end{gather*}\nobtained from the conserved quantity\n\\begin{gather}\nJ_{n}\\equiv \\exp\\left(-\\frac{g^{2}}{2}\\sum_{i\\neq j}\n\\frac{1}{(q_{i}-q_{j})^{2}}\\frac{\\partial^{2}}{\\partial\np_{i}\\partial p_{j}}\\right)\\prod_{k=1}^{n}p_{k}\n\\nonumber\n\\end{gather}\nby taking its successive Poisson brackets with $\\sum\\limits_{i=1}^{n}q_{i}$~\\cite{gon01}.\nThese $n$ solutions, including $J_2=H$, are in involution with respect to the Poisson\nbrackets~\\eqref{can-br}.\n\nThe other $n-1$ functionally independent solutions of~\\eqref{cal-int},\n\\begin{gather*}\nK_{m}=m g_{1}J_{m}-g_{m}J_{1},\\qquad\ng_{m}=\\frac{1}{2}\\left\\{\\sum_{i=1}^{n}q_{i}^{\\;2},J_{m}\\right\\},\\qquad\nm=2,\\dots,n,\n\\end{gather*}\nare not in involution with respect to the canonical Poisson bracket def\\/ined\nby~\\eqref{can-p}~\\cite{gon01}.\n\n\\subsection[Rational Ruijsenaars-Schneider model]{Rational Ruijsenaars--Schneider model}\n\\label{section2.5}\n\nLet us consider a~tensor f\\/ield $A$ which is proportional to the canonical bivector $P$,\n\\begin{gather*}\nA=(q_1+\\cdots+q_n)P,\\qquad\\rho=q_1+\\cdots+q_n.\n\\end{gather*}\nIn this case equations~\\eqref{sch-eq} have the following partial solutions\n\\begin{gather}\n\\label{tr-rtl}\nJ_k=\\frac{1}{k!}\\operatorname{tr} L^k,\\qquad k=\\pm 1,\\pm2,\\ldots,\\pm n,\\end{gather}\nwhere $L$ is the Lax matrix of the Ruijsenaars--Schneider model\n\\begin{gather*}\nL=\\sum_{i,j=1}^n\\frac{\\gamma}{q_i-q_j+\\gamma} b_j e_{ij},\\qquad\nb_k=e^{p_k}\\prod_{j\\neq k}\\left(1-\\frac{\\gamma^2}{(q_k-q_j)^2}\\right)^{1\/2}.\n\\end{gather*}\nAs above, the recursion operator produces only the Hamilton function.\nIt is easy to prove that traces of powers of the Lax matrix $L$\n\\eqref{tr-rtl} satisfy the following 
relations\n\\begin{gather}\n\\label{rs-rel}\nP dJ_{\\pm 2}=-\\frac{1}{k} P' d\\ln J_k,\\qquad k=\\pm 1,\\ldots,\\pm n,\n\\end{gather}\ninstead of the standard Lenard--Magri relations~\\cite{mag97,tt12}.\nMoreover, similarly to the Calogero--Moser system, there are other solutions $K_m$ of these\nequations~\\eqref{rs-rel}, which are described in~\\cite{afg12}.\n\nRecall that the so-called principal Ruijsenaars--Schneider Hamiltonian has the form\n\\begin{gather*}\nH_{RS}=\\frac{1}{2}(J_1+J_{-1})=\\sum_{k=\n1}^n(\\cosh2p_k)\\prod_{j\\neq k}\\left(1-\\frac{\\gamma^2}{(q_k-q_j)^2}\\right)^{1\/2}\n\\end{gather*}\nand that the rational Ruijsenaars--Schneider system is in duality with the corresponding variant of\nthe trigonometric Sutherland system; see~\\cite{afg12} and references therein.\n\nWe want to highlight that for all the integrable systems listed in~\\cite{mpt11,tt12,ts10,ts11s}\nthe second Poisson bivector $P'$~\\eqref{poi-2} is a~Lie derivative of the canonical Poisson\nbivector $P$ along the vector f\\/ield\n$Y=\\operatorname{Ad}H$~\\eqref{y1}, where the tensor f\\/ield $A$ usually has a~very simple form.\n\nIn the next section we show that such simple tensor f\\/ields $A$ may be useful in the search for new\nintegrable systems.\n\n\\section{Noncommutative integrable systems}\n\\label{section3}\n\nThe extreme rarity of integrable dynamical systems makes the quest for them all the more exciting.\nWe want to apply tensor f\\/ields $A$ to a~partial solution of this problem.\nBelow we present a~method to construct a~new family of three-dimensional noncommutative integrable\nsystems.\n\nLet us consider a~natural Hamilton function on $M=\\mathbb R^{2n}$\n\\begin{gather*}\nH=\\sum_{i=1}^n p_i^2+V(q_1,\\ldots,q_n)\n\\end{gather*}\nand the bivector $A$ associated with the rational Calogero--Moser system~\\eqref{a-cal}\n\\begin{gather*}\nA=(p_1q_1+\\cdots+p_nq_n)P,\n\\end{gather*}\nwhere $P$ is the canonical Poisson bivector~\\eqref{can-p}, \\eqref{can-p2}.\n\nIn the previous section we have 
discussed partial solutions of the equations~\\eqref{sch-eq}; here we\nwant to discuss their complete solution.\n\\begin{proposition}\nThe Lie derivative of $P$~\\eqref{can-p} along the vector field $Y$\n\\begin{gather}\n\\label{p-p}\nP'=\\mathcal L_Y P,\\qquad Y=(p_1q_1+\\cdots+p_nq_n)PdH\n\\end{gather}\nis a~Poisson bivector compatible with $P$ if and~only if\n\\begin{gather}\n\\label{m-pot}\nH=\\sum_{i=1}^n p_i^2+\\frac{1}{q_1^2}\nF\\left(\\frac{q_2}{q_1},\\frac{q_3}{q_1},\\ldots,\\frac{q_n}{q_1}\\right).\n\\end{gather}\nHere $F$ is an arbitrary homogeneous function of degree zero depending on the homogeneous\ncoordinates\n\\begin{gather*}\nx_1=\\frac{q_2}{q_1},\\quad x_2=\\frac{q_3}{q_1},\\quad\\ldots,\\quad x_{n-1}=\\frac{q_n}{q_1}.\n\\end{gather*}\n\\end{proposition}\n\nThe definition of the homogeneous coordinates may be found in~\\cite{gr94}.\nThe proof is a~straightforward calculation of the Schouten bracket~\\eqref{sch-eq}.\n\nIt is easy to see that some Hamilton functions separable in spherical coordinates and~the Hamilton\nfunctions for the rational Calogero--Moser systems associated with the $A_n$, $B_n$, $C_n$\nand~$D_n$ root systems have the form~\\eqref{m-pot}.\n\nWe are accustomed to believing that the notion of two compatible Poisson structures $P$ and~$P'$\nallows us to obtain the appropriate integrable systems~\\cite{gts11,mag97,tt12,ts10,ts11s}.\nIn our case the recursion operator $N=P'P^{-1}$ reproduces only the Hamilton function\n\\begin{gather*}\n\\operatorname{tr}N^k=2(2H)^{k}.\n\\end{gather*}\nIt allows us to identify our phase space $M=\\mathbb R^{2n}$ with an irregular bi-Ha\\-mil\\-to\\-nian\nmani\\-fold~\\mbox{\\cite{mag97,ts10}}, but simultaneously it makes the use of the standard constructions of the\nintegrals of motion impossible.\n\nWe do not claim that all the Hamilton functions~\\eqref{m-pot} are integrable because we do not have\nan explicit construction of the necessary number of integrals of motion.\nNevertheless, even in the generic case 
there is one additional integral of motion.\n\\begin{proposition}\nThe following second-order polynomial in momenta\n\\begin{gather*}\nC=(p_1q_1+\\cdots+p_nq_n)^2-(q_1^2+\\cdots+q_n^2)H\n\\end{gather*}\nis a~Casimir function of $P'$, i.e.~$P'dC=0$.\n\\end{proposition}\nConsequently, we have\n\\begin{gather*}\n\\{H,C\\}=0.\n\\end{gather*}\nThis is enough for integrability at $n=2$, where we get Hamilton functions\n\\begin{gather*}\nH=p_1^2+p_2^2+\\frac{1}{q_1^2} F\\left(\\dfrac{q_2}{q_1}\\right)\n\\end{gather*}\nseparable in polar coordinates on the plane.\n\nAt $n\\geq3$ we can make some assumptions on the form of the additional integrals of motion.\nFor instance,\nlet us postulate that our dynamical system is invariant with respect to translations, i.e.\\;that\nthere is an integral of motion linear in momenta\n\\begin{gather*}\nH_{\\rm post}=p_1+\\cdots+p_n,\\qquad\\{H,H_{\\rm post}\\}=0.\n\\end{gather*}\nThis leads to an additional restriction on the form of the proper Hamilton functions~\\eqref{m-pot}\n\\begin{gather*}\nH=\\sum_{i=1}^n p_i^2+\\frac{1}{(q_2-q_1)^2} G\\left(\\frac{q_3-q_2}{q_2-q_1},\\frac{q_4-q_3}{q_2-q_1},\\ldots,\\frac{q_n-q_{n-1}}{q_2-q_1}\\right),\n\\end{gather*}\nwhich generate bi-Hamiltonian vector f\\\/ields\n\\begin{gather}\n\\label{vf-biham}\nX=PdH=P'd\\ln H_{\\rm post}^{-1}\n\\end{gather}\nequipped with the four integrals of motion\n\\begin{gather}\n\\label{4-int}\nH_1=H_{\\rm post},\\qquad H_2=H,\\qquad H_3=C,\\qquad H_4=\\{H_1,C\\}\n\\end{gather}\nwith linearly independent dif\\\/ferentials $dH_i$.\nAccording to the Euler--Jacobi theorem~\\cite{jac36} this is enough for integrability by quadratures\nat $n=3$.\n\nRecall that the Euler--Jacobi theorem~\\cite{jac36} states that a~system of $N$ dif\\\/ferential\nequations\n\\begin{gather}\n\\label{eqm}\n\\dot{x}_i=X_i(x_1,\\ldots,x_N),\\qquad i=1,\\ldots,N,\\end{gather}\npossessing the last Jacobi multiplier $\\mu$ (invariant measure) and~$N-2$ independent f\\\/irst\nintegrals is integrable by 
quadratures.\nIn our case $N=6$: we have four independent integrals of motion~\\eqref{4-int} and~$\\mu=1$.\n\n{\\sloppy So, at $n=3$ the following Hamilton functions\n\\begin{gather}\n\\label{3g-ham}\nH_2=p_1^2+p_2^2+p_3^2+\\frac{1}{(q_2-q_1)^2} G\\left(\\frac{q_3-q_2}{q_2-q_1}\\right)\n\\end{gather}\nlabelled by functions $G$ generate Hamiltonian equations of\nmotion~\\eqref{vf-biham}--\\eqref{eqm} that are integrable by quadratures.\nBecause\n\\begin{alignat}{4}\n& \\{H_1,H_2\\}=0,\\qquad&& \\{H_1,H_3\\}=H_4,\\qquad &&\\{H_1,H_4\\}=2H_1^2-6H_2, &\n\\nonumber\\\\\n& \\{H_2,H_3\\}=0,\\qquad && \\{H_2,H_4\\}=0,\\qquad &&\\{H_4,H_3\\}=4H_1H_3 &\\label{alg-int}\n\\end{alignat}\nwe have noncommutative integrable systems with respect to the canonical\nPoisson bracket, see, for instance,~\\cite{kh10} and~references therein.\n\n}\n\nOf course, in the center of momentum frame, the total linear momentum of the system is zero, $H_1=0$,\nand~we have three integrals of motion $H_2$, $H_3$ and~$H_4$ in involution, which is enough for\nintegrability at $n=3$ and~$n=4$.\n\nOn the other hand, the Hamilton functions~\\eqref{3g-ham} def\\\/ine superintegrable systems in the\nLiouville sense\n\\begin{gather*}\n\\{H_i,H_j\\}'=0,\\qquad i,j=1,\\ldots,4,\n\\end{gather*}\nwith respect to the second Poisson bracket $\\{\\cdot,\\cdot\\}'$ associated with the Poisson tensor\n$P'$~\\eqref{p-p}.\nIf we put\n\\begin{gather*}\nG(x)=g^2\\left(1+\\frac{1}{x^2}+\\frac{1}{(1+x)^2}\\right),\n\\end{gather*}\nwe obtain the well-known Hamiltonian for the rational Calogero--Moser system~\\eqref{ham-cal}\n\\begin{gather*}\nH_{\\rm CM}=\\sum_{i=1}^{3}p_{i}^{2}+\\frac{g^2}{(q_2-q_1)^2}+\\frac{g^2}{(q_3-q_2)^2}+\\frac{g^2}{(q_3-q_1)^2}.\n\\end{gather*}\nIn this case there are other polynomial integrals of motion~\\eqref{cal-int} and~other Casimir\nfunctions of~$P'$~\\eqref{caz-cal}.\nThis system was separated by Calogero~\\cite{cal69} in cylindrical coordinates in $\\mathbb R^3$.\nBeing a~superintegrable system, it 
is actually separable in four other types of coordinate\nsystems~\\cite{bcr00}.\nThese variables of separation may be easily found using either the generalised Bertrand--Darboux\ntheo\\-rem~\\cite{rw03} or methods of the bi-Hamiltonian geometry~\\cite{gts05}.\nRecall that variables of separation are eigenvalues of the Killing tensor $K$ satisfying the equation\n\\begin{gather}\\label{killing-eq}\nKdV=0,\n\\end{gather}\nwhere $V$ is the potential part of the Hamiltonian $H$.\n\nAny additive deformation of this function $G(x)$ leads to an integrable additive deformation of\nthe rational Calogero--Moser system. For instance, if\n\\begin{gather*}\n\\widetilde{G}(x)=G(x)+\\frac{a}{x},\n\\end{gather*}\nthen one gets an integrable system with the three-particle interaction\n\\begin{gather*}\n\\widetilde{H}_{\\rm CM}=H_{\\rm CM}+\\frac{a}{(q_1-q_2)(q_2-q_3)},\n\\end{gather*}\nsince with $x=(q_3-q_2)\/(q_2-q_1)$ one has $\\frac{1}{(q_2-q_1)^2}\\frac{a}{x}=\\frac{a}{(q_2-q_1)(q_3-q_2)}$.\nFor this Hamilton function we could not f\\\/ind any integrals of motion polynomial in momenta\nexcept $H_1$, $H_3$ and~$H_4$~\\eqref{4-int}.\nMoreover, we could not get variables of separation using the standard (regular) methods such as\nthe generalised Bertrand--Darboux theorem~\\cite{rw03} and~the bi-Hamiltonian algorithm discussed\nin~\\cite{gts05}.\nNamely, in contrast with the case $a=0$, at $a\\neq0$ the Killing ten\\-sor~$K$\nsatisfying~\\eqref{killing-eq} has only functionally dependent eigenvalues.\n\nAt $n=3$, in order to get rational Calogero--Moser systems associated with other classical root\nsystems and~their deformations, we can postulate the existence of a~fourth-order integral of motion\n\\begin{gather*}\nH_{\\rm post}=\\sum_{i\\neq j}^n p_i^2p_j^2+\\sum_{k}f_k(q)p_k^2+g(q),\n\\end{gather*}\nwith some unknown functions $f_k(q)$ and~$g(q)$.\nHowever, we do not yet have\nan exhaustive classif\\\/ication.\n\nIn the generic case at $n\\geq3$ we can use other hypotheses about\nadditional integrals of motion commuting with $H$~\\eqref{m-pot}.\n\nOf course, the construction of such integrable systems is 
trivial and~closely related to the construction\nof the\ngroup-invariant solutions of partial dif\\\/ferential equations through imposing side\nconditions~\\cite{or87,ov82}.\nRecall that we can look for a~solution $W(q)$ of the Hamilton--Jacobi equation\n\\begin{gather*}\nH(p,q)=\\sum_{i=1}^n\\left(\\frac{\\partial W}{\\partial q_i}\\right)^2+V(q_1,\\ldots,q_n)=\n\\mathcal E,\\qquad p_j=\\dfrac{\\partial W}{\\partial q_j},\n\\end{gather*}\nsubject to the side condition\n\\begin{gather*}\n\\mathcal S(q,p)=0.\n\\end{gather*}\nIf $\\{H,\\mathcal S\\}=f(q,p)\\mathcal S$, then this side condition is consistent with $H$ and~the\ncorresponding integrals of motion are def\\\/ined modulo $\\mathcal S=0$, i.e.\n\\begin{gather*}\n\\{H,H_k\\}=g_k(q,p)\\mathcal S.\n\\end{gather*}\nHere $f$ and~$g_k$ are some functions on phase space and~$W(q)$ is the so-called characteristic\nHamilton function.\n\nIn our case the side condition is related to the transition to the center of momentum frame\n\\begin{gather*}\n\\mathcal S=H_{\\rm post}\\equiv p_1+p_2+p_3=0,\n\\end{gather*}\nwhich is always consistent with the Hamilton function~\\eqref{3g-ham}, and~we have three integrals of\nmotion~$H_2$,~$H_3$ and~$H_4$~\\eqref{4-int} in involution modulo $\\mathcal S=0$~\\eqref{alg-int}.\nConstruction of the variables of separation for the Hamilton--Jacobi equation with side\nconditions is discussed in~\\cite{mil12}.\n\nIn the quantum case we can consider the Schr\\\"{o}dinger equation\n\\begin{gather*}\nH\\Psi=\\mathcal E\\Psi,\\qquad H=\\Delta+V(q_1,\\ldots,q_n),\n\\end{gather*}\nwhere $\\Delta$ is the Laplace--Beltrami operator on the conf\\\/iguration space $\\mathbb R^{n}$, and~study solutions of this\nequation that also satisfy a~side condition\n\\begin{gather*}\n\\mathcal S\\Psi=0.\n\\end{gather*}\nThe consistency condition for the existence of nontrivial solutions $\\Psi$ is the standard one\n\\begin{gather*}\n[H,\\mathcal S]=f\\mathcal S.\n\\end{gather*}\nIn this case the linear dif\\\/ferential operator $K$ will be a~symmetry 
operator for $H$ modulo\n$\\mathcal S\\Psi=0$ if\n\\begin{gather*}\n[H,K]=g\\mathcal S.\n\\end{gather*}\nHere $f$ and~$g$ are some linear partial dif\\\/ferential operators; see examples and~discussion\nin~\\cite{mil12}.\n\nWe assume that the quantum counterpart of $H$~\\eqref{3g-ham} could\nbe embedded in this generic scheme.\n\n\\section{Conclusion}\n\\label{section4}\n\nWe have demonstrated that the trivial deformations of the canonical Poisson bracket\nassociated with the well-known integrable systems have a~very simple form def\\\/ined\nby some 2-tensor f\\\/ield $A$ acting on the dif\\\/ferential of the Hamilton function.\nWe have shown\na collection of examples and~also proven that such tensor f\\\/ields may be useful in the search for new\ndynamical systems integrable by quadratures.\nFor example, we have proven the\nnoncommutative integrability of a~new generalisation of the\nrational Calogero--Moser system with a~three-particle interaction.\n\nIn fact, we propose a~new form for the old content and~believe that this unif\\\/ication\nis the next step in creating an invariant and~rigorous geometric theory of integrable systems on\nregular and\nirregular bi-Hamiltonian manifolds.\n\n\\subsection*{Acknowledgements}\nWe would like to thank E.G.~Kalnins, W.~Miller, Jr.\\\nand G.~Rastelli for useful discussions on noncommutative integrable systems.\nThe study was supported by the Ministry of Education and Science of the Russian Federation, project\n07.09.2012 no.~8501, grant no.~2012-1.5-12-000-1003-016.\n\n\\pdfbookmark[1]{References}{ref}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{figures\/venice2.png}\n \\caption{\n Optimized 3D reconstruction of the largest \\emph{venice} BAL dataset with 1778 cameras, around one million landmarks, and five million observations.\n For this problem, the proposed square root bundle adjustment ($\\sqrt{BA}$) solver is 42\\% faster than the 
best competing method at reaching a cost tolerance of 1\\%.\n }\n \\label{fig:teaser}\n\\end{figure}\n\nBundle adjustment (BA) is a core component of many 3D reconstruction algorithms and structure-from-motion (SfM) pipelines.\nIt is one of the classical computer vision problems and has been investigated by researchers for more than six decades \\cite{brown1958solution}.\nWhile different formulations exist, the underlying question is always the same:\ngiven a set of approximate point (landmark) positions that are observed from a number of cameras in different poses, what are the actual landmark positions and camera poses?\nOne can already compute accurate 3D positions with only a few images; however, with more available images we will get a more complete reconstruction.\nWith the emergence of large-scale internet photo collections has come the need to solve bundle adjustment on a large scale, i.e., for thousands of images and hundreds of thousands of landmarks.\nThis requires scalable solutions that are still efficient for large problems and do not run into memory or runtime issues.\n\nModern BA solvers usually rely on the Schur complement (SC) trick that computes the normal equations of the original BA problem and then breaks them down into two smaller subproblems, namely (1) the solution for camera poses and (2) finding optimal landmark positions.\nThis results in the drastically smaller reduced camera system (RCS), which is also better-conditioned~\\cite{agarwal2010bundle} than the original normal equations.\nThe use of the Schur complement has become the de facto standard for solving large-scale bundle adjustment and is hardly ever questioned.\n\nIn this work, we challenge the widely accepted assumption of SC being the best choice for solving bundle adjustment, and provide a detailed derivation and analysis of an alternative problem reduction technique based on QR decomposition.\nInspired by similar approaches in the Kalman filter 
literature~\\cite{yang2017null}, we factorize the landmark Jacobian $J_l$ such that we can project the original problem onto the nullspace of $J_l$.\nThus, we circumvent the computation of the normal equations and their system matrix $H$, and are able to directly compute a matrix square root of the RCS while still solving an algebraically equivalent problem.\nThis improves numerical stability of the reduction step, which is of particular importance on hardware optimized for single-precision floats.\nFollowing terminology for nullspace marginalization on Kalman filters, we call our method \\emph{square root bundle adjustment}, short $\\sqrt{BA}$.\nIn particular, our contributions are as follows:\n\\setlist{nolistsep}\n\\begin{itemize}[noitemsep]\n \\item We propose nullspace marginalization as an alternative to the traditional Schur complement trick and prove that it is algebraically equivalent.\n \\item We closely link the very general theory of nullspace marginalization to an efficient implementation strategy that maximally exploits the specific structure of bundle adjustment problems.\n \\item We show that the algorithm is well parallelizable and\n that the favorable numerical properties admit computation in single precision,\n resulting in an additional twofold speedup.\n \\item We perform extensive evaluation of the proposed approach on the Bundle Adjustment in the Large (BAL) datasets and compare to the state-of-the-art optimization framework Ceres. 
\n \\item We release our implementation and complete evaluation pipeline as open source to make our experiments reproducible and facilitate further research:\\\\\n \\url{https:\/\/go.vision.in.tum.de\/rootba}.\n\\end{itemize}\n\n\\section{Related work}\n\nWe propose a way to solve large-scale bundle adjustment using QR decomposition, so we review both works on bundle adjustment (with a focus on large-scale problems) and works that use QR decomposition for other tasks in computer vision and robotics.\nFor a general introduction to numerical algorithms \n(including QR decomposition and iterative methods for solving linear systems),\nwe refer to~\\cite{bjorck1996numerical,golub13}.\n\n\\paragraph{(Large-scale) bundle adjustment}\nA detailed overview of bundle adjustment in general can be found in~\\cite{triggs1999bundle}, including an explanation of the Schur complement (SC) reduction technique~\\cite{brown1958solution} to marginalize landmark variables.\nByr\u00f6d and \u00c5str\u00f6m use the method of conjugate gradients (CG) on the normal equations~\\cite{hestenes1952methods, bjorck1979accelerated} to minimize the linearized least squares problem without the Schur complement~\\cite{byrod2010conjugate}.\nThey also QR-decompose the Jacobian, but only for block preconditioning without marginalizing landmarks.\nAgarwal et~al.\\ have proposed preconditioned CG on the RCS after SC \nto solve the large-scale case~\\cite{agarwal2010bundle}, and Wu et~al.\\ further extend these ideas to a formulation which avoids explicit computation of the SC matrix~\\cite{wu2011multicore}.\nA whole number of other works have proposed \nways to further improve efficiency, accuracy and\/or robustness of BA~\\cite{engels2006bundle, konolige2010sparse, jeong2011pushing, zach2014robust, natesan2017distributed}, all of them using the Schur complement.\nMore recently, in Stochastic BA~\\cite{zhou2020stochastic} the reduced system matrix is further decomposed into subproblems to improve 
scalability.\nSeveral open source BA implementations are available, e.g., the SBA package~\\cite{lourakis2009sba}, the g2o framework~\\cite{kummerle2011g}, or Ceres Solver~\\cite{ceres-solver}, which has become a standard tool for solving BA-like problems in both academia and industry.\n\n\\paragraph{Nullspace marginalization, square root filters, and QR decomposition}\nThe concept of nullspace marginalization has been used in contexts other than BA, e.g., for the multi-state constraint Kalman filter~\\cite{mourikis2007} and earlier in~\\cite{bayard2005estimation}.\n\\cite{yang2017null} proves the equivalence of nullspace marginalization and the Schur complement in the specific case of robot SLAM.\nSeveral works explicitly point out the advantage of matrix square roots in state estimation~\\cite{maybeck1982stochastic,bierman2006factorization,dellaert2006square,wu2015square}, but to the best of our knowledge matrix square roots have not yet been used for problem size reduction in BA.\nThe QRkit~\\cite{svoboda2018qrkit} emphasizes the benefits of QR decomposition for sparse problems in general and also mentions BA as a possible application, but the very specific structure of BA problems and the involved matrices is not addressed.\nThe orthogonal projector used in the Variable Projection (VarPro) method~\\cite{okatani11, hong17} is related to the nullspace marginalization in our approach.\nHowever, VarPro focuses on separable non-linear least squares problems, which do not include standard BA. While~\\cite{hong17} mentions the use of QR decomposition to improve numeric stability, we take it one step further by more efficiently multiplying in-place with $Q_2^\\top$ rather than explicitly with $I - Q_1Q_1^\\top$ (further discussed in Section~\\ref{sec:nullspace-marginalization}). 
This also enables our very efficient way to compute landmark damping (not used in VarPro).\nOur landmark blocks can be seen as a specific instance of Smart Factors proposed in~\\cite{carlone2014eliminating}, where nullspace projection with explicit SVD decomposition is considered. Instead of the re-triangulation in~\\cite{carlone2014eliminating}, we suggest updating the landmarks with back substitution. The factor grouping in~\\cite{carlone2014mining} allows to mix explicit and implicit SC. This idea is orthogonal to our approach and could be considered in future work.\n\n\n\\section{QR decomposition}\n\nWe briefly introduce the QR decomposition, which can be computed using Givens rotations (see appendix).\nFor further background, we refer the reader to textbooks on least squares problems (e.g.,~\\cite{bjorck1996numerical}).\nLet $A$ be an $m\\times n$ matrix of full rank with $m \\geq n$, i.e., $\\operatorname{rank}(A) = n$.\n$A$ can be decomposed into an $m \\times m$ orthogonal matrix $Q$ and an $m \\times n$ upper triangular matrix $R$.\nAs the bottom ($m - n$) rows of $R$ are zeros, we can partition $R$ and $Q$:\n\\begin{align}\n A = QR = Q \\begin{pmatrix} R_1 \\\\ 0 \\end{pmatrix}\n = \\begin{pmatrix} Q_1 & Q_2 \\end{pmatrix} \\begin{pmatrix} R_1 \\\\ 0 \\end{pmatrix}\n = Q_1 R_1\\,,\n\\end{align}\nwhere $R_1$ is an $n \\times n$ upper triangular matrix, $Q_1$ is $m \\times n$, and $Q_2$ is $m \\times (m-n)$.\nNote that this partitioning of $Q$ directly implies that the columns of $Q_2$ form the left nullspace of $A$, i.e., $Q_2^\\top A =0$.\nSince $Q$ is orthogonal, we have\n\\begin{align}\n\\label{eq:q_orth}\n Q^\\top Q &= \\id_m = QQ^\\top\\,,\n\\end{align}\nwhere $\\id_m$ is the $m \\times m$ identity matrix.\nFrom \\eqref{eq:q_orth} we derive:\n\\begin{gather}\n Q_1^\\top Q_1 = \\id_n\\,, \\quad\n Q_2^\\top Q_2 = \\id_{m-n}\\,, \\quad\n Q_1^\\top Q_2 = 0\\,, \\\\\n\\label{eq:qqt}\n Q_1Q_1^\\top = \\id_m - 
Q_2Q_2^\\top\\,.\n\\end{gather}\n\n\n\\section{Square root bundle adjustment}\n\nWe assume a very general form of bundle adjustment, similar to~\\cite{agarwal2010bundle}:\nlet $\\vect{x}$ be a state vector containing all the optimization variables.\nWe can subdivide $\\vect{x}$ into a pose part $\\vect{x}_p$ containing extrinsic and possibly intrinsic camera parameters for all $n_p$ images (index $i$), and a landmark part $\\vect{x}_l$ consisting of the 3D coordinates of all $n_l$ landmarks (index $j$).\nThe total bundle adjustment energy is a sum of squared residuals\n\\begin{align}\n\\label{eq:ba_energy}\n E(\\vect{x}_p,\\vect{x}_l) = \\sum_{i}\\sum_{j\\in O(i)}{r_{ij}(\\vect{x}_p,\\vect{x}_l)^2} = \\Vert \\vect{r}(\\vect{x}_p,\\vect{x}_l) \\Vert^2\\,,\n\\end{align}\nwhere $j\\in O(i)$ means that landmark $j$ is observed in frame $i$ and $\\vect{r}(\\vect{x})$ is the concatenation of all residuals $r_{ij}$ into one vector.\nWe call the total number of residuals $N_r$.\nFor a pose dimensionality $d_p$, the length of the total state vector $(\\vect{x}_p,\\vect{x}_l)$ is $d_pn_p+3n_l=:N_p+3n_l$.\nTypically, $d_p=6$ if only extrinsic camera parameters need to be estimated, and $d_p=9$ if intrinsic calibration is also unknown.\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{figures\/sparsity2.pdf}\n \\caption{Dense landmark blocks. (a)~Sparsity structure of the pose Jacobian is fixed during the optimization. Non-zero elements shown in blue, potentially non-zero elements after Givens QR shown in gray, and elements that will always stay zero shown in white. (b)~Dense storage for the outlined (red) landmark block that efficiently stores all Jacobians and residuals for a single landmark. (c)~Same landmark block after in-place marginalization. 
As Givens rotations operate on individual rows, marginalization can be performed for each landmark block separately, possibly in parallel.}\n \\label{fig:sparsity}\n\\end{figure*}\n\n\n \n\\subsection{Least squares problem}\n\nThe energy in \\eqref{eq:ba_energy} is usually minimized by the Levenberg-Marquardt algorithm, which is based on linearizing $\\vect{r}(\\vect{x})$ around the current state estimate $\\vect{x}^0=(\\vect{x}_p^0,\\vect{x}_l^0)$ and then solving a damped linearized problem\n\\begin{align}\n\\begin{aligned}\n\\label{eq:linearized_residual}\n \\min_{\\Delta\\vect{x}}&\\left\\Vert \\begin{pmatrix}\\vect{r}\\\\0\\\\0\\end{pmatrix} + \\begin{pmatrix}J_p & J_l \\\\ \\sqrt{\\lambda}D_p & 0 \\\\ 0 & \\sqrt{\\lambda}D_l\\end{pmatrix}\\begin{pmatrix}\\Delta\\vect{x}_p \\\\ \\Delta\\vect{x}_l\\end{pmatrix}\\right\\Vert^2 = \\\\\n \\min_{\\Delta\\vect{x}_p, \\Delta\\vect{x}_l}&\\bigg(\\Vert \\vect{r} + \\begin{pmatrix}J_p & J_l\\end{pmatrix}\\begin{pmatrix}\\Delta\\vect{x}_p \\\\ \\Delta\\vect{x}_l\\end{pmatrix}\\Vert^2 \\\\\n & + \\lambda \\Vert D_p \\Delta\\vect{x}_p \\Vert^2 + \\lambda\\Vert D_l \\Delta\\vect{x}_l \\Vert^2\\bigg) \\,,\n\\end{aligned}\n\\end{align}\nwith $\\vect{r}=\\vect{r}(\\vect{x}^0)$, $J_p = \\left.\\frac{\\partial \\vect{r}}{\\partial \\vect{x}_p}\\right\\vert_{\\vect{x}^0}$, $J_l = \\left.\\frac{\\partial \\vect{r}}{\\partial \\vect{x}_l}\\right\\vert_{\\vect{x}^0}$, and $\\Delta\\vect{x}=\\vect{x}-\\vect{x}^0$.\nHere, $\\lambda$ is a damping coefficient and $D_p$ and $D_l$ are diagonal damping matrices for pose and landmark variables (often $D^2=\\text{diag}(J^\\top J)$). 
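As a toy numerical illustration (random dense matrices and hypothetical sizes, not the paper's sparse implementation), the damped problem~\\eqref{eq:linearized_residual} can be solved either as a stacked least-squares problem or through the equivalent damped normal equations:

```python
import numpy as np

# Toy sizes (hypothetical): 40 residuals, 12 pose parameters, 3 landmarks.
rng = np.random.default_rng(0)
N_r, N_p, n_l3 = 40, 12, 9
Jp = rng.normal(size=(N_r, N_p))
Jl = rng.normal(size=(N_r, n_l3))
r = rng.normal(size=N_r)
J = np.hstack([Jp, Jl])
lam = 1e-2
D = np.sqrt(np.diag(J.T @ J))  # common damping choice D^2 = diag(J^T J)

# (1) stacked (augmented) least squares: min ||r + J dx||^2 + lam ||D dx||^2
A = np.vstack([J, np.sqrt(lam) * np.diag(D)])
b = np.concatenate([r, np.zeros(N_p + n_l3)])
dx_lsq = np.linalg.lstsq(A, -b, rcond=None)[0]

# (2) equivalent damped normal equations: (J^T J + lam D^2)(-dx) = J^T r
H = J.T @ J + lam * np.diag(D**2)
dx_ne = -np.linalg.solve(H, J.T @ r)

print(np.allclose(dx_lsq, dx_ne))  # True
```

The stacked form squares neither $J$ nor its condition number, which is the numerical advantage the proposed method exploits at scale.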
\n\nTo simplify notation, in this section we consider the undamped problem (i.e., $\\lambda=0$) and discuss the efficient application of damping in Section \\ref{sec:lm_damping}.\nThe undamped problem in \\eqref{eq:linearized_residual} can be solved by forming the corresponding normal equation\n\\begin{align}\n\\label{eq:normal_eq}\n\\begin{pmatrix}\nH_{pp} & H_{pl} \\\\\nH_{lp} & H_{ll}\n\\end{pmatrix}\n\\begin{pmatrix}\n-\\Delta \\vect{x}_{p} \\\\\n-\\Delta \\vect{x}_{l}\n\\end{pmatrix}\n &= \n \\begin{pmatrix}\n\\vect{b}_{p} \\\\\n\\vect{b}_{l}\n\\end{pmatrix},\n\\end{align}\nwhere\n\\begin{align}\nH_{pp} &= J_p^\\top J_p\\,, \\quad\nH_{ll} = J_l^\\top J_l\\,, \\\\\nH_{pl} &= J_p^\\top J_l = H_{lp}^\\top\\,, \\\\\n\\vect{b}_{p} &= J_p^\\top \\vect{r}\\,, \\quad\n\\vect{b}_{l} = J_l^\\top \\vect{r}\\,.\n\\end{align}\nThe system matrix $H$ of this problem is of size $(N_p+3n_l)^2$, which can become impractically large (millions of rows and columns) for problems like those in~\\cite{agarwal2010bundle} (see Figure~\\ref{fig:teaser} for an example).\n\n\\subsection{Schur complement (SC)}\n\nA very common way to solve \\eqref{eq:normal_eq} is by applying the Schur complement trick (see e.g.,~\\cite{brown1958solution,agarwal2010bundle,wu2011multicore}):\nwe form the RCS\n\\begin{align}\n\\label{eq:red_normal_eq}\n\\tilde{H}_{pp} (- \\Delta\\vect{x}_p) = \\tilde{\\vect{b}}_p\\,,\n\\end{align}\nwith\n\\begin{align}\n\\tilde{H}_{pp} &= H_{pp} - H_{pl} H_{ll}^{-1} H_{lp}\\,, \\\\\n\\tilde{\\vect{b}}_p &= \\vect{b}_p - H_{pl} H_{ll}^{-1}\\vect{b}_l\\,.\n\\end{align}\nThe solution $\\Delta\\vect{x}_p^*$ of \\eqref{eq:red_normal_eq} is the same as the pose component of the solution of \\eqref{eq:normal_eq}, but now the system matrix has a much more tractable size of $N_p^2$, which is usually in the order of thousands $\\times$ thousands.\nNote that as $H_{ll}$ is block-diagonal with blocks of size $3\\times 3$, the multiplication with $H_{ll}^{-1}$ is cheap.\nGiven an optimal 
pose update $\\Delta\\vect{x}_p^*$, the optimal landmark update is found by back substitution\n\\begin{align}\n\\label{eq:backsub_sc}\n - \\Delta\\vect{x}_{l}^* &= H_{ll}^{-1} (\\vect{b}_{l} - H_{lp} (-\\Delta\\vect{x}_{p}^*))\\,.\n\\end{align}\n\n\n\\subsection{Nullspace marginalization (NM)}\n\\label{sec:nullspace-marginalization}\n\nUsing QR decomposition on $J_l=QR$, and the invariance of the L2 norm under orthogonal transformations, we can rewrite the term in \\eqref{eq:linearized_residual}:\n\\begin{align}\n\\label{eq:q_linearized_residual}\n\\begin{aligned}\n &\\Vert \\vect{r} + \\begin{pmatrix}J_p & J_l\\end{pmatrix}\\begin{pmatrix}\\Delta\\vect{x}_p \\\\ \\Delta\\vect{x}_l\\end{pmatrix}\\Vert^2 \\\\\n &= \\Vert Q^\\top\\vect{r} + \\begin{pmatrix}Q^\\top J_p & Q^\\top J_l\\end{pmatrix}\\begin{pmatrix}\\Delta\\vect{x}_p \\\\ \\Delta\\vect{x}_l\\end{pmatrix}\\Vert^2 \\\\\n &= \\Vert Q_1^\\top\\vect{r} + Q_1^\\top J_p\\Delta\\vect{x}_p + R_1\\Delta\\vect{x}_l\\Vert^2 \\\\\n &\\qquad + \\Vert Q_2^\\top\\vect{r} + Q_2^\\top J_p\\Delta\\vect{x}_p\\Vert^2\\,.\n\\end{aligned}\n\\end{align}\nAs $R_1$ is invertible, for a given $\\Delta\\vect{x}_p^*$, the first term can always be set to zero (and thus minimized) by choosing\n\\begin{align}\n\\label{eq:backsub_nm}\n \\Delta\\vect{x}_l^* = -R_1^{-1} (Q_1^\\top \\vect{r} + Q_1^\\top J_p \\Delta\\vect{x}_{p}^*)\\,.\n\\end{align}\nSo \\eqref{eq:linearized_residual} reduces to minimizing the second term in \\eqref{eq:q_linearized_residual}:\n\\begin{align}\n\\min_{\\Delta\\vect{x}_p}{\\Vert Q_2^\\top \\vect{r} + Q_2^\\top J_p \\Delta \\vect{x}_{p} \\Vert^2}\\,.\n\\label{eq:reduced}\n\\end{align}\nAgain, this problem is of significantly smaller size than the original one.\nHowever, as opposed to the (explicit) Schur complement trick, we do not explicitly have to form the Hessian matrix.\n\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{figures\/damping3.pdf}\n \\caption{Illustration of the landmark damping in the 
Levenberg-Marquardt optimization. (a) We add three zero rows with diagonal damping for landmarks to the marginalized landmark block. (b) With 6 Givens rotations we eliminate the values on diagonal, which gives us a new landmark block with marginalized out landmark. (c) By applying the transposed rotations in reverse order and zeroing out the diagonal we can bring the landmark block to the original state. Zero entries of the landmark block are shown in white, parts that change are shown in green, and parts that stay unchanged are shown in blue.}\n \\label{fig:damping}\n\\end{figure*}\n\n\n\\subsection{Equivalence of SC and NM}\n\nWith the QR decomposition $J_l = Q_1 R_1$ used in the last paragraphs, we get\n\\begin{align}\nH_{pp} &= J_p^\\top J_p\\,, \\\\\nH_{pl} &= J_p^\\top Q_1 R_1\\,, \\\\\nH_{ll} &= R_1^\\top Q_1^\\top Q_1 R_1 = R_1^\\top R_1\\,, \\\\\n\\vect{b}_{p} &= J_p^\\top \\vect{r}\\,, \\\\\n\\vect{b}_{l} &= R_1^\\top Q_1^\\top \\vect{r}\\,. \\\n\\end{align}\nUsing this, we can rewrite the Schur complement matrix $\\tilde{H}_{pp}$ and vector $\\tilde{\\vect{b}}_p$ and simplify with \\eqref{eq:qqt}:\n\\begin{align}\n&\\begin{aligned}\n\\label{eq:red_H}\n\\tilde{H}_{pp} &= H_{pp} - J_p^\\top Q_1 R_1 (R_1^\\top R_1)^{-1} R_1^\\top Q_1^\\top J_p \\\\\n&= H_{pp} - J_p^\\top (\\id_{N_r} - Q_2 Q_2^\\top) J_p \\\\\n&= J_p^\\top Q_2 Q_2^\\top J_p\\,,\n\\end{aligned}\\\\\n&\\begin{aligned}\n\\label{eq:red_b}\n\\tilde{\\vect{b}}_p &= \\vect{b}_p - J_p^\\top Q_1 R_1 (R_1^\\top R_1)^{-1} R_1^\\top Q_1^\\top \\vect{r} \\\\\n&= J_p^\\top Q_2 Q_2^\\top \\vect{r}\\,.\n\\end{aligned}\n\\end{align}\nThus, the SC-reduced equation is nothing but the normal equation of problem \\eqref{eq:reduced}, which proves the algebraic equivalence of the two marginalization techniques. 
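This algebraic equivalence is easy to verify numerically. The following toy sketch (random dense Jacobians of hypothetical size, not the paper's landmark-block data structures) compares the Schur complement reduction~\\eqref{eq:red_normal_eq} with the nullspace projection from~\\eqref{eq:red_H} and~\\eqref{eq:red_b}:

```python
import numpy as np

# Toy dense stand-ins (hypothetical sizes) for the pose/landmark Jacobians.
rng = np.random.default_rng(1)
N_r, N_p, n_l3 = 30, 8, 6
Jp = rng.normal(size=(N_r, N_p))
Jl = rng.normal(size=(N_r, n_l3))
r = rng.normal(size=N_r)

# Schur complement reduction of the normal equations.
Hpp, Hpl, Hll = Jp.T @ Jp, Jp.T @ Jl, Jl.T @ Jl
bp, bl = Jp.T @ r, Jl.T @ r
H_sc = Hpp - Hpl @ np.linalg.solve(Hll, Hpl.T)
b_sc = bp - Hpl @ np.linalg.solve(Hll, bl)

# Nullspace marginalization: full QR of Jl, then project onto Q2.
Q, _ = np.linalg.qr(Jl, mode='complete')
Q2 = Q[:, n_l3:]  # columns spanning the left nullspace of Jl
H_nm = (Q2.T @ Jp).T @ (Q2.T @ Jp)
b_nm = (Q2.T @ Jp).T @ (Q2.T @ r)

print(np.allclose(H_sc, H_nm), np.allclose(b_sc, b_nm))  # True True
```

Note that `mode='complete'` is needed to obtain $Q_2$; the default reduced QR only returns $Q_1$ and $R_1$.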
\nAdditionally, we can show that the equations for back substitution for Schur complement (\\ref{eq:backsub_sc}) and nullspace marginalization (\\ref{eq:backsub_nm}) are also algebraically equivalent:\n \\begin{align}\n \\begin{aligned}\n \\Delta\\vect{x}_{l}^* &= -H_{ll}^{-1} (\\vect{b}_{l} + H_{lp} \\Delta\\vect{x}_{p}^*) \\\\\n &= -(R_1^\\top R_1)^{-1} (R_1^\\top Q_1^\\top \\vect{r} + (J_p^\\top Q_1 R_1)^\\top \\Delta\\vect{x}_{p}^*) \\\\\n &= -R_1^{-1} (Q_1^\\top \\vect{r} + Q_1^\\top J_p \\Delta\\vect{x}_{p}^*)\\,.\n \\end{aligned}\n \\end{align}\n\nNote that the above arguments also hold for the damped problem \\eqref{eq:linearized_residual}, the difference being that the Hessian will have an augmented diagonal and that the matrices in the QR decomposition will have a larger size.\n\n\n\\section{Implementation details}\n\nBundle adjustment is a very structured problem, so we can take advantage of the problem-specific matrix structures to enable fast and memory-efficient computation.\n\n\\subsection{Storage}\n\nWe group the residuals by landmarks, such that $J_l$ has block-sparse structure, where each block is $2k_j\\times 3$ with $k_j$ the number of observations for a particular landmark, see Figure~\\ref{fig:sparsity} (a).\nAs each landmark is only observed by a subset of cameras, the pose Jacobian $J_p$ is also sparse.\n\nWe group the rows corresponding to each landmark and store them in a separate dense memory block, which we name a \\emph{landmark block}.\nWe store only the blocks of the pose Jacobian that correspond to the poses where the landmark was observed, because all other blocks will always be zero. 
For convenience we also store the landmark's Jacobians and residuals in the same landmark block, as shown in Figure~\\ref{fig:sparsity} (b).\nThis storage scheme can be applied both to the undamped and the damped Jacobian (see Section \\ref{sec:lm_damping} for damping).\n\n\n\\subsection{QR decomposition}\n\nApplying a sequence of Givens rotations in-place transforms the landmark block to the marginalized state shown in Figure~\\ref{fig:sparsity} (c). The bottom part corresponds to the reduced camera system, and the top part can be used for back substitution. This transformation can be applied to each block independently, possibly in parallel.\nWe never have to explicitly store or compute the matrix $Q$; we simply apply the sequence of Givens rotations to the landmark block one by one, as they are computed.\nNote that alternatively we can use three Householder reflections per landmark block, with which we noticed a minor improvement in runtime.\n\n\\subsection{Levenberg-Marquardt damping}\n\\label{sec:lm_damping}\n\nThe augmentation of the Jacobians by diagonal matrices as used in \\eqref{eq:linearized_residual} consists of two parts that we treat differently to optimally exploit the nature of the BA problem in our implementation.\n\n\\paragraph{Landmark damping}\nWe first look at damping the landmark variables:\nrather than actually attaching a large diagonal matrix $\\sqrt{\\lambda}D_l$ to the full landmark Jacobian $J_l$, we can again work on the landmark block from Figure~\\ref{fig:sparsity} (b) and only attach a $3\\times 3$ sub-block there, \nsee Figure~\\ref{fig:damping} (a) and (b).\nTo simplify the expressions in figures, we slightly abuse notation when considering a single landmark and denote the corresponding parts of $J_p$, $J_l$ and $r$ in the landmark block by the same symbols.\nThe matrices involved in the QR decomposition of the undamped system are $Q_1$, $Q_2$, $R_1$ and those for the damped system are marked with a hat.\nNote that $Q$ and 
$\\hat{Q}$ are closely related; the additional three diagonal entries in the damped formulation can be zeroed using only six Givens rotations, such that\n\\begin{align}\n\\hat{Q} = \\begin{pmatrix}\\hat{Q}_1 & \\hat{Q}_2 \\end{pmatrix} = \\begin{pmatrix}Q_1 & Q_2 & 0 \\\\ 0 & 0 & \\id_{3} \\end{pmatrix}Q_\\lambda\n\\,,\n\\end{align}\nwhere $Q_\\lambda$ is a product of six Givens rotations.\nThus, applying and removing landmark damping is computationally cheap: we apply the Givens rotations one by one and store them individually (rather than their product $Q_\\lambda$) to undo the damping later.\nFigure~\\ref{fig:damping} illustrates how this can be done in-place on the already marginalized landmark block.\nThis can speed up LM's backtracking, where a rejected increment is recomputed with the same linearization, but with increased damping.\nBy contrast, for an explicit SC solver, changing the damping would mean recomputing the Schur complement from scratch.\n\n\\paragraph{Pose damping}\nMarginalizing landmarks using Givens rotations in the damped formulation of \\eqref{eq:linearized_residual} does not affect the rows containing pose damping.\nThus, it is still the original diagonal matrix $\\sqrt{\\lambda}D_p$ that we append to the bottom of $\\hat{Q}^\\top J_p$:\n\\begin{align}\n \\hat{J}_p = \n \\begin{pmatrix}\n \\hat{Q}_1^\\top J_p \\\\ \\hat{Q}_2^\\top J_p \\\\ \\sqrt{\\lambda}D_p\n \\end{pmatrix}\\,.\n\\end{align}\nIn practice, we do not even have to append the block, but can simply add the corresponding summand when evaluating matrix-vector multiplication for the CG iteration \\eqref{eq:cg_mult}.\n\n\\subsection{Conjugate gradient on normal equations}\n\nTo solve for the optimal $\\Delta x_p^*$ in small and medium systems, we could use dense or sparse QR decomposition of the stacked $Q_2^{\\top} J_p$ from landmark blocks to minimize the linear least squares objective $\\Vert Q_2^{\\top} J_p \\Delta x_p + Q_2^{\\top} r\\Vert^2$. 
However, for large systems this approach is not feasible due to the high computational cost.\nInstead, we use CG on the normal equations as proposed in~\cite{agarwal2010bundle}.\nOther iterative solvers like LSQR~\cite{paige1982lsqr} or LSMR~\cite{fong2011lsmr}, which can be more numerically stable than CG, turned out not to improve stability for bundle adjustment~\cite{byrod2010conjugate}.\n\nCG accesses the normal equation matrix $\tilde{H}_{pp}$ only by multiplication with a vector $\vect{v}$, which we can write as\n\begin{align}\n\label{eq:cg_mult}\n\tilde{H}_{pp} \vect{v} = (\hat{Q}_2^\top J_p)^\top (\hat{Q}_2^\top J_p \vect{v}) + \lambda D^2_p \vect{v}\n\,.\n\end{align}\nThis multiplication can be efficiently implemented and well parallelized using our array of landmark blocks.\nThus, we do not need to explicitly form the normal equations for the reduced least squares problem.\n\nStill, the CG part of our solver has the numerical properties of the normal equations (squared condition number compared to the marginal Jacobian $\hat{Q}_2^\top J_p$). To avoid numeric issues when using single-precision floating-point numbers, we scale the columns of the full Jacobian to have unit norm and use a block-Jacobi preconditioner, both standard procedures when solving BA problems and both also used in the other evaluated solvers.\nWe also note that with the Levenberg-Marquardt algorithm, we solve a strictly positive definite damped system, which additionally improves the numeric stability of the optimization.\n\nStoring the information used in CG in square root form allows us to make sure that $\tilde{H}_{pp}$ is always strictly positive definite.
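To illustrate, here is a minimal matrix-free CG sketch in the spirit of \eqref{eq:cg_mult}. The dense matrix `J` stands in for the stacked $\hat{Q}_2^\top J_p$ and all sizes are arbitrary toy values; the actual solver evaluates the product block-wise over landmark blocks, in parallel, and with a preconditioner, which is omitted here:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def tmatvec(A, v):  # computes A^T v
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

# Toy stand-ins for the marginal Jacobian, damping factor, and diagonal D_p.
random.seed(0)
J = [[random.gauss(0, 1) for _ in range(4)] for _ in range(6)]
lam, D = 0.1, [1.0, 1.0, 1.0, 1.0]
b = [random.gauss(0, 1) for _ in range(4)]

def apply_H(v):
    """Matrix-free product (Q2^T Jp)^T (Q2^T Jp v) + lambda * D_p^2 v."""
    w = tmatvec(J, matvec(J, v))
    return [wi + lam * d * d * vi for wi, d, vi in zip(w, D, v)]

def conjugate_gradient(apply_A, b, max_iter=200, tol=1e-12):
    x = [0.0] * len(b)
    r = b[:]          # residual b - A x for the zero initial guess
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = conjugate_gradient(apply_H, b)
residual = max(abs(hi - bi) for hi, bi in zip(apply_H(x), b))
```

The system matrix is never formed; CG only ever sees it through `apply_H`, exactly as in the landmark-block implementation.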
As we show with our experiments (see Section \ref{sec:analysis}), for many sequences small round-off errors during SC (explicit or implicit) render $\tilde{H}_{pp}$ numerically indefinite in single-precision floating-point computations.\n\nWith the computed $\Delta\vect{x}_{p}^*$ we can perform back substitution for each individual landmark block independently and in parallel. We already have all the necessary information ($\hat{Q}_1^\top J_p$, $\hat{R}$, $\hat{Q}_1^\top r$) stored in the landmark blocks \nafter marginalization.\n\n\subsection{Parallel implementation}\nAs pointed out above, the linearization, marginalization, and back substitution can be computed independently for each landmark block. There is no information shared between landmark blocks, so we can use a simple \emph{parallel for} loop to evenly distribute the workload across all available CPU cores.\nThe matrix-vector multiplications that constitute the most computationally expensive part of CG can also be efficiently parallelized.
In this case, multiplication results of individual landmark blocks have to be summed, so we employ the common \emph{parallel reduce} pattern.\nThe effectiveness of these simple parallelization schemes is underlined by our evaluation, which shows excellent runtime performance of the square root bundle adjustment implementation compared to both our custom and Ceres' SC solvers.\n\n\begin{figure*}\n \centering\n \includegraphics[width=0.98\textwidth]{results\/performance_profile_all.pdf}\n \caption{Performance profiles for all BAL datasets show the percentage of problems solved to a given accuracy tolerance $\tau$ with increasing relative runtime $\alpha$.\n Our proposed $\sqrt{BA}$ solver outperforms the other methods across all accuracy tolerances.\n In single precision, the solver is about twice as fast as in double precision, but does not lose accuracy, underpinning the favorable numerical properties of the square root formulation.\n In contrast, while the SC solver in double precision is equally accurate, this is not the case for the single-precision variant. Keep in mind that Ceres is not as tailored to the exact problem setup as our custom implementation, possibly explaining its slower performance.
Note that the performance profiles are cut off at the right side of the plots to show only the most relevant parts.\n }\n \\label{fig:performance_all}\n\\end{figure*}\n\n\n\n\\section{Experimental evaluation}\n\n\\subsection{Algorithms and setup}\n\n\\begin{table}\n\\setlength{\\tabcolsep}{0.3em}\n\\centering\n\\begin{tabular}{l|c;{1pt\/1pt}c;{1pt\/1pt}c;{1pt\/1pt}c|c;{1pt\/1pt}c|}\n& \\rotatebox{90}{\\emph{$\\sqrt{BA}$-32} (ours)} \n& \\rotatebox{90}{\\emph{$\\sqrt{BA}$-64} (ours)} \n& \\rotatebox{90}{\\emph{explicit-32}} \n& \\rotatebox{90}{\\emph{explicit-64}}\n& \\rotatebox{90}{\\emph{ceres-implicit}}\n& \\rotatebox{90}{\\emph{ceres-explicit}}\n\\\\\n\\hline\nsolver implementation & \\multicolumn{4}{c|}{custom} & \\multicolumn{2}{c|}{Ceres} \\\\\n\\hline\nfloat precision & s & d & s & d & \\multicolumn{2}{c|}{d} \\\\\n\\hline\nlandmark marginalization & \\multicolumn{2}{c;{1pt\/1pt}}{NM} & \\multicolumn{2}{c|}{SC} & \\multicolumn{2}{c|}{SC} \\\\\n\\hline\nRCS storage & \\multicolumn{2}{c;{1pt\/1pt}}{LMB} & \\multicolumn{2}{c|}{H} & -- & H\\\\\n\\hline\n\\end{tabular}\n\\vspace{0.3cm}\n\\caption{The evaluated solvers---proposed and baseline---are implemented either completely in our custom code base, or using Ceres, with single (s) or double (d) floating-point precision, using Nullspace (NM) or Schur complement (SC)-based marginalization of landmarks, and storing the reduced camera system sparsely in landmark blocks (LMB), sparsely as a reduced Hessian (H), or not at all (--).\n}\n\\label{tab:solver_properties}\n\\end{table}\n\n\n\nWe implement our $\\sqrt{BA}$ solver in C++ in single (\\emph{$\\sqrt{BA}$-32}) and double (\\emph{$\\sqrt{BA}$-64}) floating-point precision\nand compare it to the methods proposed in \\cite{agarwal2010bundle}\nas implemented in Ceres Solver \\cite{ceres-solver}.\nThis solver library is popular in the computer vision and robotics community, \nsince it is mature, performance-tuned, and offers many linear solver variations.\nThat makes 
it a relevant and challenging baseline to benchmark our implementation against.\nWhile Ceres is a general-purpose solver,\nit is very good at exploiting the specific problem structure as\nit was built with BAL problems in mind.\nOur main competing algorithms are Ceres' sparse Schur complement solvers, \nwhich solve the RCS iteratively by either explicitly saving $\\tilde{H}_{pp}$ in memory as a block-sparse matrix (\\emph{ceres-explicit}), \nor otherwise computing it on the fly during the iterations (\\emph{ceres-implicit}).\nIn both cases, the same block-diagonal of $\\tilde{H}_{pp}$ that we use in $\\sqrt{BA}$ is used as preconditioner.\nAs the bottleneck is not computing Jacobians, but marginalizing points and the CG iterations, we use analytic Jacobians for our custom solvers and Ceres' exact and efficient autodiff with dual numbers.\nFor an even more direct comparison,\nwe additionally implement the sparse iterative explicit Schur complement solver without Ceres, sharing much of the code with our $\\sqrt{BA}$ implementation. While Ceres always uses double precision, \nwe use our custom implementation to evaluate numerical stability by considering single (\\emph{explicit-32}) and double (\\emph{explicit-64}) precision.\nTable~\\ref{tab:solver_properties} summarizes the evaluated configurations.\n\n\nFor Ceres we use default options, unless otherwise specified.\nThis includes the scaling of Jacobian columns to avoid numerical issues~\\cite{agarwal2010bundle}.\nJust like in our custom implementation, \nwe configure the number of threads to be equal to the number of (virtual) CPU cores. \nOur Levenberg-Marquardt loop is in line with Ceres: \nstarting with initial value $10^{-4}$, we update the damping factor $\\lambda$ according to the ratio of actual and expected cost reduction,\nand run it for at most 50 iterations, terminating early if a relative function tolerance of $10^{-6}$ is reached. 
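The Levenberg-Marquardt loop described above can be sketched on a toy two-parameter problem as follows. The initial damping of $10^{-4}$, the 50-iteration cap, and the $10^{-6}$ function tolerance match the setup described in the text, while the concrete $\lambda$-update factors below are a common textbook choice that stands in for, but does not exactly reproduce, Ceres' gain-ratio strategy:

```python
def residuals(x):
    # Toy zero-residual problem: intersection of a circle and a parabola.
    return [x[0] ** 2 + x[1] ** 2 - 1.0, x[1] - x[0] ** 2]

def jacobian(x):
    return [[2 * x[0], 2 * x[1]], [-2 * x[0], 1.0]]

def cost(x):
    return 0.5 * sum(ri * ri for ri in residuals(x))

def solve2(A, b):  # closed-form solve of a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

x, lam = [1.0, 1.0], 1e-4  # initial guess and initial damping
f = cost(x)
for _ in range(50):  # at most 50 LM iterations
    r, J = residuals(x), jacobian(x)
    g = [sum(J[i][k] * r[i] for i in range(2)) for k in range(2)]  # J^T r
    H = [[sum(J[i][a] * J[i][b] for i in range(2)) for b in range(2)]
         for a in range(2)]
    # Marquardt-style damping: augment the diagonal of J^T J.
    Hd = [[H[a][b] + (lam * H[a][a] if a == b else 0.0) for b in range(2)]
          for a in range(2)]
    dx = solve2(Hd, [-gi for gi in g])
    x_new = [xi + di for xi, di in zip(x, dx)]
    f_new = cost(x_new)
    # Gain ratio: actual cost reduction vs. reduction predicted by the model
    # m(dx) = 0.5 * ||r + J dx||^2.
    m = 0.5 * sum((r[i] + J[i][0] * dx[0] + J[i][1] * dx[1]) ** 2
                  for i in range(2))
    pred = f - m
    rho = (f - f_new) / pred if pred > 0 else -1.0
    if rho > 0:  # accept the increment and relax the damping
        converged = abs(f - f_new) <= 1e-6 * f  # relative function tolerance
        x, f = x_new, f_new
        lam = max(lam / 3.0, 1e-12)
        if converged:
            break
    else:        # reject: keep the linearization, increase the damping
        lam *= 4.0
print(x, f)
```

The reject branch is exactly the backtracking situation for which the cheap landmark-damping update of Section \ref{sec:lm_damping} pays off: only $\lambda$ changes, not the linearization.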
\nIn the inner CG loop we use the same forcing sequence as Ceres, with a maximum of 500 iterations and no minimum.\nWe run experiments on an Ubuntu 18.04 desktop with 64GB RAM and an Intel Xeon W-2133 with 12 virtual cores at 3.60GHz.\nIn our own solver implementation we rely on Eigen \\cite{eigenweb} for dense linear algebra and TBB\\cite{tbbweb} for \nsimple \\emph{parallel for} and \\emph{parallel reduce} constructs.\n\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{c@{\\hskip 0.005\\textwidth}c@{\\hskip 0.005\\textwidth}c@{\\hskip 0.005\\textwidth}c}\n \\includegraphics[width=0.24\\textwidth]{results\/ladybug138.png} & \n \\includegraphics[width=0.24\\textwidth]{results\/trafalgar170.png} & \n \\includegraphics[width=0.24\\textwidth]{results\/venice1158.png} & \n \\includegraphics[width=0.24\\textwidth]{results\/final4585-1.png} \\\\\n \\includegraphics[width=0.24\\textwidth]{results\/bal_cost_convergence_ladybug138.pdf} & \n \\includegraphics[width=0.24\\textwidth]{results\/bal_cost_convergence_trafalgar170.pdf} & \n \\includegraphics[width=0.24\\textwidth]{results\/bal_cost_convergence_venice1158.pdf} & \n \\includegraphics[width=0.24\\textwidth]{results\/bal_cost_convergence_final4585.pdf}\n \\end{tabular}\n \\caption{Convergence plots from small to large problems and rendered optimized landmark point clouds. The $y$-axes show the total BA cost (log scale), and the horizontal lines indicate cost thresholds for the tolerances $\\tau \\in \\{ 10^{-1}, 10^{-2}, 10^{-3}\\}$.}\n \\label{fig:convergence}\n\\end{figure*}\n\n\n\\subsection{Datasets}\n\nFor our extensive evaluation we use all 97 bundle adjustment problems from the BAL \\cite{agarwal2010bundle} project page.\nThese constitute initialized bundle adjustment problems and come in different groups: the \\emph{trafalgar}, \\emph{dubrovnik}, and \\emph{venice} problems originate from successive iterations in a skeletal SfM reconstruction of internet image collections \\cite{svoboda2018qrkit}. 
\nThey are combined with additional leaf images, which results in the denser \emph{final} problems.\nThe \emph{ladybug} sequences are reconstructions from a moving camera, but despite this we always model all camera intrinsics as independent, \nusing the suggested \emph{Snavely} projection model with one focal length and two distortion parameters. Figure~\ref{fig:convergence} visualizes some example problems after they have been optimized.\n\nAs is common, we apply simple gauge normalization as preprocessing:\nwe center the point cloud of landmarks at the coordinate system origin and rescale to a median absolute deviation of 100.\nInitial landmark and camera positions are then perturbed with small Gaussian noise.\nTo avoid a close-to-invalid state, we remove all observations with a small or negative $z$ value in the camera frame, \ncompletely removing landmarks with fewer than two remaining observations.\nWe additionally employ the Huber norm with a parameter of 1~pixel for residuals (implemented with IRLS as in Ceres).\nThis preprocessing essentially follows Ceres' BAL examples. It is deterministic and identical for all solvers, since it uses a fixed random seed and is computed on state in double precision, regardless of solver configuration.\n\n\subsection{Performance profiles}\n\nWhen evaluating a solver, we are primarily interested in accurate optimization results.\nSince we do not have independent ground truth of correct camera positions, intrinsics, and landmark locations,\nwe use the bundle adjustment cost as a proxy for accuracy.
Lower cost in general corresponds to better solutions.\nBut depending on the application, we may desire in particular low runtime, which can be a trade-off with accuracy.\nThe difficulty lies in judging the performance across several orders of magnitudes in problem sizes, cost values, and runtimes (for the BAL datasets the number of cameras $n_p$ ranges from 16 to 13682).\nAs proposed in prior work \\cite{kushal2012visibility,dolan2002benchmarking}, we therefore use performance profiles to evaluate both accuracy and runtime jointly.\nThe performance profile of a given solver maps the relative runtime~$\\alpha$ (relative to the fastest solver for each problem and accuracy) to the percentage of problems solved to accuracy~$\\tau$.\nThe curve is monotonically increasing, \nstarting on the left with the percentage of problems for which the solver is the fastest, \nand ending on the right with the percentage of problems on which it achieves the accuracy~$\\tau$ at all.\nThe curve that is more to the left indicates better runtime and the curve that is more to the top indicates higher accuracy.\nA precise definition of performance profiles is found in the appendix.\n\n\n\\subsection{Analysis}\n\\label{sec:analysis}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{results\/memory_ladybug.pdf}\n \\caption{Memory consumption for the relatively sparse \\emph{ladybug} problems grows linearly with the problem size. 
The number of cameras here ranges from 49 to 1723.}\n \\label{fig:memory_ladybug}\n\\end{figure}\n\nFigure~\\ref{fig:performance_all} shows the performance profiles with all BAL datasets for a range of tolerances $\\tau \\in \\{ 10^{-1}, 10^{-2}, 10^{-3}\\}$.\nWe can see that our proposed square root bundle adjustment solver \\emph{$\\sqrt{BA}$-64} is very competitive, yielding better accuracy than some SC-based iterative solvers, and often at a lower runtime.\n\\emph{$\\sqrt{BA}$-32} is around twice as fast and equally accurate, which highlights the good numerical stability of the square root formulation. It clearly outperforms all other solvers across all tolerances.\nThe fact that \\emph{\\mbox{explicit-32}} does not always reach the same accuracy as its double precision counterpart \\emph{explicit-64} indicates that SC-based solvers do not exhibit the same numerical stability. We also do not observe the same twofold speedup, which is related to the fact that \\emph{\\mbox{explicit-32}} does have to backtrack in the LM loop significantly more often to increase damping when the Schur complement matrix becomes indefinite due to round-off errors. This happens at least once for 84 out of the 97 problems with \\emph{explicit-32} and even with \\emph{explicit-64} for 7 of the problems. With $\\sqrt{BA}$, we have never encountered this.\n\nA similar conclusion can be drawn from the convergence plots in Figure~\\ref{fig:convergence}, which show a range of differently sized problems.\nFor the small \\emph{ladybug} as well as the medium and large \\emph{skeletal} problems, our solver is faster. \nEven on the large and much more dense \\emph{final4585}, the square root solver is competitive. In the square root formulation memory and thus to some degree also runtime grows larger for denser problems---in the sense of number of observations per landmark---since a specific landmark block grows quadratically in size with the number of its observations. 
This is in contrast to density in the sense of number of cameras co-observing at least one common landmark, as for the SC.\nStill, across all BAL datasets, only for the largest problem, \\emph{final13682}, where the landmarks have up to 1748 observations, does \\emph{$\\sqrt{BA}$-32} run out of memory.\nFor sparse problems, such as \\emph{ladybug}, one can see in Figure~\\ref{fig:memory_ladybug} that the memory grows linearly with the number of landmarks, and for \\emph{$\\sqrt{BA}$-32} is similar to Ceres' iterative SC solvers.\nIn summary, while for very small problems we expect direct solvers to be faster than any of the iterative solvers, and for very large and dense problems implicit SC solvers scale better due to their memory efficiency~\\cite{agarwal2010bundle}, the proposed \\emph{$\\sqrt{BA}$} solver outperforms alternatives for medium to large problems, i.e., the majority of the BAL dataset.\n\n\n\\section{Conclusion}\n\nWe present an alternative way to solve large-scale bundle adjustment that marginalizes landmarks without having to compute any blocks of the Hessian matrix.\nOur square root approach $\\sqrt{BA}$ displays several advantages over the standard Schur complement, in terms of speed, accuracy, and numerical stability.\nWe have combined a very general theoretical derivation of nullspace marginalization with a tailored implementation that maximally exploits the specific structure of BA problems.\nExperiments comparing our solver to both a custom SC implementation and the state-of-the-art Ceres library show how $\\sqrt{BA}$ can handle single-precision floating-point operations much better than the Schur complement methods, outperforming all evaluated competing approaches. 
We see great potential in $\sqrt{BA}$ to benefit other applications that play up to its strong performance on sparse problems, for example incremental SLAM.\n\n\balance\n\n{\small\n\bibliographystyle{ieee_fullname}\n\n\n\section{Additional details}\n\label{sec:appendix_details}\n\n\subsection{Givens rotations}\n\nThe QR decomposition of a matrix $A$ can be computed using Givens rotations.\nA Givens rotation is represented by a matrix $G_{ij}(\theta)$ that is equal to the identity except for rows and columns $i$ and $j$, where it has the non-zero entries\n\begin{align}\n \begin{pmatrix}\n g_{ii} & g_{ij} \\g_{ji} & g_{jj}\n \end{pmatrix}\n =\n \begin{pmatrix}\n \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\n \end{pmatrix}\,.\n\end{align}\n$G_{ij}(\theta)$ describes a rotation by angle $\theta$ in the $ij$-plane.\nThe multiplication of a matrix $A$ with $G_{ij}(\theta)$ changes only two rows in $A$, leaving all other rows unchanged.\n$\theta$ can be chosen such that the element $(i,j)$ of $G_{ij}(\theta)A$ is zero:\n\begin{align}\n \cos\theta = \frac{a_{jj}}{\sqrt{a_{jj}^2+a_{ij}^2}}\,, \quad\n \sin\theta = -\frac{a_{ij}}{\sqrt{a_{jj}^2+a_{ij}^2}}\,.\n\end{align}\nBy subsequent multiplication of $A$ with Givens matrices, all elements below the diagonal can be zeroed (see~\cite[p.
252]{golub13} for the full algorithm).\nAs all $G_{ij}(\theta)$ are orthogonal by construction, their product matrix $Q$ is also orthogonal.\n\n\n\subsection{Performance profiles}\n\nLet $\mathcal{P}$ be a set of BA problems and $\mathcal{S}$ be the set of evaluated solvers, which we run according to the termination criteria (maximum number of iterations and function tolerance).\nFor a given problem $p \in \mathcal{P}$ and solver $s \in \mathcal{S}$, \nwe define the minimal cost value achieved by that solver after time $t$ as $f(p, s, t)$.\nThe smallest achieved cost by any solver for a specific problem is denoted by $f^*(p) := \min_{s,t} f(p,s,t)$, \nwhich we use to define for a chosen accuracy tolerance $0 < \tau < 1$ the cost threshold\n\begin{align}\n f_\tau(p) := f^*(p) + \tau (f_0(p) - f^*(p)),\n\end{align}\nwhere $f_0(p)$ is the initial cost of problem $p$.\nThe runtime for solver $s$ to achieve that threshold is\n\begin{align}\n t_\tau(p, s) := \min \left( \{ t \; | \; f(p, s, t) \leq f_\tau(p) \} \cup \{ \infty \} \right).\n\end{align}\nWith this, the performance profile of a solver~$s$ is\n\begin{align}\n \rho_\tau(s, \alpha) := 100 \frac{|\{ p \; | \; t_\tau(p, s) \leq \alpha \min_{s'} t_\tau(p, s') \}|}{|\mathcal{P}|}.\n\end{align}\nIn other words, $\rho_\tau(s, \alpha)$ maps the relative runtime~$\alpha$ to the percentage of problems that $s$ has solved to accuracy~$\tau$.\nThe curve is monotonically increasing, \nstarting on the left with $\rho_\tau(s, 1)$, the percentage of problems for which solver~$s$ is the fastest, \nand ending on the right with $\max_{\alpha} \rho_\tau(s, \alpha)$, \nthe percentage of problems on which the solver~$s$ achieves the cost threshold $f_\tau(p)$ at all.\nComparing different solvers, the curve that is more to the left indicates better runtime \nand the curve that is more to the top indicates higher accuracy.\n\n\n\section{Algorithm
complexities}\n\\label{sec:complexities}\n\n\\begin{table*}[t]\n \\centering\n \\renewcommand{\\arraystretch}{1.4}\n \\begin{tabular}{l | l l l}\n & $\\sqrt{BA}$ (ours) & explicit SC & implicit SC \\\\\n \\hline\n \\textbf{outer iterations} \\\\\n Jacobian computation &\n $\\mathcal{O}(\\mu_on_l)$ & $\\mathcal{O}(\\mu_on_l)$ & $\\mathcal{O}(\\mu_on_l)$ \\\\\n Hessian computation &\n $0$ & $\\mathcal{O}(\\mu_on_l)$ & $0$ \\\\\n QR &\n $\\mathcal{O}((\\mu_o^2+\\sigma_o^2)n_l)$ & $0$ & $0$ \\\\\n \\textbf{middle iterations} \\\\\n damping &\n $\\mathcal{O}(\\mu_on_l)$ & $\\mathcal{O}(n_l+n_p)$ & $0$ \\\\\n SC &\n $0$ & $\\mathcal{O}((\\mu_o^2+\\sigma_o^2)n_l)$ & $0$ \\\\\n preconditioner & $\\mathcal{O}((\\mu_o^2+\\sigma_o^2)n_l+n_p)$ & $\\mathcal{O}(n_p)$ & $\\mathcal{O}(\\mu_on_l+n_p)$ \\\\\n back substitution & $\\mathcal{O}(\\mu_on_l)$ & $\\mathcal{O}(\\mu_on_l)$ & $\\mathcal{O}(\\mu_on_l)$ \\\\\n \\textbf{inner iterations} \\\\\n PCG & $\\mathcal{O}((\\mu_o^2+\\sigma_o^2)n_l+n_p)$ & $\\mathcal{O}(n_p^2)$ (worst case) & $\\mathcal{O}(\\mu_on_l+n_p)$ \\\\ \n \\end{tabular}\n \\caption{Complexities of the different steps in our $\\sqrt{BA}$ solver compared to the two SC solvers (explicit and implicit), expressed only in terms of $n_l$, $n_p$, $\\mu_o$, and $\\sigma_o$.\n We split the steps into three stages:\n outer iterations, i.e., everything that needs to be done in order to setup the least squares problem (once per linearization); middle iterations, i.e., everything that needs to be done within one Levenberg-Marquardt iteration (once per outer iteration if no backtracking is required or multiple times if backtracking occurs); inner iterations, i.e., everything that happens within one PCG iteration.}\n \\label{tab:complexities}\n\\end{table*}\n\nIn Table~\\ref{tab:complexities}, we compare the theoretical complexity of our QR-based solver to the explicit and implicit Schur complement solvers in terms of number of poses $n_p$, number of landmarks $n_l$, and mean 
$\\mu_o$ and variance $\\sigma_o^2$ of the number of observations per landmark.\nNote that the total number $n_o$ of observations equals $\\mu_on_l$.\nWhile most of the entries in the table are easily determined, let us briefly walk through the not so obvious ones:\n\n\\paragraph{$\\sqrt{BA}$ (our solver)}\nAssume landmark $j$ has $k_j$ observations.\nWe can express the sum over $k_j^2$ by $\\mu_o$ and $\\sigma_o^2$:\n\\begin{gather}\n \\sigma_o^2=\\operatorname{Var}(\\{k_j\\}) = \\frac{1}{n_l}\\sum_{j=1}^{n_l}{k_j^2} - (\\frac{1}{n_l}\\sum_{j=1}^{n_l}{k_j})^2\\,, \\\\\n \\Rightarrow \\sum_{j=1}^{n_l}{k_j^2} = n_l(\\mu_o^2 + \\sigma_o^2)\\,.\n\\end{gather}\nThe sum over $k_j^2$ appears in many parts of the algorithm, as the dense landmark blocks $\\begin{pmatrix}\\hat{J}_p & \\hat{J}_l & \\hat{r}\\end{pmatrix}$ after QR decomposition have size $(2k_j+3)\\times(d_pk_j+4)$, where the number of parameters per camera $d_p$ is 9 in our experiments.\nIn the QR step, we need $6k_j$ Givens rotations per landmark (out of which 6 are for the damping), and we multiply the dense landmark block by $\\hat{Q}^\\top$, so we end up having terms $\\mathcal{O}(\\sum_j{k_j})$ and $\\mathcal{O}(\\sum_j{k_j^2})$, leading to the complexity stated in Table~\\ref{tab:complexities}.\nFor the block-diagonal preconditioner, each landmark block contributes summands to $k_j$ diagonal blocks, each of which needs a matrix multiplication with $\\mathcal{O}(k_j)$ flops, thus we have a complexity of $\\mathcal{O}(\\sum_jk_j^2)$.\nPreconditioner inversion can be done block-wise and is thus $\\mathcal{O}(n_p)$.\nIn the PCG step, we can exploit the block-sparse structure of $\\hat{Q}_2^\\top \\hat{J}_p$ and again have the $k_j^2$-dependency.\nBecause of the involved vectors being of size $2n_o+d_pn_p$ (due to pose damping), we additionally have a dependency on $2n_o+d_pn_p$.\nFinally, for the back substitution, we need to solve $n_l$ upper-triangular $3\\times 3$ systems and then effectively do $n_l$ 
multiplications of a $(3\\times d_pk_j)$ matrix with a vector, which in total is of order $\\mathcal{O}(\\sum_jk_j)$.\n\n\\paragraph{Explicit SC}\nThe first step is the Hessian computation.\nAs each single residual only depends on one pose and one landmark, the Hessian computation scales with the number of observations\/residuals.\nDamping is a simple augmentation of the diagonal and contributes terms $\\mathcal{O}(n_l)$ and $\\mathcal{O}(n_p)$.\nMatrix inversion of $H_{ll}$ for the Schur complement scales linearly with $n_l$, while the number of operations to multiply $H_{pl}$ by $H_{ll}^{-1}$ scales with the number of non-zero sub-blocks in $H_{pl}$, and thus with $n_o$.\nThe multiplication of this product with $H_{lp}$ involves matrix products of sub-blocks sized $(d_p\\times 3)$ and $(3\\times d_p)$ for each camera pair that shares a given landmark, i.e., $\\mathcal{O}(k_j^2)$ matrix products for landmark $j$.\nThe preconditioner can simply be read off from $\\tilde{H}_{pp}$, and its inversion is the same as for $\\sqrt{BA}$.\nThe matrices and vectors involved in PCG all have size $d_pn_p(\\times d_pn_p)$.\nThe sparsity of $\\tilde{H}_{pp}$ is not only determined by $n_p$, $n_l$, $\\mu_o$, and $\\sigma_o$, but would require knowledge about which cameras share at least one landmark. 
In the worst case, where every pair of camera poses observes at least one common landmark, $\tilde{H}_{pp}$ is dense.\nThus, assuming a dense $\tilde{H}_{pp}$, we get a quadratic dependence on $n_p$.\nBack substitution consists of matrix inversion of $n_l$ blocks and a simple matrix-vector multiplication.\n\n\paragraph{Implicit SC}\nSince the Hessian matrix is not explicitly computed, we need an extra step to compute the preconditioner for implicit SC.\nFor each pose, we have to compute a $d_p\times d_p$ block for which the number of flops scales linearly with the number of observations of that pose, so it is $\mathcal{O}(n_o)$ in total.\nPreconditioner damping contributes the $n_p$-dependency.\nAs no matrices except for the preconditioner are precomputed for the PCG iterations, but sparsity can again be exploited to avoid quadratic complexities, this part of the algorithm scales linearly with all three numbers (assuming the outer loop for the preconditioner computation is a \emph{parallel for} over cameras, rather than a \emph{parallel reduce} over landmarks).\nLastly, back substitution is again only block-wise matrix inversion and matrix-vector multiplications.\nWhile avoiding a dependency on $(\mu_o^2+\sigma_o^2)n_l$ in the asymptotic runtime seems appealing, the implicit SC method computes a sequence of five sparse matrix-vector products in each PCG iteration in addition to the preconditioner multiplication, making it harder to parallelize than the other two methods, which have only one (explicit SC) or two ($\sqrt{BA}$) sparse matrix-vector products.\nThus, the benefit of implicit SC becomes apparent only for very large problems.
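As a quick sanity check, the identity $\sum_j k_j^2 = n_l(\mu_o^2 + \sigma_o^2)$ that underlies the $(\mu_o^2+\sigma_o^2)n_l$ terms in Table~\ref{tab:complexities} can be verified numerically; the observation counts below are hypothetical:

```python
import random

# sum_j k_j^2 = n_l * (mu_o^2 + sigma_o^2), with population mean and variance.
random.seed(0)
n_l = 1000
k = [random.randint(2, 50) for _ in range(n_l)]  # observations per landmark
mu = sum(k) / n_l
sigma2 = sum((kj - mu) ** 2 for kj in k) / n_l   # population variance
lhs = sum(kj * kj for kj in k)
rhs = n_l * (mu * mu + sigma2)
rel_err = abs(lhs - rhs) / lhs
print(rel_err)  # negligible: the identity holds up to floating-point rounding
```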
\nAs our evaluation shows, for medium and large problems, i.e.\\ the majority in the BAL dataset, our $\\sqrt{BA}$ solver is still superior in runtime.\n\n\n\\section{Problem sizes}\n\\label{sec:problem_sizes}\n\nTable~\\ref{tab:problem-size} in Section~\\ref{sec:problem_sizes_table}\ndetails the size of the bundle adjustment problem for each instance in the BAL dataset (grouped into \\emph{ladybug}, the skeletal problems \\emph{trafalgar}, \\emph{dubrovnik}, and \\emph{venice}, as well as the \\emph{final} problems). \nBesides number of cameras $n_p$, number of landmarks $n_l$, and number of observations $n_r = \\frac{N_r}{2}$, \nwe also show indicators for \\emph{problem density}: \nthe average number of observations per camera \\emph{\\#obs \/ cam} (which equals $n_r \/ n_p$), \nas well as the average number of observations per landmark \\emph{\\#obs \/ lm}, including its standard deviation and maximum over all landmarks. \n\nIn particular, a high mean and variance of \\emph{\\#obs \/ lm} indicates that our proposed $\\sqrt{BA}$ solver may require a large amount of memory (see for example \\emph{final961}, \\emph{final1936}, \\emph{final13682}), \nsince the dense storage after marginalization in a landmark block is quadratic in the number of observations of that landmark.\nIf on the other hand the problems are sparse and the number of observations is moderate, the memory required by $\\sqrt{BA}$ grows only linearly in the number of observations, similar to SC-based solvers (see Figure~6 in the main paper).\n\n\n\\section{Convergence}\n\\label{sec:convergence}\n\n\\balance\n\nIn Section~\\ref{sec:convergence_plots}, each row of plots corresponds to one of the 97 bundle adjustment problems and contains from left to right a plot of optimized cost by runtime (like Figure~5 in the main paper) and by iteration, trust-region size (inverse of the damping factor~$\\lambda$) by iteration, number of CG iterations by (outer) iteration, and peak memory usage by iteration. 
\nThe cost plots are cut off at the top, and horizontal lines indicate the cost thresholds corresponding to accuracy tolerances $\\tau \\in \\{10^{-1}, 10^{-2}, 10^{-3}\\}$ as used in the performance profiles. The plot by runtime is additionally cut off on the right at the time the fastest solver for the respective problem terminated.\n\nWe make a few observations that additionally support our claims in the main paper: all solvers usually converge to a similar cost, but for most problems our proposed $\\sqrt{BA}$ solver is the fastest to reduce the cost. On the other hand, its memory use can be higher, depending on problem size and density (see Section~\\ref{sec:problem_sizes}).\nMissing plots indicate that the respective solver ran out of memory, which for \\emph{$\\sqrt{BA}$-32}, for example, happens only on the largest problem \\emph{final13682}, where the landmarks have up to 1748 observations.\nOur single-precision solver \\emph{$\\sqrt{BA}$-32} runs around twice as fast as its double-precision counterpart, since it is numerically stable and usually requires a comparable number of CG iterations.\nThis is in contrast to our custom SC solver, where the twofold speedup for single precision is generally not observed. 
\nThe good numeric properties are further supported by the evolution of the trust-region size approximately following that of the other solvers in most cases.\nFinally, for the smallest problems (e.g., \\emph{ladybug49}, \\emph{trafalgar21}, \\emph{final93}), the evolution of cost, trust-region size, and even number of CG iterations is often identical for all solvers for the initial 5 to 15 iterations, before numeric differences become noticeable.\nThis supports the fact that the different marginalization strategies are algebraically equivalent and that our custom solver implementation uses the same Levenberg-Marquardt strategy and CG forcing sequence as Ceres.\n\n\n\\onecolumn\n\n\\section{Problem sizes table}\n\\label{sec:problem_sizes_table}\n\n{\n\\setlength{\\LTcapwidth}{0.99\\textwidth}\n\\begin{longtable}{l r r r r r r r}%\n\\label{tab:problem-size}\n\\endfirsthead\n\\endhead\n\\toprule\n&\\multicolumn{1}{c}{\\#cam}&\\multicolumn{1}{c}{\\#lm}&\\multicolumn{1}{c}{\\#obs}&\\multicolumn{1}{c}{\\#obs \/ cam}&\\multicolumn{3}{c}{\\#obs \/ lm} \\\\\n&\\multicolumn{1}{c}{$(n_p)$}&\\multicolumn{1}{c}{$(n_l)$}&\\multicolumn{1}{c}{$(n_r)$}& \\multicolumn{1}{c}{mean} & \\multicolumn{1}{c}{mean} & \\multicolumn{1}{c}{std-dev} & \\multicolumn{1}{c}{max}\\\\\n\\midrule\n\\input{results_supplementary\/problem_size.tex}\n\\bottomrule\n\\caption{Size of the bundle adjustment problem for each instance in the BAL dataset.}\n\\end{longtable}\n}\n\n\n\\section{Convergence 
plots}\n\\label{sec:convergence_plots}\n\n\\subsection{Ladybug}\n\\label{sec:ladybug}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug49}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug73}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug138}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug318}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug372}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug412}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug460}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug539}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug598}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug646}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug707}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug783}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug810}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug856}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug885}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug931}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug969}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1064}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1118}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1152}\n\\inc
ludegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1197}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1235}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1266}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1340}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1469}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1514}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1587}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1642}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1695}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_ladybug1723}\n\\end{center}\n\n\\subsection{Trafalgar}\n\\label{sec:trafalgar}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar21}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar39}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar50}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar126}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar138}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar161}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar170}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar174}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar193}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar201}\n\\inc
ludegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar206}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar215}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar225}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_trafalgar257}\n\\end{center}\n\n\\subsection{Dubrovnik}\n\\label{sec:dubrovnik}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik16}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik88}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik135}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik142}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik150}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik161}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik173}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik182}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik202}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik237}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik253}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik262}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik273}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik287}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik308}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_dubrovnik
356}\n\\end{center}\n\n\\subsection{Venice}\n\\label{sec:venice}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice52}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice89}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice245}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice427}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice744}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice951}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1102}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1158}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1184}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1238}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1288}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1350}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1408}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1425}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1473}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1490}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1521}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1544}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1638}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1666}\n\\includegraphics[width=\\textwidth
]{results_supplementary\/bal_cost_convergence_venice1672}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1681}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1682}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1684}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1695}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1696}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1706}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1776}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_venice1778}\n\\end{center}\n\n\\subsection{Final}\n\\label{sec:final}\n\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final93}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final394}\\\\\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final871}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final961}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final1936}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final3068}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final4585}\n\\includegraphics[width=\\textwidth]{results_supplementary\/bal_cost_convergence_final13682}\n\\end{center}\n\n\\section*{Nomenclature}\n\\addcontentsline{toc}{section}{Nomenclature}\n\\subsection{Constant parameters}\n\\begin{IEEEdescription}[\\IEEEusemathlabelsep\\IEEEsetlabelwidth{${\\underline P _{mn}}$,${\\overline P _{mn}}$}]\n\\item[$r_{ij}, \\chi_{ij}$] Resistance and reactance of line 
$i\\rightarrow j$.\n\\item[$\\underline p_i, \\overline p_i$] Controllable active power limit at node $i$.\n\\item[$\\underline q_i, \\overline q_i$] Controllable reactive power limit at node $i$.\n\\item[$\\underline v_i, \\overline v_i$] Voltage safety limit at node $i$.\n\\item[$\\overline \\ell_{ij}$] Current safety limit on line $i\\rightarrow j$.\n\\item[]\n\\item[$A_f, B_f, A_s$] Constant matrices in the feasibility problem.\n\\item[$\\gamma_f, \\gamma_s$] Constant vectors in the feasibility problem.\n\\item[$A_{y},\\!b_{y},\\!c_{q},\\!\\gamma_{q}$] Constant matrices and vectors in SOCP.\n\\item [$\\underline w, \\overline w$] Bounds for the initial polytope in Algorithm \\ref{alg:approximate-socp}. \n\\item [$\\delta,\\eta, \\eta'$] Positive parameters used in Algorithm \\ref{alg:remove-inexact}. \n\\end{IEEEdescription}\n\n\n\\subsection{Variables}\n\\begin{IEEEdescription}[\\IEEEusemathlabelsep\\IEEEsetlabelwidth{${\\underline P _{mn}}$,${\\overline P _{mn}}$}]\n\\item[$p_i, q_i$] Controllable power injection at node $i$.\n\\item[$w_i$] Renewable active power generation at node $i$.\n\\item[$v_i$] Squared voltage magnitude at node $i$.\n\\item[$\\ell_{ij}$] Squared current magnitude on line $i \\rightarrow j$.\n\\item[$P_{ij}, Q_{ij}$] Active and reactive power flows onto line $i \\rightarrow j$.\n\\item[]\n\\item[$x$] Vector of state variables $x:=(p,q,v,\\ell,P,Q)$.\n\\item[$z_s$, $z_q$, $\\tilde z_q$] Vectors of nonnegative slack variables.\n\\item[$y_{ij} \\in \\mathbb{R}^3$] Auxiliary variables in SOCP for line $i\\rightarrow j$. \n\\item[$\\mu_f, \\mu_y$] Dual variables for equality constraints. 
\n\\item[$\\lambda_s, \\lambda_q$] Dual variables for inequality constraints.\n\\end{IEEEdescription}\n\n\n\\subsection{Optimization problems, values, sets}\n\\begin{IEEEdescription}[\\IEEEusemathlabelsep\\IEEEsetlabelwidth{${\\underline P _{mn}}$,${\\overline P _{mn}}$}]\n\\item[$\\mathrm{FP}(w)$] Feasibility problem for renewable generation $w$.\n\\item[$\\mathrm{fp}(w)$] Minimum objective value of $\\mathrm{FP}(w)$.\n\\item[$\\mathcal{W}$] Dispatchable region of $w$, in which $\\mathrm{fp}(w)=0$.\n\\item[$\\mathrm{FP}'(w)$] SOCP relaxation of $\\mathrm{FP}(w)$.\n\\item[$\\mathrm{fp}'(w)$] Minimum objective value of $\\mathrm{FP}'(w)$.\n\\item[$\\mathcal{W}'$] SOCP-relaxed dispatchable region of $w$.\n\\item[]\n\\item[$\\mathrm{DP}'(w)$] Dual problem of $\\mathrm{FP}'(w)$, also an SOCP.\n\\item [$D_w(\\mu,\\lambda)$] Dual objective function. \n\\item [$\\mathrm{dp}'(w)$] Maximum objective value of $\\mathrm{DP}'(w)$.\n\\item[$\\mathrm{DP}''(w,\\delta)$] Dual SOCP with feasible set tightened by $\\delta$.\n\\item[$\\mathrm{dp}''(w,\\delta)$] Maximum objective value of $\\mathrm{DP}''(w,\n\\delta)$.\n\\item[]\n\\item[$\\mathcal{W}'_{poly}$] Polytopic approximation of $\\mathcal{W}'$, by Algorithm \\ref{alg:approximate-socp}.\n\\item[$\\mathcal{\\tilde W}$] SOCP-inexact region of $w$.\n\\item[$\\mathcal{\\tilde W}_d$] Approximation of $\\mathcal{\\tilde W}$ using dual SOCP.\n\\item[$\\mathcal{\\tilde W}_{poly}$] Polytopic approximation of $\\mathcal{\\tilde W}$, by Algorithm \\ref{alg:remove-inexact}.\n\\end{IEEEdescription}\n\n\\section{Introduction}\n\\IEEEPARstart{W}{ith} the benefits of near-zero carbon emissions and low operating costs, distributed renewable power is experiencing tremendous expansion in recent years \\cite{ahmad2018dynamic}. Meanwhile, its volatile and intermittent features pose great challenges to electric grid operation, especially the distribution system. 
Unlike the bulk system, a distribution system has few controllable units and a stronger coupling of active and reactive power flows due to its high resistance-to-reactance ratio \\cite{shen2022admissible}. This makes it even harder to accommodate fluctuating renewable generation. Therefore, characterizing the renewable power capacities that can be safely hosted by a distribution network prior to its actual operation is vital. This necessitates finding all renewable power outputs that ensure \\emph{solvability} of the power flow equations and satisfaction of \\emph{safety limits}.\n\nThe first requirement is \\emph{solvability} of the power flow equations. For a transmission network modeled by direct-current (DC) power flow, solvability is easy to check since a closed-form solution can be obtained \\cite{soroudi2015stochastic}. However, for distribution networks, the lossless DC model is not accurate enough since distribution lines have higher resistance-to-reactance ratios. Several works proved sufficient conditions under which the alternating-current (AC) power flow equations are solvable, by utilizing the Banach fixed-point theorem for contraction mappings \\cite{bolognani2015existence, wang2016explicit} or the Brouwer fixed-point theorem for continuous mappings over compact convex sets \\cite{dvijotham2017solvability, simpson2017theory}. Nonetheless, those methods cannot be readily applied to output the dispatchable region, since they are based on power flow equations alone and cannot easily handle inequality safety constraints.
A data-driven approach \\cite{qiu2016data} and topology control \\cite{korad2015enhancement} were incorporated to improve the accuracy of the DNEL. The correlation between different renewable generators is ignored in the DNEL, so the obtained capacity regions can be conservative. The dispatchable region further considers those correlations and provides the exact region consisting of all renewable power outputs that can be accommodated. An adaptive constraint generation algorithm was proposed to generate the dispatchable region \\cite{wei2015real}. The interaction among different prosumers with renewable generators was considered in \\cite{chen2021energy}. Similarly, the dispatchable region can be applied to quantify the allowable variation of loads based on Fourier--Motzkin elimination \\cite{abiri2016loadability}. The above studies are based on DC power flow models.\n\nAs mentioned above, an AC power flow model is necessary for a distribution network. Reference \\cite{chen2018convex} solved nonlinear programs to get a set of boundary points that each make a different safety limit binding, and then built a dispatchable region heuristically as the convex hull of those boundary points. Linearized models were used to approximate the real dispatchable region under AC power flow \\cite{wan2016maximum, liu2019real}. However, there is no guarantee that all scenarios inside the obtained region are feasible. Reference \\cite{shen2022admissible} used the intersection of the dispatchable regions generated from two linearized models to obtain a more accurate approximation. To guarantee feasibility, certified inner approximations of dispatchable regions were solved from convex programs based on a tightened-relaxed second-order cone approximation \\cite{nick2017exact} or refined linear approximations \\cite{nguyen2018constructing, nazir2019convex} to AC power flow. 
\nHowever, such estimates typically work only for a specific objective function that merely explores the dispatchable region in a single direction or with a specific shape of the renewable generation vector. Moreover, all the aforementioned regions are convex, while the actual dispatchable region can be nonconvex due to the AC power flow constraints.\n\nIn this paper, we propose an alternative method to complement the literature above. Our main contributions are two-fold:\n\n\\begin{itemize}\n\\item[1)] \\emph{Accurate Dispatchable Region of the Second-Order Cone (SOC) Relaxed Model.} A nonlinear Dist-Flow-based optimization problem is developed to characterize the dispatchable region, which is hard to solve due to its nonconvexity. Therefore, we first relax the problem to a convex second-order cone program (SOCP). Then, unlike reference \\cite{shen2022admissible}, which further linearizes the SOC constraint using polyhedral approximation \\cite{ben2001polyhedral}, we generate the dispatchable region directly without further linearization. Specifically, the dual problem of the SOCP is derived, and strong duality holds as proven in Proposition \\ref{prop:strong-duality}. We then propose a projection algorithm (Algorithm \\ref{alg:approximate-socp}) to construct a polytopic approximation of the SOCP-relaxed dispatchable region. We prove that the approximation is accurate under certain conditions.\n\n\\item[2)] \\emph{Removal of SOCP-Inexact Regions.} The other inaccuracy lies in the possible inexactness of the SOC relaxation. In fact, the actual dispatchable region may be nonconvex, but the algorithms developed in previous studies can only generate convex regions. In contrast, we propose a heuristic method to identify the SOCP-inexact regions by requiring the corresponding dual variables to be larger than some small positive values. 
By removing the SOCP-inexact regions from the region generated by Algorithm \\ref{alg:approximate-socp} for the SOC-relaxed model, we can build a tighter approximation of the actual dispatchable region. The proposed method provides an innovative idea for constructing an accurate dispatchable region as the difference of several convex sets. Numerical results show that the proposed method can approximate the complicated dispatchable region with a simple polytope after moderate computation, while preserving relatively good accuracy. It can also reach a satisfactory balance between ensuring safety and reducing conservatism.\n\\end{itemize}\n\nThe rest of this paper is organized as follows. Section \\ref{sec:model} introduces the power network model we use. Section \\ref{sec:problem} defines the dispatchable region and the optimization problem that characterizes it. Section \\ref{sec:methods} elaborates our method to approximate the dispatchable region. Section \\ref{sec:numerical} reports numerical experiments, and Section \\ref{sec:conclusion} concludes the paper. \n\n\\section{Power network model}\\label{sec:model}\n\nConsider the single-phase equivalent model of a distribution network, which is a radial graph with a set $\\mathcal{N}$ of nodes and a set $\\mathcal{L}$ of lines. Index the nodes as $\\mathcal{N} = \\{0, 1,\\dots,N\\}$, where $0$ represents the root node (slack bus). \nFor convenience, we treat the lines as directed; for example, if a line connects nodes $i,j \\in \\mathcal{N}$, where node $i$ is closer to the root than node $j$, then the line is directed from $i$ to $j$ and is denoted by $i \\rightarrow j$. The power flow in the network at a particular time instant can be modeled by the classic Dist-Flow equations purely in real numbers \\cite{baran1989optimal, farivar2013branch}, elaborated as follows. 
\n\nAt each node $i\\in \\mathcal{N}$: let $v_i$ denote the squared voltage magnitude; aggregate all the \\emph{controllable} power sources and loads into a complex power injection $p_i + \\mathrm{j} q_i$; denote the \\emph{uncontrollable} active power generation of a renewable energy source as $w_i$. \nLet $\\ell_{ij}$ denote the squared current magnitude through each line $i\\rightarrow j$. \nLet $P_{ij}$ and $Q_{ij}$ denote the \\textit{net} active and \\textit{net} reactive power, respectively, that are sent by node $i$ onto line $i\\rightarrow j$; they are different from the net power arriving at node $j$ due to power loss, and are negative if node $i$ receives power from line $i\\rightarrow j$. \nLet $r_{ij}$, $\\chi_{ij}$ denote the constant resistance and reactance of line $i\\rightarrow j$, respectively.\nThe Dist-Flow equations are:\n\\begin{subequations}\\label{eq:dist-flow}\n\\begin{IEEEeqnarray}{rrCl}\n\\forall i\\rightarrow j: \n&P_{ij} - r_{ij} \\ell_{ij} - \\sum_{k: j\\rightarrow k} P_{jk}+ p_j+ w_j &=& 0 \\label{eq:dist-flow:p} \\\\\n&Q_{ij} - \\chi_{ij} \\ell_{ij} - \\sum_{k: j\\rightarrow k} Q_{jk}+ q_j &=& 0 \\label{eq:dist-flow:q} \\\\\nv_{i} - v_{j} & - 2(r_{ij} P_{ij} + \\chi_{ij} Q_{ij}) + (r_{ij}^2 + \\chi_{ij}^2) \\ell_{ij} &=& 0 \\label{eq:dist-flow:v} \\\\\n& P_{ij}^2 + Q_{ij}^2 - v_i \\ell_{ij} &=& 0. \\label{eq:dist-flow:s}\n\\end{IEEEeqnarray}\n\\end{subequations}\n\nSuppose renewable energy sources only exist at a subset of nodes $\\mathcal{N}_w \\subseteq \\mathcal{N}\\backslash \\{0\\}$, whose cardinality is $W:=|\\mathcal{N}_w|$. For nodes $i\\notin \\mathcal{N}_w$, set constant $w_i\\equiv 0$. 
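For intuition, the Dist-Flow equations \\eqref{eq:dist-flow} for a single line $0 \\rightarrow 1$ with a leaf node can be solved by a simple fixed-point (backward\/forward) sweep. The sketch below is our illustration with per-unit values, not part of the paper:

```python
def solve_single_line(r, x, p1, q1, w1, v0=1.0, iters=50):
    """Fixed-point solution of the Dist-Flow equations for one line 0 -> 1.

    p1 + j*q1 is the controllable injection and w1 the renewable generation
    at the leaf node 1; v0 is the given squared root-node voltage.
    """
    ell = 0.0
    for _ in range(iters):
        P01 = r * ell - (p1 + w1)      # active power balance at node 1
        Q01 = x * ell - q1             # reactive power balance at node 1
        ell = (P01**2 + Q01**2) / v0   # nonlinear branch-current equation
    v1 = v0 - 2 * (r * P01 + x * Q01) + (r**2 + x**2) * ell
    return P01, Q01, v1, ell
```

Since the line losses are small relative to the flows, the iteration is a contraction and converges to machine precision in a few sweeps; all four Dist-Flow residuals then vanish up to rounding.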
\nThe variables in Dist-Flow equations \\eqref{eq:dist-flow} are grouped as follows:\n\\begin{itemize}\n\\item Renewable power generation $w:=(w_i,~i\\in \\mathcal{N}_w)\\in \\mathbb{R}^W$, which is treated as input to the system;\n\\item State variables $x:=(p,q, v, \\ell, P, Q)$, where each of $p$, $q$, $v$, $\\ell$, $P$, $Q$ is a column vector indexed by $\\{1,...,N\\}$. \n\\end{itemize}\n\n\\textit{Remark:} Without loss of generality, we assume there is only one node, indexed as node $1$, connected to the root node $0$. In this case, the power exchange between the distribution network and the upper grid at node $0$ is $p_0 + \\mathrm{j} q_0 = P_{01} +\\mathrm{j} Q_{01}$, so that it is just considered as part of $(P,Q)$, not $(p,q)$. \nAs customary, assume $v_0$ is a given constant and thus not in state variable $v$. \nThe radial network has $N$ lines, where each line $i\\rightarrow j$ can be uniquely indexed by its destination node $j$, so that we can index line variables $\\ell$, $P$, $Q$ by $\\{1,...,N\\}$.\n\nAssume known capacity limits of controllable power:\n\\begin{subequations}\\label{eq:pq_limits}\n\\begin{IEEEeqnarray}{rCl}\n\\underline p_i \\leq p_i \\leq \\overline p_i, &\\quad& \\forall i =1,...,N \\label{eq:pq_limits:p} \\\\\n\\underline q_i \\leq q_i \\leq \\overline q_i, && \\forall i =1,...,N \\label{eq:pq_limits:q} \n\\end{IEEEeqnarray}\n\\end{subequations}\nAt any node $i$ where there are only fixed (or zero) power injections, the constant limits can be set as $\\underline p_i = \\overline p_i$ ($=0$) and\/or $\\underline q_i = \\overline q_i$ ($=0$).\nIn addition, power system operations require the following safety limits to be satisfied:\n\\begin{subequations}\\label{eq:vl_limits}\n\\begin{IEEEeqnarray}{rCl}\n\\underline v_i \\leq v_i \\leq \\overline v_i, &\\quad& \\forall i =1,...,N \\label{eq:vl_limits:v} \\\\\n0\\leq \\ell_{ij} \\leq \\overline \\ell_{ij}, && \\forall i\\rightarrow j \\label{eq:vl_limits:ell} 
\n\\end{IEEEeqnarray}\n\\end{subequations}\nwhere the voltage limits $\\underline v_i$, $\\overline v_i$ for all nodes $i$ and the current limits $\\overline \\ell_{ij}$ for all lines $i\\rightarrow j$ are given as positive constants.\n\nWith the model above, we next define and analyze the dispatchable region of renewable power generation. \n\n\\section{Dispatchable region and relaxation}\\label{sec:problem}\n\nIn this paper, the \\emph{dispatchable region} is the region of renewable power generation $w$, for which there is a feasible dispatch. Its formal definition is provided below.\n\n\n\\begin{definition}\\label{def:feasibility}\nA vector of renewable power generation $w\\in\\mathbb{R}^W$ has a \\textbf{feasible dispatch} if there exists $x=(p,q,v,\\ell,P,Q) \\in \\mathbb{R}^{6N}$ such that $(w,x)$ satisfies power flow equations \\eqref{eq:dist-flow}, capacity limits \\eqref{eq:pq_limits}, and safety limits \\eqref{eq:vl_limits}. The \\textbf{dispatchable region} of renewable power generation is defined as:\n\\begin{IEEEeqnarray}{rCl}\\nonumber\n\\mathcal{W}&:=& \\left\\{w \\in \\mathbb{R}^W~|~w~\\text{has a feasible dispatch.}\\right\\}\n\\end{IEEEeqnarray}\n\\end{definition}\n\n\nFor conciseness, we rewrite the linear part \\eqref{eq:dist-flow:p}--\\eqref{eq:dist-flow:v} of Dist-Flow equations as\n$A_f x \\!+\\! B_{f} w \\!+\\! \\gamma_f = 0$ and affine inequalities \\eqref{eq:pq_limits}--\\eqref{eq:vl_limits} as $A_s x + \\gamma_s \\leq 0$, where both equality and inequality are element-wise, and constant matrices and vectors $A_f$, $B_f$, $\\gamma_f$, $A_s$, $\\gamma_s$ are provided in Appendix-A.\nGiven any $w$, we introduce the following optimization to check its feasibility\n\\begin{subequations}\\label{eq:opt-feasibility}\n\\begin{IEEEeqnarray}{rCl}\n\\mathrm{FP}(w):~\\min &~& 1^\\intercal \\tilde z \\label{eq:opt-feasibility:obj}\n\\\\ \n\\text{over} && x=(p,q,v,\\ell,P,Q), ~\\tilde z=(z_s, z_q, \\tilde z_q)\\geq 0 \\nonumber\n\\\\\n\\text{s. 
t.} && A_f x + B_f w + \\gamma_f = 0 \\label{eq:opt-feasibility:lin-dist-flow}\n\\\\ && A_s x +\\gamma_s \\leq z_s \\label{eq:opt-feasibility:limits}\n\\\\ && P_{ij}^2 + Q_{ij}^2 - v_i \\ell_{ij} \\leq z_{q,ij}, ~\\forall i\\rightarrow j \\label{eq:opt-feasibility:nonlinear-small}\n\\\\ && v_i \\ell_{ij}-(P_{ij}^2 + Q_{ij}^2)\\leq \\tilde z_{q,ij}, ~\\forall i\\rightarrow j \\label{eq:opt-feasibility:nonlinear-big}\n\\end{IEEEeqnarray} \n\\end{subequations}\nwhere $1^\\intercal$ in objective \\eqref{eq:opt-feasibility:obj} is a row vector of all ones. Any element of the slack variable $\\tilde z$ can increase as needed to satisfy the corresponding inequality constraint, but only $\\tilde z=0$ can guarantee feasibility in terms of \\eqref{eq:dist-flow}--\\eqref{eq:vl_limits}. \nTherefore, denoting the minimum objective value of $\\mathrm{FP}(w)$ as $\\mathrm{fp}(w)$, the dispatchable region in Definition \\ref{def:feasibility} is equivalently:\n\\begin{IEEEeqnarray}{rCl}\\nonumber\n\\mathcal{W}&=& \\left\\{w \\in \\mathbb{R}^W~|~\\mathrm{fp}(w)=0 \\right\\}.\n\\end{IEEEeqnarray}\n\nDue to the nonconvex quadratic inequality constraint \\eqref{eq:opt-feasibility:nonlinear-big}, problem $\\mathrm{FP}(w)$ is nonconvex and thus hard to analyze. \nBy removing \\eqref{eq:opt-feasibility:nonlinear-big} and rewriting \\eqref{eq:opt-feasibility:nonlinear-small}, we relax $\\mathrm{FP}(w)$ to a convex \\emph{second order cone program (SOCP)}:\n\\begin{subequations}\\label{eq:opt-feasibility-relaxed}\n\\begin{IEEEeqnarray}{rCl}\n\\mathrm{FP}'(w):~\\min &~& 1^\\intercal z \\label{eq:opt-feasibility-soc:obj}\n\\\\ \n\\text{over} && x,~y,~z=(z_s, z_q)\\geq 0 \\nonumber\n\\\\\n\\text{s. 
t.} && \\text{\\eqref{eq:opt-feasibility:lin-dist-flow}--\\eqref{eq:opt-feasibility:limits}} \\nonumber \\\\\n&& y = A_{y} x + b_{y} \\label{eq:opt-feasibility:soc-substitute}\\\\\n \\| y_{ij} \\|_2 &&\\leq c_{q,ij} x + \\gamma_{q,ij}+z_{q,ij},~ \\forall i\\rightarrow j \\label{eq:opt-feasibility:soc}\n\\end{IEEEeqnarray} \n\\end{subequations}\nwhere $y\\in \\mathbb{R}^{3N}$, $A_y \\in\\mathbb{R}^{(3N)\\times(6N)}$, and $b_y \\in \\mathbb{R}^{3N}$ vertically stack $y_{ij}\\in\\mathbb{R}^3$, $A_{y,ij} \\in \\mathbb{R}^{3\\times(6N)}$, and $b_{y,ij}\\in \\mathbb{R}^3$ respectively for all lines $i\\rightarrow j$. Row vector $c_{q,ij} \\in \\mathbb{R}^{1\\times(6N)}$ and scalar number $\\gamma_{q,ij} \\in \\mathbb{R}$ are also stacked vertically for all $i\\rightarrow j$ as $c_q \\in \\mathbb{R}^{N\\times (6N)}$ and $\\gamma_q \\in \\mathbb{R}^{N}$. The constant matrices and vectors $A_{y}$, $b_{y}$, $c_{q}$, $\\gamma_{q}$ are provided in Appendix-B, which make: \n\\begin{IEEEeqnarray}{rCl}\nA_{y,ij} x + b_{y,ij} &=& [2P_{ij}, ~2Q_{ij}, ~ v_i \\!-\\! \\ell_{ij}]^\\intercal,\\quad \\forall i\\rightarrow j \\nonumber \\\\\nc_{q,ij} x + \\gamma_{q,ij} &=& v_i + \\ell_{ij}, \\qquad\\qquad\\qquad\\quad \\forall i\\rightarrow j \\nonumber\n\\end{IEEEeqnarray}\nand thus make \\eqref{eq:opt-feasibility:soc-substitute}--\\eqref{eq:opt-feasibility:soc} equivalent to \\eqref{eq:opt-feasibility:nonlinear-small}.\\footnote{Given $x$, the values of $z_q$ in \\eqref{eq:opt-feasibility:nonlinear-small} and \\eqref{eq:opt-feasibility:soc} are generally not equal, but we do not differentiate notation due to their identical role as slack variables.} \n\nProblem $\\mathrm{FP}'(w)$ facilitates the definition of an \\textit{SOCP-relaxed} dispatchable region:\n\\begin{IEEEeqnarray}{rCl}\\nonumber\n\\mathcal{W}'&:=& \\left\\{w \\in \\mathbb{R}^W~|~\\mathrm{fp}'(w)=0 \\right\\}\n\\end{IEEEeqnarray}\nwhere $\\mathrm{fp}'(w)$ is the minimum objective value of $\\mathrm{FP}'(w)$. 
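The reformulation above rests on the rotated-cone identity $\\| (2P_{ij},\\, 2Q_{ij},\\, v_i - \\ell_{ij}) \\|_2^2 - (v_i + \\ell_{ij})^2 = 4\\, (P_{ij}^2 + Q_{ij}^2 - v_i \\ell_{ij})$, so for $v_i + \\ell_{ij} \\geq 0$ the SOC constraint holds if and only if $P_{ij}^2 + Q_{ij}^2 \\leq v_i \\ell_{ij}$. This can be checked numerically (our sketch, not part of the paper):

```python
import numpy as np

def soc_gap(P, Q, v, ell):
    """Squared-norm gap of the SOC form: ||(2P, 2Q, v-ell)||^2 - (v+ell)^2.

    Algebraically this equals 4*(P^2 + Q^2 - v*ell), so for v + ell >= 0 the
    cone constraint ||(2P, 2Q, v-ell)|| <= v + ell holds iff P^2 + Q^2 <= v*ell.
    """
    y = np.array([2.0 * P, 2.0 * Q, v - ell])
    return float(y @ y - (v + ell) ** 2)
```

A negative gap means the point lies strictly inside the cone, i.e., the relaxed line constraint is satisfied with slack.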
It is obvious that~$\\mathcal{W} \\subseteq \\mathcal{W}'$, i.e., $\\mathcal{W}'$ is a relaxation of $\\mathcal{W}$.\n\nA common practice to further simplify the dispatchable-region characterization is to outer approximate the second-order cone \\eqref{eq:opt-feasibility:soc} with a polytopic cone, which can achieve arbitrary precision by constructing sufficiently many planes tangent to the surface of the second-order cone \\cite{ben2001polyhedral,chen2018energy}. Consequently, $\\mathrm{FP}'(w)$ is relaxed to a linear program, and then the algorithm in \\cite{wei2014dispatchable,wei2015real, chen2021energy} can be employed to get a convex polytopic outer approximation of $\\mathcal{W}'$. \nIn this work, we propose an alternative method that does not rely on such linearization. Instead, we work directly on the SOCP $\\mathrm{FP}'(w)$ and its dual problem to preserve the intrinsic nonlinearity of the AC power flow model and hence the accuracy of our characterization.\n\n\\section{Polytopic approximation algorithms}\\label{sec:methods}\n\nTo offer a closed-form approximation of dispatchable region $\\mathcal{W}$, we first develop a convex polytopic approximation of its relaxation $\\mathcal{W}'$ via the dual problem of SOCP $\\mathrm{FP}'(w)$. \nWe then develop a heuristic method to approximately remove the renewable generations that make the SOCP relaxation inexact, resulting in a tighter approximation of $\\mathcal{W}$. \n\n\\subsection{Dual SOCP}\n\nLet $\\mu:=(\\mu_f, \\mu_y)$ denote the dual variables for the equality constraints in problem $\\mathrm{FP}'(w)$, with $\\mu_f \\in \\mathbb{R}^{3N}$ for \\eqref{eq:opt-feasibility:lin-dist-flow} and $\\mu_y\\in \\mathbb{R}^{3N}$ for \\eqref{eq:opt-feasibility:soc-substitute} vertically stacking $\\mu_{y,ij}\\in \\mathbb{R}^3,~\\forall i\\rightarrow j$. 
\nLet $\\lambda:=(\\lambda_s, \\lambda_q)$ denote the dual variables for the inequality constraints, with $\\lambda_s \\in \\mathbb{R}^{8N}$ for \\eqref{eq:opt-feasibility:limits} and $\\lambda_q=(\\lambda_{q,ij},~\\forall i\\rightarrow j) \\in \\mathbb{R}^N$ for \\eqref{eq:opt-feasibility:soc}. Then the Lagrangian of $\\mathrm{FP}'(w)$ is:\n\\begin{IEEEeqnarray}{rCl}\\label{eq:lagrangian}\nL_u &=& 1^\\intercal z \\ + \\ \\mu_f^\\intercal (A_f x + B_f w + \\gamma_f) \\nonumber \\\\\n&& + \\lambda_s^\\intercal (A_s x + \\gamma_s - z_s) + \\mu_{y}^\\intercal \\left(y -A_{y} x - b_{y}\\right) \\nonumber \\\\\n&&+ \\sum_{i\\rightarrow j} \\lambda_{q,ij} \\left(\\| y_{ij} \\|_2 - c_{q,ij} x - \\gamma_{q,ij} - z_{q,ij}\\right) \\nonumber \\\\ \n&=& z^\\intercal (1-\\lambda) + \\sum_{i\\rightarrow j} \\left(y_{ij}^\\intercal \\mu_{y,ij} +\\| y_{ij} \\|_2\\lambda_{q,ij}\\right)\\nonumber \\\\\n&& + x^\\intercal \\left(A_f^\\intercal \\mu_f + A_s^\\intercal \\lambda_s - A_{y}^\\intercal \\mu_{y} - c_{q}^\\intercal \\lambda_{q}\\right) \\nonumber \\\\\n&& + \\mu_f^\\intercal (B_f w + \\gamma_f)+\\lambda_s^\\intercal \\gamma_s - \\mu_{y}^\\intercal b_y -\\lambda_{q}^\\intercal \\gamma_q. 
\\label{eq:Lagrangian}\n\\end{IEEEeqnarray}\n\nThrough $\\min_{z\\geq 0, x, y} L_u(x,y,z; \\mu,\\lambda)$ we can get the dual objective function.\nBy \\eqref{eq:Lagrangian}, $L_u$ can only attain a finite minimum over $(z\\geq 0, x,y)$ when the dual variables satisfy: \n\\begin{subequations}\\label{eq:dual-feasibility}\n\\begin{IEEEeqnarray}{rCl}\n0 \\leq &~\\lambda~ & \\leq 1 \\label{eq:dual-feasibility:z} \\\\\nA_f^\\intercal \\mu_f + A_s^\\intercal \\lambda_s &=& A_y^\\intercal \\mu_y + c_q^\\intercal \\lambda_q \\label{eq:dual-feasibility:x}\\\\\n \\|\\mu_{y,ij}\\|_2 &\\leq& \\lambda_{q,ij},\\quad\\forall i\\rightarrow j\\label{eq:dual-feasibility:y}\n\\end{IEEEeqnarray}\n\\end{subequations}\nNote that $\\lambda\\geq 0$ in \\eqref{eq:dual-feasibility:z} is a general requirement for all the dual variables associated with inequality constraints, and \\eqref{eq:dual-feasibility:y} must hold by noticing\n\\begin{IEEEeqnarray}{rCl}\ny_{ij}^\\intercal \\mu_{y,ij} +\\| y_{ij} \\|_2\\lambda_{q,ij} &\\geq& \\left(\\lambda_{q,ij} - \\|\\mu_{y,ij}\\|_2 \\right)~ \\|y_{ij}\\|_2. \\nonumber\n\\end{IEEEeqnarray}\nWhen \\eqref{eq:dual-feasibility} is satisfied, all the terms containing $(x,y,z)$ in \\eqref{eq:Lagrangian} attain their minimum value zero, and hence we obtain the dual problem for $\\mathrm{FP}'(w)$, which is also an SOCP:\n\\begin{IEEEeqnarray}{rCl}\n\\mathrm{DP}'(w):~\\max_{\\mu,\\lambda} &\\quad& \\mu_f^\\intercal (B_f w + \\gamma_f) + \\lambda_s^\\intercal \\gamma_s - \\mu_{y}^\\intercal b_y -\\lambda_{q}^\\intercal \\gamma_q \\nonumber\n\\\\\n\\text{s. 
t.} && \\text{\\eqref{eq:dual-feasibility}.} \\nonumber\n\\end{IEEEeqnarray} \n Let $D_w(\\mu,\\lambda)$ denote the objective function and $\\mathrm{dp}'(w)$ denote the maximum objective value of $\\mathrm{DP}'(w)$.\nThe following result lays the foundation for approximating the SOCP-relaxed dispatchable region $\\mathcal{W}'$ via the dual SOCP $\\mathrm{DP}'(w)$.\n\n\\begin{proposition}\\label{prop:strong-duality}\nFor all $w\\in\\mathbb{R}^W$, strong duality holds between $\\mathrm{FP}'(w)$ and $\\mathrm{DP}'(w)$, i.e., their optimal values $\\mathrm{fp}'(w) = \\mathrm{dp}'(w)$. \n\\end{proposition}\n\\begin{IEEEproof}\nConsider an arbitrary $w\\in\\mathbb{R}^W$. Since problem $\\mathrm{FP}'(w)$ is convex, it is sufficient to prove Slater's condition \\cite[Section 5.2.3]{boyd2004convex}, i.e., existence of $(z\\geq 0,x,y)$ that satisfies affine constraints \\eqref{eq:opt-feasibility:lin-dist-flow}\\eqref{eq:opt-feasibility:limits}\\eqref{eq:opt-feasibility:soc-substitute} and strictly satisfies \\eqref{eq:opt-feasibility:soc}. \n\nIndeed, it is adequate to find a point $x=(p,q,v,\\ell, P, Q)$ to satisfy \\eqref{eq:opt-feasibility:lin-dist-flow}, i.e., \\eqref{eq:dist-flow:p}--\\eqref{eq:dist-flow:v}; then one can explicitly determine $y$ by \\eqref{eq:opt-feasibility:soc-substitute} and always find large enough $z$ to make \\eqref{eq:opt-feasibility:limits}\\eqref{eq:opt-feasibility:soc} (strictly) feasible, satisfying Slater's condition. \nSuch a point $x$ can be easily found as follows: set $p=q=\\ell=0 \\in \\mathbb{R}^N$; determine $(P,Q)$ backward from the leaves to the root of the radial network, using \\eqref{eq:dist-flow:p}--\\eqref{eq:dist-flow:q}; then determine $v$ forward from the root to the leaves, using \\eqref{eq:dist-flow:v}. 
This completes the proof.\n\\end{IEEEproof}\n\nBy Proposition \\ref{prop:strong-duality}, the relaxed region $\\mathcal{W}'$ is equivalently:\n\\begin{IEEEeqnarray}{rCl}\n&&\\mathcal{W}'= \\left\\{w \\in \\mathbb{R}^W~|~\\mathrm{dp}'(w)=0 \\right\\} \\nonumber\\\\\n&=& \\left\\{w \\in \\mathbb{R}^W~|D_w(\\mu,\\lambda)\\leq 0,~\\forall (\\mu,\\lambda)~\\text{satisfying \\eqref{eq:dual-feasibility}}\\right\\} \\label{eq:relaxed-region}\n\\end{IEEEeqnarray}\nwhere the second equality holds because $D_w(\\mu,\\lambda)=0$ can always be attained at the dual feasible point $(\\mu,\\lambda)=0$.\n\n\\begin{proposition}\\label{proposition:convexityU}\n$\\mathcal{W}'$ is a convex set.\n\\end{proposition}\n\\begin{IEEEproof}\nConsider arbitrary $w_1, w_2 \\in \\mathcal{W}'$ and $t \\in [0,1]$. Denote $w_t:=t w_1 + (1-t) w_2$. Then for every $(\\mu,\\lambda)$ satisfying \\eqref{eq:dual-feasibility}, we have:\n\\begin{IEEEeqnarray}{rCl}\nD_{w_t}(\\mu, \\lambda) \n&=& t D_{w_1} (\\mu,\\lambda) + (1-t)D_{w_2} (\\mu,\\lambda) \\nonumber \\\\\n&\\leq& t\\cdot 0 + (1-t) \\cdot 0 = 0 \\nonumber\n\\end{IEEEeqnarray}\nwhere the first equality is due to linearity of $D_w(\\mu,\\lambda)$ with respect to $w$ when $(\\mu, \\lambda)$ is fixed, and the inequality holds because $w_1, w_2 \\in \\mathcal{W}'$. Therefore $w_t \\in \\mathcal{W}'$. By the definition of a convex set, $\\mathcal{W}'$ is convex.\n\\end{IEEEproof}\n\n\\subsection{Approximating SOCP-relaxed dispatchable region}\n\n\n\n\\begin{algorithm}[t]\\label{alg:approximate-socp}\n\t\\caption{Approximate $\\mathcal{W}'$}\n\t1. \\textbf{Initialization:} $\\mathcal{W}'_{poly} = \\left\\{w \\in \\mathbb{R}^W~|~\\underline w \\leq w \\leq \\overline w\\right\\}$ for sufficiently low $\\underline w$ and high $\\overline w$; $\\mathcal{V}_{safe}=\\emptyset$; $c=0$.\n\t\n\t2. Update vertex set $vert\\left(\\mathcal{W}'_{poly}\\right)$. 
Let $\\mathrm{dp}'_{max}=0$; \n\t\n\t\\For{$w\\in \\mbox{vert}\\left(\\mathcal{W}'_{poly}\\right)$ and $w\\notin \\mathcal{V}_{safe}$}{\n\t\tsolve $\\mathrm{DP}'(w)$ to obtain an optimal solution $(\\mu^*, \\lambda^*)$ and maximum objective value $\\mathrm{dp}'(w)$;\n\t\t\n\t\t\\lIf{$\\mathrm{dp}'(w)>\\mathrm{dp}'_{max}$}{\n\t\t \n\t\t \\qquad $\\mathrm{dp}'_{max} \\leftarrow \\mathrm{dp}'(w)$;\n\t\t\n\t\t \\qquad $(\\mu_{max},\\lambda_{\\max}) \\leftarrow (\\mu^*,\\lambda^*)$\n\t\t}\n\t\t\\lElseIf{$\\mathrm{dp}'(w)\\leq 0$}{\n\t\t $\\mathcal{V}_{safe} = \\mathcal{V}_{safe} \\cup \\{w\\}$\n\t\t}\n\t}\n\t\\eIf{$\\mathrm{dp}'_{max} = 0$ or $c=C_{max}$}{\n\t return $\\mathcal{W}'_{poly}$.\n\t}{\n\t add to $\\mathcal{W}'_{poly}$ a cutting plane:\n\t $\\mu_{f,max}^\\intercal (B_f w + \\gamma_f) + \\lambda_{s,max}^\\intercal \\gamma_s \\leq \\mu_{y,max}^\\intercal b_y +\\lambda_{q,max}^\\intercal \\gamma_q$;\n\t \n\t $c \\leftarrow c+1$;\n\t \n\t go back to Line 2;\n\t}\n\\end{algorithm}\n\nWe propose Algorithm \\ref{alg:approximate-socp} to approximate $\\mathcal{W}'$ defined in \\eqref{eq:relaxed-region}. It starts with a region $\\mathcal{W}'_{poly}$ that is large enough to contain $\\mathcal{W}'$. Then it solves the dual SOCP $\\mathrm{DP}'(w)$ for every vertex $w$ of polytope $\\mathcal{W}'_{poly}$, records the vertex that most severely violates the condition in \\eqref{eq:relaxed-region}, and adds a corresponding cutting plane to remove that vertex from $\\mathcal{W}'_{poly}$. Meanwhile, all the vertices that satisfy the condition in \\eqref{eq:relaxed-region} are added to $\\mathcal{V}_{safe}$ and never checked again. \n\n\\begin{proposition}\\label{prop:polytopic-outer-approximation}\nThe output $\\mathcal{W}'_{poly}$ in an arbitrary iteration of Algorithm \\ref{alg:approximate-socp} is an outer approximation of $\\mathcal{W}'$. \n\\end{proposition}\n\\begin{IEEEproof}\nNote the initial $\\mathcal{W}'_{poly}$ contains $\\mathcal{W}'$. 
\nWe next prove that any cutting plane added to $\\mathcal{W}'_{poly}$ would not remove any point in $\\mathcal{W}'$.\nTo show that, consider an arbitrary $w$ removed by a cutting plane whose coefficients are $(\\mu_{max},\\lambda_{max})$. Then there must be $D_w(\\mu_{max},\\lambda_{max}) > 0$. Since $(\\mu_{max},\\lambda_{max})$ is dual feasible satisfying \\eqref{eq:dual-feasibility}, we have $w\\notin \\mathcal{W}'$ by \\eqref{eq:relaxed-region}. \n\\end{IEEEproof}\n\n Unlike the settings in \\cite{wei2014dispatchable, chen2021energy}, which are based on linear programs, the SOCP-relaxed dispatchable region $\\mathcal{W}'$ may not be the intersection of a finite number of cutting planes (i.e., a convex polytope). \n Therefore, Algorithm \\ref{alg:approximate-socp} may not guarantee $\\mathrm{dp}'(w)= 0$ for all vertices $w\\in vert\\left(\\mathcal{W}'_{poly}\\right)$ in a finite number of iterations. However, if it does terminate this way, as was always the case in our numerical experiments, the following result holds.\n \n\\begin{proposition}\\label{prop:polytopic-approximation}\nIf Algorithm \\ref{alg:approximate-socp} terminates with $\\mathrm{dp}'_{max} = 0$ in a finite number of iterations, it returns the accurate SOCP-relaxed dispatchable region, i.e., $\\mathcal{W}'_{poly}=\\mathcal{W}'$. \n\\end{proposition}\n\\begin{IEEEproof}\nProposition \\ref{prop:polytopic-outer-approximation} has shown $\\mathcal{W}' \\subseteq \\mathcal{W}'_{poly}$.\nIf Algorithm \\ref{alg:approximate-socp} terminates with $\\mathrm{dp}'_{max} = 0$ after adding a finite number of cutting planes, then it returns a convex polytope $\\mathcal{W}'_{poly}$. Moreover, all the vertices $w\\in vert\\left(\\mathcal{W}'_{poly}\\right)$ satisfy $\\mathrm{dp}'(w)= 0$ and therefore $w\\in \\mathcal{W}'$ by \\eqref{eq:relaxed-region}. \nThis fact, together with the convexity of $\\mathcal{W}'$ shown in Proposition \\ref{proposition:convexityU}, implies $\\mathcal{W}'_{poly} \\subseteq \\mathcal{W}'$. 
Thus we have proved $\\mathcal{W}'_{poly}=\\mathcal{W}'$.\n\\end{IEEEproof}\n\nAn immediate corollary of Proposition \\ref{prop:polytopic-approximation} is that if $\\mathcal{W}'$ is not a polytope, then Algorithm \\ref{alg:approximate-socp} cannot terminate in a finite number of iterations with $\\mathrm{dp}'_{max} = 0$. \nIf that happens, one can terminate Algorithm \\ref{alg:approximate-socp} when reaching the maximum number of iterations $C_{max}$, to obtain a convex polytopic outer approximation of $\\mathcal{W}'$.\nIn this sense, the outcome of Algorithm \\ref{alg:approximate-socp} serves as an a posteriori indicator of the structure of $\\mathcal{W}'$. \n\n\n\\subsection{Removing SOCP-inexact renewable generations}\n \nRecall that our goal is to characterize the dispatchable region $\\mathcal{W}$, whereas $\\mathcal{W}'$ studied so far is just an SOCP relaxation of $\\mathcal{W}$. To overcome this drawback, we design a heuristic to approximately remove the SOCP-inexact region $\\mathcal{\\tilde W}:=\\mathcal{W}' \\backslash \\mathcal{W}$ from $\\mathcal{W}'$.\nThe renewable generations $w \\in \\mathcal{\\tilde W}$ are feasible in terms of the SOCP relaxation $\\mathrm{FP}'(w)$ but infeasible in terms of $\\mathrm{FP}(w)$, as formally defined below.\n\n\n\n\n\n\\begin{definition}\nA vector of renewable power generation $w\\in\\mathcal{W}'$ is \\textbf{SOCP-inexact} if every optimal solution of $\\mathrm{FP}'(w)$ satisfies: \n\\begin{IEEEeqnarray}{rCl}\n\\| y_{ij} \\|_2 &<& c_{q,ij} x + \\gamma_{q,ij}\\quad\\text{for some}~ i\\rightarrow j. \\nonumber\n\\end{IEEEeqnarray}\nThe \\textbf{SOCP-inexact region} is defined as:\n\\begin{IEEEeqnarray}{rCl}\n \\mathcal{\\tilde W} &=& \\left\\{w \\in \\mathcal{W}'~|~w~\\text{is SOCP-inexact}\\right\\}. \\nonumber\n\\end{IEEEeqnarray}\n\\end{definition}\n\nOur next focus is to build an approximation of $\\mathcal{\\tilde W}$. 
\nFor that, we consider the following set defined on the dual SOCP:\n\\begin{IEEEeqnarray}{rCl}\n\\mathcal{\\tilde W}_d &:=& \\{w \\in \\mathcal{W}'~|~\\text{Every optimal solution of}~\\mathrm{DP}'(w) \\nonumber \\\\\n&&\\qquad\\qquad\\quad \\text{satisfies}~\\lambda_{q,ij}=0~\\text{for some}~i\\rightarrow j \\}. \\nonumber\n\\end{IEEEeqnarray}\nBy complementary slackness \\cite[Section 5.5.2]{boyd2004convex}, every pair of primal-dual optimal solutions of $\\mathrm{FP}'(w)$ and $\\mathrm{DP}'(w)$ satisfies:\n\\begin{IEEEeqnarray}{rCl}\n\\lambda_{q,ij}\\left(\\| y_{ij} \\|_2 - c_{q,ij} x - \\gamma_{q,ij}\\right)&=&0, \\quad \\forall i\\rightarrow j. \\nonumber\n\\end{IEEEeqnarray}\nThis implies $\\mathcal{\\tilde W}\\subseteq \\mathcal{\\tilde W}_d$. Although $\\mathcal{\\tilde W}= \\mathcal{\\tilde W}_d$ may not hold, their difference can only occur under the rare circumstance that $\\lambda_{q,ij} = \\| y_{ij} \\|_2 \\!-\\! c_{q,ij} x \\!-\\! \\gamma_{q,ij} = 0$ at a primal-dual optimal solution. Hence we focus on $\\mathcal{\\tilde W}_d$ as an approximation of $\\mathcal{\\tilde W}$. \n\nGiven an arbitrary $w \\in \\mathcal{\\tilde W}_d \\subseteq \\mathcal{W}'$, the maximum objective value of $\\mathrm{DP}'(w)$ is $\\mathrm{dp}'(w)=0$ but with some $\\lambda_{q,ij}=0$, so the SOC relaxation is inexact (except in the rare circumstance above). To approximate $\\tilde{\\mathcal{W}}_d$, we first add the following constraint to tighten the dual feasible set \\eqref{eq:dual-feasibility}:\n\\begin{IEEEeqnarray}{rCl}\\label{eq:add-dual-constraint}\n\\lambda_{q} &\\geq& \\delta\n\\end{IEEEeqnarray}\nwhere the inequality is element-wise and $\\delta\\in\\mathbb{R}_+^{N}$ is a vector of \\textit{strictly positive} parameters, whose design will be elaborated later. 
Consider the tightened dual SOCP:\n\\begin{IEEEeqnarray}{rCl}\n\\mathrm{DP}''(w,\\delta):~\\max_{\\mu,\\lambda} &\\quad& \\mu_f^\\intercal (B_f w + \\gamma_f) + \\lambda_s^\\intercal \\gamma_s - \\mu_{y}^\\intercal b_y - \\lambda_{q}^\\intercal \\gamma_q \\nonumber\n\\\\\n\\text{s. t.} && \\text{\\eqref{eq:dual-feasibility}, \\eqref{eq:add-dual-constraint}} \\nonumber\n\\end{IEEEeqnarray} \nand let $\\mathrm{dp}''(w,\\delta)$ denote its maximum objective value. For $w\\in\\mathcal{\\tilde W}_d$, there must be $\\mathrm{dp}''(w,\\delta) < 0$, because otherwise $\\mathrm{DP}'(w)$ would have an optimal solution that satisfies \\eqref{eq:add-dual-constraint}, contradicting the definition of $\\mathcal{\\tilde W}_d$. \nActually $\\mathrm{dp}''(w,\\delta) \\leq -\\eta$ for some $\\eta>0$ that depends on $w$ and $\\delta$.\n\n\\begin{algorithm}[t]\\label{alg:remove-inexact}\n\t\\caption{Approximate $\\mathcal{\\tilde W}_d$ (or SOCP-inexact $\\mathcal{\\tilde W}$)}\n\t1. \\textbf{Initialization:} $\\mathcal{\\tilde W}_{poly} = \\mathcal{W}'_{poly}$ returned by Alg. \\ref{alg:approximate-socp}. Given positive $\\delta$, $\\eta$, $\\eta'$; $\\mathcal{V}_{safe}=\\emptyset$; $c=0$;\n\t\n\t2. Update vertex set $vert\\left(\\mathcal{\\tilde W}_{poly}\\right)$. Let $\\mathrm{dp}''_{max}= -\\eta$;\n\t\n\t\\For{$w\\in vert\\left(\\mathcal{\\tilde W}_{poly}\\right)$ and $w\\notin \\mathcal{V}_{safe}$}{\n\t\tsolve $\\mathrm{DP}''(w,\\delta)$ to obtain an optimal solution $(\\mu^*, \\lambda^*)$ and maximum objective value $\\mathrm{dp}''(w,\\delta)$;\n\t\t\n\t\t\\lIf{$\\mathrm{dp}''(w,\\delta)>\\mathrm{dp}''_{max}$}{\n\t\t \n\t\t \\qquad $\\mathrm{dp}''_{max} \\leftarrow \\mathrm{dp}''(w,\\delta)$;\n\t\t\n\t\t \\qquad $(\\mu_{max},\\lambda_{\\max}) \\leftarrow (\\mu^*,\\lambda^*)$\n\t\t}\n\t\t\\lElseIf{$\\mathrm{dp}''(w,\\delta)\\leq -\\eta$}{\n\t\t $\\mathcal{V}_{safe} = \\mathcal{V}_{safe} \\!\\cup\\! 
\\{w\\}$\n\t\t}\n\t}\n\t\\eIf{$\\mathrm{dp}''_{max} = -\\eta$ or $c=C_{max}$}{\n\t return $\\mathcal{\\tilde W}_{poly}$.\n\t}{\n\t add to $\\mathcal{\\tilde W}_{poly}$ a cutting plane:\n\t $\\mu_{f,max}^\\intercal (B_f w + \\gamma_f) + \\lambda_{s,max}^\\intercal \\gamma_s \\leq \\mu_{y,max}^\\intercal b_y +\\lambda_{q,max}^\\intercal \\gamma_q - \\eta'$; \n\t \n\t $c\\leftarrow c+1$;\n\t \n\t go back to Line 2;\n\t}\n\\end{algorithm}\n\nThe idea above inspires us to approximate $\\mathcal{\\tilde W}_d$ (or $\\mathcal{\\tilde W}$) by\n\\begin{IEEEeqnarray}{rCl}\n\\tilde{\\mathcal{W}}_d \\approx \\left\\{w \\in \\mathbb{R}^W|D_w(\\mu,\\lambda)\\leq -\\eta,\\forall (\\mu,\\lambda)~\\text{satisfying \\eqref{eq:dual-feasibility},\\eqref{eq:add-dual-constraint}}\\right\\} \\nonumber\n\\end{IEEEeqnarray}\nTo this end, Algorithm \\ref{alg:remove-inexact} can be designed using a similar procedure to Algorithm \\ref{alg:approximate-socp}.\nAlgorithm \\ref{alg:remove-inexact} returns a convex polytope $\\mathcal{\\tilde W}_{poly} \\subseteq \\mathcal{W}'_{poly}$ that guarantees $\\mathrm{dp}''(w,\\delta) \\leq -\\eta < 0$ for all $w \\in \\mathcal{\\tilde W}_{poly}$, which is an approximation of $\\tilde{\\mathcal{W}}_d$ (or $\\tilde{\\mathcal{W}}$). Removing $\\tilde{\\mathcal{W}}_{poly}$ from $\\mathcal{W}'_{poly}$, we can obtain an approximation $\\mathcal{W}_{poly}=\\mathcal{W}'_{poly} \\backslash \\tilde{\\mathcal{W}}_{poly}$ of the actual dispatchable region $\\mathcal{W}$. To make Algorithm \\ref{alg:remove-inexact} more robust, we may choose $\\eta'> \\eta$ for the added cutting plane in each iteration.\n\n\n\\textit{Remark:} The parameters $\\delta$ and $\\eta$ are essential for Algorithm \\ref{alg:remove-inexact}. 
A general guideline is that (1) given $\\delta$, choosing a smaller $\\eta$ and (2) given $\\eta$, choosing a larger $\\delta$ will both enlarge $\\mathcal{\\tilde W}_{poly}$ and lead to a smaller (more conservative) approximation of $\\mathcal{W} = \\mathcal{W}' \\backslash \\mathcal{\\tilde{W}}$. Moreover, it is sometimes difficult for Algorithm \\ref{alg:remove-inexact} to accurately approximate $\\mathcal{\\tilde{W}}$, which is most likely nonconvex, with a single convex polytope $\\mathcal{\\tilde W}_{poly}$.\nTo deal with this difficulty, we propose to run Algorithm \\ref{alg:remove-inexact} multiple times with different vectors $\\delta$. As a result, we obtain multiple convex polytopes whose union serves as a better approximation of $\\mathcal{\\tilde{W}}$. Those vectors $\\delta$ can be selected in the following way. We traverse the vertices of $\\mathcal{W}'_{poly}$, select one vertex $w$, and solve the dual SOCP $\\mathrm{DP}'(w)$ to get an optimal solution $(\\mu^*,\\lambda^*)$. Then $\\delta$ is constructed by keeping all the strictly positive elements of $\\lambda_q^*$ as they are, and adding a small positive perturbation to all the zero elements. \n\n\n\\begin{table}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\renewcommand{\\tabcolsep}{1em}\n \\centering\n \\caption{Summary of different regions}\n \\label{tab:summary}\n \\begin{tabular}{m{1.2cm}<{\\centering}m{1.5cm}<{\\centering}m{0.1cm}<{\\centering}m{1.2cm}<{\\centering}m{0.1cm}<{\\centering}m{1.5cm}<{\\centering}}\n \\hline \n & SOCP-relaxed Region & = & SOCP-exact Region & + & SOCP-inexact Region \\\\\n \\hline\n Actual & $\\mathcal{W}'$ & = & $\\mathcal{W}$ & + & $\\tilde{\\mathcal{W}} \\approx \\tilde{\\mathcal{W}}_d$ \\\\\n Approx. 
& $\\mathcal{W}'_{poly}$ & = & $\\mathcal{W}_{poly}$ & + & $\\tilde{\\mathcal{W}}_{poly}$ \\\\\n Method & Algorithm \\ref{alg:approximate-socp} & & $\\mathcal{W}'_{poly} \\backslash \\tilde{\\mathcal{W}}_{poly}$ & & Algorithm \\ref{alg:remove-inexact}\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\textit{Summary}. The relationships among the different regions discussed in this paper are summarized in TABLE \\ref{tab:summary}. As discussed, the dispatchable region $\\mathcal{W}=\\mathcal{W}' \\backslash \\mathcal{\\tilde W}$, where $\\mathcal{W}'$ is the SOCP-relaxed dispatchable region and $\\mathcal{\\tilde W}$ is the SOCP-inexact region. \nWe develop Algorithm \\ref{alg:approximate-socp} to get $\\mathcal{W}'_{poly}$, a convex polytopic approximation of $\\mathcal{W}'$; and Algorithm \\ref{alg:remove-inexact} to get $\\mathcal{\\tilde W}_{poly}$, a convex polytopic approximation of $\\mathcal{\\tilde W}$. Algorithm \\ref{alg:remove-inexact} can be run multiple times to obtain a more accurate approximation of the nonconvex $\\mathcal{\\tilde W}_d$ (or $\\mathcal{\\tilde W}$). The outputs of multiple runs of Algorithm \\ref{alg:remove-inexact} are then removed from $\\mathcal{W}'_{poly}$ to obtain a generally nonconvex polytopic approximation of $\\mathcal{W}$.\n\n\n \n\n\n\\section{Case Studies} \\label{sec:numerical}\n\\begin{figure}\n\t\\includegraphics[width=0.35\\textwidth]{network.pdf}\n\t\\centering\n\t\\caption{IEEE 33-node network model from \\cite{chen2018energy}.}\n\t\\label{fig:network}\n\\end{figure}\nIn this section, we conduct numerical experiments on the IEEE 33-bus system whose topology is shown in Fig. \\ref{fig:network}. The proposed algorithms are implemented to approximate the dispatchable region of renewable generation $(w_1, w_2)$ at nodes 13 and 29, respectively. 
Then, we test the impact of several factors and compare with other approaches.\n\\subsection{Benchmark} \\label{sec:numerical-1}\n\\begin{table}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\renewcommand{\\tabcolsep}{1em}\n \\centering\n \\caption{Parameters of generators in Benchmark}\n \\label{tab:generator}\n \\begin{tabular}{cccc}\n \\hline \n Generator & Location & $\\underline{p}_i$ (p.u.) & $\\overline{p}_i$ (p.u.)\\\\\n \\hline\n G1 & node 10 & 0.4 & 0.6 \\\\\n G2 & node 18 & 0.3 & 0.4\\\\\n G3 & node 23 & 0.4 & 0.6\\\\\n G4 & node 25 & 0.3 & 0.5\\\\\n G5 & node 33 & 0.4 & 0.6\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{actual_relax.pdf}\n \\caption{The SOCP-relaxed region $\\mathcal{W}'$ (outside) and the actual dispatchable region $\\mathcal{W}$ obtained by checking sampled points in the $(w_1,w_2)$ space.}\n \\label{fig:actual-relax}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{algorithm1.pdf}\n \\caption{The output of Algorithm \\ref{alg:approximate-socp} in different iterations (solid line) to approximate the SOCP-relaxed dispatchable region $\\mathcal{W}'$ (dashed line).}\n \\label{fig:alg1}\n\\end{figure}\nIn the IEEE 33-bus system, there are 5 controllable generators whose parameters are given in TABLE \\ref{tab:generator}. Two renewable generators are connected to nodes 13 and 29, respectively.\nFor comparison, the actual SOCP-relaxed region $\\mathcal{W}'$ and the actual dispatchable region without relaxation $\\mathcal{W}$ are generated as in Fig. \\ref{fig:actual-relax}. This can be done by checking the feasibility of a nonlinear optimization with \\eqref{eq:dist-flow}-\\eqref{eq:vl_limits} as its constraints, over sample points $w$ in the $(w_1,w_2)$ space using the nonlinear solver IPOPT. As we can see from Fig. 
\\ref{fig:actual-relax}, the actual dispatchable region $\\mathcal{W}$ can be nonconvex and the SOCP-relaxed region is not accurate enough. In the following, we apply the proposed algorithms to output a more accurate region.\n\nFirst, we test the performance of Algorithm \\ref{alg:approximate-socp}. We observe that the algorithm terminates with $\\mathrm{dp}'_{max} = 0$ in 25 iterations, taking about 289.84s. The output regions $\\mathcal{W}'_{poly}$ in the 2nd, 5th, 10th, and final iterations are given in Fig. \\ref{fig:alg1}. Algorithm \\ref{alg:approximate-socp} iteratively removes nondispatchable regions (the blue region shrinks), and finally returns a convex polytope $\\mathcal{W}'_{poly}$ identical to the actual SOCP-relaxed region $\\mathcal{W}'$ (dashed line). This validates Proposition \\ref{prop:polytopic-approximation}.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{algorithm2.pdf}\n \\caption{The gray polytope $\\mathcal{W}_{poly}$ is an approximation of $\\mathcal{W}$ (red dashed line). It is obtained by removing the output $\\tilde{\\mathcal{W}}_{poly}$ of Algorithm \\ref{alg:remove-inexact} (white polytope) from the output $\\mathcal{W}'_{poly}$ of Algorithm \\ref{alg:approximate-socp} (the outside blue dashed line).}\n \\label{fig:algorithm2}\n\\end{figure}\n\n\nEven though Algorithm \\ref{alg:approximate-socp} can output the accurate SOCP-relaxed dispatchable region, as we can see in Fig. \\ref{fig:actual-relax}, there is still a gap between the actual dispatchable region $\\mathcal{W}$ and the relaxed one $\\mathcal{W}'$. If the renewable generator output $(w_1,w_2)$ lies in the gap area, there is actually no feasible dispatch that satisfies the power flow equations \\eqref{eq:dist-flow} and safety limits \\eqref{eq:pq_limits}-\\eqref{eq:vl_limits}. Thus, using the SOCP-relaxed region as guidance will threaten power system security. 
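As an aside, the cutting-plane mechanics of Algorithm \ref{alg:approximate-socp} (enumerate vertices, find the worst violation, cut it off) can be illustrated in a toy two-dimensional setting. In the sketch below (plain Python; all names are ours and purely illustrative), the unit disk stands in for $\mathcal{W}'$, the violation $\max(0,\|w\|_2-1)$ plays the role of $\mathrm{dp}'(w)$, and each supporting-hyperplane cut removes the worst vertex without removing any point of the disk, mirroring Proposition \ref{prop:polytopic-outer-approximation}.

```python
import itertools
import math

def vertices(halfspaces, tol=1e-7):
    """Enumerate vertices of the 2-D polytope {w : a.w <= b} given as (a, b) pairs."""
    verts = []
    for (a1, b1), (a2, b2) in itertools.combinations(halfspaces, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraints, no intersection point
        w = ((b1 * a2[1] - b2 * a1[1]) / det, (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * w[0] + a[1] * w[1] <= b + tol for a, b in halfspaces):
            if all(abs(w[0] - u[0]) + abs(w[1] - u[1]) > tol for u in verts):
                verts.append(w)
    return verts

# initial bounding box, analogous to the box [w_lower, w_upper] in Algorithm 1
poly = [((1.0, 0.0), 2.0), ((-1.0, 0.0), 2.0), ((0.0, 1.0), 2.0), ((0.0, -1.0), 2.0)]
for _ in range(40):
    worst = max(vertices(poly), key=lambda w: math.hypot(*w))  # worst "dual" violation
    r = math.hypot(*worst)
    if r <= 1 + 1e-6:
        break  # every vertex lies in the target region: done
    # supporting hyperplane of the disk, u.w <= 1, cuts off `worst` but no disk point
    poly.append(((worst[0] / r, worst[1] / r), 1.0))

# the polytope now circumscribes the disk tightly
assert max(math.hypot(*w) for w in vertices(poly)) < 1.1
```

After a few dozen cuts the polytope shrinks onto the target region, analogous to the shrinking blue region in Fig. \ref{fig:alg1}; since each cut is a valid separating hyperplane, the iterate is an outer approximation throughout.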
In this paper, Algorithm \\ref{alg:remove-inexact} is developed to further remove the nondispatchable points. As shown in Fig. \\ref{fig:algorithm2}, the $\\tilde{\\mathcal{W}}_{poly}$ (white area) generated by Algorithm \\ref{alg:remove-inexact} is removed and the resulting region $\\mathcal{W}_{poly}$ (grey area) is closer to the actual region (red dashed line). This shows the great potential of the proposed algorithm in improving the accuracy of the dispatchable region in a distribution system. The operational risk under the obtained region $\\mathcal{W}_{poly}$ and the SOCP-relaxed region $\\mathcal{W}'$ will be compared later in TABLE \\ref{tab:compare}.\n\n\\subsection{Impact of different factors} \\label{sec:numerical-2}\nIn the following, we test the impact of two factors (adjustable capability of controllable generators $[\\underline{p}_i,\\overline{p}_i],\\forall i$ and current limit $\\overline \\ell$) on the shape of the dispatchable region and the performance of the proposed algorithm.\n\n\\begin{table}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\renewcommand{\\tabcolsep}{1em}\n \\centering\n \\caption{Parameters of generators in Cases L and H}\n \\label{tab:generator2}\n \\begin{tabular}{ccccc}\n \\hline \n Generator & \\multicolumn{2}{c}{Case L} & \\multicolumn{2}{c}{Case H} \\\\ No. & $\\underline{p}_i$ (p.u.) & $\\overline{p}_i$ (p.u.) & $\\underline{p}_i$ (p.u.) & $\\overline{p}_i$ (p.u.)\\\\\n \\hline\n G1 & 0.4 & 0.5 & 0 & 0.6\\\\\n G2 & 0.3 & 0.4 & 0 & 0.4\\\\\n G3 & 0.4 & 0.5 & 0 & 0.6\\\\\n G4 & 0.4 & 0.5 & 0 & 0.5\\\\\n G5 & 0.4 & 0.5 & 0 & 0.6\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{Comparison.pdf}\n \\caption{Left: the $\\mathcal{W}'_{poly}$ returned by Algorithm \\ref{alg:approximate-socp} for Case L (subfigure (a)) and Case H (subfigure (c)). 
Right: the $\\mathcal{W}_{poly}$ returned by Algorithm \\ref{alg:remove-inexact} for Case L (subfigure (b)) and Case H (subfigure (d)).}\n \\label{fig:comparison}\n\\end{figure}\n\\begin{figure}[t]\n\t\\includegraphics[width=0.8\\columnwidth]{Converge.pdf}\n\t\\centering\n\t\\caption{The change of $\\mathrm{dp}'_{max}$ over iterations of Algorithm \\ref{alg:approximate-socp}.}\n\t\\label{fig:dpmax}\n\\end{figure}\nTo show how $[\\underline{p}_i,\\overline{p}_i]$ influences the dispatchable region, we test three cases: (1) \\textbf{Benchmark}, which has the same setting as in Section \\ref{sec:numerical-1}. (2) \\textbf{Case L}, where the generators have less adjustable capability than the benchmark. (3) \\textbf{Case H}, where the generators have more adjustable capability than the benchmark. The parameters of Cases L and H are given in TABLE \\ref{tab:generator2}. The regions returned by Algorithm \\ref{alg:approximate-socp} ($\\mathcal{W}'_{poly}$) and Algorithm \\ref{alg:remove-inexact} ($\\mathcal{W}_{poly}$) are given in Fig. \\ref{fig:comparison}. Subfigures (a), (b) are for Case L and subfigures (c), (d) are for Case H. The changes of $\\mathrm{dp}'_{max}$ under the three cases are recorded in Fig. \\ref{fig:dpmax}.\n\nAs shown in Figs. \\ref{fig:alg1} and \\ref{fig:comparison}, Algorithm \\ref{alg:approximate-socp} can always output the accurate SOCP-relaxed dispatchable region, i.e., $\\mathcal{W}'_{poly}=\\mathcal{W}'$. The final dispatchable regions (grey area) returned by the proposed algorithms are much closer to the actual ones compared with the SOCP-relaxed regions. In addition, as the adjustable capability of generators decreases, the system's ability to accommodate volatile renewable power becomes weaker, and thus the dispatchable region becomes smaller. We also find that with a weaker adjustable capability, the actual dispatchable region $\\mathcal{W}$ is more likely to be nonconvex and to differ more from the SOCP-relaxed region. 
The difference between the red dashed line and the blue dashed line in Fig. \\ref{fig:comparison}(b) is more significant than that in Fig. \\ref{fig:comparison}(d).\nIn future power systems, renewable generators will increasingly replace controllable generators, so the SOCP-relaxed dispatchable region will not be accurate enough.\nTherefore, the proposed Algorithms \\ref{alg:approximate-socp}--\\ref{alg:remove-inexact}, which remove the nondispatchable points, will be helpful. \n\n\\begin{table}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\renewcommand{\\tabcolsep}{1em}\n \\centering\n \\caption{Comparison of three cases.}\n \\label{tab:compare}\n \\begin{tabular}{ccccc}\n \\hline \n & \\textbf{FR}($\\mathcal{W}'$) & \\textbf{FR}($\\mathcal{W}_{poly}$) & Reduction & Time (s) \\\\\n \\hline\n Benchmark & 10.4\\% & 4.5\\% & 56.73\\% & 289.84 \\\\\n Case L & 15.7\\% & 8.7\\% & 44.59\\% & 291.01\\\\\n Case H & 3.5\\% & 2.5\\% & 28.57\\% & 724.60\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nIn Fig. \\ref{fig:dpmax}, the value of $\\mathrm{dp}'_{max}$ in all three cases decreases towards zero as Algorithm \\ref{alg:approximate-socp} proceeds to termination. The computational times are 289.84s (Benchmark), 291.01s (Case L), and 724.60s (Case H), respectively, showing that our algorithm is efficient. Moreover, we randomly generate 2000 points $(w_1,w_2)$ in the SOCP-relaxed dispatchable region $\\mathcal{W}'$ and the final obtained region $\\mathcal{W}_{poly}=\\mathcal{W}'_{poly} \\backslash \\tilde{\\mathcal{W}}_{poly}$, and calculate the failure rate defined as\n\\begin{align}\\label{eq:failurerate}\n \\textbf{FR}(\\mathcal{S})=\\frac{\\mbox{No. of points}~w \\in \\mathcal{S}~\\mbox{that are nondispatchable}}{\\mbox{No. of points}~w\\in \\mathcal{S}}\n\\end{align}\nThe failure rates under the three cases are summarized in TABLE \\ref{tab:compare}. In all three cases, the proposed method can greatly reduce the failure rate, and the reduction is more than 50\\% under the benchmark. 
This can help better ensure system security. Moreover, we find that in a system with relatively small adjustable capability, the reduction is more significant.\n\nFurthermore, we test the impact of the current limit by running two other cases where we halve and double $\\overline \\ell$, respectively. The obtained regions are shown in Fig. \\ref{fig:current}. The computation times are both less than 500s, which is acceptable. A more stringent line-flow limit results in a smaller dispatchable region and also a greater deviation between the relaxation region $\\mathcal{W}'$ and the exact region $\\mathcal{W}$.\n\\begin{figure}[t]\n\t\\includegraphics[width=1.0\\columnwidth]{Current.pdf}\n\t\\centering\n\t\\caption{The $\\mathcal{W}_{poly}$ returned by Algorithm 2 under two current limits.}\n\t\\label{fig:current}\n\\end{figure}\n\n\n\\subsection{Comparison with other methods}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.8\\columnwidth]{DifferentMethod.pdf}\n \\caption{The regions returned by the Linearized DistFlow model (left), the proposed algorithms (middle), and the polyhedral approximation of the SOCP-relaxed model (right).}\n \\label{fig:differentmethod}\n\\end{figure*}\nWe then compare the performance of the proposed algorithms with two well-known approaches based on (1) the Linearized DistFlow model \\cite{baran1989optimal} (denoted as \\textbf{LinDistFlow}); (2) the polyhedral approximation of the SOCP-relaxed model \\cite{chen2018energy} (denoted as \\textbf{SOCP-Linear}). These two models are linear programs, so the adaptive constraint generation algorithm in \\cite{wei2015real} can be applied to generate the region. The results are shown in Fig. \\ref{fig:differentmethod}. Theoretically, the LinDistFlow region can be an inner\/outer approximation or a region that intersects the actual dispatchable region. In the benchmark case, the LinDistFlow \nregion is very small and conservative. 
The SOCP-Linear region is always an outer approximation of the actual region. We can see that it is very close to the SOCP-relaxed region $\\mathcal{W}'$. To better illustrate the results under the three approaches, we calculate the failure rate \\eqref{eq:failurerate} and the missing rate (\\textbf{MR}) defined below.\n\\begin{align}\n \\textbf{MR}(\\mathcal{S})=\\frac{\\mbox{No. of points}~ w \\in \\mathcal{W}~\\mbox{but}~\\notin \\mathcal{S}}{\\mbox{No. of points}~w \\in \\mathcal{W}}\n\\end{align}\nThe failure rate and missing rate under the three approaches are compared in TABLE \\ref{tab:threemethod}. We find that, in this simulation case, the LinDistFlow region is an inner approximation, so its failure rate is zero. However, it has a very high missing rate, meaning that the region is too conservative. The SOCP-Linear region is always an outer approximation, so its missing rate is zero, but its failure rate is high. The proposed method can achieve a good balance between ensuring security and reducing conservatism.\n\\begin{table}[t]\n \\renewcommand{\\arraystretch}{1.3}\n \\renewcommand{\\tabcolsep}{1em}\n \\centering\n \\caption{Comparison of three methods.}\n \\label{tab:threemethod}\n \\begin{tabular}{ccc}\n \\hline \n & \\textbf{FR}($\\mathcal{W}_{poly}$) & \\textbf{MR}($\\mathcal{W}_{poly}$) \\\\\n \\hline\n LinDistFlow & 0\\% & 91.1\\% \\\\\n Proposed Method & 4.5\\% & 2.7\\% \\\\\n SOCP-Linear & 11.8\\% & 0\\%\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn this paper, we develop an improved approximation of the renewable generation dispatchable region in radial distribution networks. First, a nonconvex optimization problem is formulated to describe the dispatchable region. The nonconvex problem is then relaxed to a convex SOCP. An SOCP-based projection algorithm (Algorithm 1) is proposed to generate the accurate SOCP-relaxed dispatchable region under certain conditions. 
In addition, a heuristic method (Algorithm 2) is developed to remove the SOCP-inexact region from the region obtained above. Therefore, the final region can better approximate the actual nonconvex dispatchable region. Our main findings are:\n\\begin{itemize}\n \\item The proposed method can reduce the operational risk (quantified by failure rate) by more than 50\\% compared with the SOCP-relaxed region.\n \\item The proposed method has a greater potential in the future power system with fewer controllable units and thus weaker adjustable capability.\n \\item Compared with existing approaches (LinDistFlow and SOCP-Linear), the proposed method achieves a better tradeoff between security and conservatism.\n\\end{itemize}\n\nThis paper provides an innovative perspective for constructing the dispatchable region: While the existing literature can only generate convex regions, the proposed algorithm can generate nonconvex approximations. For future work, we aim to improve the accuracy of the proposed algorithms by properly setting the initial points for heuristic searching.\n\n\n\\section*{Appendix. Constant parameters}\n\nThis appendix provides in full detail the constant matrices, vectors, and numbers used in Section \\ref{sec:methods}. \n\n\\subsection{Equation \\eqref{eq:opt-feasibility}: $A_f$, $B_f$, $\\gamma_f$, $A_s$, $\\gamma_s$}\n\nThe vector $x=(p,q, v,\\ell, P,Q)$ is arranged in the order explained in Section \\ref{sec:model}. Let $C\\in \\{-1,0,1\\}^{(N+1)\\times N}$ be the incidence matrix of the radial network, with its element at the $k$-th row, $j$-th column:\n\\begin{IEEEeqnarray}{rCl}\nC_{kj} &=& \\begin{cases}\n1,\\qquad\\text{if}~k=i~\\text{for line}~i\\rightarrow j \\\\\n-1,\\quad \\text{if}~k=j~\\text{for line}~i\\rightarrow j \\\\\n0,\\qquad \\text{otherwise.}\n\\end{cases}\\nonumber\n\\end{IEEEeqnarray}\nRemoving the first row of $C$, we get the reduced incidence matrix $\\overline C \\in \\{-1,0,1\\}^{N\\times N}$. 
Define diagonal matrices $R:=\\text{diag}(r_{ij}, \\forall i\\rightarrow j)$ and $X:=\\text{diag}(x_{ij}, \\forall i\\rightarrow j)$. Denote the $N \\times N$ all-zero matrix as $\\mathbf{O}_{N}$, identity matrix as $I_N$, and $N$-dimensional all-zero column vector as $0_N$. We have: \n\\begin{IEEEeqnarray}{rCl}\nA_f &=& \\begin{bmatrix}\nI_N &\\mathbf{O}_{N}&\\mathbf{O}_{N} & -R & -\\overline C & \\mathbf{O}_{N} \\\\\n\\mathbf{O}_N &I_N&\\mathbf{O}_{N} & -X & \\mathbf{O}_{N} & -\\overline C \\\\\n\\mathbf{O}_N &\\mathbf{O}_N &\\overline C^\\intercal & \\left(R^2 \\!+\\!X^2\\right) & -2R & -2X\n\\end{bmatrix}\\nonumber\\\\\n\\nonumber\\\\\n\\gamma_f &=& \\left[0_N^\\intercal,~\n0_N^\\intercal,~\nv_0,~ 0_{N-1}^{\\intercal}\\right]^\\intercal.\\nonumber\n\\end{IEEEeqnarray}\nMoreover, we define: \n\\begin{IEEEeqnarray}{rCl}\nB'_f &=&\\left[\nI_N,~\\mathbf{O}_{N},~\\mathbf{O}_{N} \\right]^\\intercal\n\\nonumber\n\\end{IEEEeqnarray}\nand let $B_f$ be a submatrix of $B'_f$ that contains only the columns corresponding to the nodes $i$ with nonzero renewable generation $w_i$. \nDefine column vectors $\\overline v := (\\overline v_i, ~\\forall i=1,...,N)$, $\\underline v := (\\underline v_i, ~\\forall i=1,...,N)$, similarly $\\overline p$, $\\underline p$, $\\overline q$, $\\underline q$, and $\\overline \\ell =(\\overline \\ell_{ij}, ~\\forall i\\rightarrow j)$. 
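As a cross-check of the block structure above, the matrices $C$, $\overline C$, $R$, $X$, and $A_f$ can be assembled mechanically. The following numpy sketch is illustrative only (the helper name and the toy two-line feeder are ours, not from the paper); the node and line ordering follow the text, with node 0 as the root.

```python
import numpy as np

def build_Af(lines, r, x):
    """Assemble the reduced incidence matrix Cbar and the block matrix A_f
    for a radial network with N lines and nodes 0..N (node 0 = root).

    lines : list of (i, j) pairs, one per line i -> j
    r, x  : length-N sequences of line resistances / reactances
    """
    N = len(lines)
    C = np.zeros((N + 1, N))                 # full incidence matrix
    for col, (i, j) in enumerate(lines):
        C[i, col] = 1.0                      # sending end of line i -> j
        C[j, col] = -1.0                     # receiving end
    Cbar = C[1:, :]                          # drop the root row -> N x N
    R, X = np.diag(r), np.diag(x)
    I, O = np.eye(N), np.zeros((N, N))
    # Block layout follows the definition of A_f in the text,
    # acting on x = (p, q, v, l, P, Q):
    Af = np.block([
        [I, O, O, -R,            -Cbar, O],
        [O, I, O, -X,             O,    -Cbar],
        [O, O, Cbar.T, R @ R + X @ X, -2 * R, -2 * X],
    ])
    return Cbar, Af

# Toy 3-bus feeder: 0 -> 1 -> 2
Cbar, Af = build_Af([(0, 1), (1, 2)], r=[0.01, 0.02], x=[0.02, 0.04])
```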
To write inequalities \\eqref{eq:pq_limits} and \\eqref{eq:vl_limits} as $A_s x + \\gamma_s \\leq 0$, we need: \n\\begin{IEEEeqnarray}{rCl}\nA_s &=& \\begin{bmatrix}\nI_N & \\mathbf{O}_{N} & \\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N} \\\\\n-I_N & \\mathbf{O}_{N} & \\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N}\\\\\n \\mathbf{O}_{N} & I_N &\\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N} \\\\\n \\mathbf{O}_{N} &-I_N & \\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N}\\\\\n \\mathbf{O}_{N} & \\mathbf{O}_{N}& I_N &\\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N} \\\\\n \\mathbf{O}_{N} &\\mathbf{O}_{N}&-I_N & \\mathbf{O}_{N} & \\mathbf{O}_{N} & \\mathbf{O}_{N}\\\\\n\\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & I_N & \\mathbf{O}_{N} & \\mathbf{O}_{N} \\\\\n\\mathbf{O}_{N}& \\mathbf{O}_{N} & \\mathbf{O}_{N} & -I_N & \\mathbf{O}_{N} & \\mathbf{O}_{N}\n\\end{bmatrix},~ \n\\gamma_s =\\begin{bmatrix}\n-\\overline p \\\\\n\\underline p \\\\\n-\\overline q \\\\\n\\underline q \\\\\n-\\overline v \\\\\n\\underline v \\\\\n-\\overline \\ell \\\\\n0_N\n\\end{bmatrix}.\\nonumber\n\\end{IEEEeqnarray}\n\n\\subsection{Equation \\eqref{eq:opt-feasibility-relaxed}: $A_{y}$, $b_{y}$, $c_{q}$, $\\gamma_{q}$}\n\nTo make \\eqref{eq:opt-feasibility:soc-substitute}--\\eqref{eq:opt-feasibility:soc} the same as: \n\\begin{IEEEeqnarray}{rCl}\n \\left\\| \\begin{bmatrix}\n2P_{ij} \\\\\n2Q_{ij} \\\\\nv_i - \\ell_{ij}\n\\end{bmatrix} \\right\\|_2\n&\\leq& v_i + \\ell_{ij} +z_{q,ij}, \\quad \\forall i\\rightarrow j \\nonumber\n\\end{IEEEeqnarray}\nwe need $A_{y}$, $b_{y}$, $c_{q}$, $\\gamma_{q}$ as follows:\n\\begin{itemize}\n \\item For all $i \\rightarrow j$, $A_{y,ij}$ is a $3\\times (6N)$ sparse matrix with all elements zero except its element at the first row, $(4N+j)$-th column equal to $2$; at the second row, $(5N+j)$-th column equal to $2$; at the third row, $(2N+i)$-th column equal to $1$ (if $i\\neq
0$), and $(3N+j)$-th column equal to $-1$. \n \\item For all $i\\rightarrow j$ except $0\\rightarrow 1$, $b_{y,ij}$ is a three-dimensional column vector of all zeros; $b_{y,01} = \\left[0, 0, v_0 \\right]^\\intercal$.\n \n \\item For all $i\\rightarrow j$, $c_{q,ij}$ is a $(6N)$-dimensional row vector of all zeros except its $(2N+i)$-th (if $i\\neq 0$) and $(3N+j)$-th elements both equal to $1$. \n \\item $\\gamma_{q,ij}=0$ for all $i\\rightarrow j$ except $0\\rightarrow 1$; $\\gamma_{q,01} = v_0$. \n\\end{itemize}\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n\\newpage\n\\fi\n \n \\bibliographystyle{IEEEtran}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{s:intro}\n\nThe study of galaxy kinematics as a function of cosmic time provides important insights into the evolution of galactic mass budgets and structure \\citep[e.g.][]{Sofue01, FS20}. Different kinematic tracers like molecular gas, ionised gas, or stars move in the same galactic potential and allow for estimates of the galactic dark matter content.\nThe kinematic signatures of different tracers vary due to their different nature: stars are collision-less while gas is dissipative; different gas phases have different temperatures, turbulent velocities, and might be affected by outflows. The various tracers often have different spatial distributions and probe different regions of the overall potential. Nonetheless, dynamical models based on complementary tracers should give the same mass estimates for systems in equilibrium.\n\nIn the local Universe, large interferometric and IFS surveys provide spatially resolved kinematics of stars, and atomic, molecular, and ionised gas. 
\nComparative studies of baryonic kinematics at $z=0$, in particular from the EDGE-CALIFA survey \\citep{Sanchez12, Bolatto17} and the ATLAS$^{\\rm 3D}$ project \\citep{Cappellari11}, brought forth the following general results:\n(i) rotation velocities are highest and velocity dispersions are lowest for cold gas, followed by ionised gas, and then stars \\citep[e.g.][]{Davis13, Martinsson13a, Bolatto17, Levy18, CrespoGomez21, Girard21}; \n(ii) modelling of stellar kinematics (e.g.\\ with axisymmetric Jeans Anisotropic Multi-Gaussian Expansion models; JAM; \\citealp{Cappellari08}) produces circular velocity curves that match observed cold molecular gas rotation velocities in regularly rotating galaxies \\citep[e.g.][]{Davis13, Leung18}, suggesting that the mass estimates from molecular gas and stars are in agreement;\n(iii) the mis-alignment of gas and stellar kinematic major axes with each other and with the morphological major axis is small for the majority of non-interacting systems without strong bars, but generally higher for Early-Type Galaxies (ETGs) \\citep[e.g.][]{FalconBarroso06, Sarzi06, Davis11, BarreraBallesteros14, BarreraBallesteros15, Serra14, Bryant19}. \n\nIn contrast, our knowledge of galaxy kinematics at higher redshift is much sparser. Stellar kinematics at $z\\gtrsim1$ were initially obtained almost exclusively for quiescent galaxies, and from slit spectroscopy, focusing on integrated quantities due to signal-to-noise (S\/N) considerations \\citep[but see][]{Newman15, Newman18, Toft17, Mendel20}. \nThe LEGA-C \\citep[Large Early Galaxy Astrophysics Census;][]{vdWel16, vdWel21, Straatman18} survey brought a step change in stellar kinematics of distant systems. Thanks to its deep uniform integrations and large sample size, spatially resolved kinematic analyses and modelling have become feasible for a few hundred galaxies of all types at $0.6<z<1$. \n\nComparisons between different kinematic tracers beyond the local Universe are sparse, but overall indicate similar trends as $z=0$ studies. 
Molecular disc velocity dispersions are lower relative to ionised gas at $z\\sim0.2$ (\\citealp{Cortese17}, see also \\citealp{Molina20}). There are indications that this trend prevails out to $z\\sim2$ (\\citealp{Girard19, Uebler19}; Liu et al., subm.), while some individual galaxies have comparable dispersions \\citep{Genzel13, Uebler18, Molina19}. \nTentative trends of higher stellar disc velocity dispersions compared to ionised gas are seen in the data by \\cite{Guerou17} of 17 galaxies at $z\\sim0.5$.\n\nA recent study by \\cite{Straatman22} compares dynamical mass estimates based on slit observations of ionised gas and stars for 157 galaxies at $0.6<z<1$.\n\n\\subsection{The KMOS$^{\\rm 3D}$\\, Survey}\\label{s:k3d}\n\nKMOS$^{\\rm 3D}$\\, \\citep{Wisnioski19} is a near-IR IFS survey of 739 galaxies at $0.6<z<2.7$. Its $K < 23$~mag selection function was chosen to obtain a population-wide census reducing biases in SFR or colors. Targets are located in COSMOS, GOODS-S (Great Observatories Origins Deep Survey) and UDS (Ultra Deep Survey). High-resolution Wide Field Camera 3 (WFC3) near-IR and Advanced Camera for Surveys (ACS) optical imaging is available from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \\citep[CANDELS;][]{Grogin11, Koekemoer11, vdWel12}, and further multi-wavelength coverage from X-ray through optical, near- to far-infrared, and radio is accessible from e.g.\\ \\cite{Ueda08, Lutz11, Xue11, Civano12, Magnelli13, Skelton14}.\n\nThe publicly released data cubes have a spatial sampling in $x-y-$direction of $0.2\\arcsec$, which corresponds to $\\sim1.6$ kpc at $z=1$. The wavelength sampling in $z-$direction is 1.7~\\AA. The typical near-IR seeing of the KMOS$^{\\rm 3D}$\\, data has a FWHM of $0.5\\arcsec$, corresponding to $\\sim4.0$ kpc at $z=1$. Point-spread function (PSF) images representing the observing conditions for each combined data cube individually are included in the data release, together with both a Gaussian and a Moffat parametrization. 
\nThe average spectral resolution for KMOS$^{\\rm 3D}$\\, observations in the $YJ$ filter is $R=\\lambda\/\\Delta\\lambda=3515$, corresponding to an average instrumental broadening of 36~km\/s. However, the line-spread function (LSF) of each galaxy is determined individually as a function of wavelength, and encoded in the fits header keywords as described by \\cite{Wisnioski19}. The average on-source integration time for $z\\sim1$ targets in KMOS$^{\\rm 3D}$\\, is 5 hours.\n\nStellar masses $M_*$ and star formation rates (SFRs) for all galaxies are derived from SED fitting following \\cite{WuytsS11a}, assuming a \\cite{Chabrier03} initial mass function, using \\cite{Bruzual03} models with solar metallicity, the reddening law by \\cite{Calzetti00}, and constant or exponentially declining star formation histories. \nStructural parameters such as the effective radius $R_e$, the S\u00e9rsic index $n_S$, the axis ratio $q=b\/a$, the morphological position angle PA$_{\\rm morph}$, and for some galaxies the bulge-to-total ratio $B\/T$ and corresponding radii and S\u00e9rsic indices, are constrained from single-S\u00e9rsic or double-S\u00e9rsic {\\sc{galfit}} \\citep{Peng10} models to the CANDELS F160W imaging as presented by \\cite{vdWel12, Lang14}.\n\n\\subsection{The LEGA-C Survey}\\label{s:lgc}\n\nThe LEGA-C survey is a 1107-hour public survey with the Visible Multi-Object Spectrograph (VIMOS) at the VLT \\citep{LeFevre03}, targeting 3741 galaxies at $0.6<z<1$. For the spatially resolved kinematics, each spatial row of the 2D spectra with sufficient signal (S\/N$>2$ per pixel) is fit with pPXF \\citep{Cappellari04, Cappellari17} by combining a high-resolution stellar population template and an emission line template. These templates are allowed to shift and broaden independently, delivering independent stellar and ionised gas velocity and velocity dispersion profiles. This procedure takes into account the LSF and removes instrumental broadening from the velocity dispersion profiles. An example for one galaxy is shown in Figure~\\ref{f:obsprof}. 
\nNote that the template for the [OII] doublet consists of two emission lines centred at $\\lambda=3727$~\\AA\\ and $\\lambda=3730$~\\AA.\n\nFor the purpose of our study, we make a few adjustments to the above methodology for individual objects: \nfor two observations in our sample we repeat the above fitting procedure with a relaxed S\/N-cut in order to obtain 1D kinematic profiles. \nAnother two galaxies show strong [NeV]$\\lambda3347$, [NeV]$\\lambda3427$ and [NeIII]$\\lambda3870$ emission. The high-ionisation [NeV]$\\lambda3427$ line is a tell-tale signature of a harder ionising radiation field than produced by pure star formation, indicative of AGNs or shocks \\citep[e.g.][]{Mignoli13, Feltre16, Vergani18, Kewley19}. Where present in the LEGA-C spectra in our sample, it is kinematically decoupled from other emission lines and centrally concentrated. To extract ionised gas kinematics for these galaxies, we mask the corresponding spectral regions and repeat the above fitting procedure.\nThe effect on the extracted gas kinematics is substantial, with differences in individual velocity and velocity dispersion measurements of up to 200~km\/s (see Appendix~\\ref{a:nev} for an example). \nThe stellar kinematic measurements are virtually unaffected by this procedure.\n\nFor visual comparison of the 2D PV diagrams we further resample the LEGA-C spectra to the (coarser) KMOS wavelength steps. Note that we do not resample in spatial direction due to the very small difference in the KMOS and VIMOS pixel scales of $0.005\\arcsec$.\n\n\n\\subsubsection{Measurements}\\label{s:measure}\n\nDue to the different radial coverage of the data it is not straight-forward to compare the gas and stellar kinematics in these systems even after matching the observing conditions. 
To quantify how well the KMOS$^{\\rm 3D}$\\, and LEGA-C kinematic data compare to each other, we define the following 1D measurements based on the LOS kinematic profiles (see Figure~\\ref{f:obsprof}):\n\n\\begin{itemize}\n \\item $v_{\\rm max}$ is the maximum observed absolute velocity (uncorrected for inclination), and $r_{\\rm vmax}$ is the corresponding radius.\n \\item $v_{\\rm rmax,both}$ is the (mean) velocity at the outermost radius covered by both the KMOS$^{\\rm 3D}$\\, and LEGA-C data, and $r_{\\rm max,both}$ is the corresponding radius.\n \\item $\\sigma_{\\rm out}$ is the weighted mean observed velocity dispersion of the four outermost measured values (outer two on each side of the profile). Note that this measurement may still be affected by beam smearing, especially for smaller systems.\n \\item $\\sigma_{\\rm rmax,both}$ is the (mean) observed velocity dispersion at $r_{\\rm max,both}$.\n \\item $v_{\\rm rms}$ is an approximation of a classical root mean square velocity, via $v_{\\rm rms}^2=v_{\\rm max}^2 + \\sigma_{\\rm out}^2$.\n \\item $v_{c,\\rm max}$ is an approximation of a circular velocity (here without corrections for inclination and beam-smearing), via $v_{c,\\rm max}^2=v_{c}^2(r_{\\rm vmax})=v_{\\rm max}^2+2\\sigma_{\\rm out}^2\\cdot r_{\\rm vmax}\/R_d$ \\citep[see][for details]{Burkert10}, where we assume that the disc scale length $R_d=R_e\/1.68$, with $R_e=R_{e,\\rm F160W}$.\n \\item S$_{0.5,\\rm max}$ is an approximation of the total integrated velocity dispersion, via S$_{0.5, \\rm max}^2=0.5\\cdot v_{\\rm max}^2+\\sigma_{\\rm out}^2$ \\citep[see][for details]{Weiner06a, Kassin07}.\n\n\\end{itemize}\nFor the LEGA-C measurements, the above quantities are measured for ionised gas (primarily [OII] and\/or H$\\beta$) and stellar kinematics individually.\n\nWe stress that the above quantities are not derived from modelling, but are based on the LOS kinematics, which for the KMOS$^{\\rm 3D}$\\, galaxies have been extracted mimicking the 
LEGA-C observing conditions and setup.\nTherefore, while the LSF is accounted for, the measurements do not include corrections for inclination or beam-smearing. This is to say, intrinsic maximum velocities would be larger, and intrinsic velocity dispersions would be smaller. However, due to our matching of the PSFs differences in the effects of beam-smearing in the original observations are accounted for, and gas kinematics in KMOS$^{\\rm 3D}$\\, and LEGA-C should match if the emission lines trace the same ISM components. \n\nFurthermore we emphasize that the above observed `maximum' velocities do not necessarily represent the true observed maximum velocities of the galaxies, due to the kinematic major axes generally not being aligned with the slit orientations (see Section~\\ref{s:mdynk3d} for KMOS$^{\\rm 3D}$\\, kinematic extractions along the kinematic major axis).\n\nThe measurements described above cannot be meaningfully performed for all galaxies and observations in the sample. We exclude from the subsequent comparison galaxies for which there are less than five extractions of velocity and velocity dispersion possible along the (pseudo-)slit for either the KMOS$^{\\rm 3D}$\\, or LEGA-C data. We further exclude one LEGA-C observation for which the resolved kinematic extractions are contaminated through a secondary object in the slit. \nThe final sample includes 16 galaxies, resulting in 20 pairs of observations, including four duplicate observations from LEGA-C with the correspondingly different slit- and PSF-matched extractions from the KMOS$^{\\rm 3D}$\\, data cubes.\n\n\n\\subsection{Dynamical modelling}\\label{s:modelling}\n\nFor our comparison of dynamical mass measurements, we use the data at their native spatial and spectral resolutions, without matching observing conditions between the KMOS$^{\\rm 3D}$\\, and LEGA-C surveys. 
For KMOS$^{\\rm 3D}$\\, we build mass models which we fit to the H$\\alpha$ major axis kinematics (see Section~\\ref{s:mdynk3d}), and for LEGA-C we use published dynamical masses from JAM models (see Section~\\ref{s:mdynlgc}) and those computed from integrated stellar velocity dispersions. \nDue to the varying data quality across the sample, robust dynamical models cannot be constructed for all galaxies. Our dynamical mass comparison includes ten galaxies, four of which have two estimates based on integrated stellar velocity dispersion from LEGA-C due to duplicate observations, and six have LEGA-C estimates based on both integrated stellar velocity dispersion and JAM models (see Section~\\ref{s:mdynlgc}).\n\n\n\\subsubsection{Modelling for KMOS$^{\\rm 3D}$\\,}\\label{s:mdynk3d}\n\nFor KMOS$^{\\rm 3D}$\\, we exploit the 3D information available from the IFS data cubes to build 3D mass models to determine dynamical masses. Specifically, we place a pseudo-slit of width equal to the near-IR PSF FWHM on the continuum-subtracted cube along the kinematic major axis, which is well defined from the 2D projected velocity fields. From the 2D PV diagrams we then extract 1D profiles of velocity and velocity dispersion by summing rows spanning the PSF FWHM (or half PSF FWHM), and by fitting a Gaussian to the H$\\alpha$ line position (see Section~\\ref{s:kmoskin}). 
\n\nWe forward-model the H$\\alpha$ major axis kinematics using {\\sc{dysmal}} \\citep{Cresci09, Davies11, WuytsS16, Uebler18, Price21}, a code that allows for a flexible number of mass components, accounts for finite scale heights and flattened spheroidal potentials \\citep{Noordermeer08}, includes effects of pressure support from the turbulent interstellar medium \\citep{Burkert10, WuytsS16}, and consistently incorporates the observation-specific PSFs and LSFs.\n\nDue to the heterogeneous data quality in our sample, we consider two basic mass models for the baryonic component: a single S\u00e9rsic profile and a bulge-to-disc decomposition. Assuming mass follows light, we fix the structural parameters, specifically $i_{\\rm F160W}$, $R_{e,\\rm F160W}$ (or $R_{e,\\rm F160W,bulge}$ and $R_{e,\\rm F160W,disc}$), and $n_{S,\\rm F160W}$ (or $n_{S,\\rm F160W,bulge}$ and $n_{S,\\rm F160W,disc}$), to measurements from {\\sc{galfit}} models to the CANDELS F160W imaging as presented by \\cite{vdWel12, Lang14, WuytsS16} (see Section~\\ref{s:k3d}). Here, we infer the galaxy inclination $i_{\\rm F160W}$ from $q_{\\rm F160W}=b\/a$ by assuming an intrinsic ratio of scale height to scale length of $q_0=0.2$ \\citep[see][]{vdWel14b, WuytsS16, Straatman22}. If including a bulge, we assume an axis ratio of 1 for this component.\nWe estimate the total baryonic mass $M_{\\rm bar}$ by adding the stellar mass $M_{\\star}$ from SED modeling and the gas mass $M_{\\rm gas}$ based on $M_{\\star}$, SFR, and redshift of each galaxy, by utilising the gas mass scaling relations by \\cite{Tacconi20}. 
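The inclination inferred above from the projected axis ratio $q=b/a$ under an assumed intrinsic thickness $q_0$ follows the standard oblate-spheroid relation $\cos^2 i=(q^2-q_0^2)/(1-q_0^2)$. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def inclination(q, q0=0.2):
    """Galaxy inclination in degrees from the projected axis ratio q = b/a,
    assuming an oblate spheroid of intrinsic thickness q0, via the standard
    relation cos^2 i = (q^2 - q0^2) / (1 - q0^2); q <= q0 maps to 90 deg."""
    cos2i = (q**2 - q0**2) / (1.0 - q0**2)
    return np.degrees(np.arccos(np.sqrt(np.clip(cos2i, 0.0, 1.0))))

i_edge = inclination(0.2)   # q = q0  -> 90 deg (edge-on)
i_face = inclination(1.0)   # q = 1   ->  0 deg (face-on)
```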
This estimate is used to center a Gaussian prior with standard deviation 0.2~dex on the logarithmic total baryonic mass.\nThe intrinsic velocity dispersion $\\sigma_0$ is assumed to be isotropic and constant throughout the disc, supported by deep adaptive-optics-assisted observations of SFGs at this redshift (see \\citealp{Genzel06, Genzel08, Genzel11, Genzel17, Cresci09, FS18, Uebler19}; Liu et al., subm.). The value of $\\sigma_0$ is a free parameter in our modelling.\n\nAll our dynamical models include an NFW \\citep{NFW96} dark matter halo. Its total mass $M_{\\rm halo}$ is inferred from the dark matter mass fraction within the effective (disc) radius, $f_{\\rm DM}(R_e)$, which is a free parameter in our modelling.\n\n\\subsubsection{Modelling for LEGA-C}\\label{s:mdynlgc}\n\nFor LEGA-C, \\cite{vHoudt21} construct axisymmetric Jeans anisotropic models (JAM) of the stellar kinematics for galaxies with sufficient data quality, requiring S\/N$>10$ in at least three spatial resolution elements. The models consist of two mass components, a stellar component assuming mass follows light based on F814W imaging, and an NFW dark matter halo. The halo concentration $c$ is tied to the halo mass following \\cite{Dutton14}. Free fit parameters are the stellar velocity anisotropy, the stellar mass-to-light ratio $M\/L$, the circular velocity of the dark matter halo, the galaxy inclination, and the slit centering.\nHere, the inclination is constrained by a $q-$dependent prior assuming an intrinsic thickness distribution ${\\mathcal{N}}(0.41,0.18)$, which is constrained from the full primary LEGA-C sample. \\cite{vHoudt21} note that the inclination and the slit centering are typically unconstrained by the data.\n\nBased on the JAM results, dynamical masses are provided out to 20~kpc and\/or $2R_{e,\\rm F814W}$, if supported by the data, where $R_{e,\\rm F814W}$ is the semimajor axis effective radius determined from single-S\u00e9rsic {\\sc{galfit}} models to the F814W imaging as presented by \\cite{vdWel16, vdWel21}. The total JAM dynamical mass is defined as twice the mass within $R_{e,\\rm F814W}$.\n\nAs described above, Jeans anisotropic models can be built only for a subset of the LEGA-C survey. 
However, the existing models are used to calibrate more accessible virial mass estimators based on the integrated stellar velocity dispersion. The details of this calibration are described by \\cite{vdWel22}. In short, virial masses are computed as\n$$M_{\\rm vir}=K(n_{S,\\rm F814W})\\frac{\\sigma^2_{\\star,\\rm vir}R_{e,\\rm F814W}}{G},$$\nwhere $n_{S,\\rm F814W}$ and $R_{e,\\rm F814W}$ are derived from F814W imaging (see Section~\\ref{s:lgc}) and $K(n_{S})=8.87-0.831n_{S}+0.0241n_{S}^2$ following \\cite{Cappellari06}, $\\sigma_{\\star,\\rm vir}$ is the inclination- and aperture-corrected, integrated stellar velocity dispersion (measured from collapsed 1D spectra), and $G$ is the gravitational constant. \nThe correction for $\\sigma_{\\star,\\rm vir}$ is derived by calibration to the JAM dynamical masses.\n$M_{\\rm vir}$ corresponds to twice the mass within $R_{e,\\rm F814W}$.\n\nThe main focus of our dynamical mass comparison is between the KMOS$^{\\rm 3D}$\\, IFS H$\\alpha$ models and the LEGA-C JAM and $M_{\\rm vir}$ measurements based on stellar kinematics. Two galaxies in our dynamical mass sample are also part of the emission line modelling analysis by \\cite{Straatman22}, and we include their results where appropriate. \\cite{Straatman22} build a kinematic model where the rotation curve is parametrized by an arctan function, assuming the ionised emission originates from a thick, exponential distribution constrained from the F814W imaging, with a constant and isotropic intrinsic velocity dispersion. The model accounts for beam smearing and misalignment between the slit and PA$_{\\rm F814W}$, and the dynamical mass is calculated from the model rotation curve including a pressure support correction. 
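Evaluating this virial mass estimator is straightforward. Below is a minimal sketch with purely illustrative input values ($G$ expressed in kpc\,(km\,s$^{-1}$)$^2\,M_\odot^{-1}$; the helper name is ours, not from the paper):

```python
G_KPC = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def virial_mass(sigma_vir, re_kpc, n_sersic):
    """Virial mass following the estimator quoted above, with
    K(n) = 8.87 - 0.831 n + 0.0241 n^2 (Cappellari et al. 2006).

    sigma_vir : corrected integrated stellar velocity dispersion [km/s]
    re_kpc    : effective radius [kpc]
    n_sersic  : Sersic index from the F814W profile fit
    """
    K = 8.87 - 0.831 * n_sersic + 0.0241 * n_sersic**2
    return K * sigma_vir**2 * re_kpc / G_KPC

# Illustrative values only: sigma = 200 km/s, Re = 5 kpc, n = 2
m_vir = virial_mass(200.0, 5.0, 2.0)   # ~ 3.4e11 Msun
```

As in the text, this returns twice the mass within $R_{e,\rm F814W}$ once $\sigma_{\star,\rm vir}$ has been aperture- and inclination-corrected.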
See \\cite{Straatman22} for further details.\n\n\n\\subsubsection{Notable differences between the dynamical models}\\label{s:model_diff}\n\nOur H$\\alpha$ dynamical mass models use structural parameters from F160W imaging, while the stellar dynamical mass measurements use structural parameters from F814W imaging (see Sections~\\ref{s:k3d}, \\ref{s:lgc}, and Appendix~\\ref{a:structure} for further discussion).\nTo quantify the impact of different structural measurements for our sample, we repeat the dynamical modelling for the KMOS$^{\\rm 3D}$\\, galaxies, this time utilizing the $i-$band (F814W) based values for a single-S\u00e9rsic baryonic component. We construct two additional sets of dynamical models adopting $R_e$, $n_S$, and $q$ from F814W imaging. For the first set we assume an intrinsic thickness of $q_0=0.2$ (as for our fiducial models). For the second set we still assume an intrinsic thickness of our baryonic distribution of $q_0=0.2$, but infer the galaxy inclination assuming $q_0=0.41$. This second set attempts a closer match to the assumptions entering the LEGA-C JAM modelling, where the inclination is inferred from a $q-$dependent prior, assuming an intrinsic thickness distribution centered on $q_0=0.41$ \\citep{vHoudt21}. \nOverall, the impact on our dynamical mass estimates is minor, and the corresponding results are presented in Appendix~\\ref{a:f814w_modelling}: for the first set of alternative models we find an average increase in $M_{\\rm dyn}$ of 0.02~dex (standard deviation 0.26~dex); for the second set of alternative models we find an average decrease in $M_{\\rm dyn}$ of 0.03~dex (standard deviation 0.25~dex).\n\nAnother difference lies in the explicit assumption of mass components. As described in Section~\\ref{s:mdynk3d}, for the modelling of the KMOS$^{\\rm 3D}$\\, data we estimate total baryonic mass by including a cold gas component derived from the scaling relations by \\cite{Tacconi20}. 
For our sample, such derived gas-to-baryonic-mass fractions are between 2 and 70 per cent, with a mean value of $f_{\\rm gas}=0.27$. The LEGA-C JAM modelling assumes only a stellar and a dark matter component (see Section~\\ref{s:mdynlgc}). However, although they assume mass follows light, $M\/L$ is a free parameter in their fit. The combination of a free $M\/L$ and an explicit dark matter halo component therefore allows for an (unconstrained) contribution from gas (following stars) as well. \nThe more simplistic $M_{\\rm vir}$ calculation makes no assumptions about the involved mass components; however, it uses the (corrected) integrated 1D stellar velocity dispersion as a tracer of the dynamical mass. As the movement of stars is dictated by the full potential, this includes the contributions of all stars, gas, and dark matter.\n\n\n\n\\section{Stellar and ionised gas kinematics}\\label{s:compasobs}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lvmax_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lvmax_both_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lsout_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lsout_both_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\caption{Comparison of LOS kinematic quantities from PSF-matched fixed-slit extractions, as defined in Section~\\ref{s:measure}. 
From left to right: maximum observed absolute rotation velocity, $v_{\\rm max}(r_{\\rm vmax})$; (average, interpolated) rotation velocity at the outermost radius covered by both KMOS$^{\\rm 3D}$\\, and LEGA-C data, $v_{\\rm rmax,both}(r_{\\rm max,both})$; weighted mean outer velocity dispersion, $\\sigma_{\\rm out}$; (average, interpolated) velocity dispersion at the outermost radius covered by both KMOS$^{\\rm 3D}$\\, and LEGA-C data, $\\sigma_{\\rm rmax,both}(r_{\\rm max,both})$. \n\tGolden filled stars compare KMOS$^{\\rm 3D}$\\, H$\\alpha$ measurements with LEGA-C stellar measurements, and blue filled circles with LEGA-C gas measurements. Green open circles indicate the presence of prominent Balmer lines in the LEGA-C spectra. Larger symbols indicate galaxies for which a dynamical modelling of both the KMOS$^{\\rm 3D}$\\, and LEGA-C data is possible (typically higher-quality, more extended data, excluding mergers). \n The shaded region around the 1:1 line indicates a constant interval of $\\pm0.1$~dex in all plots, to highlight differences in scatter between the comparisons.\n Due to duplicate observations in LEGA-C, galaxies can appear multiple times in each panel.\n\tOn average, velocities are larger and velocity dispersions lower for H$\\alpha$ compared to stars.\n }\n\t\\label{f:measures1}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lvmaxsout_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lvrms_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lvcirc_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\includegraphics[width=0.245\\textwidth]{figures\/lS05_psfmatched_all_newfits_noNe-eps-converted-to.pdf}\n\t\\caption{Comparison of LOS kinematic quantities from PSF-matched fixed-slit extractions, defined as in Section~\\ref{s:measure}. 
From left to right: rotational support, $v_{\\rm max}\/\\sigma_{\\rm out}$; root mean square velocity, $v_{\\rm rms}$; approximation of the circular velocity, $v_{c,\\rm max}(r_{\\rm vmax})$; approximation of the total integrated velocity dispersion, $S_{0.5,\\rm max}$.\n\tSymbols are as in Figure~\\ref{f:measures1}. \n The shaded region around the 1:1 line indicates a constant interval of $\\pm0.1$~dex in all plots, to highlight differences in scatter between the comparisons. \n\tOn average, the rotational support measured from H$\\alpha$ emission is larger than in stellar and gas measurements from the LEGA-C spectra.\n }\n\t\\label{f:measures2}\n\\end{figure*}\n\n\nWe now compare the stellar (from LEGA-C) and ionised gas (from KMOS$^{\\rm 3D}$) LOS kinematics from matched observing setups (Section~\\ref{s:kinex}), that is from fixed slits after matching the individual PSFs for each pair of observations. We also compare different measurements of the ionised gas observed kinematics using the H$\\alpha$ line from the KMOS$^{\\rm 3D}$\\, observations, and the emission line fits to the full LEGA-C spectrum, typically dominated by [OII] emission.\n\nIn Figure~\\ref{f:obsprof} we show an example of 2D and 1D kinematic extractions from KMOS$^{\\rm 3D}$\\, and LEGA-C data along both a N-S and an E-W (pseudo-)slit of width $1\\arcsec$ for one galaxy. For this galaxy, both the ionised gas and stellar velocities from LEGA-C and the H$\\alpha$ velocities from KMOS$^{\\rm 3D}$ qualitatively agree, with stellar velocities reaching somewhat lower amplitudes. 
The velocity dispersion profiles are often dissimilar, with asymmetric profiles for the ionised gas and stars from LEGA-C compared to the KMOS$^{\\rm 3D}$\\, profile that is centrally peaked as expected for rotating disc kinematics uncorrected for beam-smearing.\n\nIt has been shown that [OII] emission does not only trace star formation, but can be related to AGN activity and low-ionisation nuclear emission line regions \\citep[LINERs; e.g.][]{Yan06, Yan18, Lemaux10, DaviesRL14, Maseda21}. The differences in shape and intensity between the H$\\alpha$ and [OII] emission we see in the 2D position-velocity diagrams and the extracted 1D profiles further suggest that not all [OII] emission originates from the co-rotating ISM. In addition, line blending of the [OII] doublet complicates the extraction of kinematic information, due to degeneracies between the line amplitudes, widths, and centroids when no other prominent emission lines are present, as is typical for LEGA-C galaxies at $z>0.9$. 
This is similar when comparing to LEGA-C gas (blue circles), but here we also note that the gas velocity measurements agree well for the three non-interacting galaxies where the LEGA-C spectrum includes strong Balmer lines (large symbols with green circles).\nBased on a two-sample Kolmogorov-Smirnov statistic, only the maximum velocity of stars differs from the H$\\alpha$ $v_{\\rm max}$ at more than $1\\sigma$.\nIn general, lower amplitudes in rotation velocity for stars compared to gas are expected based on $z=0$ data (see Section~\\ref{s:intro}).\n\n\\subsection{Velocity dispersions}\\label{s:comp_s}\nIn the right panels of Figure~\\ref{f:measures1} we show the corresponding plots for the outer weighted mean observed velocity dispersion ($\\sigma_{\\rm out}$), and the (mean) velocity dispersion at the outermost radius common to both data sets ($\\sigma_{\\rm rmax,both}$). On average, the H$\\alpha$ dispersion measurements are lower than the LEGA-C measurements. \nIn particular, the stellar velocity dispersions are larger than the H$\\alpha$ values by about a factor of two, and their distribution is shifted towards higher values at more than $2\\sigma$ significance based on a two-sample Kolmogorov-Smirnov statistic (see also Table~\\ref{t:statobs}). \nIn addition, there is some indication that, when measured at the same radius, the difference between stellar and H$\\alpha$ velocity dispersions is larger for systems with higher stellar velocity dispersion.\nIn general, higher disc velocity dispersions for stars compared to gas are also expected based on $z=0$ data, as discussed in Section~\\ref{s:intro}.\n\n\n\\begin{table}\n \\centering\n \\caption{Mean difference of the logarithm, log(KMOS$^{\\rm 3D}$\/LEGA-C), and corresponding standard deviation for various kinematic quantities, comparing KMOS$^{\\rm 3D}$\\, H$\\alpha$ to LEGA-C stars and gas, respectively, averaged over galaxies for which a dynamical modelling is possible. 
The quantities are defined in Section~\\ref{s:measure} and individual measurements are shown in Figures~\\ref{f:measures1} and \\ref{f:measures2} (large symbols).}\n \\label{t:statobs}\n\\begin{tabular}{lcccc}\n & \\multicolumn{2}{c}{stars} & \\multicolumn{2}{c}{gas} \\\\\n Quantity [dex] & mean & std. dev. & mean & std. dev. \\\\\n\\hline\n $v_{\\rm max}$ & 0.13 & 0.24 & 0.13 & 0.12 \\\\\n $v_{\\rm rmax,both}$ & 0.16 & 0.40 & 0.21 & 0.46 \\\\\n $\\sigma_{\\rm out}$ & -0.30 & 0.20 & -0.15 & 0.14 \\\\\n $\\sigma_{\\rm rmax,both}$ & -0.26 & 0.27 & -0.09 & 0.17 \\\\\n $v_{\\rm max}\/\\sigma_{\\rm out}$ & 0.42 & 0.25 & 0.28 & 0.20 \\\\\n $v_{\\rm rms}$ & -0.05 & 0.20 & 0.02 & 0.10 \\\\\n $v_{\\rm c,max}$ & -0.08 & 0.19 & 0.02 & 0.11 \\\\\n $S_{0.5, \\rm max}$ & -0.10 & 0.18 & -0.01 & 0.10 \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Rotational support}\\label{s:comp_vs}\nIn the left panel of Figure~\\ref{f:measures2} we plot the ratio of maximum LOS velocity and outer LOS velocity dispersion ($v_{\\rm max}\/\\sigma_{\\rm out}$), as defined in Section~\\ref{s:measure}. We stress again that these values are not derived from modelling, but have been measured from fixed slits after PSF matching. That is, the LSF is accounted for, but not inclination or PSF effects, although the latter should be effectively the same for our KMOS$^{\\rm 3D}$\\, and LEGA-C extractions, as discussed in Section~\\ref{s:kmoskin}. 
\nOverall, the KMOS$^{\\rm 3D}$\\, measurements suggest a stronger rotational support in the star-forming ionised gas phase, possibly indicating that the H$\\alpha$ line emission originates from a more disc-like structure, or that it is less affected by non-circular motions (e.g.\\ compared to [OII]).\nThe difference between the H$\\alpha$ and the stellar measurements is statistically significant at more than $2\\sigma$, and that between the H$\\alpha$ and the LEGA-C gas measurements at more than $1\\sigma$.\n\nIn addition, we compare several combinations of observed velocity and observed velocity dispersion in the right-hand panels of Figure~\\ref{f:measures2}, as described in Section~\\ref{s:measure}. The combination of velocity and velocity dispersion into a common probe of the galactic potential results in more similar estimates on average between the KMOS$^{\\rm 3D}$\\, and LEGA-C data: both the average offsets and the scatter are reduced (see also Table~\\ref{t:statobs}).\nWe find the best average agreement between stellar and H$\\alpha$ data for $v_{\\rm rms}$ (second panel in Figure~\\ref{f:measures2}). \nWe note that \\cite{Bezanson18b} find comparable {\\it integrated} velocity dispersions for ionised gas and stars within the LEGA-C survey, in qualitative agreement with our result. \n\n\n\\subsection{Implications for compilations of ionised gas kinematics}\\label{s:compilations}\nThe differences in ionised gas velocities and velocity dispersions between the KMOS$^{\\rm 3D}$\\, and LEGA-C extractions, where the latter are mostly dominated by [OII] emission, serve as a caution in the combination of samples with different emission lines. The somewhat lower velocities and higher velocity dispersions measured from [OII] compared to H$\\alpha$ or other Balmer lines might motivate a revision of literature compilations for the study of galaxy gas kinematics evolution, such as the Tully-Fisher relation \\citep{Tully77}, or gas velocity dispersion. 
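The combined kinematic estimators compared above can be sketched numerically. The snippet below assumes the common definitions $v_{\rm rms}=\sqrt{v^2+\sigma^2}$ and $S_{0.5}=\sqrt{0.5\,v^2+\sigma^2}$ (the exact definitions adopted in Section~\ref{s:measure} may differ), uses made-up velocity values, and is an illustration of the log-offset statistics of Table~\ref{t:statobs}, not the survey pipeline:

```python
import numpy as np

# Illustrative sketch (not the paper's pipeline): combine LOS velocity and
# velocity dispersion into v_rms = sqrt(v^2 + sigma^2) and
# S_0.5 = sqrt(0.5 v^2 + sigma^2), then compute the mean and standard
# deviation of log10(KMOS3D / LEGA-C) offsets, as reported in Table t:statobs.
# All numerical values below are made up for illustration.

def v_rms(v, sigma):
    return np.sqrt(np.asarray(v)**2 + np.asarray(sigma)**2)

def s05(v, sigma):
    return np.sqrt(0.5 * np.asarray(v)**2 + np.asarray(sigma)**2)

def log_offset_stats(q_k3d, q_legac):
    """Mean and std. dev. of log10(KMOS3D / LEGA-C) for a kinematic quantity."""
    dlog = np.log10(np.asarray(q_k3d) / np.asarray(q_legac))
    return dlog.mean(), dlog.std()

# Hypothetical H-alpha (KMOS3D) and stellar (LEGA-C) measurements in km/s:
v_ha, sig_ha = np.array([180.0, 220.0, 150.0]), np.array([40.0, 55.0, 35.0])
v_st, sig_st = np.array([130.0, 160.0, 110.0]), np.array([80.0, 100.0, 75.0])

mean_s05, std_s05 = log_offset_stats(s05(v_ha, sig_ha), s05(v_st, sig_st))
```

Because the two tracers trade rotation against dispersion while probing the same potential, such combined quantities show smaller offsets than either $v$ or $\sigma$ alone, mirroring the reduced offsets and scatter noted above.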
\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS3_05062_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS4_06487_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS4_03493_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS3_05444_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS3_18434_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\includegraphics[width=0.33\\textwidth]{figures\/COS3_26546_mcum_sph_JAMfine-eps-converted-to.pdf}\n\t\\caption{Comparison of cumulative mass profiles from H$\\alpha$ dynamical mass models and JAM. The JAM estimates by \\citet{vHoudt21} are shown as golden diamonds every kiloparsec out to 10~kpc, and at $R_{e,\\rm F814W}$, and the KMOS$^{\\rm 3D}$ best-fit enclosed total mass is shown as a blue line, with lighter shading indicating one and two standard deviations as constrained by the full MCMC chains (60000 realisations). The vertical dashed grey line marks $R_{e,\\rm F814W}$, and the golden and blue arrows indicate the projected extent of the stellar and H$\\alpha$ kinematic data, respectively. Overall, the agreement between the JAM model estimates and the KMOS$^{\\rm 3D}$ model estimates is very good, demonstrating that the total mass distribution can be robustly inferred from different modelling techniques and data sets, as long as data quality allows. 
For galaxy COS4\\_03493-M4\\_121150 (top right), the JAM model overestimates the dynamical mass (see Section~\\ref{s:mcum} for details).}\n\t\\label{f:mcum}\n\\end{figure*} \n\nIn fact, the discrepancy in the zero-point offset of the stellar mass Tully-Fisher relation at $z\\sim1$ between \\cite{Miller11, Miller12} and \\cite{Tiley16, Uebler17} could be partly due to the use of different tracers.\\footnote{\nIn addition, other important reasons for such differences have been identified \\citep[see e.g.\\ Appendix~A by][]{Uebler17}: inclusion of galaxies where the peak velocity is not reached by the data, lack of a correction for beam-smearing, or sample selection effects based on e.g.\\, $v_{\\rm rot}\/\\sigma_0$ cuts.}\nThe measurements by \\cite{Miller11, Miller12} include or are based on [OII] emission, while \\cite{Tiley16, Uebler17} target H$\\alpha$. A zero-point difference of about $-0.3$~dex is found between those studies, with a corresponding offset in velocity of $\\sim0.1$~dex. Our results suggest that for a velocity of 50~km\/s (200~km\/s), a systematic velocity offset of up to 0.2~dex (0.05~dex) could be solely due to the use of different gas tracers, potentially resolving the disagreement between those studies.
Any systematic difference between ionised gas velocity dispersion measured from H$\\alpha$ {\\it vs.\\,} [OII] or also [OIII], which is known to typically have a higher excitation contribution from narrow-line AGN than H$\\alpha$ \\citep[e.g.][]{Kauffmann03, DaviesRL14}, could affect evolutionary trends.\\footnote{\nWe note that studies of nearby giant HII regions typically find lower velocity dispersions for [OIII] compared to H$\\alpha$, and this has been linked to [OIII] originating from denser regions more deeply embedded in HII regions \\citep[e.g.][]{Hippelein86}. \\cite{Law22} find a correlation of $\\sigma_{\\rm [OIII]}\/\\sigma_{\\rm H\\alpha}$ with SFR for galaxies in the MaNGA survey \\citep{Bundy15}. In their work, velocity dispersions measured from [OIII] are higher relative to H$\\alpha$ for SFR$\\gtrsim1~M_{\\odot}\/yr$. Massive main-sequence galaxies at $z\\sim1-2$ have typical SFRs of $10-100~M_{\\odot}\/yr$ \\citep[e.g.][]{Whitaker14}. \\cite{Law22} also find [OII] velocity dispersions to be systematically higher compared to measurements based on H$\\alpha$.}\nIndeed, several surveys including [OII] or [OIII] emission lines have average intrinsic velocity dispersion values above the relation derived by \\cite{Uebler19}. However, for the more challenging measurement of the velocity dispersion, the situation is further complicated by the different methodologies used to account for beam-smearing in those studies.\n\nIn general, considering the full 1D profiles, we find that (i) stellar velocities reach lower amplitudes and average disc stellar velocity dispersions are higher compared to the ionised gas kinematics, reminiscent of local Universe findings; (ii) stellar velocity dispersion profiles, and ionised gas dispersion profiles dominated by [OII]$\\lambda\\lambda3726,3729$ emission, are often more asymmetric than those from H$\\alpha$; (iii) the correspondence between the KMOS$^{\\rm 3D}$\\, H$\\alpha$ data and the LEGA-C emission line data is better for LEGA-C spectra including Balmer lines (green circles in Figures~\\ref{f:measures1} and \\ref{f:measures2}).\n\nOverall, more high-quality data would be beneficial to characterise the differences in ionised gas kinematics provided by different tracers for the same galaxies. \nUpcoming data from the {\\it James Webb Space Telescope} ({\\it JWST}), enabling H$\\alpha$ studies up to $z\\sim7$, together with extensions of $z<3$ ground-based kinematic studies to multiple emission lines with IFUs such as ERIS, MUSE, and KMOS, will provide important references.\n\n\n\n\\section{Dynamical Masses}\\label{s:mdyn}\n\nWe now proceed with a comparison of dynamical mass measurements from the KMOS$^{\\rm 3D}$\\, H$\\alpha$ data and the LEGA-C stellar kinematic data. \nIn contrast to the previous section, where we matched the observing conditions between KMOS$^{\\rm 3D}$\\, and LEGA-C data, we now use the native spatial and spectral resolution of the data to build the best possible dynamical models based on H$\\alpha$ and stars. 
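For context on the two classes of mass estimates compared in this section, the simpler virial estimate from an integrated stellar velocity dispersion can be sketched as follows. The virial coefficient $K=5$ and the function name are ours and purely illustrative; the actual LEGA-C calibration ties the coefficient to galaxy structure (Section~\ref{s:mdynlgc}):

```python
# Minimal sketch of a virial dynamical-mass estimate from an integrated stellar
# velocity dispersion, M_dyn = K * sigma^2 * R_e / G. The coefficient K = 5 is
# a generic illustrative choice; published calibrations make K depend on
# structure (e.g. Sersic index and projected axis ratio).

G_KPC = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def virial_mass(sigma_kms, re_kpc, K=5.0):
    """Dynamical mass in solar masses from sigma [km/s] and R_e [kpc]."""
    return K * sigma_kms**2 * re_kpc / G_KPC

m_dyn = virial_mass(200.0, 5.0)  # a few 10^11 M_sun for these example values
```

Resolved approaches such as JAM or the H$\alpha$ mass models replace the single coefficient with fits to the full kinematic profiles, which is one reason the dispersion-based estimates show larger scatter in the comparisons below.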
\n\n\\subsection{Cumulative total mass profiles based on \\texorpdfstring{H$\\alpha$}{Halpha} and stars} \\label{s:mcum}\n\nWe begin with a comparison of cumulative mass profiles from JAM models and our best-fit H$\\alpha$ dynamical mass models (see Sections~\\ref{s:mdynk3d} and \\ref{s:mdynlgc}). Figure~\\ref{f:mcum} shows the cumulative mass profiles of the six galaxies in our sample for which the data quality is high enough in both stars and H$\\alpha$ to construct spatially resolved mass models. The JAM measurements (golden diamonds with error bars indicating one standard deviation) are shown every kiloparsec out to 10~kpc, and at $R_{e,\\rm F814W}$, and the KMOS$^{\\rm 3D}$\\, models are shown as blue lines, with lighter shading indicating one and two standard deviations, respectively.\n\nIt is remarkable that, in most cases, despite the different techniques, model inputs, and tracers, the constraints on the enclosed mass and its shape are in agreement. This comparison shows that both stars and H$\\alpha$ at $z\\sim1$ constrain the same total mass distribution over a large range of radii, when high-quality data suitable for dynamical modelling are available. \nWe note that the uncertainties for the H$\\alpha$ and JAM models are not directly comparable, since the former have fewer free parameters. Moreover, the radial extent out to which the model is constrained by the data is larger for H$\\alpha$ in all cases discussed here (golden and blue arrows in Figure~\\ref{f:mcum} for LEGA-C and KMOS$^{\\rm 3D}$, respectively), further reducing the model uncertainty.\n\nFor four of the six galaxies (left panels), the dynamical models agree within their uncertainties from 1~kpc to (at least) 10~kpc, covering a range of $1-2.2~R_{e,\\rm F814W}$ ($1.8-2.6~R_{e,\\rm F160W}$). 
\nFor one galaxy, COS4\\_25353-M1\\_139825 (bottom right), the models agree within their uncertainties from 4~kpc to (at least) 10~kpc, while in the central 3~kpc the stellar model yields higher dynamical masses relative to the H$\\alpha$ model. This galaxy is seen almost face on, with a difference between the H$\\alpha$ kinematic major axis and the F814W position angle of $26.5^{\\circ}$. This is also one of the objects with strong [NeV] emission in the central region. We speculate that emission from the AGN could bias the light-weighted estimates of the central density for both models.\n\nThere is only one galaxy for which the JAM estimates and the KMOS$^{\\rm 3D}$\\, estimates are significantly different over a large range in radius, COS4\\_03493-M4\\_121150 (top right). At $R_{e,\\rm F814W}$, the JAM measurement is higher by $\\Delta M_{\\rm dyn}=0.35$~dex compared to the H$\\alpha$ model. For this highly inclined galaxy ($i\\approx68-84^\\circ$), the kinematic major axis and the F814W and F160W position angles all align within $1^{\\circ}$. However, the F814W structural parameters indicate a high S\u00e9rsic index ($n_{S,\\rm F814W}=5.1$) and a large disc ($R_{e,\\rm F814W}=8.2$~kpc). Yet, adopting structural parameters from F814W imaging for the H$\\alpha$ model has negligible effect on the dynamical mass constraints. Instead, it is likely that JAM fits a high $M\/L$ to the bright and higher-$S\/N$ bulge component, leading to an overestimate of the mass in the extended disc (see also discussion in Section~\\ref{s:multiple}). \n\n\n\\subsection{Comparison to measurements at \\texorpdfstring{$R_{e,\\rm F814W}$}{Re,F814W} based on integrated stellar velocity dispersion}\\label{s:compmdyn}\n\nThe agreement between H$\\alpha$ dynamical mass models and measurements based on integrated stellar velocity dispersion is not as good. 
As described in Section~\\ref{s:mdynlgc}, \\cite{vdWel21, vdWel22} have utilised the JAM results for LEGA-C to re-calibrate virial mass measurements based on the integrated stellar velocity dispersion, which is available for a larger number of galaxies. In Figure~\\ref{f:mdyn} we compare dynamical masses at $R_{e,\\rm 814W}$ from our H$\\alpha$ models to LEGA-C estimates based on $\\sigma_{\\star,\\rm vir}$ (purple symbols) for ten galaxies. We choose this radius as it allows for a straightforward comparison of the KMOS$^{\\rm 3D}$\\, models to the LEGA-C $M_{\\rm vir}$ values, whose calibration by \\cite{vdWel22} applies to star-forming galaxies with $\\log({\\rm sSFR})>-11$ and $n_S\\leq2.5$. \n\nDue to the small sample size, and the duplicate observations and two methods shown for LEGA-C, we concentrate on standard deviations between the various measurement sets to further quantify our results, as listed in Table~\\ref{t:stddev}. Considering all observational pairs for which LEGA-C estimates based on integrated velocity dispersion exist, we find a standard deviation of 0.24~dex between the LEGA-C and KMOS$^{\\rm 3D}$\\, dynamical mass estimates. The agreement between KMOS$^{\\rm 3D}$\\, and LEGA-C JAM is better, with a standard deviation of 0.13~dex (albeit for a sample of only six). \nWe note that the reduction in scatter from 0.24 to 0.13 dex is marginally significant ($1.2\\sigma$). \nIf we further exclude the JAM measurement of galaxy COS4\\_03493-M4\\_121150 (see Section~\\ref{s:mcum}), we find a standard deviation of 0.07~dex when comparing to the KMOS$^{\\rm 3D}$\\, estimates. 
In this case, the reduction in scatter relative to the comparison of KMOS$^{\\rm 3D}$\\, models and LEGA-C models based on integrated velocity dispersion has a significance of $2.3\\sigma$.\n\nOverall, the discrepancy between KMOS$^{\\rm 3D}$\\, and LEGA-C $M_{\\rm dyn}$ estimates based on integrated stellar velocity dispersion in our sample is larger than the independent estimate of uncertainties from LEGA-C duplicate observations ($\\sigma_{\\rm Mvir,dupl}=0.14$), but comparable to the independent estimate of uncertainties from different methods within LEGA-C to determine dynamical mass ($\\sigma_{\\rm Mvir~vs~JAM}=0.24$). \nFor the full LEGA-C survey, the scatter between $M_{\\rm vir}$ and JAM measurements for SFGs is lower than our value, with $\\sigma_{\\rm Mvir~vs~JAM}=0.16$ \\citep{vdWel22}. This suggests that our sample includes some outliers in the $M_{\\rm vir}$-to-$M_{\\rm JAM}$ calibration by \\cite{vdWel22}.\n\n\n\\begin{table}\n\t\\centering\n\t\\caption{Standard deviation of dynamical mass discrepancy $\\Delta M_{\\rm dyn}$ for various subsets of KMOS$^{\\rm 3D}$\\, and LEGA-C data.}\n\t\\label{t:stddev}\n\t\\begin{tabular}{lcl} \n\t\t\\hline\n\t\tcomparison & std. dev. & sample size\\\\\n\t\t& [dex] & \\\\\n\t\t\\hline\n\t\tLEGA-C$_{\\rm JAM}$, KMOS$^{\\rm 3D}$\\, & 0.13 & 6 \\\\\n\t\tLEGA-C$_{\\rm Mvir}$, KMOS$^{\\rm 3D}$\\, & 0.24 & 14 (incl. duplicates)\\\\\n\t\tLEGA-C$_{\\rm Mvir}$, duplicates & 0.14 & 4 \\\\\n\t\tLEGA-C$_{\\rm Mvir}$, LEGA-C$_{\\rm JAM}$ & 0.24 & 6 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\n\\subsection{Notes on LEGA-C duplicate observations and \\texorpdfstring{$M_{\\rm dyn}$}{Mdyn} estimates from multiple techniques}\\label{s:multiple}\n\nFigure~\\ref{f:mdyn} shows multiple LEGA-C dynamical mass measurements for several objects. \nFour galaxies in our dynamical mass sample have been observed with two different masks in LEGA-C, three of which with a different slit orientation. 
\nThe $M_{\\rm vir}$ estimates of these duplicate observations agree with each other within the uncertainties for all but one case. In this latter case, the duplicate observations have comparable $S\/N$, but the observation that is also in agreement with the KMOS$^{\\rm 3D}$\\, measurement is better aligned with the kinematic major axis. For all other cases, the duplicate observation with higher $S\/N$ is in better agreement with the KMOS$^{\\rm 3D}$\\, measurement. \nThis conforms to the expectation that, in the absence of asymmetric motions, $S\/N$ is more important than alignment for integrated measurements, which are centrally weighted.\nThis is encouraging not only for existing ground-based surveys, but also for upcoming data from {\\it JWST} NIRSpec Micro-Shutter Assembly observations.\n\nFor most cases, the JAM measurements are in better agreement with the KMOS$^{\\rm 3D}$\\, modelling than the $M_{\\rm dyn}$ estimates based on $\\sigma_{\\star,\\rm vir}$. As discussed in Section~\\ref{s:mcum}, for galaxy \nCOS4\\_03493-M4\\_121150 JAM overestimates the dynamical mass within $R_{e,\\rm F814W}$ compared to the KMOS$^{\\rm 3D}$\\, model ($\\Delta M_{\\rm dyn}=0.35$~dex), but also compared to the LEGA-C measurement from $\\sigma_{\\star,\\rm vir}$ ($\\Delta M_{\\rm dyn}=0.26$~dex). \nFor this galaxy, spatially resolved modelling of the ionised gas from the LEGA-C survey exists as well \\citep{Straatman22}. Prominent emission lines in this LEGA-C slit spectrum are H$\\beta$ and [OIII], and the correspondence of the 2D KMOS$^{\\rm 3D}$\\, H$\\alpha$ pseudo-slit data and the LEGA-C H$\\beta$ data is good. The $M_{\\rm dyn}(0.6$. 
These trends suggest that apparent misalignments between ionised gas kinematics and stellar light are not primarily due to intrinsic physical differences between the warm gas and stellar distributions in galaxies, a possible consequence of e.g.\\ misaligned accretion, but are largely due to limitations of photometric measurements for face-on systems. We find a comparable trend in our sample. \n\nThis motivates us to explore in more detail possible correlations between dynamical mass discrepancy and measures of position angle. Here, following \\cite{Wisnioski15} and the misalignment diagnostic $\\Psi$ by \\cite{Franx91}, we define $\\sin(\\Psi_{\\rm F814W,kin}) = |\\sin({\\rm PA_{F814W}}-{\\rm PA_{\\rm kin}})|$, where PA$_{\\rm kin}$ is measured from the H$\\alpha$ IFS data. We find that those galaxies with larger mismatches in their dynamical mass estimates also have stronger kinematic misalignments.\n\nFor $M_{\\rm dyn}$ measurements based on JAM models (or the spatially resolved models by \\citealt{Straatman22}), we would expect kinematic PAs that are more inclined with respect to the slit orientation than the photometric PA to result in underestimated dynamical masses from the stellar data, and kinematic PAs that are closer to the slit orientation than the photometric PA to result in overestimated dynamical masses. We find no indication of a corresponding trend based on the JAM measurements only. 
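The misalignment diagnostic used here can be sketched in a few lines; the function name is ours, and the $|\sin|$ form makes the measure insensitive to the $180^{\circ}$ ambiguity of position angles:

```python
import numpy as np

# Sketch of the kinematic misalignment diagnostic
# sin(Psi) = |sin(PA_phot - PA_kin)| (Franx et al. 1991 convention).
# Position angles are in degrees; the |sin| form is invariant under
# PA -> PA + 180 deg, the intrinsic ambiguity of a position angle.

def sin_misalignment(pa_phot_deg, pa_kin_deg):
    dpa = np.radians(pa_phot_deg - pa_kin_deg)
    return np.abs(np.sin(dpa))

# Example: a 26.5-degree photometric-vs-kinematic offset, as quoted in the
# text for one of the sample galaxies:
psi = sin_misalignment(26.5, 0.0)
```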
\nConsidering all measurement pairs, we find no significant correlation ($\\rho_S=0.22$; $\\sigma_\\rho=0.9$).\n\nFurther, we find at most a weak correlation between mismatches of the H$\\alpha$ kinematic major axis and the LEGA-C slit position for the $M_{\\rm dyn}$ discrepancies based on integrated stellar velocity dispersion measurements ($\\rho_S=0.31$; $\\sigma_\\rho=1.1$).\nWe also find no (significant) correlation of the $M_{\\rm dyn}$ discrepancy with the F160W or F814W S\u00e9rsic index measurements, their difference, or estimates of the central 1~kpc stellar surface density.\n\n\n\\subsubsection{Kinematic properties}\\label{s:corr_kin}\n\nWe consider correlations of dynamical mass discrepancy with kinematic quantities, specifically the H$\\alpha$ circular velocity at $R_{e, \\rm F814W}$ and the rotational support $v_{\\rm rot}\/\\sigma_0$. These quantities are based on the best-fit dynamical models of the H$\\alpha$ data from the KMOS$^{\\rm 3D}$\\, survey. We find a trend with circular velocity, as illustrated in the left panel of Figure~\\ref{f:dmdyn3}. This shows that the difference in $M_{\\rm dyn}$ estimates from stars and gas is lower for galaxies with higher circular velocities and stronger dynamical support from rotation in the ionised gas phase (middle panel). Possible explanations could be that systems with higher rotational support are closer to dynamical equilibrium (dynamical equilibrium is the basic assumption for all dynamical modelling discussed in this work), or that the pressure support corrections for the H$\\alpha$ data are underestimated (cf.\\ right panel). 
The pressure support correction chosen in this work following the self-gravitating disc description by \\cite{Burkert10} is stronger than other corrections adopted in the literature, so the latter explanation is unlikely \\citep[see e.g.][]{Bouche22, Price22}.\n\nNo corresponding significant correlations have been found in the study by \\cite{Straatman22} comparing slit-based estimates for both ionised gas and stars, where the kinematic major axis is unknown.\n\nWe note that for two galaxies we can only robustly constrain upper limits on the intrinsic velocity dispersion $\\sigma_0$ from our models due to the spectral resolution of KMOS \\citep[see Section 3.4 by][for details]{Uebler19}. If we remove these two galaxies from our calculation of the correlation coefficients, we find $\\rho_S=0.18$ and $\\sigma_\\rho=0.4$ for the correlation between $\\Delta M_{\\rm dyn}$ and $\\sigma_0$, and $\\rho_S=-0.90$, $\\sigma_\\rho=2.2$ and $\\rho_S=-0.68$, $\\sigma_\\rho=1.7$ for the correlations with $v_{\\rm circ}(r=R_{e,{\\rm F814W}})$ and $v_{\\rm rot}(r=R_{e,{\\rm F814W}})\/\\sigma_0$, respectively.\n\n\\subsubsection{Global physical properties}\n\nConsidering physical properties related to feedback strength such as SFR, sSFR, $\\Sigma_{\\rm SFR}$, AGN activity, or outflow signatures, we find no (significant) correlations with dynamical mass offset. This suggests that feedback does not play a major role in systematically affecting the dynamical mass estimates differently for H$\\alpha$ and stars for galaxies in our sample \\citep[see also][]{Straatman22}. 
However, we remind the reader about the substantial effect of AGN tracer emission line species such as [NeV] on the ionised gas galaxy kinematics extracted from the LEGA-C spectra.\n\nWe also caution about the potential impact of the line broadening in integrated line emission spectra induced by the presence of strong outflows (and of significant disc velocity dispersion), as discussed by \\cite{Wisnioski18} for compact massive SFGs. This underscores the benefit of spatially-resolved emission line kinematic modelling as performed here.\n\nAlthough (circular) velocity and galaxy mass are connected through the TFR, we find no significant correlation between $\\Delta M_{\\rm dyn}$ and galaxy mass (see Table~\\ref{t:corr}). To some extent, this can be explained by the scatter in the $z\\sim1$ TFR \\citep[e.g.][]{Uebler17}. However, in our sample the lack of correlation is also driven by the massive, compact galaxy COS4\\_08096 having a large dynamical mass discrepancy. If we exclude this galaxy from the calculations, we still find only weak trends, but more along the expected direction for $\\log(M_{\\star})$ ($\\rho_S=-0.28$, $\\sigma_\\rho=1.1$) and for $\\log(M_{\\rm bar})$ ($\\rho_S=-0.29$, $\\sigma_\\rho=1.2$).\\\\\n\nIn summary, despite the small sample size, our investigation of correlations with dynamical mass discrepancy reveals interesting trends, which should be followed up in future studies.\nWe find mild correlations in particular with effective radius, projected axis ratio, dynamical support in the ionised gas phase, and with kinematic misalignment. On the one hand, this confirms the expectation that it is more difficult to constrain robust dynamical masses for galaxies that are smaller, more face-on, and with higher dispersion support \\citep[see][for a detailed study]{Wisnioski18}. 
On the other hand, it stresses the importance of spatially-resolved kinematic information to build accurate mass models.\n\n\n\n\\section{Discussion and Conclusions}\\label{s:conclusions}\n\nWe have compared kinematics and inferred dynamical masses from ionised gas and stars in 16 star-forming galaxies at $z\\sim1$, common to the KMOS$^{\\rm 3D}$\\, \\citep{Wisnioski15, Wisnioski19} and LEGA-C \\citep{vdWel16, vdWel21, Straatman18} surveys. Our main conclusions are as follows:\n\n\\begin{itemize}\n\n \\item Comparing stellar and H$\\alpha$ kinematic profiles, we find that on average rotation velocities are higher by $\\sim45$ per cent and velocity dispersions are lower by a factor of two for H$\\alpha$ relative to stars, reminiscent of trends observed in the local Universe (Sections~\\ref{s:comp_v} and \\ref{s:comp_s}).\\\\\n \n \\item We measure higher rotational support in H$\\alpha$ compared to [OII]. This could explain systematic differences found in literature studies of e.g.\\ the Tully-Fisher relation when based only on $v_{\\rm rot}$ without accounting for pressure support (Sections~\\ref{s:comp_vs} and \\ref{s:compilations}).\\\\\n \n \\item We find excellent agreement between cumulative total mass profiles constrained from our {\\sc{dysmal}} models using H$\\alpha$ kinematics and from JAM models fitted to the stellar kinematics, out to at least 10~kpc for five of six galaxies (average $\\Delta M_{\\rm dyn}(R_{e,\\rm F814W})<0.1$~dex, standard deviation 0.07~dex; Section~\\ref{s:mcum}). This shows that dynamical masses at $z\\sim1$ can be robustly measured from modelling spatially resolved observations, either of stellar or ionised gas kinematics. \\\\\n\n \\item Simpler dynamical mass estimates based on integrated stellar velocity dispersion are less accurate (standard deviation 0.24~dex; Section~\\ref{s:compmdyn}). 
\\\\\n \n \\item We investigate correlations of dynamical mass offset with galaxy properties and find larger offsets e.g.\\ for galaxies with stronger misalignments of photometric and H$\\alpha$ kinematic position angles (Section~\\ref{s:corr}). This highlights the value of 2D spatially resolved kinematic information in inferring dynamical masses. \n \n\\end{itemize}\n\nOur comparison of the kinematics of stars and ionised gas reveals marginally significant differences in their resolved velocities and velocity dispersions.\nLower rotational support, lower LOS disc velocities, and higher LOS disc velocity dispersions of stars relative to the star-forming gas phase are also seen in modern cosmological simulations (\\citealp{Pillepich19}; C. Lagos, priv. comm.; E.~Jimenez et al., in prep.).\nA possible scenario explaining lower rotational support and higher dispersion in the stellar component is that the observed stars have been born {\\it in-situ} from gas with higher velocity dispersions \\citep[e.g.][]{Bird21}. \nThe redshift evolution of molecular gas disc velocity dispersion is still poorly constrained by available data \\citep[see][]{Uebler19}. However, if the mild evolution of ionised gas disc velocity dispersion with redshift is also indicative of the evolution of molecular gas velocity dispersion, processes in addition to secular evolution are likely required to explain the stellar velocity dispersion for the majority of galaxies in our sample.\nThis is further supported by the tentative evidence of a larger difference in H$\\alpha$ and stellar velocity dispersion for systems with higher stellar disc velocity dispersion, indicating a process that affects the stellar dynamics differently from the ionised gas, and plausibly at earlier times.\n\nIn general, the collision-less nature of stars allows for a variety of non-circular orbital motions. 
A higher fraction of low-angular momentum box orbits or $x$-tubes (rotation around the minor axis) can reduce the LOS velocity of stars \\citep[e.g.][]{Roettgers14}. The origin of such motions is plausibly connected to assembly history, where more frequent mergers in the past reduce the net angular momentum of the stellar component, in particular if their baryon content is dominated by stars \\citep[e.g.][]{Naab14}. \nSuch interactions could also contribute to disc heating, further increasing the velocity dispersion of the stars \\citep[e.g.][]{Grand16}. \nThis is in agreement with our finding of higher $v_{\\rm max}\/\\sigma_{\\rm out}$ measured from H$\\alpha$ compared to stars. \n\nTheoretically, the misaligned smooth accretion of gas can also result in different kinematics of gas and stars \\citep[e.g.][]{Sales12, Aumer13, Aumer14, Uebler14, vdVoort15, Khim21}. However, such processes typically reduce only the net angular momentum of the gas phase, at least initially. This would correspond to a reduction of $v_{\\rm max}\/\\sigma_{\\rm out}$ measured from the star-forming gas relative to the full stellar population, which is not observed in our data set.\n\nDeviating kinematic signatures in gas {\\it vs.\\ }stars could also be caused by feedback. \nThe imprints of stellar- and AGN-driven winds on galaxy emission line spectra at $z\\sim1-2$ are routinely observed \\citep[see e.g.][for an analysis including the KMOS$^{\\rm 3D}$\\, survey]{FS19}. Such feedback can bias disc kinematic measurements due to the difficulty of disentangling e.g.\\ galaxy rotation from outflows in low spectral resolution observations \\citep[see also][]{Wisnioski18}. \nAt least three galaxies in our sample show signatures of outflows in their H$\\alpha$ spectra, but we could not identify a systematic effect on the spatially-resolved disc kinematic measurements of H$\\alpha$ and stars presented in Section~\\ref{s:compasobs}. 
One of these galaxies (COS\\_19648-M1\\_134839) shows indications of a counter-rotating disc in the stellar {\\it vs.} ionised gas components; the difference in $\\log(M_{\\rm dyn})$ for this object is $\\sim0.15$~dex. \nHowever, we clearly see the impact of feedback processes on the kinematic signatures of specific emission lines (especially [NeV]) deviating from the main disc rotation in the LEGA-C spectra of two objects (see Section~\\ref{s:lgckin} and Appendix~\\ref{a:nev}). \n\nOur results on dynamical mass estimates show that data quality and methods play a role in existing differences in dynamical mass estimates of $z\\sim1$ galaxies. \nThe fact that we find better agreement between the KMOS$^{\\rm 3D}$\\, {\\sc{dysmal}} and the LEGA-C JAM dynamical mass estimates, as compared to the LEGA-C estimates based on integrated velocity dispersion, demonstrates the advantage of detailed dynamical models leveraging the full structural information available over more approximate estimators. \nThe remarkable agreement between spatially resolved dynamical mass estimates from stars and H$\\alpha$, and from independent data sets, provides great confidence in our ability to probe the gravitational potential of $z\\sim1$ galaxies.\nIt further suggests that our implementation of the pressure support correction accounting for the turbulent motions in the ionised gas phase is adequate.\n\nAt the same time, the residual trends between KMOS$^{\\rm 3D}$\\, dynamical mass estimates with {\\sc{dysmal}} and LEGA-C dynamical mass estimates from integrated velocity dispersions, particularly with the major axis misalignment of F814W photometry and H$\\alpha$ kinematics, could be interpreted as signs of physical processes disturbing global equilibrium for some galaxies. \nA difference in position angle of gas and stars could stem from misaligned smooth accretion, but also from a disruptive merger event in the past \\citep[see e.g.][]{Khim21}. 
If the system has not yet reached a new equilibrium, this could be reflected in deviating dynamical mass estimates from the differently affected baryonic components.\nHowever, galaxies with large misalignment ($\\Delta{\\rm PA}>20^{\\circ}$) in our sample are also seen relatively face-on, indicating that the photometric PA measurements are more uncertain and any intrinsic misalignment is likely smaller. \n\nOverall, the dynamical mass measurements from LEGA-C stellar kinematics tend to be larger than the measurements from the KMOS$^{\\rm 3D}$\\, H$\\alpha$ kinematics by 0.12~dex on average \\citep[see also][]{Straatman22}. If dynamical mass measurements from stellar kinematics are systematically overestimated, this would reduce mass-to-light ratios inferred from such data and impact conclusions on the initial mass function of galaxies. It could also potentially impact the evolutionary study of the Fundamental Plane \\citep{Djorgovski87, Dressler87}. Larger comparison samples at $z>0$ are required to quantify any potential effect.\n\nLarger samples will also be necessary for a statistical assessment of the impact of physical processes on galaxy dynamics at this cosmic epoch. Of further interest would be the extension of our sample towards lower masses, where the shallower potential wells of haloes would allow feedback and accretion processes to have a larger impact on the host galaxy properties. Due to the smaller size of lower-mass galaxies, this would require higher spatial resolution observations than the data presented in this work. This could be achieved with instruments such as ERIS\/VLT, and in the future with HARMONI\/ELT, or GMTIFS\/GMT. Similarly, higher-resolution imaging providing better constrained structural parameters would help in building more accurate dynamical models.\nAt higher redshifts, higher accretion rates and shallower potential wells may cause larger and more frequent kinematic misalignments. 
This can be investigated through a combination of {\\it JWST}\/NIRCam imaging and {\\it JWST}\/NIRSpec IFS observations.\n\nFor a comprehensive assessment of baryonic kinematics and dynamics at $z\\sim1$, the high-quality data from the KMOS$^{\\rm 3D}$\\, and LEGA-C surveys would ideally be complemented by spatially resolved observations of another independent dynamical tracer, such as CO. Molecular gas potentially has lower disc velocity dispersion than stars and ionised gas, and is unaffected by extinction; dynamical masses inferred from its kinematics could therefore help to determine realistic uncertainties on dynamical masses, and improve our understanding of the role of correction factors and modelling assumptions required to infer dynamical masses from other baryonic phases. \n\n\n\\section*{Acknowledgements}\n\nH{\\\"U} thanks Sirio Belli, Luca Cortese, Claudia Lagos, and Joop Schaye for insightful discussions on aspects of this work.\nWe thank Amit Nestor for sharing dynamical mass estimates based on the work by \\cite{Nestor22}. We thank Claudia Lagos for sharing a comparison of LOS velocity dispersion from stars and star-forming gas based on calculations described in \\cite{Lagos18} utilising the {\\sc{eagle}} simulations \\citep{Schaye15}. \nH{\\\"U} gratefully acknowledges support by the Isaac Newton Trust and by the Kavli Foundation through a Newton-Kavli Junior Fellowship.\nCMSS acknowledges support from Research Foundation - Flanders (FWO) through Fellowship 12ZC120N.\nThis research made use of \\href{http:\/\/www.astropy.org}{Astropy}, a community-developed core Python package for Astronomy \\citep{astropy13, astropy18}, and of Photutils, an Astropy package for detection and photometry of astronomical sources \\citep{photutils}.\n\n\\section*{Data Availability}\n\nThe KMOS$^{\\rm 3D}$\\, data cubes used in this research are publicly available and accessible at \\url{http:\/\/www.mpe.mpg.de\/ir\/KMOS3D} \\citep{Wisnioski19}. 
The 1D spectra and $M_{\\rm vir}$ measurements from the LEGA-C survey are published by \\cite{vdWel21}. JAM models for the LEGA-C data are published by \\cite{vHoudt21}.\n\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}