diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzplt" "b/data_all_eng_slimpj/shuffled/split2/finalzplt" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzplt" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nCell-free massive MIMO is a new network architecture which has been gaining more attention recently as it has the potential to provide a high capacity per user and per unit area \\cite{7827017, 8422577}. As illustrated in Fig. \\ref{systemmodel}, the service area is not divided into separate cells, and the users are not associated with single base stations, but may communicate via multiple access points (APs) simultaneously. To enable this, the Access Points (APs) are distributed over the whole service area and are connected via fronthaul links to the Central Processing Unit (CPU) so that they can cooperate to detect user signals. \n\nDue to the large number of APs deployed in Cell-Free massive MIMO, distributed detection and precoding are often performed at the APs thus reducing the complexity and improving the scalability of channel state information (CSI) acquisition at the CPU. It also avoids the additional fronthaul load due to CSI transfer. However, this approach sacrifices significant achievable data rates in contrast to the centralized approach where the CSI is available at the CPU \\cite{8422865, 8417560, TVTcorrMBB}. In this case (on the uplink) maximum ratio combining (MRC) must be performed at the APs, and separately weighted estimates of each user's signal transmitted to the CPU. It is shown in [2] that the overall fronthaul load may then be higher, and then overall performance poorer, than the case where the quantized CSI and quantized user signals are transferred to the CPU. Moreover if the CSI is available at the CPU more effective detection algorithms such as zero forcing (ZF) may be used, further improving performance [3].\n\nA natural question is therefore whether one can provide CSI at CPU with low fronthaul requirement and with relative simple CSI acquisition. While there has been a number of studies about CSI acquisition and fronthaul load reduction in the context of Cloud Radio Access Networks (C-RAN) \\cite{7444125}, there are few on Cell-Free massive MIMO \\cite{TVTcorrMBB, 8445960}. For instance, the CSI acquisition strategies taking into account the limited fronthaul capacity for single-antenna APs has been investigated in \\cite{TVTcorrMBB}. In the case of multiple-antenna APs the work in \\cite{8445960} studied CSI acquisition at the CPU by utilizing sophisticated hybrid beamforming, where analog combining is performed prior to one-bit analog-to-digital conversions (ADCs).\n\n\\begin{figure}[!t]\n\\centering\n\\input{modelIlustration}\n\\caption{Illustration of Cell-Free Massive MIMO with $L$ access points, $N$ antenna per access point, $K$ users and finite fronthaul rate $C_l$.}\n\\label{systemmodel}\n\\end{figure}\n\nNevertheless, the above works do not explicitly address the channel correlation. For one-bit co-located massive MIMO the problem of channel estimation with correlation was studied among others in \\cite{Kim2018ChannelEF}. Since the real channel tends to be spatially correlated, in this paper we study CSI acquisition for cell-free massive MIMO with limited fronthaul capacity and assuming spatial correlation at multi-antenna APs. 
In this case, we use Vector Quantization (VQ) with a precision of only a small number of bits, so as to simultaneously exploit the channel correlation and meet the low bit requirement of the fronthaul. By applying vector quantization at the APs, we investigate the CSI acquisition strategy \\emph{Quantize-and-Estimate} (QE) based on the Bussgang theorem and its counterpart \\emph{Estimate-and-Quantize} (EQ). Our simulation results demonstrate that this few-bit vector quantization can exploit the channel correlation effectively, giving better performance in terms of Mean Squared Error (MSE) compared to utilizing Scalar Quantization (SQ) at individual antennas. Further, the QE scheme, which is relatively simple to implement at the APs, is shown to offer a significant performance improvement over EQ, particularly at low to moderate SNR.\n\nThe rest of the paper is organized as follows. In section \\ref{SM} the system model is described, including the spatial channel correlation model. We then describe the considered fronthaul compression technique in section \\ref{FC}. In section \\ref{CSI A}, we present our CSI acquisition schemes, where we make use of the vector quantization and Bussgang decomposition described in section \\ref{FC}. We then evaluate our schemes numerically in section \\ref{NR} and close with conclusions in section \\ref{Conclusion}.\n\n\\emph{Notation}: Roman letters, lower-case boldface letters and upper-case boldface letters are used to denote scalars, column vectors and matrices, respectively. The sets of all real and complex $M\\times N$ matrices are represented by $\\mathbb{R}^{M \\times N}$ and $\\mathbb{C}^{M \\times N}$, respectively. By $\\langle \\cdot, \\cdot\\rangle$ we denote the inner product, with $\\Vert \\cdot \\Vert$ as the corresponding vector norm or Frobenius norm. The expectation of a random variable is represented by $\\mathbb{E}\\{\\cdot\\}$. We denote the circularly symmetric complex Gaussian distribution with mean $\\mathbf{m}$ and covariance matrix $\\mathbf{\\Sigma}$ by $\\mathcal{CN}(\\mathbf{m}, \\mathbf{\\Sigma})$. We use $\\mathbf{I}_N$ for the $N\\times N$ identity matrix and $\\mathbf{1}_N$ for the all-one vector of dimension $N$. We denote the conjugate transpose by $(\\cdot)^H$. For a vector $\\mathbf{a}$, $\\text{diag}(\\mathbf{a})$ denotes a diagonal matrix whose diagonal elements are taken from $\\mathbf{a}$.\n\n\\section{System Model} \\label{SM}\nWe consider uplink transmission in a cell-free system \\cite{7827017}, where we have $K$ single-antenna users (UEs) and $L$ Access Points (APs) equipped with $N\\geq 1$ antennas. We fix the total number of AP antennas in the system at $M=LN$. The processing of the signals received at the APs is virtualized at the Central Processing Unit (CPU), which is connected to the $L$ APs by error-free fronthaul links which carry the signals in digitally encoded form. \n \n\\subsection{Channel Model}\nWe denote the channel between the $k$-th user and the $m$-th antenna of the $l$-th AP by $g_{mk}$ where $m=(l-1)N+1, \\dots, lN$ for $l=1,\\dots, L$, and $k=1, \\dots, K$. 
For a given $l$ and $k$ the channel is specified by the $N\\times 1$ vector $\\mathbf{g}_{lk}\\sim\\mathcal{CN}(\\mathbf{0}_N, \\mathbf{\\Sigma}_{lk})$, where $\\mathbf{\\Sigma}_{lk}\\in\\mathbb{C}^{N \\times N}$ is the covariance matrix including the large scale fading and the spatial correlation, given by\n\\begin{align}\n\\mathbf{\\Sigma}_{lk}=\\beta_{lk}\\mathbf{R}_{lk}.\n\\end{align} \nThe large scale fading $\\beta_{lk}$ is a path-loss dependent coefficient, whereas the correlation matrix $\\mathbf{R}_{lk}\\in\\mathbb{C}^{N \\times N}$ depends on the particular environment between AP and UE. In this case, we follow the local scattering model given in \\cite{SIG-093}, where any user $k$ at azimuth angle $\\theta$ to the AP $l$ is surrounded by scatterers causing correlation between the multipath signal components received at the antennas of the AP. Accordingly, the correlation can be specified by an angle of arrival $\\bar{\\theta}$ which is treated as a random variable with probability density function $f(\\bar{\\theta})$, and the entries of the correlation matrix $\\mathbf{R}_{lk}$ are then determined by\n\\begin{align}\n\\left[\\mathbf{R}_{lk}\\right]_{a,b}= \\int e^{j2\\pi d_H(a-b) \\sin(\\bar{\\theta})} f(\\bar{\\theta}) d\\bar{\\theta},\n\\end{align}\nwhere $d_H$ is the antenna spacing in wavelengths and $1\\leq a, b \\leq N$ are the antenna indices. Further, $\\bar{\\theta}$ can be expressed as $\\bar{\\theta}=\\theta+\\delta$, where $\\delta$ is a random angular spread with standard deviation $\\sigma_\\delta$.\n\nUsing the Karhunen-Loeve representation we can describe the correlated channel vector as\n\\begin{equation}\n\\mathbf{g}_{lk}=\\beta_{lk}^{1\/2} \\mathbf{U}_{lk} \\mathbf{\\Lambda}_{lk}^{1\/2} \\mathbf{h}_{lk} , \\label{gmk}\n\\end{equation}\nwhere the vector $\\mathbf{h}_{lk}\\sim\\mathcal{CN}(\\mathbf{0}_r, \\mathbf{I}_{r})$ models the small scale fading between the $k$-th user and the $l$-th AP. The matrix $\\mathbf{U}_{lk}\\in\\mathbb{C}^{N \\times r}$ with orthonormal columns and the diagonal matrix $\\mathbf{\\Lambda}_{lk}\\in\\mathbb{R}^{r \\times r}$ comprise, respectively, the eigenvectors and the associated non-zero eigenvalues of the correlation matrix $\\mathbf{R}_{lk}$ with rank $r$. The channel vector of the $k$-th user to all $L$ APs is then given by $\\mathbf{g}_{k}\\sim\\mathcal{CN}(\\mathbf{0}_M, \\mathbf{\\Sigma}_{k})$, where $\\mathbf{\\Sigma}_{k}=\\text{diag}\\left(\\mathbf{\\Sigma}_{1k}, \\dots,\\mathbf{\\Sigma}_{Lk}\\right)$. Further, we stack the channels from the $K$ users to all $L$ APs in the columns of the $M\\times K$ matrix $\\mathbf{G}=\\left[\\mathbf{g}_{1}, \\dots, \\mathbf{g}_{K}\\right]$, such that under the assumption of perfect fronthaul the received signal at the CPU can be modeled as\n\\begin{align}\n\\mathbf{y}=\\mathbf{G}\\mathbf{x}+\\mathbf{w},\n\\end{align} \nwhere $\\mathbf{x}\\in\\mathbb{C}^{K}$ is the channel input from all $K$ users and $\\mathbf{w}\\sim\\mathcal{CN}(\\mathbf{0}_M, \\mathbf{I}_{M})$ is the i.i.d. additive white Gaussian noise at the APs. Later, we will remove the assumption of perfect fronthaul and assume that the $l$-th fronthaul link connecting the $l$-th AP to the CPU can transmit quantized signals reliably at a maximum rate of $C_l$.\n\n\\section{Fronthaul Compression} \\label{FC}\nDue to the limited capacity of the fronthaul links and the high load of the digitally encoded signals, this data needs to be compressed for efficient transmission to the CPU. To simplify our analysis, we consider fronthaul links with equal capacity of $C_l=C$ bits for all $l \\in \\{1,\\dots,L\\}$. 
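As a side illustration of the correlation model above (which also describes the correlated inputs seen by the quantizer introduced next), the following Python sketch builds $\\mathbf{R}_{lk}$ by Monte Carlo averaging over the angular spread and draws a correlated channel vector according to (\\ref{gmk}). It is only an example, not the code used for the results: the half-wavelength spacing, the Gaussian spread and the sample sizes are assumptions made for the illustration.
\\begin{verbatim}
import numpy as np

def local_scattering_R(N, theta, sigma_delta, d_H=0.5, n_samples=100000, seed=0):
    # [R]_{a,b} = E[exp(j 2 pi d_H (a-b) sin(theta + delta))], delta Gaussian,
    # approximated here by a Monte Carlo average over the scattering angle
    rng = np.random.default_rng(seed)
    theta_bar = theta + rng.normal(0.0, sigma_delta, n_samples)
    phase = np.exp(1j * 2 * np.pi * d_H * np.sin(theta_bar))
    diff = np.arange(N)[:, None] - np.arange(N)[None, :]          # (a - b)
    return np.array([[np.mean(phase ** d) for d in row] for row in diff])

def draw_channel(R, beta, seed=1):
    # Karhunen-Loeve representation of eq. (gmk): g = sqrt(beta) U Lambda^(1/2) h
    rng = np.random.default_rng(seed)
    lam, U = np.linalg.eigh(R)
    keep = lam > 1e-12                                             # rank-r support
    h = (rng.standard_normal(keep.sum()) + 1j * rng.standard_normal(keep.sum())) / np.sqrt(2)
    return np.sqrt(beta) * U[:, keep] @ (np.sqrt(lam[keep]) * h)

# example: N = 4 antennas, user at 30 degrees azimuth, 10 degrees angular spread
R = local_scattering_R(4, np.deg2rad(30), np.deg2rad(10))
g = draw_channel(R, beta=1e-6)
\\end{verbatim}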
\n\n\\subsection{Vector Quantization}\nOur considered compression consists of vector quantization followed by fixed-rate lossless coding. At each AP a vector quantizer $Q$ is applied as interface to the fronthaul with \n\\begin{align}\nQ(\\mathbf{x})= \\sum_{i=0}^{\\mathcal{S}-1} q_i T_i(\\mathbf{x}), \\text{ where } \nT_i(\\mathbf{x})=\n\\begin{cases}\n1& \\text{ if } \\mathbf{x}\\in\\mathcal{C}_i\\\\\n0& \\text{ otherwise. } \n\\end{cases} \\label{VQ}\n\\end{align}\nWhenever the input vector $\\mathbf{x}\\in\\mathbb{R}^N$ falls into the cell $\\mathcal{C}_i$, the index i will be transmitted on the fronthaul link, and the reconstruction value $q_i$ taken from the codebook $\\mathcal{Q}=\\{q_i\\}_{i=0}^{\\mathcal{S}-1}\\subset\\mathbb{R}^N$ will be used at the CPU. The codebook size corresponds to the fronthaul capacity by $\\mathcal{S}=2^{C}$. For $N$-dimensional vector quantization we allocate $C\/N$ bits per dimension. Here, we keep $C\/N$ small, to one or two bits per dimension. For a complex-valued signal we quantize separately the real and imaginary part. We do this for the reason that the correlation affects the real and imaginary part of the channel independently.\n\nThe optimal codebooks can be found using the Linde Buzo Gray (LBG) algorithm for minimum mean squared error \\cite{Gersho:1991:VQS:128857}. This algorithm is the counter part of Lloyd algorithm for vector quantization, where the optimal codebook is obtained by alternating between finding the optimal partition by the nearest neighbour criterion and finding the optimal reconstruction values by the centroid condition. The contribution of our scheme compared to separate quantization is that we take the received signal from $N$ antennas at the AP as the input $\\mathbf{x}$ of our vector quantizer $Q$. By doing so we expect to adapt the codebooks to the spatial channel correlation.\n\n\\subsection{Bussgang Decomposition}\nThe quantizer $Q$ given in (\\ref{VQ}) is in general non-linear and the error $\\mathbf{e}\\triangleq\\mathbf{x}-Q(\\mathbf{x})$ resulting from the quantization process is correlated with the input vector $\\mathbf{x}$. However, using the Bussgang theorem \\cite{Bussgang52} we can express our quantizer as the following linear model\n\\begin{align}\n\\mathbf{x}_q=Q(\\mathbf{x})=\\mathbf{F}\\mathbf{x}+\\mathbf{d}, \\label{LinearBussgang}\n\\end{align}\nwhere for a Gaussian input the distortion $\\mathbf{d}$ is statistically equivalent to the quantization error $\\mathbf{e}$ but uncorrelated with the signal component $\\mathbf{x}$. The linear operator $\\mathbf{F}$, which depends essentially on the given distortion characteristic of $Q$, tells us also about the proportional factor between the input-output covariance of the quantizer expressed as \\cite{Bussgang52}\n\\begin{align}\n\\mathbf{C}_{xx_q}&=\\mathbf{F}\\mathbf{C}_{xx}, \\text{ where } \\\\\n\\mathbf{C}_{xx_q}&=\\mathbb{E}\\{\\mathbf{x}\\mathbf{x}_q^H\\} \\text{ and } \\mathbf{C}_{xx}=\\mathbb{E}\\{\\mathbf{x}\\mathbf{x}^H\\}.\n\\end{align}\nIn this case, finding $\\mathbf{F}$ can be seen as finding the LMMSE estimator for $\\mathbf{x}_q$ from the observation $\\mathbf{x}$ \\cite{capacityQuantMIMO}\n\\begin{align}\n\\mathbf{F}&=\\mathbf{C}_{xx_q}\\mathbf{C}_{xx}^{-1}, \\label{F}\n\\end{align}\nwhere the estimation error $\\mathbf{d}$ is then orthogonal to $\\mathbf{x}$. 
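For illustration, a compact version of the two ingredients just described is sketched below: an LBG codebook trained on $N$-dimensional real training vectors, and the Bussgang matrix $\\mathbf{F}$ of (\\ref{F}) estimated from sample covariances of the quantizer input and output. This is a hedged example only (random codebook initialisation instead of the classical splitting step, arbitrary sample sizes), not the implementation used for the results.
\\begin{verbatim}
import numpy as np

def lbg_codebook(X, S, n_iter=50, seed=0):
    # alternate the nearest-neighbour partition and the centroid condition;
    # X has shape (n_samples, N), S = 2^C reconstruction vectors are returned
    rng = np.random.default_rng(seed)
    Q = X[rng.choice(len(X), S, replace=False)].copy()
    for _ in range(n_iter):
        idx = ((X[:, None, :] - Q[None, :, :]) ** 2).sum(-1).argmin(1)
        for i in range(S):
            if np.any(idx == i):
                Q[i] = X[idx == i].mean(0)
    return Q

def quantize(X, Q):
    # map every input vector to its nearest reconstruction vector
    return Q[((X[:, None, :] - Q[None, :, :]) ** 2).sum(-1).argmin(1)]

def bussgang_F(X, Xq):
    # F = C_{x xq} C_{xx}^{-1} estimated from sample covariances (real-valued case)
    Cxx = X.T @ X / len(X)
    Cxxq = X.T @ Xq / len(X)
    return Cxxq @ np.linalg.inv(Cxx)

# example: N = 2, correlated Gaussian input, C = 4 bits -> S = 16 codewords
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=20000)
Q = lbg_codebook(X, S=16)
F = bussgang_F(X, quantize(X, Q))
\\end{verbatim}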
Using (\\ref{F}) the covariance of the distortion $\\mathbf{d}$ can also be expressed as \n\\begin{align}\n\\mathbf{C}_{dd}&=\\mathbb{E}\\{(\\mathbf{x}_q-\\mathbf{F}\\mathbf{x})(\\mathbf{x}_q-\\mathbf{F}\\mathbf{x})^H\\}\\nonumber\\\\\n&=\\mathbf{C}_{x_qx_q}-\\mathbf{C}_{x_qx}\\mathbf{C}_{xx}^{-1}\\mathbf{C}_{xx_q}.\n\\end{align}\nThe closed form expression of $\\mathbf{F}$ is not yet known for a general quantizer, particularly for vector quantizers. Therefore, we compute $\\mathbf{F}$ numerically whenever it is needed by assuming that we have access to measurements of the input as well as the output of $Q$. We estimate the covariance matrix $\\mathbf{C}_{xx}$ from the sample covariance matrix\n\\begin{align}\n\\hat{\\mathbf{C}}_{xx}=\\frac{1}{N_{t}}\\sum_{n_{t}=1}^{N_{t}} \\mathbf{x}[n_{t}]\\mathbf{x}[n_{t}]^H \\label{SampleCovarince}\n\\end{align}\nand respectively for $\\mathbf{C}_{x_qx_q}$ and $\\mathbf{C}_{xx_q}$. The number of observations $N_{t}$ can be conveniently taken equally to the number of codebooks' training, where $\\hat{\\mathbf{C}}_{xx}$ will approach $\\mathbf{C}_{xx}$ for large $N_{t}$.\n\n\\section{CSI Acquisition Strategies}\t\\label{CSI A}\n\\begin{figure*}[ht!]\\centering\n\\includegraphics[width=\\textwidth]{VoronoiChannelN2Q4Merge} \n\\caption{The voronoi region of codebook $\\mathcal{Q}$ for $N=2$ and for different degree of correlation (i.e. for different angular spread standard deviation $\\sigma_{\\delta}$ of Gaussian distributed $\\delta$).}\n\\label{voronoiFig}\n\\end{figure*}\nWe assume in this work that neither the APs nor the CPU knows a priori the channel realization $\\mathbf{G}$. To enable interference suppression at the CPU, the CSI is needed at the CPU. We consider two strategies: estimate and quantize, and quantize and estimate, which will be discussed in the next two sub-sections. \n\n\\subsection{Estimate-and-Quantize}\nIn this scheme we estimate first the channel at APs and then quantize the resulting CSI with the quantizer given in (\\ref{VQ}) to meet the fronthaul-capacity limit of $C$ bits. The channel estimation is done based on the transmission of known pilots. Every user uses a specific sequence taken from a set $\\Psi$ of orthonormal random sequences $\\varphi_k \\in\\mathbb{C}^{\\tau\\times 1}$ with $\\langle\\varphi_k, \\varphi_k'\\rangle=\\delta_{kk'}$ and $\\Vert\\varphi_k\\Vert ^2=1$, where the sequence length $\\tau$ is assumed to be less or equal than the coherence interval $\\tau_c$. The $k$-th user sends $\\sqrt{\\tau}\\varphi_k$ as its pilot such that the $l$-th AP observes the received pilot $\\mathbf{Y}_{p,l}\\in\\mathbb{C}^{N \\times \\tau}$ from all $K$ users as\n\\begin{equation} \\label{receivePilot}\n\\mathbf{Y}_{p,l}= \\sqrt{\\tau\\rho_p}\\sum_{k=1}^{K} \\mathbf{g}_{lk}\\varphi_k^H + \\mathbf{W}_l,\n\\end{equation}\nwhere $\\rho_p$ is the transmit SNR of the pilot and $\\mathbf{W}_l$ is an additive noise matrix at the $l$-th AP whose entries are uncorrelated with zero mean and unit variance. \n\nTo allow all pilots to be orthogonal for all $K$ users, only $K\\leq \\tau$ users may transmit their pilots simultaneously. In this case, the transmitted pilots satisfy\n\\begin{equation}\n\\Phi^H\\Phi=\\tau\\rho_p \\mathbb{I}_K,\\quad \\text{ where } \\Phi=\\sqrt{\\tau\\rho_p}[\\varphi_1, \\dots,\\varphi_K]. 
\\label{pilotMatrix}\n\\end{equation}\n\nThe channel vector $\\mathbf{g}_{lk}$ can be estimated at the APs where the received pilot $\\mathbf{Y}_{p,l}$ is projected onto $\\varphi_k$ expressed as\n\\begin{align}\n\\mathbf{r}_{p,lk}&=\\frac{1}{\\sqrt{\\tau\\rho_p}}\\mathbf{Y}_{p,l}\\varphi_k\\nonumber\\\\\n&=\\mathbf{g}_{lk}+\\sum_{k'\\neq k}^{K} \\mathbf{g}_{lk'}\\varphi_{k'}^H\\varphi_{k} + \\frac{1}{\\sqrt{\\tau\\rho_p}}\\mathbf{W}_l\\varphi_{k} \\label{projectPilot}\n\\end{align}\nTo obtain the estimate of $\\mathbf{g}_{lk}$ we use the LMMSE estimator given by \n\\begin{align}\n\\hat{\\mathbf{g}}_{lk}= \\mathbf{\\Gamma}_{lk}\\mathbf{r}_{p,lk} \\label{EstimatorIdeal} \n\\end{align}\nThe gain matrix $\\mathbf{\\Gamma}_{lk}$ is given by \n\\begin{align}\n\\mathbf{\\Gamma}_{lk}&=\\mathbf{\\Sigma}_{lk}\\left(\\mathbf{\\Omega}_{lk}\\right)^{-1}, \\text{ where }\n\\mathbf{\\Sigma}_{lk}=\\mathbb{E}\\{\\mathbf{g}_{lk}\\mathbf{g}_{lk}^H\\} \\text{ and } \\\\\n\\mathbf{\\Omega}_{lk}&=\\mathbb{E}\\{\\mathbf{r}_{p,lk}\\mathbf{r}_{p,lk}^H\\}=\\mathbf{\\Sigma}_{lk}+\\frac{1}{\\tau\\rho_p}\\mathbf{I}_{N}\n\\end{align}\n\nAfter accomplishing the channel estimation the APs quantize the channel estimate $\\hat{\\mathbf{g}}_{lk}$. We assume that the large scale fading $\\beta_{lk}$ is relatively constant over a long period and known at the APs. Thus, we may scale the input to the vector quantizer accordingly with $\\beta_{lk}$ and approximate the distribution as multivariate Gaussian. Consequently, we can optimize the codebook $\\mathcal{Q}$ for each AP off-line and need only update it as the $\\beta_{lk}$ changes. As demonstrated in Fig. \\ref{voronoiFig} for $N=2$ the codebooks can exploit the spatial channel correlation effectively. Due to the LBG algorithm the reconstruction points $\\{q_i\\}$ are placed more densely in the region where the input signals come with high probability. As the correlation increases, the reconstruction points get closer to the diagonal to optimally represent the dependency between input signals. Thus, the distance from the input signals to the points $\\{q_i\\}$ becomes smaller resulting a smaller average distortion.\n\n\\subsection{Quantize-and-Estimate}\nInstead of transferring the quantized CSI we consider here another CSI acquisition strategy where we quantize first the received pilots at the APs and then estimate the channel from the quantized pilots at the CPU. To be more specific, the $l$-th AP quantizes the receive pilots at the $N$ antennas jointly as\n\\begin{align}\n\\mathbf{y}_{qp,l}^{(t)}=Q(\\mathbf{y}_{p,l}^{(t)}) \n&=Q\\left(\\sqrt{\\tau\\rho_p}\\sum_{k=1}^{K} \\mathbf{g}_{lk}{\\varphi_k^{(t)}}^* + \\mathbf{w}_l^{(t)} \\right)\\nonumber \\\\\n&=Q\\left(\\sqrt{\\tau\\rho_p}\\mathbf{G}_l{\\varphi^{(t)}}^H + \\mathbf{w}_l^{(t)} \\right) \\label{quantPilots}\n\\end{align}\nwhere the superscript $t=\\{1, \\dots, \\tau\\}$ denotes the index of the pilot sequence. Accordingly $\\mathbf{y}_{p,l}^{(t)}$ is the $t$-th column of $\\mathbf{Y}_{p,l}$ in (\\ref{receivePilot}) and $\\varphi^{(t)}$ is the $t$-th row of $\\Phi$ in (\\ref{pilotMatrix}). \n\nApplying the Bussgang decomposition to (\\ref{quantPilots}) we obtain\n\\begin{align}\n\\mathbf{y}_{qp,l}^{(t)}&=\\mathbf{F}_{p,l}\\mathbf{y}_{p,l}^{(t)} + \\mathbf{d}_{p,l}^{(t)}\\nonumber \\\\\n&=\\sqrt{\\tau\\rho_p}\\mathbf{F}_{p,l}\\mathbf{G}_l{\\varphi^{(t)}}^H+\\mathbf{F}_{p,l}\\mathbf{w}_l^{(t)} + \\mathbf{d}_{p,l}^{(t)}. 
\\label{y_qp.l^(t)}\n\\end{align}\nThe CPU receives from all $L$ APs the stack of (\\ref{y_qp.l^(t)})\n\\begin{align}\n\\mathbf{y}_{qp}^{(t)}=\n\\begin{bmatrix}\n\\mathbf{y}_{qp,1}^{(t)}\\\\\n\\vdots \\\\\n\\mathbf{y}_{qp,L}^{(t)}\n\\end{bmatrix}\n=\\begin{bmatrix}\n\\sqrt{\\tau\\rho_p}\\mathbf{F}_{p,1}\\mathbf{G}_1{\\varphi^{(t)}}^H+\\mathbf{F}_{p,1}\\mathbf{w}_1^{(t)} + \\mathbf{d}_{p,1}^{(t)} \\\\\n\\vdots \\\\\n\\sqrt{\\tau\\rho_p}\\mathbf{F}_{p,L}\\mathbf{G}_L{\\varphi^{(t)}}^H+\\mathbf{F}_{p,L}\\mathbf{w}_L^{(t)} + \\mathbf{d}_{p,L}^{(t)}\n\\end{bmatrix},\n\\end{align}\nwhich we can rewrite concisely as an $M\\times \\tau$ matrix collecting the $\\tau$-length sequences of quantized received pilots,\n\\begin{align}\n\\mathbf{Y}_{qp}=\n\\begin{bmatrix}\n\\mathbf{y}_{qp,1}^{(1)}& \\dots &\\mathbf{y}_{qp,1}^{(\\tau)}\\\\\n\\vdots & \\ddots & \\vdots\\\\\n\\mathbf{y}_{qp,L}^{(1)} & \\dots & \\mathbf{y}_{qp,L}^{(\\tau)}\n\\end{bmatrix}\n=\\sqrt{\\tau\\rho_p}\\mathbf{\\tilde{F}}\\mathbf{G}{\\Phi}^H+\\mathbf{\\tilde{F}}\\mathbf{W} + \\mathbf{D},\n\\end{align}\nwhere $\\mathbf{\\tilde{F}}$ is an $M\\times M$ block diagonal matrix with $\\mathbf{F}_{p,l}\\in \\mathbb{R}^{N\\times N}$ as its diagonal blocks. With the same structure as $\\mathbf{Y}_{qp} \\in \\mathbb{C}^{M\\times \\tau}$, the matrices $\\mathbf{W}$ and $\\mathbf{D}$ denote the noise and distortion matrices, respectively.\n\nWe can then project $\\mathbf{Y}_{qp}$ onto $\\varphi_k$, which yields\n\\begin{align}\n\\mathbf{r}_{qp,k}&=\\frac{1}{\\sqrt{\\tau \\rho_p}}\\mathbf{Y}_{qp}\\varphi_k \\nonumber\\\\\n&=\\mathbf{\\tilde{F}}\\mathbf{g}_{k}+\\mathbf{\\tilde{F}}\\sum_{k'\\neq k}^{K} \\mathbf{g}_{k'}\\varphi_{k'}^H\\varphi_{k} + \\frac{1}{\\sqrt{\\tau\\rho_p}}\\left(\\mathbf{\\tilde{F}}\\mathbf{W}+\\mathbf{D}\\right)\\varphi_{k}. \n\\end{align}\nWe estimate $\\mathbf{g}_{k}$ using the LMMSE estimator\n\\begin{align}\n\\hat{\\mathbf{g}}_{qp, k}= \\mathbf{\\Gamma}_{qp,k}\\mathbf{r}_{qp,k}. \\label{EstimatorBussgang}\n\\end{align}\nThe gain matrix $\\mathbf{\\Gamma}_{qp,k}$ is given by \n\\begin{align}\n\\mathbf{\\Gamma}_{qp,k}&=\\mathbf{\\Sigma}_{k}\\mathbf{\\tilde{F}}^H\\left(\\mathbf{\\Omega}_{qp,k}\\right)^{-1}, \\text{ where }\n\\mathbf{\\Sigma}_{k}=\\mathbb{E}\\{\\mathbf{g}_{k}\\mathbf{g}_{k}^H\\} \\text{ and } \\nonumber \\\\\n\\mathbf{\\Omega}_{qp,k}&=\\mathbb{E}\\{\\mathbf{r}_{qp,k}\\mathbf{r}_{qp,k}^H\\}=\\mathbf{\\tilde{F}}\\mathbf{\\Sigma}_{k}\\mathbf{\\tilde{F}}^H+\\frac{1}{\\tau\\rho_p}\\left(\\mathbf{\\tilde{F}}\\mathbf{\\tilde{F}}^H+\\mathbf{D}\\mathbf{D}^H\\right).\n\\end{align}\n\n\\begin{remark}\nWe assume that the received signals at the APs are uncorrelated over $l$ and $t$, such that the Gram matrices $\\mathbf{\\tilde{F}}\\mathbf{\\tilde{F}}^H$ and $\\mathbf{D}\\mathbf{D}^H$ have a block diagonal structure. Further, their diagonal blocks are positive definite, since $\\mathbf{F}$ and the covariance of $\\mathbf{d}$ in (\\ref{LinearBussgang}) are full rank when estimated from the sample covariance matrix (\\ref{SampleCovarince}) with a large number of observations. Thus, the matrix $\\mathbf{\\Omega}_{qp,k}$ is invertible.\n\\end{remark}\n\\section{Numerical Results} \\label{NR}\nWe provide in this section some numerical simulations to assess the performance of the schemes considered above. We ran our simulations with $M=120$ total AP antennas, $L=120\/N$ APs and $K=20$ users distributed uniformly in an area of $1\\times 1 \\text{ km}^2$. This area is wrapped around by its copies so that it resembles a network with infinite area. 
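The wrap-around topology mentioned above can be realized with a minimum-image distance on the $1\\times 1 \\text{ km}^2$ square; the small sketch below is one possible way to compute the AP--user distances entering the path-loss model that follows (illustrative only, with the square side and the random drop of APs and users as assumptions of the example).
\\begin{verbatim}
import numpy as np

def wraparound_distance(ap_xy, ue_xy, D=1000.0):
    # distance on a D x D torus: per coordinate, take the shorter of the direct
    # difference and the difference through the wrapped copy of the area
    diff = np.abs(ap_xy - ue_xy)
    diff = np.minimum(diff, D - diff)
    return np.hypot(diff[..., 0], diff[..., 1])

# example: all AP-user distances d_lk for L = 60 APs and K = 20 users
rng = np.random.default_rng(0)
ap = rng.uniform(0.0, 1000.0, size=(60, 2))
ue = rng.uniform(0.0, 1000.0, size=(20, 2))
d_lk = wraparound_distance(ap[:, None, :], ue[None, :, :])   # shape (L, K), in metres
\\end{verbatim}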
The channel $\\mathbf{g}_{lk}$ in (\\ref{gmk}) is modeled with the large scale fading $\\beta_{lk}$ given as\n\\begin{align}\n\\beta_{lk}=\\text{PL}_{lk}\\cdot 10^{\\frac{\\sigma_{sh}z_{lk}}{10}},\n\\end{align}\nwhere the factor $10^{\\frac{\\sigma_{sh}z_{lk}}{10}}$ is the uncorrelated shadowing with the standard deviation $\\sigma_{sh}= 8 \\text{ dB}$ and $z_{lk}\\sim \\mathcal{N}(0, 1)$. The path loss coefficient follows the three-slope model according to \n\\begin{align}\n\\text{PL}_{lk}=\n\\begin{cases}\n-\\mathcal{L}-35\\text{log}_{10}(d_{lk}), & d_{lk} > d_1 \\\\\n-\\mathcal{L}-15\\text{log}_{10}(d_1)-20\\text{log}_{10}(d_{lk}), & d_0 < d_{lk} \\leq d_1 \\\\\n-\\mathcal{L}-15\\text{log}_{10}(d_1)-20\\text{log}_{10}(d_0), & d_{lk} \\leq d_0\n\\end{cases}\n\\end{align}\n\n\\begin{equation}\nA_T(v_{surf})=A_T(t=0)-\\sum_{v_{surf}'>v_{surf}} \\sum_{A,Z} A Y_{AZ}(v_{surf}') \\, ,\n\\label{eq:atot}\n\\end{equation}\nwhere the sum runs over the different velocity bins. $A_T(t=0)$ is the total baryon number\nat the initial time of the expanding phase. This initial baryon number is calculated using a fitting procedure of the particle energy spectra. Neutrons are not detected in the experiment, and the free neutron number is deduced from the free proton number and the $^3$H\/$^3$He ratio, assuming chemical equilibrium, as explained in \\cite{BougaultJPG19}, and later in the text.\n\n\nIn this equation, we have considered that particles at velocity $v>v_{surf}$ are emitted earlier \nwith respect to the time $t$ identified by the chosen velocity bin $v_{surf}$, and, therefore,\ndo not contribute to the corresponding statistical ensemble.\n\n The chemical constants can be expressed in terms of the mass fractions as:\n\\begin{eqnarray}\nK_c (A,Z)=\\frac{\\omega_{AZ}}{A\\omega_{11}^{Z} \\omega_{10}^{A-Z}}\\left( \\frac{V_T}{A_T}\\right )^{A-1} \\, , \\label{eq:chemica_final}\n\\end{eqnarray}\nwhere $\\omega_{AZ}$, $\\omega_{11}$ and $\\omega_{10}$ are the mass fractions of the cluster $(A,Z)$, the proton and the neutron, respectively.\n\nThe measurement of the equilibrium constants thus requires an estimation of the source volume, at the different times of the expansion. In previous studies \\cite{QinPRL108,BougaultJPG19}, this was done with the Mekjian strategy often used in heavy ion collision analyses \\cite{DasGuptaPR72}. Supposing an ideal gas of classical clusters in thermodynamic equilibrium at temperature $T$ in the grand-canonical ensemble, the differential spectrum of a cluster $(A,Z)$ is given by:\n\\begin{equation}\n\\frac{d^3N_{AZ}}{dp^3}(\\vec p)=\\frac{V_f}{h^3} g_{AZ} \\exp \\left [ \\frac {1}{T} \\left ( - \\frac{p^2}{2M_{AZ}} + \nZ\\mu_p + (A-Z)\\mu_n \\right )\\right ] \\; , \\label{eq:spectra}\n\\end{equation}\nwhere $M_{AZ}=Am-B_{AZ}$ is the mass of cluster $(A,Z)$, $m$ is the nucleon mass, $\\mu_n$ and $\\mu_p$ are chemical potentials, $V_f$ is the free volume, and the internal partition sum reads:\n\\begin{equation}\ng_{AZ}=\\sum_K (2J_K+1)\\exp\\left [ -\\frac{E_{K}}{T}\\right ]\n\\; ,\n\\end{equation}\nwhere the sum runs over the different eigenstates of the clusters with energy $E_K$ and angular momentum $J_K$.\nIn the experimental sample, the differential spectra (corrected for the Coulomb boost as \n$\\Delta p_A=\\sqrt{2mA(E-ZE_C)}=Am \\Delta v_{surf}$, with $E_C=10$ MeV \\cite{BougaultJPG19}), are linked to the multiplicities by:\n\\begin{equation}\n\\frac{d^3N_{AZ}}{dp^3}(\\vec p_A)\\equiv\\tilde Y_{AZ}= \\frac{Y_{AZ}(v_{surf})}{4\\pi p_A^2 \\Delta p_A} \\; . 
\\label{eq:dN_Y}\n\\end{equation}\n\nAssuming that Eq.~(\\ref{eq:spectra}) holds, if we normalize the cluster spectrum by the proton and neutron spectra at the same velocity, the unknown chemical potentials cancel and we get: \n\\begin{equation}\n\\frac{\\tilde Y_{AZ}(\\vec {p_A})}{\\tilde Y_{p}^Z(\\vec p)\\tilde Y_{n}^{A-Z}(\\vec p)}=h^{3(A-1)}\n\\frac{2J_{AZ}+1}{2^A}\\frac{1}{V_f^{A-1}} \\exp \\left [ \\frac{B_{AZ}}{T}\\right ] \\; , \\label{eq:ratio}\n\\end{equation}\nwhere $p_A=A\\,p$, and $J_{AZ}, B_{AZ}$ are the angular momentum and binding energy of the ground state of the cluster, and we have neglected the population of excited states. \n\nEq.~(\\ref{eq:ratio}) allows independent estimations of the free volume from the different cluster spectra as:\n\\begin{equation}\n{ V_f\n=h^3 \\exp \\left [ \\frac{B_{AZ}}{T(A-1)}\\right ]\\left (\\frac {2J_{AZ}+1}{2^A}\\frac{\\tilde Y_{11}^Z(\\vec p)\\tilde Y_{10}^{A-Z}(\\vec p)}\n{\\tilde Y_{AZ}(\\vec p_A)} \\right )^{\\frac{1}{A-1}} \\; . \\label{eq:vf}\n\\end{equation}\n\nIn the absence of a direct neutron measurement, we can estimate the free neutron-proton ratio \n$R_{np}$ from the multiplicities of the $A=3$ isobars $^3$H and $^3$He using Eq. (\\ref{eq:spectra}):\n\\begin{equation}\nR_{np}=\\frac{\\omega_{10}}{\\omega_{11}}=\\left(\\frac{Y_{31}}{Y_{32}}\\right)\\exp{\\left[ \\frac {B_{32} - B_{31}}{T}\\right]} \\, , \\label{eq:rnp}\n\\end{equation}\nand Eq. (\\ref{eq:vf}) becomes \n\\begin{equation}\n{ V_f\n=h^3R_{np}^{\\frac{A-Z}{A-1}} \\exp \\left [ \\frac{B_{AZ}}{T(A-1)}\\right ]\\left (\\frac {2J_{AZ}+1}{2^A}\\frac{\\tilde Y_{11}^A(\\vec p)}\n{\\tilde Y_{AZ}(\\vec p_A)} \\right )^{\\frac{1}{A-1}} \\; . \\label{eq:vf1}\n\\end{equation}\nFinally, the total volume entering Eq.~(\\ref{eq:chemica_final}) is computed from the free volume, given by Eq.~(\\ref{eq:vf1}), as:\n\\begin{eqnarray}\nV_T= V_f + \\sum_{AZ}V_{AZ}\\frac{\\omega_{AZ}A_T}{A} \\, ,\n\\label{eq:vt}\n\\end{eqnarray}\nwhere we have added the proper volume of the clusters which belong to the\nsource at a given time, with $V_{AZ}=4\\pi R_{AZ}^3\/3$. $R_{AZ}$ is\nthe experimental radius of each cluster, taken from\nRef.~\\cite{Angeli2013}. In Sec.~\\ref{sec:radius}, we will\ndiscuss how the choice of $ R_{AZ}$ affects the determination of the\ntotal volume, and, as a consequence, the total baryonic density and\nthe cluster densities.\n\nIt is important to stress that, for the consistency of the analysis, the different estimations of $V_f$ that can be obtained from Eq.(\\ref{eq:vf1}) considering different particle species, that is different $(A,Z)$ values, should all coincide within experimental errors.\nIf this is not true as we will show in the following, the validity of either Eq.(\\ref{eq:vf1}) or of the hypothesis of statistical equilibrium should be questioned.\n \n\nTo complete the analysis, the thermodynamic parameters $(T,y_p)$ must be evaluated together with the baryonic density $\\rho$, as a function of the surface velocity. 
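Before turning to these thermodynamic variables, the chain of definitions above can be made concrete with a minimal numerical sketch. It is an illustration only: the yields and the temperature are placeholder inputs, the average nucleon mass and $\\hbar c$ values are assumptions of the example, and the functions simply transcribe Eqs.~(\\ref{eq:rnp}), (\\ref{eq:vf1}) and (\\ref{eq:chemica_final}) with momenta expressed in MeV (taking $c=1$) so that the volumes come out in fm$^3$.
\\begin{verbatim}
import numpy as np

HBARC = 197.327                      # MeV fm
H3    = (2 * np.pi * HBARC) ** 3     # h^3 in (MeV fm)^3 with c = 1
M_NUC = 938.9                        # assumed average nucleon mass in MeV

def r_np(Y_t, Y_he3, T, B_t=8.482, B_he3=7.718):
    # Eq. (rnp): free neutron-to-proton ratio from the A = 3 isobars
    return (Y_t / Y_he3) * np.exp((B_he3 - B_t) / T)

def ytilde(Y, A, v_surf, dv):
    # differential spectrum Y / (4 pi p_A^2 dp_A), with p_A = A m v_surf (v in units of c)
    pA, dpA = A * M_NUC * v_surf, A * M_NUC * dv
    return Y / (4 * np.pi * pA**2 * dpA)

def free_volume(Y_AZ, Y_p, A, Z, B, J, T, Rnp, v_surf, dv):
    # Eq. (vf1): free volume in fm^3 inferred from the yield of one cluster species
    ratio = ((2 * J + 1) / 2**A) * ytilde(Y_p, 1, v_surf, dv)**A / ytilde(Y_AZ, A, v_surf, dv)
    return H3 * Rnp**((A - Z) / (A - 1)) * np.exp(B / (T * (A - 1))) * ratio**(1.0 / (A - 1))

def chemical_constant(omega_AZ, omega_p, omega_n, A, Z, V_T, A_T):
    # Eq. (chemica_final): K_c in fm^{3(A-1)} when V_T is given in fm^3
    return omega_AZ / (A * omega_p**Z * omega_n**(A - Z)) * (V_T / A_T)**(A - 1)

# placeholder example: alpha particles in one v_surf bin (velocities converted from cm/ns to c)
c_cm_ns = 29.98
Vf = free_volume(Y_AZ=120.0, Y_p=800.0, A=4, Z=2, B=28.3, J=0, T=5.0,
                 Rnp=1.4, v_surf=4.1 / c_cm_ns, dv=0.2 / c_cm_ns)
\\end{verbatim}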
\nThe total global proton fraction of the system is given by\n\\begin{eqnarray}\ny_{p}=\\frac{\\sum_{A,Z} Z Y_{AZ}}{\\sum_{A,Z} A Y_{AZ}} \\, ,\\label{eq:yp}\n\\end{eqnarray}\nwith the neutron yield estimate from the proton yield via Eq.(\\ref{eq:rnp}).\nThe temperature is obtained from the Albergo double ratio formula \\cite{AlbergoNCA89}, that we briefly recall.\nIntegrating Eq.~(\\ref{eq:spectra}) over momentum, and neglecting the excited states, we get:\n\\begin{equation}\nN_{AZ}=V_f \\left(\\frac{M_{AZ}T}{2\\pi\\hbar^2}\\right)^{3\/2} (2J_{AZ}+1)e^{ \\frac { \nZ\\mu_p + (A-Z)\\mu_n + B_{AZ} }{T}}\\; . \\label{eq:mult}\n\\end{equation}\n\nThe volume and chemical potential dependence can be eliminated by taking isobaric double ratios \\cite{AlbergoNCA89}.\nIn particular, using $^2$H, $^3$H, $^3$He and $^4$He, we have:\n\\begin{eqnarray}\nT&=&\\frac{\\Delta B}{\\ln\\left [\\frac{3}{2}\\sqrt{\\frac{9}{8}}R_v \\right ] } \n\\; ,\n \\label{eq:temperature}\n\\end{eqnarray}\nwith \n\\begin{eqnarray}\nR_v&=&\\frac{N_{21}N_{42}}{N_{32}N_{31}}= \\frac{Y_{21}Y_{42}}{Y_{32}Y_{31}} \\, ,\\\\\n\\Delta B&=& B_{21}+B_{42}-B_{32}-B_{31} \\, .\n\\end{eqnarray}\nWe notice that a slightly different formula was used in previous analyses \\cite{QinPRL108,HagelPRC62,KowalskiPRC75} with an extra $\\sqrt{9\/8}$ in the argument of the logarithm appearing in Eq.~(\\ref{eq:temperature}) ($3\/2\\sqrt{9\/8}\\sqrt{9\/8}$=$1.59\\sqrt{9\/8}$ instead of $3\/2\\sqrt{9\/8}$). The justification of this extra factor was the fact that the analysis does not use total multiplicities, but particles at a given velocity described by a Maxwellian spectrum. We believe that this argument is not correct, because the multiplicities in a given $v_{surf}$ bin are proportional to the total particle numbers (see Eq.~(\\ref{eq:mult})) and not to differential spectra (see Eq.~(\\ref{eq:dN_Y})).\nHowever, we have verified that the presence (or absence) of an extra $\\sqrt{9\/8}$ factor does not produce any visible effect in all the results shown in this paper.\n\n\n\nChemical constants measured by the NIMROD collaboration \\cite{QinPRL108}, using the method explained above, were compared to statistical models employing different treatments of the nucleon-nucleon and nucleon-cluster interactions \\cite{HempelPRC91,PaisPRC97}, with the purpose of constraining the expected cluster binding energy shifts in dense matter, and the associated Mott densities \\cite{Roepke2015,RopkeNPA867,TypelPRC81}. \nHowever, it was observed in Ref.~\\cite{BougaultJPG19} that the validity of the ideal gas expression, Eq.~(\\ref{eq:spectra}), has to be assumed to determine the thermodynamic state $(\\rho,T,y_p)$, and also the value itself of the chemical constants via the free volume definition Eq.~(\\ref{eq:vf1}). 
\nIndeed Eq.~(\\ref{eq:spectra}) explicitly assumes that the cluster abundances are uniquely governed by their vacuum properties, notably their vacuum binding energies $B_{AZ}$, which is in contradiction with the very purpose of the analysis.\nMoreover, if in-medium corrections were indeed negligible, the measured chemical constants would agree with the ideal gas prediction.\nThe latter can be easily worked out from Eq.~(\\ref{eq:chemical}) considering that, for an ideal gas of clusters, $\\rho_{AZ}=N_{AZ}\/V_f$, with $N_{AZ}$ given by Eq.~(\\ref{eq:mult}):\n\\begin{eqnarray}\nK_c^{id}(A,Z)=\\left(\\frac{2\\pi \\hbar^2}{T}\\right)^{\\frac{3(A-1)}{2}}\\left(\\frac{M_{AZ}}{m^A}\\right)^{3\/2}\\frac{(2J_{AZ}+1)}{2^A} \n\\exp{\\left[ \\frac {B_{AZ}}{T}\\right]} \\, . \\label{eq:ideal}\n\\end{eqnarray}\n\n \\begin{figure}\n \\begin{tabular}{c}\n\\includegraphics[width=1\\textwidth]{fig1.eps}\n \\end{tabular}\n \\vspace{-1.cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: The chemical equilibrium\n constants of each cluster as a function of the surface velocity\n $v_{surf}$. The error bars only include statistical and systematic\n experimental errors, see Ref.~\\cite{BougaultJPG19}. The solid lines\n are the ideal gas limit given by Eq.~(\\ref{eq:ideal}). The grey band\n shows the area where data might be contaminated by emission from the spectator source. }\n\\label{fig:start}\n\\end{figure} \n\nChemical constants obtained from the analysis of the $^{124}$Xe$+^{124}$Sn system are displayed in\nFig.~\\ref{fig:start}. \nIn this figure, the value of the free volume used to estimate the baryonic density Eq.(\\ref{eq:rho}) in each $v_{surf}$ bin is given by the arithmetic average of the volumes $ V_{f}^{(AZ)}$ extracted from Eq.(\\ref{eq:vf1}) using the yields $Y_{AZ}$ of the different particle species:\n\\begin{equation}\n\\bar V_f=\\frac{1}{N} \\sum_{AZ} V_{f}^{(AZ)} \\, , \\label{eq:vaverage}\n\\end{equation}\nwith $N$ the total number of cluster species ($N=5$ in the present analysis). In fact, this approach to define the system volume is largely arbitrary, since the different volume estimations from the different particle species are not compatible. In the following, a different approach will be considered for the determination of this key quantity, resulting in smaller volumes and having a two-fold effect on the equilibrium constants, i.e. making them smaller and moving them to a higher value of the density.\n\n\nThe only differences with respect to the results\npublished in Ref.~\\cite{BougaultJPG19} are the inclusion of the volume associated with deuterons in the arithmetic average of the present calculation and the normalization. Indeed it has to be noticed that the definition of chemical constants in Ref.~\\cite{BougaultJPG19} differs by a factor $A$ with respect to the one of Ref.~\\cite{QinPRL108}. To allow an easier comparison with previous works, we have adopted the definition of Ref.~\\cite{QinPRL108} in this paper.\nAs already mentioned above, the slightly different\nexpression for the temperature Eq.~(\\ref{eq:temperature})\ndoes not produce any effect on the scale of the figure. The\nerror bars are due to the experimental errors associated with the\nmeasurements. The solid lines represent the ideal gas limit, given\nby Eq.~(\\ref{eq:ideal}). The grey band shows the range where the\nexperimental data might be contaminated by emission from the spectator source, since the proton spectra are\nnot well reproduced by the fit used to deduce the mass of the evolving source. 
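For reference, the ideal-gas baseline shown as solid lines in Fig.~\\ref{fig:start} can be evaluated directly from Eq.~(\\ref{eq:ideal}); a short sketch is given below (units and the average nucleon mass are assumptions of the example, with $\\hbar c=197.327$ MeV fm so that $K_c^{id}$ is obtained in fm$^{3(A-1)}$).
\\begin{verbatim}
import numpy as np

HBARC, M_NUC = 197.327, 938.9        # MeV fm ; assumed average nucleon mass in MeV

def kc_ideal(A, B, J, T):
    # Eq. (ideal): chemical constant of an ideal gas of clusters, T and B in MeV
    M_AZ = A * M_NUC - B
    return ((2 * np.pi * HBARC**2 / T) ** (1.5 * (A - 1))
            * (M_AZ / M_NUC**A) ** 1.5
            * (2 * J + 1) / 2**A * np.exp(B / T))

# e.g. the alpha particle (A = 4, B = 28.3 MeV, J = 0) at T = 5 MeV
print(kc_ideal(4, 28.3, 0, 5.0))
\\end{verbatim}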
\nOne should also observe that equilibrium constants of $^3$H and $^3$He are so close, that they are almost indistinguishable on the scale of the figure.\nWe can see that the measured chemical constants are systematically lower than the ideal gas prediction, and the effect increases with increasing density, showing that binding energy shifts are necessary.\nA qualitatively similar deviation from the ideal gas limit was also found in NIMROD data \\cite{QinPRL108,HempelPRC91}. \nIt is, therefore, clear that a correction is needed to Eq.~(\\ref{eq:spectra}) for the analysis to be consistent. If in-medium corrections at a given temperature and density only depend on the baryonic number of the particle, then their effect will cancel out when taking isobaric ratios and double ratios as in Eqs.(\\ref{eq:rnp}) and (\\ref{eq:temperature}). However, this is not the case for the volume $V_f$, Eq.~(\\ref{eq:vf1}), which in turn affects both the evaluation of the densities $\\rho_{AZ}$ and the evaluation of the total baryonic density $\\rho$.\n\n\\begin{figure\n \\begin{tabular}{cc}\n\\includegraphics[width=1\\textwidth]{fig2.eps}\n \\end{tabular}\n \\vspace{-1cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: The free volume, as estimated from the yields of different cluster species, as a function of $v_{surf}$. Results for $^3$H and $^3$He are indistinguishable. The grey band shows the area where data might be contaminated by emission from the spectator source. }\n\\label{fig2}\n\\end{figure} \n\n\nThe need of an in-medium correction to the ideal gas expression Eq.~(\\ref{eq:vf1}) is further shown by Fig.~\\ref{fig2}.\nThis figure displays the value of the free volume obtained from Eq.~(\\ref{eq:vf1}) as a function of the sorting variable $v_{surf}$ for the $^{124}$Xe$+^{124}$Sn system, using different particle species. Since a $v_{surf}$ bin represents a specific thermodynamic condition $(\\rho,T,y_p)$, if everything was consistent, we should find the same volume whatever the cluster species considered, which is clearly not the case except for the $A=3$ isobars, which lead to almost identical volume estimations (indistinguishable on the scale of the figure). \nQualitatively similar results were obtained with the NIMROD data \\cite{QinPRL108}, showing that \nthe incompatibility among the different volume estimations is not an experimental problem, but it rather points towards an inconsistency in the analysis method. \nTo solve this inconsistency, in the next section we introduce a modification in Eq.~(\\ref{eq:spectra}) allowing for possible in-medium effects.\n\n\n\n\\section{Bayesian analysis} \\label{sec:analysis}\n\n\nSince the different in-medium effects that contribute to the determination of cluster multiplicities can be seen as a shift of the cluster binding energy \\cite{Roepke2015,RopkeNPA867,TypelPRC81}, it is reasonable to suppose that the correction can be defined as a Boltzmann factor:\n\n\\begin{eqnarray}\nC_{AZ}&=&\\exp \\left [- \\frac{\\Delta_{AZ}}{T(A-1)}\\right ] \\label{eq:correction} \\, ,\n\\end{eqnarray}\nwith $\\Delta_{AZ}$ given by\n\\begin{eqnarray} \\label{eq:DeltaAZ}\n\\Delta_{AZ}=a_1 A^{a_2}+a_3 |I|^{a_4} \\, ,\n\\end{eqnarray}\nwhere $a_i$, $i=1,\\dots,4$ are free parameters.\nThe dependence on the cluster species is given by the term $\\Delta_{AZ}$, which, \n for each thermodynamic condition $(T,\\rho,y_p)$ identified by a bin in $v_{surf}$, \n can in principle depend on the two good quantum numbers of each cluster, $A$ and $I=(2Z-A)\/2$. 
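In code form, the correction defined by Eqs.~(\\ref{eq:correction}) and (\\ref{eq:DeltaAZ}) is simply the following (a hypothetical helper written only for illustration, with $a_1$, $a_3$ in MeV and $a_2$, $a_4$ dimensionless):
\\begin{verbatim}
import numpy as np

def delta_AZ(a, A, I):
    # Eq. (DeltaAZ): effective in-medium shift, a = (a1, a2, a3, a4)
    a1, a2, a3, a4 = a
    return a1 * A**a2 + a3 * np.abs(I)**a4

def correction(a, A, Z, T):
    # Eq. (correction): Boltzmann-like factor, C_AZ <= 1 for positive Delta_AZ
    I = (2 * Z - A) / 2
    return np.exp(-delta_AZ(a, A, I) / (T * (A - 1)))
\\end{verbatim}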
\nThe influence of the functional expression for the correction $\\Delta_{AZ}$ is studied in Section \\ref{sec:correction}. \n\nIt is important to stress that $\\Delta_{AZ}$ can be interpreted as a reduction of the binding energy only in the framework of the classical simplified ideal expression \nEq.(\\ref{eq:spectra}). Indeed the presence of a nuclear medium affects nucleons both in bound, unbound and resonant states \\cite{TypelPRC81}, and $\\Delta_{AZ}$ should be \nunderstood as a global effective correction accounting for all the missing quantum and interaction effects, such as Pauli blocking, effective masses, couplings to the mesons, etc.\nThis correction modifies the expression of the free volume $V_f$ as it can be estimated from the abundance of a given $(AZ)$ species:\n\\begin{equation}\nV_f=h^3R_{np}^{\\frac{A-Z}{A-1}} C_{AZ} \\exp \\left [ \\frac{B_{AZ}}{T(A-1)}\\right ]\n\\cdot \\left (\\frac {2J_{AZ}+1}{2^A}\\frac{\\tilde Y_{11}^A(\\vec p)}\n{\\tilde Y_{AZ}(\\vec p_A)} \\right )^{\\frac{1}{A-1}} \\; , \\label{eq:vfnew}\n\\end{equation}\nwhere the temperature $T$ is still estimated by eq.(\\ref{eq:temperature}).\nThe unknown parameters $\\vec a =\\{ a_i(\\rho,y_p,T),i=1 - 4\\}$ can be fixed by imposing that the volumes obtained from the experimental spectra $\\tilde Y_{AZ}$ via Eq.(\\ref{eq:vfnew}) of the different $(A,Z)$ nuclear species in a given $v_{surf}$, correspond to compatible values. Because of the presence of experimental uncertainties, we cannot simply solve Eq.~(\\ref{eq:vfnew}) for the $\\vec a$ parameters to impose a strictly identical volume for the different species. Even if the experimental errors were negligible, \nthe correlation between $v_{surf}$ and the volume is not a one-to-one correlation because of the physical dispersion of the $v_{surf}$ variable. For these reasons, we consider the unknown $\\vec a$ parameters as random variables. We take in each $v_{surf}$ bin flat priors, $P_{\\rm prior}(\\vec a)=\\theta(\\vec a_{\\rm min}-\\vec a_{\\rm max})$, within an interval largely covering the physically possible reduction range of the binding energy, $0\\le a_1\\le 15$ MeV, $0\\le a_3\\le a_1$ MeV, $-1\\le a_2 \\le 1$, $0 \\le a_4 \\le 4$. \n\nThe posterior distribution is obtained by imposing the volume observation with a likelihood probability as follows:\n\\begin{eqnarray}\nP_{post}(\\vec a)={\\cal N}\\exp\\left(-\\frac{\\sum_{AZ}(V_{f}^{(AZ)}(\\vec a)-\\bar V_f(\\vec a))^2}{2\\bar V_f(\\vec a)^2}\\right) \\, . \\label{eq:likely}\n\\end{eqnarray} \nHere, ${\\cal N}$ is a normalization, $V_{f}^{(AZ)}(\\vec a)$ is the free volume obtained from the $(A,Z)$ cluster using Eq.(\\ref{eq:vfnew}) with the specific choice $\\vec a$ for the parameter set of the correction, and $\\bar V_f(\\vec a)$ is the average volume corresponding to a given parameter set $\\vec a$ from Eq.(\\ref{eq:vaverage}).\n\n \\begin{figure*\n \\begin{tabular}{cc}\n \\includegraphics[width=0.5\\textwidth]{fig3a.eps} & \\includegraphics[width=0.5\\textwidth]{fig3b.eps}\n \\end{tabular}\n\n\\caption{$^{124}$Xe$+^{124}$Sn system: Left: The prior (purple) and\n posterior (green) probability distributions of the total volume for the bins with $v_{surf}=4.1$ cm\/ns (left) and $v_{surf}=5.9$ cm\/ns (right). In red, the probabilities without the correction, designated by $P(C=0)$, are also shown. Right: The corrected free volumes associated with the clusters as a function of\n $v_{surf}$. The error bars are due to both the correction and the experimental errors. 
The grey band shows the area where data might be contaminated by emission from the spectator source. }\n\\label{fig3}\n\\end{figure*} \n\n\n\nThe prior (posterior) probability distribution of any physical quantity $X$ is then readily calculated as:\n\\begin{equation}\nP(X=X_0)=\\int d\\vec a P(\\vec a) \\delta\\left ( X(\\vec a) - X_0 \\right ) \\, ,\n\\end{equation}\nwhere $P(\\vec a)$ is the prior (posterior) distribution of the correction parameters.\nSimilarly, expectation values can be calculated as:\n\n\\begin{eqnarray}\n\\langle X \\rangle =\\int d \\vec a P(\\vec a) X(\\vec a) \\, ,\n\\end{eqnarray}\nand the correspondent standard deviations as,\n\\begin{eqnarray}\n\\sigma_X =\\sqrt{\\langle X^2\\rangle - \\langle X\\rangle^2 } \\, .\n\\end{eqnarray}\n\n\n\nThe left part of Fig.~\\ref{fig3} shows the prior and posterior distribution of the total volume $V_T$ in two chosen velocity bins, $v_{surf}=4.1$ cm$\/$ns (6$^{\\rm th}$ bin) and $v_{surf}=5.9$ cm$\/$ns (15$^{\\rm th}$ bin). In red, we also show the probabilities calculated without the correction factor, using Eq.(\\ref{eq:vf1}). In this case, the width of the distribution is only due to the experimental errors.\nThe general effect of allowing for an in-medium correction is a\nreduction of the estimated volume. This can be immediately understood\nby comparing the ideal gas estimation Eq.(\\ref{eq:vf}), and the\nmodified expression Eq.(\\ref{eq:vfnew}), and considering that we\nimpose that the correction $C_{AZ}\\le 1$.\nIndeed, following the microscopic calculations of in-medium effects \\cite{Roepke2015}, we consider that, because of the Pauli blocking effect, the influence of the external nucleon gas goes in the direction of reducing the effective binding in the medium.\n\nWe can see that in the lower velocity bin, corresponding to a later time and larger volume, the prior volume distribution is completely unconstrained, reflecting the large dispersion of the volume estimation which is obtained in the absence of the correction (see Fig.\\ref{fig2}).\nOn the other hand, the condition of compatibility between the volume measurements allows a fair determination of this variable, crucial for the rest of the analysis. In the higher velocity bin, corresponding to earlier times of the expansion and more compact configurations, the volume scale is reduced, meaning that the volume estimation is less sensitive to the importance of the correction.\nStill, the posterior distribution is considerably narrower than the prior one. \nThe corrected expectation values of the volume as estimated from the multiplicities of each cluster from Eq.(\\ref{eq:vfnew}), with the associated standard deviations, are shown as a function of $v_{surf}$ in the right part of Fig.~\\ref{fig3}. It is clear that when we include the correction, the average volumes are systematically lower than the uncorrected results of Fig.\\ref{fig2},\n and the estimations obtained from the different cluster species are compatible within error bars. \n\n\n\\section{Equilibrium constants} \\label{sec:kci}\n\n\n\n\n\n The difference between the volume estimation from the ideal gas\n assumption Eq.(\\ref{eq:vf}), and the one determined by the more\n general expression Eq.(\\ref{eq:vfnew}) with the Bayesian estimation\n Eq.(\\ref{eq:likely}) of the correction parameters $\\vec a$, \nobviously reflects itself on the estimation of the chemical constants, as shown in Figs.~\\ref{fig4} and \\ref{fig5}. 
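In practice, the Bayesian estimation of Section \\ref{sec:analysis} amounts to a simple Monte Carlo reweighting of flat prior samples; a schematic sketch is shown below. It reuses the hypothetical helpers free_volume and correction sketched earlier, the number of prior samples is arbitrary, and it returns only the posterior mean and standard deviation of the average free volume of Eq.~(\\ref{eq:vaverage}); it is meant as an outline of the procedure, not the analysis code.
\\begin{verbatim}
import numpy as np

def posterior_volume(species, Y_p, T, Rnp, v_surf, dv, n_prior=100000, seed=0):
    # flat priors on (a1, a2, a3, a4) as in the text, likelihood of Eq. (likely)
    rng = np.random.default_rng(seed)
    a1 = rng.uniform(0.0, 15.0, n_prior)
    a2 = rng.uniform(-1.0, 1.0, n_prior)
    a3 = rng.uniform(0.0, 1.0, n_prior) * a1          # enforces 0 <= a3 <= a1
    a4 = rng.uniform(0.0, 4.0, n_prior)
    vol = np.empty((len(species), n_prior))
    for i, s in enumerate(species):                   # s: dict with keys Y, A, Z, B, J
        C = correction((a1, a2, a3, a4), s['A'], s['Z'], T)
        vol[i] = C * free_volume(s['Y'], Y_p, s['A'], s['Z'], s['B'], s['J'],
                                 T, Rnp, v_surf, dv)
    vbar = vol.mean(axis=0)                           # average volume, Eq. (vaverage)
    logL = -np.sum((vol - vbar) ** 2, axis=0) / (2.0 * vbar**2)   # Eq. (likely)
    w = np.exp(logL - logL.max())
    w /= w.sum()
    mean = np.sum(w * vbar)
    return mean, np.sqrt(np.sum(w * (vbar - mean) ** 2))
\\end{verbatim}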
\nSimilar to Fig.\\ref{fig3}, we show in Fig.\\ref{fig4} the prior and posterior distribution of the $\\alpha$ chemical constant $K_c(4,2)$ in the same velocity bins analyzed in Fig.\\ref{fig3}, while Fig.~\\ref{fig5} shows the average and variance of the chemical equilibrium constants for all the clusters as\na function of the density for the three different experimental systems.\nComparing the uncorrected (labelled \"P(C=0)\") and corrected (labelled \"Ppost\") distributions in Fig.\\ref{fig4}, \nwe can see that the correction to the ideal gas hypothesis leads to a systematic decrease of the chemical constants with respect to the ideal gas assumption (\\ref{eq:vf1}), and an increased dispersion. The effect is higher at lower density, but a shift as important as a factor 10 is observed also in the highest velocity bin, corresponding to the highest density.\n\n\n\\begin{figure\n \\begin{tabular}{cc}\n\\includegraphics[width=0.7\\textwidth]{fig4.eps} \n \\end{tabular}\n \\caption{ The prior (purple) and posterior (green) probability distributions of the chemical equilibrium constant of the $\\alpha$ cluster for the bins with $v_{surf}=4.1$ cm\/ns (left) and $v_{surf}=5.9$ cm\/ns (right) for the $^{124}$Xe$+^{124}$Sn system. In red, the probabilities without the correction, $P(C=0)$, are also shown. }\n\\label{fig4}\n\\end{figure} \n\n\n\n\\begin{figure\n \\begin{tabular}{cc}\n \\includegraphics[width=0.8\\textwidth]{fig5.eps}\n \\end{tabular}\n \\vspace*{-0.8cm}\n\\caption{(Color online) The chemical equilibrium constant of all the clusters\n as a function of the density for the three experimental systems. \n The error bars are due to the correction and the experimental\n errors. The dashed lines are the ideal gas limit given by Eq.~(\\ref{eq:ideal}). The grey band shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig5}\n\\end{figure} \n\n\n\\begin{figure\n \\begin{tabular}{cc}\n\\includegraphics[width=0.9\\textwidth]{fig6.eps}\n \\end{tabular}\n \\vspace*{-0.8cm}\n\\caption{Symbols: double isotope ratio temperature $T_{HHe}$ (black), global proton fraction (multiplied by a factor of 10) (purple), baryon density (multiplied by a factor of 100) (green) as a function of the experimental quantity $v_{surf}$ for the three experimental systems. The uncertainties reflect both the correction and the experimental errors. The solid lines report the ideal gas limit, and were obtained from eq. (\\ref {eq:vaverage}) with eq. (\\ref{eq:vf1}). The grey band shows the area where\ndata might be contaminated by emission from the spectator source.} \n\\label{fig:thermo}\n\\end{figure}\n\n\n\nThe chemical equilibrium constants of all the clusters measured\nin the three different experimental data sets are displayed in Fig.\\ref{fig5}. The dashed lines show the ideal gas limit, given by Eq.(\\ref{eq:ideal}), which, just like in Fig.~\\ref{fig:start}, give incompatible values with the experimental data. For the corrected case, we can see that the results of all the three systems almost perfectly overlap, confirming the expectation that chemical constants do not depend on the proton fraction of the system. It would be interesting to check this point with a larger proton-neutron asymmetry.\n\n\\begin{figure*\n \\begin{tabular}{cc}\n\\includegraphics[width=1.\\textwidth]{fig7.eps}\n \\end{tabular}\n \\vspace*{-1.2cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system. 
Average volume estimated from the different \n clusters and standard deviations as a function of $v_{surf}$ for three different functional forms of the correction $\\Delta_{AZ}$. The uncertainties are due to both the correction and the experimental errors. The grey band shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig:corrections-V}\n\\end{figure*} \n\nThe thermodynamic conditions explored in the experiments are displayed in Fig.\\ref{fig:thermo}. The temperature is here estimated through the double isotope formula Eq.(\\ref{eq:temperature}), and is indicated as $T_{HHe}$.\nWe can see that the different collisions explore very similar trajectories in the $(T_{HHe},\\rho)$ plane, the only difference being in the global proton fraction, as expected. \nComparing with the results of Ref.~\\cite{BougaultJPG19}, where the ideal gas equation Eq.(\\ref{eq:vf1}) was used to estimate the volume following Ref.~\\cite{QinPRL108}, (full lines in Fig.\\ref{fig:thermo}), we can see that the existence of an in-medium correction goes in the direction of increasing the density, and the effect is the same for the three systems.\n\n\\begin{figure*\n \\begin{tabular}{cc}\n\\includegraphics[width=1.\\textwidth]{fig8.eps}\n \\end{tabular}\n \\vspace*{-1.2cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system. Posterior estimation of the average and standard deviation of the parameters $a_i$ as a function of $v_{surf}$ for three different functional forms of the correction $\\Delta_{AZ}$. The uncertainties are due to both the correction and the experimental errors. The grey band shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig:corrections-ai}\n\\end{figure*}\n\nStill, it is important to stress that the temperature is evaluated with the Albergo $T_{HHe}$ thermometer of Eq.(\\ref{eq:temperature}). As we have discussed in Section \\ref{sec:form}, this expression corresponds to the true thermodynamical temperature $T$ only if the in-medium corrections to the ideal gas of clusters expression Eq.~(\\ref{eq:mult}) cancel \nin the double ratio. This is in principle not the case if the correction does not scale linearly with the particle numbers. \nOur Bayesian analysis does not allow us to determine the deviation of Eq.(\\ref{eq:temperature}) from the true thermodynamic temperature, and this can only be done in the framework of a specific model.\nOne such model will be considered in Section \\ref{sec:model}.\n\n\n \n\n\\begin{figure*\n \\begin{tabular}{cc}\n\\includegraphics[width=1.\\textwidth]{fig9.eps}\n \\end{tabular}\n \\vspace*{-1.2cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system. $\\alpha-$particle chemical equilibrium constants and standard deviations as a function of $v_{surf}$ for three different functional forms of the correction $\\Delta_{AZ}$. The full lines in the left and middle panels correspond to the average right panel results. The uncertainties are due to both the correction and the experimental errors. The grey band shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig:corrections-Kc}\n\\end{figure*} \n\n\n\n\n\n\n\n\\subsection{Different parameter sets for the in-medium effect correction} \\label{sec:correction}\n\nThe correction given by Eq.(\\ref{eq:DeltaAZ}) is a four-parameter set, function of the number of nucleons $A$ and isospin $I$. \nThis functional form has a certain degree of arbitrariness, and the expression is not unique. 
\n In principle, the correction can depend on all the good quantum numbers of the clusters, namely $A$, $I$, and the charge $Q$. However, the volumes extracted from $^3$H and $^3$He are fully compatible already in the uncorrected data, meaning that, whatever the in-medium correction, it must be the same for both clusters. As a consequence, the correction cannot depend on $Q$. The compatibility of the volumes for $^3$H and $^3$He also implies that we have four data points in each $v_{surf}$ bin to fit our four parameters. This means that in practice we are extracting independent corrections for the different particle species. This is done on purpose, to insure that we are not introducing unjustified hypotheses on the functional dependence of the in-medium correction. However, the drawback is that we cannot attribute a clear physical meaning to the functional form we have introduced. To progress on this point, the analysis should be extended to other cluster species, to see if a four-parameter set is still enough to describe the whole set of data. In the absence of this extra experimental information, we can see if a reduced number of parameters is sufficient to describe the present data, and if \n varying the number of parameters we can get different results for the equilibrium constants. To this aim, we have introduced three different parameterisations, employing two, three and four parameters, respectively, \n and the results are reported in Figs.~\\ref{fig:corrections-V}, \\ref{fig:corrections-ai}, and \\ref{fig:corrections-Kc}. Fig. ~\\ref{fig:corrections-V} shows the posterior first (data point) and second (error bar) moment of the volume distribution. Fig.~\\ref{fig:corrections-ai} shows the optimal values obtained for the parameters, and Fig.~\\ref{fig:corrections-Kc} presents the posterior chemical equilibrium constant for the $\\alpha-$particle. \n \nThe analysis of Figs.~\\ref{fig:corrections-V} and \\ref{fig:corrections-Kc} reveals\n that, whatever the hypothesis on the functional form, we get fully compatible results for the average volumes and chemical constants, which implies compatible results for all observables. \n In particular, in the left panels of Figs.~\\ref{fig:corrections-V}, \\ref{fig:corrections-ai}, and \\ref{fig:corrections-Kc}, a simple two-parameters prescription $\\Delta_{AZ}=a_1 A^{a_2}$ is employed. We can see that the compatibility between the different volume estimations is comparable to the one which is obtained with the full four-parameters formula presented in the right panels. This means that we do not have a compelling evidence that the in-medium correction is isospin dependent.\nThe comparison between the extracted values of the parameters with the different parametrizations is also instructive. Within the simplest prescription shown in the left panel of Fig.~\\ref{fig:corrections-ai}, the value of the $a_2$ parameter is compatible with zero, meaning that the optimal solution is roughly compatible with a constant, that is an effective correction $\\Delta_{AZ}$ that depends on the thermodynamic condition but does not depend on the particle species.\nThis is confirmed by the comparison between the middle panels of Figs.~\\ref{fig:corrections-V}, \\ref{fig:corrections-ai}, and \\ref{fig:corrections-Kc}, where an isospin independent three-parameter formula $\\Delta_{AZ}=a_1 A^{a_2}+a_3$ is tested. We can see that $a_2$ is still compatible with zero, and the sum $a_1+a_3$ is roughly equal to the $a_1$ value obtained in the simpler prescription. 
This means that the functional dependence on $A$ of the correction appears relatively robust, and confirms that an $A$-independent correction seems to be sufficient.\nIt is important to stress that this interesting experimental finding \ndoes not imply that the in-medium binding energy shift of light clusters is really a constant; it only shows that the effect does not have any clear monotonic dependence on the baryonic number in the range $A=2-6$. Indeed the posterior volume dispersion is never negligible (see Fig.\\ref{fig3}), suggesting that a dependence on the nuclear species cannot be excluded. Beyond mean-field theoretical calculations are needed to settle this point \\cite{Roepke2015}.\n\nFinally, the right panels of Figs.\\ref{fig:corrections-V}, \\ref{fig:corrections-ai}, and \\ref{fig:corrections-Kc} show the complete four-parameter formula $\\Delta_{AZ}=a_1 A^{a_2}+a_3|I|^{a_4}$ that will be used as the fiducial expression in the successive figures. We can see that the independence of the correction on the cluster size is again confirmed, while the data point towards an (approximately quadratic) isospin dependence. As already mentioned above, this is however not a conclusive argument because we have as many parameters as particle species, and a confirmation of the possible isospin dependence of the in-medium effects would need extra equilibrium constants with different isospin values.\n \n \n \\subsection{Effect of the radius of the clusters} \\label{sec:radius}\n \nAs already observed from Fig.\\ref{fig:thermo}, allowing for the possibility of in-medium corrections \nin each $v_{surf}$ bin, \nglobally leads to smaller volumes (and therefore higher global baryonic densities), with respect to the previous works Refs.\\cite{QinPRL108,BougaultJPG19}, where the hypothesis of an ideal gas of clusters was explicitly made in the data analysis. This density increase stems from the fact that we expect from basic theoretical arguments \\cite{Roepke2015} that the in-medium corrections should reduce the effective binding. This corresponds to a positive $\\Delta_{AZ}$ (see Eq.(\\ref{eq:correction})), and consequently smaller volumes (see Eq.(\\ref{eq:vfnew})). However, in order to quantitatively extract the density value, \nthe proper volume of the clusters has to be added to the free volume, see Eq.(\\ref{eq:vt}), which leads to an extra uncertainty in the analysis.\n\nIn this work we have chosen for the radius of each cluster the\n experimental values given in Ref.~\\cite{Angeli2013}, but in previous\n works the authors have chosen a fixed value for the radius parameter of 1.3 fm. In this section we want to discuss the effect that this might\n have on both the thermodynamics and the chemical equilibrium\n constants. The temperature and\n proton fraction are not affected by the value of $r_0$, but, as seen in Fig.~\\ref{fig:r0-rho-Vsurf}, there is a\n clear effect on the total baryonic density, with smaller values of\n $r_0$ giving larger densities. 
Due to the influence of this\n parameter, we have decided to take in our data analysis\n the experimental radius of each cluster $R_{AZ}$ taken from\nRef.~\\cite{Angeli2013}, and the results are also shown in the same figure.\nThe use of the experimental value $R_{AZ}$ gives rise to\nthe smallest densities.\nIn Fig.~\\ref{fig:r0-kc4he}, the chemical equilibrium constant for the\n $\\alpha-$particle is plotted as a function of $v_{surf}$ (left panel) and total\n baryonic density (right panel), considering the total volume $V_T$ determined by\n $R_{AZ}$ and by {fixed} $r_0$. For the lowest\n densities, all calculations coincide. At the other $v_{surf}$\n extreme, corresponding to the largest densities, $R_{AZ}$\n estimates smaller maximum densities, and, for a given density, smaller\n equilibrium constants.\n \n \n \\begin{figure}[!htbp]\n \\begin{tabular}{cc}\n\\includegraphics[width=0.7\\textwidth]{fig10.eps}\n \\end{tabular}\n \\vspace*{-0.8cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: The density as a\n function of the experimental quantity $v_{surf}$ for the four\n different radii considered. The uncertainties reflect both the\n correction and the experimental errors. The grey\nband shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig:r0-rho-Vsurf}\n\\end{figure} \n\n\n \\begin{figure}[!htbp]\n \\begin{tabular}{cc}\n\\includegraphics[width=1\\textwidth]{fig11.eps}\n \\end{tabular}\n \\vspace*{-0.7cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: The chemical equilibrium constant for\n the $\\alpha-$particle as a function of $v_{surf}$ (left) and as a\n function of the density (right) for the four\n different radii considered. The uncertainties reflect both the\n correction and the experimental errors. The grey\nband shows the area where data might be contaminated by emission from the spectator source.}\n\\label{fig:r0-kc4he}\n\\end{figure} \n\n\\section{Comparison with a theoretical model} \\label{sec:model}\n\n \n\\begin{figure\n \\begin{tabular}{c}\n \\includegraphics[width=0.8\\textwidth]{fig12.eps}\n \\end{tabular}\n \\vspace*{-0.8cm}\n\\caption{$^{124}$Xe$+^{124}$Sn system: Correlation between the input temperature, $T$, of the theoretical model for different $g_{si}$ at $(\\rho,y_p)_{\\rm exp}$, and the isotopic temperature, $T_{HHe}$ evaluated from the theoretical cluster densities, using experimental values for the density and proton fractions. The dashed line shows $T=T_{\\rm HHe}$. }\n\\label{fig:temp}\n\\end{figure} \n\n\n\\begin{figure*\n\\includegraphics[width=0.9\\textwidth]{fig13.eps}\n\\vspace*{-0.8cm}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: $\\alpha$ chemical equilibrium constants as a function of the density, calculated with the ideal gas prescription for the volume (lower set of data with purple lines) and uncertainties due to experimental errors, and with the corrected one (upper set with black lines). In the last case, the uncertainties are due to both the correction and the\n experimental errors. The homogeneous grey and hatched green bands correspond to the chemical equilibrium\n constants from a calculation \\cite{PaisPRC2019} where we consider homogeneous matter with five light clusters, calculated at the average value of ($T$, $\\rho_{\\rm exp}$, $Y_{p_{\\rm exp}}$), and\n considering different cluster-meson scalar coupling constants $g_{s_i} = x_{s} A_i g_s$. The solid line corresponds to the result with $x_s=1$. The dashed line is the ideal gas result given by Eq. 
(\\ref{eq:ideal}). The color code associated with the data points represents the temperature in MeV. }\n\\label{fig:alphachem}\n\\end{figure*} \n\n\\begin{figure*}\n\\includegraphics[width=0.8\\textwidth]{fig14.eps}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: $^3$He (top) and $^3$H (bottom) chemical equilibrium constants as a function of the density, calculated with the ideal gas prescription for the volume (lower set of data) and uncertainties due to experimental errors, and with the corrected one (upper set). In the last case, the uncertainties are due to both the correction and the experimental errors. The homogeneous grey and hatched green bands correspond to the chemical equilibrium\n constants from a calculation \\cite{PaisPRC2019} where we consider\n homogeneous matter with five light clusters, calculated at the\n average value of ($T$, $\\rho_{\\rm exp}$, $Y_{p_{\\rm exp}}$), and\n considering different cluster-meson scalar coupling constants $\n g_{s_i} = x_s A_i g_s$. The color code associated with the data\n points represents the temperature in MeV. }\n\\label{fig:3He3Hchem}\n\\end{figure*} \n\n\\begin{figure*}\n\\includegraphics[width=0.8\\textwidth]{fig15.eps}\n\\caption{(Color online) $^{124}$Xe$+^{124}$Sn system: $^6$He (top) and $^2$H (bottom) chemical equilibrium constants as a function of the density, calculated with the ideal gas prescription for the volume (lower set of data) and uncertainties due to experimental errors, and with the corrected one (upper set). In the last case, the uncertainties are due to both the correction and the experimental errors. The homogeneous grey and hatched green bands correspond to the chemical equilibrium\n constants from a calculation \\cite{PaisPRC2019} where we consider\n homogeneous matter with five light clusters, calculated at the\n average value of ($T$, $\\rho_{\\rm exp}$, $Y_{p_{\\rm exp}}$), and\n considering different cluster-meson scalar coupling constants $\n g_{s_i} = x_s A_i g_s$. The color code associated with the data\n points represents the temperature in MeV. }\n\\label{fig:6He2Hchem}\n\\end{figure*} \n \n\n\nIn Ref.~\\cite{PaisPRC97}, a novel approach for the inclusion of\nin-medium effects in the equation of state for warm stellar matter\nwith light clusters was introduced. This model includes a\nphenomenological modification in the scalar cluster-meson coupling,\nand an extra term in the effective mass of the clusters,\nderived in the Thomas-Fermi approximation, which gives rise to\neffects similar to the excluded volume approach.\nThe scalar-meson coupling acting on nucleons bound in cluster $i$ with\nmass number $A_i$ is defined as $g_{s_i} = x_s A_i g_s$, with $g_s$ the\nscalar-meson coupling to nucleons in homogeneous matter and $x_s$ a free parameter. This parameter was fitted in the low-density regime to the Virial EoS, and a value of $x_s=0.85\\pm 0.05$ was found. \n In that work, only four light clusters were considered, $^2$H,\n $^3$H, $^3$He, and $^4$He. Later, in Ref.~\\cite{PaisPRC2019},\n heavier light clusters, with $A_{\\rm max}=12$, together with an average heavy cluster (pasta phase), were added to the model, as it is expected that heavier unstable clusters also form in neutron-rich warm stellar matter. In both works, we compared the chemical equilibrium constants obtained with our model with the NIMROD data \\cite{QinPRL108} analyzed assuming an ideal gas expression for the determination of the nuclear density. 
\nA satisfactory agreement was obtained for all clusters but the deuteron using the scalar cluster-meson coupling parameter $x_s = 0.85 \\pm 0.05$.\n\nComparing the model of Refs.~\\cite{PaisPRC97,PaisPRC2019} with this new analysis will allow to determine the value of the in-medium parameter $x_s$ in a more consistent way, and at the same time will provide an estimate of the effect of the correction we have introduced, with respect to the analysis method of Refs.\\cite{QinPRL108,BougaultJPG19}. \n\nIn order to make this comparison, the thermodynamic variables\n$(T,\\rho,y_p)$ corresponding to each $v_{surf}$ bin must be\nspecified. Among those variables, the density $\\rho$ was evaluated\nin a model independent way in Section \\ref{sec:analysis} under the\nconstraint that the multiplicities of the different clusters at a\ngiven time of the emission, as measured by the $v_{surf}$ variable,\nshould correspond to the same density. On the other hand, an ideal gas\nassumption was used to determine both the temperature\nvia Eq.(\\ref{eq:temperature}), and the proton fraction\nvia Eqs.(\\ref{eq:yp}),(\\ref{eq:rnp}). We have already discussed the fact\nthat, in the absence of any in-medium correction, the volume extracted\nfrom the $^3$He and $^3$H multiplicities already coincide for all\n$v_{surf}$ bins. This is a strong indication that the associated\nin-medium effects must be very close, meaning that they necessarily\n cancel in the $y_p$ estimation\nEq.(\\ref{eq:rnp}). Moreover, we have also observed that the chemical\nconstants appear to be largely independent of the isospin content of the source (see Fig.\\ref{fig5}). \n An extra confirmation of this hypothesis is given by the fact that we have checked that eq.(\\ref{eq:rnp})\nis very well verified in the theoretical model, whatever the value of $x_s$.\nFor those reasons, we use the experimentally determined $y_p$ value as input of the theoretical model. \n\nConcerning the temperature, we fix it in each $(\\rho,y_p)$ point by imposing that the isotopic thermometer Eq.(\\ref{eq:temperature}) evaluated in the model, correctly reproduces the measured $T_{HHe}$ value.\n \n\n \nIn Fig.~\\ref{fig:temp}, we plot the relation between $T$, the input temperature of the theoretical model, and the double ratio thermometer response $T_{HHe}$, given by Eq.(\\ref{eq:temperature}) using the theoretical particle yields. The density and proton fractions in each point are the ones estimated from the data. Different values for the $x_s$ parameter are considered. \nA deviation from the $T=T_{\\rm HHe}$ limit, given by the dashed line,\nis observed for the largest temperatures, when the density is larger and medium effects more important, and for the highest values of $x_s$, corresponding to larger fractions of clusters.\n This is consistent with the fact that the double ratio thermometer $T_{HHe}$ corresponds to the thermodynamic temperature only in the limit of an ideal gas. However we can see that the correlation between $T$ and $T_{HHe}$ is very good, the in-medium effects largely cancel in the double ratio, and the bias does not exceed 5.75\\%. \n Anticipating our results, the maximum bias observed for the coupling that best reproduces\n the experimental equilibrium constants, $x_s=0.92$, never exceeds 5.75 \\%. 
We have checked that such a bias does not change the experimental results within the error bars.\n\n \n\n\nFrom this correlation we can extract, for each $x_s$ value, the thermodynamic input sets $(T,\\rho,y_p)$ that are compatible with the measured $T_{HHe}$. The resulting chemical constants are compared to the\nexperimental ones in Figs.~\\ref{fig:alphachem}, \\ref{fig:3He3Hchem}, and \\ref{fig:6He2Hchem}.\n\n\\begin{figure}\n \\begin{tabular}{cc}\n\\includegraphics[width=1\\textwidth]{fig16.eps} \n \\end{tabular}\n\\caption{The mass fractions of clusters as a function of the density from a calculation \\cite{PaisPRC2019} where we consider homogeneous matter with four light clusters, and considering different cluster-meson scalar coupling constants $ g_{s_i} = x_s A_i g_s$, with $T=4$ (left) and 10 (right) MeV. The dashed lines are the Virial EoS.}\n\\label{fig:virial}\n\\end{figure} \n\n The grey (green hatched) regions are the theoretical\n calculations performed at different $x_{s}$, and the bands represent the\n experimental quantities. The experimental data points are\n represented with different colors that indicate the respective temperature, which ranges from 5 to 10 MeV.\n Very similar results are obtained for all the different experimental\nentrance channels and are therefore not shown; see Fig.~\\ref{fig5}.\n\nTo appreciate the effect of the correction, the results obtained in the previous analysis \\cite{BougaultJPG19}, where the ideal gas hypothesis was used to extract the source volume, are displayed as a lower set of data. We can remark again that the correction induced by this new analysis leads to increased values of both the density and the chemical constants. \nThe lower hatched bands in the figures correspond to the calculation of Ref.\\cite{PaisPRC97}, where the in-medium corrections were optimized to the uncorrected data of Ref.\\cite{QinPRL108}.\n\n We can see that we need higher values of $x_s$ in order to fit the data, with respect to the results of Refs.~\\cite{PaisPRC97} and \\cite{PaisPRC2019}, and an optimal value can\nbe extracted as $x_s=0.92\\pm 0.02$, which is seen to produce an\nadequate reproduction of the whole set of experimental data. This means\n that the nucleons in the clusters behave more like unbound nucleons, which correspond to $x_s=1$, than previously deduced.\n\nStill, we can observe a slight mass effect which does not seem to be fully accounted for by the present calculation. Indeed, the optimal $x_s$ value tends to overestimate the equilibrium constant of the heaviest species ($^6$He) and underestimate the one of the lightest cluster ($^2$H). This might suggest that the hypothesis of the model, namely the fact that the coupling to the meson fields scales linearly with the number of nucleons bound in each cluster, may not be fully correct, and a more ab-initio treatment would be in order. This point is left for future work.\n\n\nIt is also interesting to observe that the uncorrected data set of Ref.\\cite{BougaultJPG19} (lower data set in Figs.~\\ref{fig:alphachem} - \\ref{fig:6He2Hchem}) appears only marginally compatible with the previous estimation $x_s=0.85\\pm 0.05$.\nIndeed, the estimation $x_s=0.85\\pm 0.05$, which nicely reproduced the uncorrected Qin et al. data, can describe neither the deuteron nor the $A=3$ isobars of this new data set. Conversely, the introduction of in-medium effects allows a simultaneous reproduction of the whole data set. 
To confirm the new value of the universal coupling $x_s=0.92\\pm 0.02$, and at the same time test the compatibility of the different data sets, it would be very interesting to apply this new analysis to the NIMROD data, so as to verify whether the same value for the $x_s$ parameter is able to reproduce both sets of data, once the source volume is correctly estimated from the Bayesian analysis.\n\n\nAt very low density, some constraints on the in-medium binding energy shifts can be obtained from the ab-initio virial equation of state. The results of our model, with the value of $x_s$ optimized on the new analysis of the chemical constants, are compared to the virial constraint in Fig.~\\ref{fig:virial}. We can see that the new estimation of the $x_s$ parameter is still within the virial EoS limits. A larger scalar-meson coupling has\n strong implications for the dissolution density of the clusters: \n clusters will survive at larger densities.\n\n\n\n\n\\section{Conclusions} \\label{sec:conc}\n\n\n\nA new analysis, where in-medium effects were included via a correction\nto the internal partition function, was performed on the experimental data\non the formation of nuclear clusters in heavy-ion collisions\npresented in Ref.~\\cite{BougaultJPG19}. This was done by including a\n global effective correction to the binding energy of the cluster.\n\nUsing a Bayesian analysis, the probability distribution for the parameters of the in-medium\ncorrection was determined by imposing that the free volumes obtained for the\ndifferent clusters in a given source velocity bin be\ncompatible. The main consequence of including the in-medium\ncorrection was to reduce the free volume, and, as a\nconsequence, increase the density.\n\nIn the absence of any in-medium correction,\nonly the volumes extracted from the $^3$He and $^3$H multiplicities \ncoincide for all source velocity bins. This indicates that the associated\nin-medium effects are similar and cancel in the calculation of the\nproton fraction, and suggests that, for a given thermodynamic condition and a given cluster mass, the correction should not \nstrongly depend on the charge of the clusters. Besides, the equilibrium constants obtained were shown to\nbe quite independent of the isospin content of the three systems that\nwere analysed. It would be interesting to check this point with a larger isospin\nrange.\n\nDifferent functional forms for the correction as a function of the mass and isospin of the cluster were considered, and we observed that the results, both\n for the thermodynamic conditions and chemical equilibrium constants,\n were similar. \n\n A comparison to a theoretical model \\cite{PaisPRC97,PaisPRC2019} was\n also done. The modification of the experimental chemical constants \ndue to the better evaluation of the source density leads to a modified estimation for\nthe magnitude of the scalar-meson\n coupling $x_s$ for nucleons bound in clusters: the optimal value extracted is $x_s = 0.92 \\pm\n 0.02$, larger than the one found in \\cite{PaisPRC97},\n $x_s=0.85\\pm 0.05$. Values of $x_s$ closer to unity mean that\n the effect of the medium is less important than previously estimated, and that clusters will melt at larger densities. 
In turn, this means that the contribution of clusters in the neutrino opacity in the deleptonization phase of the proto-neutron star should be more important than previously considered \\cite{Fischer}.\n\n\n\nIn a future work, it would be extremely interesting to calculate explicitly the neutrino opacity in the relevant thermodynamic conditions. Prior to that, it will be important to perform a new analysis of the experimental data of Ref.~\\cite{QinPRL108}, including the possibility of in-medium effects in the determination of the effective volume in the same spirit as the one presented in this paper, in order to check the consistency of the different data sets, and to settle the model dependence of the results.\n\n\\ack{\nThis work was partly supported by the FCT (Portugal) Projects No. UID\/FIS\/04564\/2019, UID\/FIS\/04564\/2020 and POCI-01-0145-FEDER-029912, and by PHAROS COST Action CA16214. H.P. acknowledges the grant CEECIND\/03092\/2017 (FCT, Portugal). She is very thankful to F.G. and her group at LPC (Caen) for the kind hospitality during her stay there within a PHAROS STSM, where this work started. This work is part of the INDRA collaboration program. We thank the GANIL staff for providing us the beams and for the technical support during the experiment. We acknowledge support from R\\'egion Normandie under RIN\/FIDNEOS.}\n\n\n\n\\section*{References}\n\\bibliographystyle{iopart-num}\n\\thebibliography{50}\n\n\\bibitem{Sumiyoshi2008} Sumiyoshi K and R\\\"opke G 2008 Phys. Rev. C {\\bf 77} 055804\n\\bibitem{Fischer2014} Fischer T, Hempel M, Sagert I, Suwa Y and Schaffner-Bielich J 2014 Eur. Phys. J. A {\\bf 50} 46\n\\bibitem{Furusawa2013} Furusawa S, Nagakura H, Sumiyoshi K and Yamada S 2013 Astrophys. J. {\\bf 774} 78\n\\bibitem{Furusawa2017} Furusawa S, Sumiyoshi K, Yamada S and Suzuki H 2017 Nucl. Phys. A {\\bf 957} 188\n\\bibitem{Arcones2008} Arcones A, Mart\\'inez-Pinedo G, O'Connor E, Schwenk A, Janka H-T, Horowitz C J and Langanke K 2008 Phys. Rev. C {\\bf 78} 015806\n\\bibitem{Fischer2016} Fischer T, Mart\\'inez-Pinedo G, Hempel M, Huther L, R\\\"opke G, Typel S, and Lohs A 2016 EPJ Web Conf. {\\bf 109} 06002\n\\bibitem{Fischer2017} Fischer T, Bastian N-U, Blaschke D, Cierniak M, and Hempel M 2017 Publ. Astron. Soc. Austral. {\\bf 34} 67\n\\bibitem{Rosswog2015} Rosswog S 2015 Int. J. Mod. Phys. {\\bf D 24} 1530012\n\\bibitem{Oertel2017} Oertel M, Hempel M, Kl\\\"ahn T, and Typel S 2017 Rev. Mod. Phys. {\\bf 89} 015007\n\\bibitem{Roepke2015} R\\\"opke G 2015 Phys. Rev. C {\\bf 92} 054001\n\\bibitem{HempelPRC91} Hempel M, Hagel K, Natowitz J, R{\\\"o}pke G and Typel S 2015 Phys. Rev. C {\\bf 91} 045805\n\\bibitem{PaisPRC97} Pais H, Gulminelli F, Provid{\\^e}ncia C and R{\\\"o}pke G 2018 Phys. Rev. C {\\bf 97} 045805 \n\\bibitem{QinPRL108} Qin L, Hagel K, Wada R, Natowitz J B \\textit{et al.} 2012 Phys. Rev. Lett. {\\bf 108} 172701\n\\bibitem{BougaultJPG19} Bougault R, Bonnet E, Borderie B, Chbihi A \\textit{et al.} 2020 J. Phys. G: Nucl. Part. Phys. {\\bf 47} 025103\n\\bibitem{Pais20-PRL} Pais H, Bougault R, Gulminelli F, Provid{\\^e}ncia C \\textit{et al.} 2020 Phys. Rev. Lett. {\\bf 125} 012701 \n\\bibitem{DasGuptaPR72} Das Gupta S and Mekjian A Z 1981 Phys. Rep. {\\bf 72} 131\n\\bibitem{virial} C.J. Horowitz, A. Schwenk, Phys. Lett. B {\\bf 638}, 153 (2006); C.J. Horowitz, A. Schwenk, Nucl. Phys. A {\\bf 776}, 55 (2006); M. D. Voskresenskaya and S. Typel, Nucl. Phys. 
A {\\bf 887}, 42 (2012)\n\\bibitem{PaisPRC2019} Pais H, Gulminelli F, Provid{\\^e}ncia C and R{\\\"o}pke G 2019 Phys. Rev. C {\\bf 99} 055806\n\\bibitem{BougaultPRC97} Bougault R \\textit{et al.} 2018 Phys. Rev. C {\\bf 97} 024612\n\\bibitem{WangPRC72} Wang J \\textit{et al.} 2005 Phys. Rev. C {\\bf 72} 024603 \n\\bibitem{Angeli2013} Angeli I and Marinova K P 2013 Atomic Data and Nuclear Data Tables {\\bf 99} 69\n\\bibitem{AlbergoNCA89} Albergo S, Costa S, Constanzo E and Rubbino A 1985 Nuovo Cimento A {\\bf 89} 1\n\\bibitem{HagelPRC62} Hagel K, Wada R, Cibor J, Lunardon M \\textit{et al.} 2000 Phys. Rev. C {\\bf 62} 034607\n\\bibitem{KowalskiPRC75} Kowalski S, Natowitz J B, Shlomo S, Wada R \\textit{et al.} 2007 Phys. Rev. C {\\bf 75} 014601 \n\\bibitem{RopkeNPA867} R\\\"opke G 2011 Nucl. Phys. A {\\bf 867} 66\n\\bibitem{TypelPRC81} Typel S, R\\\"opke G, Kl\\\"ahn T, Blaschke D and Wolter H H 2010 Phys. Rev. C {\\bf 81} 015803\n\\bibitem{Fischer} Fischer T, Hempel M, Sagert I, Suwa Y and Schaffner-Bielich J 2014 Eur. Phys. J. A {\\bf 50} 46 \n\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWith the absence of any signal of new physics at the large hadron\ncollider (LHC) at present energies, searches for physics beyond the\nStandard Model (BSM) are based on the ability to make very precise\ntheoretical predictions within the Standard Model (SM) and to look\nfor possible deviations between experimental observations and\ntheoretical predictions, as a hint of new physics, within estimated\nuncertainties. In order to constrain the new physics model\nparameters, one needs to also compute the BSM signals to the same level\nof theoretical precision as the SM and compare with the observations\nmade at the LHC. Quantum Chromodynamics (QCD) corrections are large at the LHC and inclusion of\nhigher order terms reduces the theoretical uncertainties substantially.\nMany SM processes have been measured at the LHC and have cross sections\nthat are in excellent agreement with higher order QCD predictions. \nThis has helped\nin the discovery of the Higgs boson by the ATLAS \\cite{Aad:2012tfa} and CMS\n\\cite{Chatrchyan:2012xdj}\ncollaborations at the LHC and hence the measurement of an important fundamental\nparameter of the SM, the Higgs mass $m_H$ (see \n\\cite{Harlander:2002wh,Anastasiou:2002yz,Ravindran:2003um}). \nPrecise measurement of the\nHiggs mass is essential for the understanding of the stability of\nthe electroweak vacuum \\cite{Degrassi:2012ry}.\n\nIn spite of the fact that the SM is in excellent agreement with \nexperimental observations, we know that there are compelling\nreasons to go beyond the SM. In the context of the discovery\nof a boson at 125 GeV in the di-photon channel, models with\nspin-2 were also necessary to ascertain the spin and parity of\nthe discovered boson. In the meantime, the bounds on conventional models\nsuch as the Randall-Sundrum models with warped extra dimensions~\\cite{Randall:1999ee},\n where\nthe spin-2 couples universally\nto the SM energy momentum tensor, were much higher. \nA universally-coupled spin-2 particle is heavily constrained\n\\cite{Khachatryan:2016yec,Aaboud:2017eta}.\nModels\nwith non-universal coupling of a spin-2 to the SM were hence\na suitable alternative. In such a model, the spin-2 couples to two\nsets of gauge invariant SM tensorial operators with different\ncoupling strengths, which are\nnot individually conserved.
The universal coupling would correspond\nto the coupling strength being equal and the tensorial operators\nadding up to the conserved energy momentum tensor. Models\nwith non-universal coupling were incorporated in tools like\nHiggs characterisation \\cite{Artoisenet:2013puc} to NLO in QCD. \nNon-universal couplings \nlead to additional challenges: (a) additional UV renormalisation was\nneeded, (b) in the IR sector, additional double and single pole\nterms had to be cancelled with the counterparts from real emission \nprocesses and mass factorisation counterterms, thus demonstrating\nthe IR factorisation to NLO for non-universal coupling \\cite{Artoisenet:2013puc}.\nNote that we take this for granted in perturbative QCD (pQCD) and for universal coupling it is guaranteed\nby the conserved energy-momentum tensor. \n\nRecently, the UV structure of non-universal coupling up to three\nloop order in QCD was investigated \\cite{Ahmed:2016qjf}, wherein the spin-2\nfields couple to two sets of gauge invariant tensorial operators\nconstructed out of the SM fields (with different coupling strengths).\nThese rank-2 operators are unfortunately not conserved, unlike\nthe energy-momentum tensor of QCD~\\cite{Nielsen:1977sy}. \nConsequently, both these operators as well as the couplings get\nadditional UV renormalisation order by order in perturbation theory.\nExploiting the universal IR structure of QCD amplitudes\neven in the case of a non-universal spin-2 coupling, on-shell\nform factors of these operators between quark and gluon states\nhave been computed. These are important ingredients for\ncomputing observables at the LHC in models with such interactions.\n\nFor universal coupling, depending on the geometry of extra dimensions,\n{\\em viz}.\\ large extra dimensions or warped extra dimension models,\nstudies have been extensively carried out up to higher orders in QCD in various\nchannels that are relevant for the LHC. In these models, the DY process\nhas been studied to NLO \\cite{Mathews:2004xp,Mathews:2005zs,Kumar:2006id}\nfor various observables. Di-vector boson final\nstates have been studied to NLO level in\n\\cite{Kumar:2008pk, Kumar:2009nn, Agarwal:2009xr, Agarwal:2009zg,\nAgarwal:2010sp, Agarwal:2010sn}.\nTo NLO+parton shower (PS) accuracy all the non-coloured di-final states have been studied\n\\cite{Frederix:2012dp, Frederix:2013lga, Das:2014tva} in the aMC@NLO\nframework. \nProduction of a generic spin-2 particle in association with coloured\nparticles, vector bosons and the Higgs boson has been studied in\n\\cite{Das:2016pbk} to NLO+PS accuracy. To the next higher order in QCD\nthe form factor of a spin-2 universally coupled to quarks and gluons up\nto two loops was computed in \\cite{deFlorian:2013sza}. Subsequently the\nnext-to-next-to-leading order (NNLO) computation in the threshold limit was done in \\cite{deFlorian:2013wpa}\nand finally the full NNLO computation in \\cite{Ahmed:2016qhu}.\nProduction of a spin-2 in association with a jet to full two-loop QCD\ncorrections has also been completed recently with the evaluation of\ngeneric spin-2 decaying to $g~ g~ g$ \\cite{Ahmed:2014gla} and \n$ q ~\\bar q ~g$ \\cite{Ahmed:2016yox}.\n\nThe di-lepton final state is the most studied and a very clean\nprocess at the LHC. In BSM scenarios the di-lepton signal could \nbe enhanced due to additional contributions from BSM intermediate\nstates that could couple to a di-lepton. For the universal\nspin-2 coupling the DY\nprocess has been evaluated up to NNLO in QCD. 
This involved\nvarious steps: to begin with, NLO corrections were evaluated\n\\cite{Mathews:2004xp}, followed by the two loop quark and\ngluon form factors \\cite{deFlorian:2013sza}, which led to the\ncomputation of NNLO QCD corrections to the graviton\nproduction in models of TeV-scale gravity, within the soft-virtual\napproximation \\cite{deFlorian:2013wpa}. Finally, the complete NNLO QCD corrections to the\nproduction of di-leptons at hadron colliders in large extra\ndimension models with spin-2 particles are reported in\n\\cite{Ahmed:2016qhu}. \n\nThe non-universal coupling of spin-2 to SM has been actively\nconsidered by the ATLAS Collaboration \\cite{Aad:2015mxa,Pedersen:2015jdh} to provide\nexclusion of several non-SM spin hypotheses. This analysis\nhas been done in the Higgs characterisation framework \\cite{Artoisenet:2013puc,Das:2016pbk} \nto NLO+PS accuracy. With the recent results \\cite{Ahmed:2016qjf} on the form factors of a massive spin-2\nparticle with non-universal couplings up to three loops, an NNLO computation is now\npossible. In this article we look at the phenomenological\nimplications of these models to NNLO at the LHC.\n\n\nThe paper is organised as follows. We discuss the effective action that\ndescribes how the spin-2 particle couples to the SM fields through two gauge\ninvariant operators with renormalisable couplings. Using this action, we\ncompute QCD radiative corrections to the production of a pair of leptons, in\nparticular their invariant mass distribution, up to NNLO level. A detailed\nphenomenological study on the impact of our results is presented for the\nLHC. Finally, we conclude. The relevant form factors are presented in the appendix \nand the mass factorised partonic cross sections are provided in an electronically readable form.\n\n\\section{Theoretical Framework}\n\\label{sec:Theory}\n\n\\subsection{Effective action}\n\nThe interaction part of the effective action, which describes the non-universal coupling of\nthe spin-2 field, denoted by $h_{\\mu\\nu}$, with those of QCD, consists of\ntwo gauge invariant operators, namely $\\hat {\\cal O}^G_{\\mu\\nu}$ and \n$\\hat {\\cal O}^Q_{\\mu\\nu}$, and is given by \n\\begin{equation}\n\\label{eq:action}\nS = -\\frac{1}{2} \\int d^4x ~ h^{\\mu\\nu}(x) \\left(\\hat \\kappa_G~ \\hat {\\cal O}^{G}_{\\mu\\nu} (x) \n\t+ \\hat \\kappa_Q~\\hat {\\cal O}^{Q}_{\\mu\\nu}(x) \\right) \\,,\n\\end{equation}\nwhere $\\hat \\kappa_{G,Q}$ are dimensionful couplings, the pure gauge\nsector is denoted by $G$, while $Q$ denotes the fermionic sector and\nits gauge interaction. This decomposition is not unique as one can\nadjust gauge invariant terms between them. 
The gauge invariant\noperators $\\hat {\\cal O}^G_{\\mu\\nu}$ and $\\hat {\\cal O}^Q_{\\mu\\nu}$\nare as follows:\n\\begin{eqnarray} \\label{eq:emtensor}\n\\hat {\\cal O}^G_{\\mu\\nu} &=&{1 \\over 4} g_{\\mu\\nu} \\hat F_{\\alpha \\beta}^a \\hat F^{a\\alpha\\beta} \n- \\hat F_{\\mu\\rho}^a \\hat F^{a\\rho}_\\nu\n - \\frac{1}{\\hat \\xi} g_{\\mu\\nu} \\partial^\\rho(\\hat A_\\rho^a\\partial^\\sigma \\hat A_\\sigma^a)\n-{1 \\over 2\\hat \\xi}g_{\\mu\\nu} \\partial_\\alpha \\hat A^{a\\alpha} \\partial_\\beta \\hat A^{a\\beta} \n %\n \\nonumber\\\\\n&& + \\frac{1}{\\hat \\xi}(\\hat A_\\nu^a \\partial_\\mu(\\partial^\\sigma \\hat A_\\sigma^a) + \\hat A_\\mu^a\\partial_\\nu\n (\\partial^\\sigma \\hat A_\\sigma^a))\n+\\partial_\\mu \\overline {\\hat \\omega^a} (\\partial_\\nu \\hat \\omega^a - \\hat g_s f^{abc} \\hat A_\\nu^c \\hat \\omega^b)\n\\nonumber\\\\\n&& +\\partial_\\nu \\overline {\\hat \\omega^a} (\\partial_\\mu \\hat \\omega^a- \\hat g_s f^{abc} \\hat A_\\mu^c \\hat \\omega^b)\n %\n-g_{\\mu\\nu} \\partial_\\alpha \\overline {\\hat \\omega^a} (\\partial^\\alpha \\hat \\omega^a - \\hat g_s f^{abc} \\hat A^{c \\alpha} \\hat \\omega^b) \\,,\n\\\\\n \\hat {\\cal O}^Q_{\\mu\\nu} &= &\n \\frac{i}{4} \\Big[ \\overline {\\hat \\psi} \\gamma_\\mu (\\overrightarrow{\\partial}_\\nu -i \\hat g_s T^a \\hat A^a_\\nu)\\hat \\psi\n -\\overline {\\hat \\psi} (\\overleftarrow{\\partial}_\\nu + i \\hat g_s T^a \\hat A^a_\\nu) \\gamma_\\mu \\hat \\psi\n +\\overline {\\hat \\psi} \\gamma_\\nu (\\overrightarrow{\\partial}_\\mu -i \\hat g_s T^a \\hat A^a_\\mu)\\hat \\psi\n \\nonumber\\\\\n &&-\\overline {\\hat \\psi} (\\overleftarrow{\\partial}_\\mu + i \\hat g_s T^a \\hat A^a_\\mu) \\gamma_\\nu \\hat \\psi\\Big]\n- ig_{\\mu\\nu} \\overline {\\hat \\psi} \\gamma^\\alpha (\\overrightarrow{\\partial}_\\alpha -i \\hat g_s T^a \\hat A^a_\\alpha)\\hat \\psi \\,,\n \\end{eqnarray}\nin the above equations the unrenormalised quantities are denoted by hat $( ~{\\hat{}} ~)$.\n$\\hat g_s$ is the strong coupling constant, ${\\hat \\xi}$ the gauge fixing parameter,\n$\\hat A_\\nu^c$ the gauge field, ${\\hat \\psi}$ the quark field and ${\\hat \\omega^a}$ the\nghost fields. The structure constants of $SU(N)$ gauge group are denoted by $f^{abc}$ and\nthe Gell-Mann matrices by $T^a$.\nThe sum of $\\hat{{ \\cal O}}_G$ and $\\hat{{\\cal O}}_Q$ is the energy momentum tensor of the QCD\npart and is protected by radiative corrections to all orders, thanks to fact that\nit is conserved. The Feynman rules for the non-universal case in contrast to the\nuniversal case \\cite{Han:1998sg,Mathews:2004pi}, would have a\nprefactor $\\kappa_Q$ for the coupling for a spin-2 to a pair of fermions or any fermionic \nSM vertex, while a spin-2 coupling to gluons, ghosts or any SM gauge or ghost vertex \nwould have a prefactor $\\kappa_G$. The individual gauge ${\\cal O}_G$ and fermionic\n${\\cal O}_Q$ operators are not conserved in QCD and hence require additional ultraviolet\n(UV) counter terms\nin order to renormalise them. In \\cite{Ahmed:2016qjf}, we determined these additional UV\nrenormalisation constants up to three loop level in QCD. We obtained them by exploiting\nthe universal infrared properties of on-shell amplitudes involving these composite\noperators. 
Since we have two operators at our disposal, they mix under renormalisation\nas follows:\n\\begin{equation}\n \\label{eq:Zmat}\n\\begin{bmatrix}\nO^{G} \\\\ O^{Q}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n Z_{GG} & Z_{GQ} \\\\ Z_{QG} & Z_{QQ}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\hat O^{G} \\\\ \\hat O^{Q}\n\\end{bmatrix}\n\\,,\n\\end{equation}\nwhere the renormalisation constants $Z_{IJ}$ in terms of the anomalous dimensions\n$\\gamma_{IJ} = \\sum_{n=1}^{\\infty} a_{s}^{n} \\gamma_{IJ}^{(n)}$ are given by\n\\begin{align}\n Z_{IJ} &= \\delta_{IJ} \n + {a}_{s} \\Bigg[ \\frac{2}{\\epsilon}\n \\gamma_{IJ}^{(1)} \\Bigg] \n + {a}_{s}^{2} \\Bigg[\n \\frac{1}{\\epsilon^{2}} \\Bigg\\{ 2\n \\beta_{0} \\gamma_{IJ}^{(1)} + 2 \\gamma_{IK}^{(1)} \\gamma_{KJ}^{(1)} \\Bigg\\} + \\frac{1}{\\epsilon} \\Bigg\\{ \\gamma_{IJ}^{(2)}\\Bigg\\}\n \\Bigg] \\,,\n\\end{align}\nwhere $I,J=G,Q$, $a_s \\equiv g_s^2\/16 \\pi^2$ and the space-time dimension is taken to be $d=4+\\epsilon$.\nThe renormalisation constants $Z_{IJ}$ computed in \n\\cite{Ahmed:2016qjf} are given below up to $a_s^2$ for completeness:\n\\begin{align}\nZ_{GG} &= 1 + a_s\\left[ -\\frac{4}{3\\epsilon}n_f \\right]+a_s^2\\bigg[ \\frac{1}{\\epsilon^2}\\left\\{ -\\frac{44}{9}C_A n_f + \\frac{32}{9}C_F n_f +\\frac{16}{9}n_f^2\\right\\}\n\\nonumber\\\\&\n+\\frac{1}{\\epsilon}\\left\\{-\\frac{35}{27}C_A n_f - \\frac{74}{27}C_F n_f\\right\\} \\bigg]\\,,\n\\nonumber\\\\\nZ_{GQ}&= a_s \\left[\\frac{16}{3\\epsilon}C_F \\right] + a_s^2\\bigg[\\frac{1}{\\epsilon^2}\\left\\{\\frac{176}{9}C_A C_F -\\frac{64}{9}C_F n_f -\\frac{128}{9}C_F^2 \\right\\} \n\\nonumber\\\\&\n+ \\frac{1}{\\epsilon}\\left\\{\\frac{376}{27}C_A C_F -\\frac{104}{27}C_F n_f -\\frac{112}{27}C_F^2 \\right\\} \\bigg]\\,,\n\\nonumber\\\\\nZ_{QG}&= a_s\\left[\\frac{4}{3\\epsilon} n_f\\right] + a_s^2\\bigg[ \\frac{1}{\\epsilon^2}\\left\\{\\frac{44}{9}C_A n_f -\\frac{32}{9}C_F n_f -\\frac{16}{9} n_f^2 \\right\\} \n\\nonumber\\\\&\n+\\frac{1}{\\epsilon}\\left\\{ \\frac{35}{27}C_A n_f +\\frac{74}{27} C_F n_f\\right\\}\\bigg]\\,,\n\\nonumber\\\\\nZ_{QQ}&= 1+ a_s\\left[-\\frac{16}{3\\epsilon}C_F \\right] + a_s^2\\bigg[\\frac{1}{\\epsilon^2}\\left\\{-\\frac{176}{9}C_A C_F +\\frac{64}{9}C_F n_f +\\frac{128}{9}C_F^2 \\right\\}\n\\nonumber\\\\&\n+ \\frac{1}{\\epsilon}\\left\\{-\\frac{376}{27}C_A C_F + \\frac{104}{27}C_F n_f +\\frac{112}{27}C_F^2 \\right\\}\\bigg]\\,,\n\\end{align}\nwhere $C_A=N$ and $C_F=(N^2-1)\/2N$ are the quadratic Casimirs of the $SU(N)$ group and $n_f$\nis the number of quark flavours. The fact that the energy momentum tensor \n$T_{\\mu \\nu} = {\\cal O}^G_{\\mu\\nu} + {\\cal O}^Q_{\\mu\\nu}$ is conserved leads to \n$\\gamma^{(n)}_{QG}=-\\gamma^{(n)}_{GG}$ and $\\gamma^{(n)}_{QQ}=-\\gamma^{(n)}_{GQ}$ or\nequivalently $Z_{GG} = 1-Z_{QG}$ and $Z_{QQ}=1-Z_{GQ}$, which is expected to be true to all\norders in $a_s$. \nAll $\\gamma^{(n)}_{GG}$ are proportional to $n_f$, which is consistent with the\nexpectation that the conservation of ${\\cal O}^G_{\\mu\\nu}$ breaks down beyond\ntree level due to the presence of quark loops. For pure gauge theory \n$(n_f=0)$, ${\\cal O}^G_{\\mu\\nu}$ is the energy momentum tensor of the pure gauge theory \nand is hence conserved by itself. 
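As a simple consistency check of the expressions quoted above, the relation $Z_{GG}=1-Z_{QG}$ can be verified term by term; for instance, at order $a_s$,\n\\begin{align}\nZ_{GG}+Z_{QG} = 1 + a_s\\left[-\\frac{4}{3\\epsilon}n_f+\\frac{4}{3\\epsilon}n_f\\right] + {\\cal O}(a_s^2) = 1 + {\\cal O}(a_s^2)\\,,\n\\end{align}\nand the ${\\cal O}(a_s^2)$ terms, as well as the corresponding terms in $Z_{QQ}+Z_{GQ}$, cancel in the same manner.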
\n\nDefining the renormalised $\\kappa_I$ in terms of bare ones through \n$\\hat \\kappa_I = \\sum_{J=G,Q} Z_{IJ} \\kappa_J$ with\n$I,J = G,Q$,\nwe find that the action takes the following form\n\\begin{eqnarray}\n\tS &=& -{1 \\over 2} \\int d^4x ~ h_{\\mu\\nu} \\left(\\kappa_G~ {\\cal O}^{G,\\mu\\nu} + \\kappa_Q~{\\cal O}^{Q,\\mu\\nu} \\right)\\,.\n\\end{eqnarray}\nThe resulting interaction terms, expressed in terms of renormalised operators and\nrenormalised couplings, are guaranteed to predict UV finite quantities to all\norders in the strong coupling. In the rest of the paper, we will use this version of\nthe Lagrangian to study the phenomenology.\n\n\n\n\n\n\\subsection{Lepton pair invariant mass distribution $d\\sigma\/dQ^2$}\n\\label{ss:inv}\nOur next task is to use the effective action, expressed in terms of renormalised operators ${\\cal O}_I$ and couplings $\\kappa_I$, to\nobtain the production cross section for a \npair of leptons $(l^+,l^-)$ through the\nscattering of two protons $H_1,H_2$ at the LHC: \n\\begin{equation}\n\\label{eq:3}\nH_1 (P_1)+ H_2 (P_2) \\rightarrow l^+(l_1) + l^-(l_2) + X(P_X)\\,\n\\end{equation}\nwhere the 4-momenta of the corresponding particles are denoted in the \nparentheses and the final inclusive state is denoted by $X$.\nThe hadronic cross section is related to the partonic subprocess \ncross sections in the QCD improved parton model as\n\\begin{align}\n\\label{eq:4}\n2S \\frac{d\\sigma^{H_1H_2}}{d Q^2} \\big(\\tau, Q^2 \\big) = \\sum_{ab =\n q,\\bar{q},g} \\int_{0}^{1} dx_1 \\int_{0}^{1} dx_2 ~~ \n\t\\hat f_a^{H_1}(x_1)\n\t\\hat f_b^{H_2} (x_2) \n\\nonumber\\\\\n\\times \\int_0^1dz \\, 2 s\\frac{d\\hat{\\sigma}^{ab}}{dQ^2}\\big(z,\n Q^2\\big)\\delta(\\tau - zx_1x_2)\\,, \n\\end{align}\nwhere $Q^2$ is the invariant mass squared of the final state leptonic pair \nand $S$ is the square of the hadronic center of mass energy which is\nrelated to the partonic one, $s$, through $s=x_{1}x_{2}S$; \nsimilarly, $\\tau \\equiv Q^2\/S$, $z \\equiv Q^2\/s$\nand $\\tau = x_1 x_2z$. The unrenormalised partonic distribution functions of the\npartons $a$ and $b$ are $\\hat f_a$ and $\\hat f_b$ respectively.\nThe partonic subprocess corresponding to the hadronic process is\n\\begin{align*}\na(p_{1})+b(p_{2})\n\\rightarrow j(q) + \\sum\\limits_{i=1}^{m} X_{i}(q_{i}) \\,,\n\\end{align*}\nwhere the summation over $i$ corresponds to all the real QCD final state\npartons that could contribute to a particular order in perturbative QCD. The \ninitial state partons $a$, $b$ produce a neutral state $j$, which could be a\nphoton ($\\gamma^{*}$), a Z-boson ($Z^*$) or a spin-2 particle, and which further decays to a pair of leptons,\n$j \\to l^+ l^-$. \n\nAt the partonic level, one encounters amplitudes involving both SM vector\nbosons and spin-2 particles as propagators and hence, at the cross section\nlevel, the squared amplitudes contain, in addition to the contributions from SM\nand spin-2 separately, those from the interference of SM and spin-2 amplitudes.\nInterestingly, for the invariant mass distributions, the latter identically\nvanishes for the universal case, which was earlier noted both at NLO and NNLO\nlevels in~\\cite{Mathews:2004xp,Ahmed:2016qhu}. 
Hence, at the cross section\nlevel, the SM and spin-2 contributions simply add up as\n\\begin{align}\n2 S{d \\sigma^{H_1H_2} \\over dQ^2}(\\tau,Q^2)&=\n2 S{d \\sigma^{H_1H_2}_{\\rm SM} \\over dQ^2}(\\tau,Q^2)\n+2 S{d \\sigma^{H_1H_2}_{\\rm spin-2} \\over dQ^2}(\\tau,Q^2) \\,,\n\\end{align}\nwhere the SM results have been known exactly up to NNLO level for a long time (see \\cite{Altarelli:1978id, Matsuura:1987wt, Matsuura:1988sm,Hamberg:1990np}) and the result at N$^3$LO in the \nsoft gluon approximation is also available, see \\cite{Ahmed:2014cla}.\nFor the spin-2 case with universal coupling, namely $\\kappa_G=\\kappa_Q=\\kappa$, \nthe results up to NNLO level can be found in \\cite{Mathews:2004xp,Ahmed:2016qhu}. \nIn this article, we have extended this\ncomputation to NNLO QCD \nfor the case of non-universal couplings,\ni.e., when $\\kappa_G$ and $\\kappa_Q$ are different.\nWe briefly describe the methodology that we use to obtain\nthe mass factorised partonic cross sections up to NNLO level.\nUnlike the SM, for the spin-2 exchange, at\nleading order (LO) we can have\na gluon initiated subprocess in addition to the quark initiated\none: \n\\begin{align}\n\\label{eq:16}\nq+{\\bar q} \\rightarrow l^+l^-\\,, \\quad g+g \\rightarrow l^+l^-\\,.\n\\end{align}\n\\\\\nAt next-to-leading order (NLO) in QCD, we have \n\n\\begin{minipage}{2.5in}\n\\begin{align}\n&q + \\bar{q} \\rightarrow l^+l^-+g\\,,\n\\nonumber\\\\\n&g + g \\rightarrow l^+l^- + g\\,,\n\\nonumber\\\\\n&g + q \\rightarrow l^+l^- + q\\,,\n\\nonumber\n\\end{align}\n\\end{minipage}\n\\begin{minipage}{2.5in}\n\\begin{align}\n\\label{eq:17}\n&q + \\bar{q} \\rightarrow l^+l^-+ \\text{one loop}\\,,\n\\nonumber\\\\\n&g + g \\rightarrow l^+l^- + \\text{one loop}\\,,\n\\nonumber\\\\\n&g +\\bar{q} \\rightarrow l^+l^- + \\bar{q}\\,.\n\\end{align}\n\\end{minipage}\n\\\\\n\n\\noindent\nAt NNLO level, we have double real emission, \n\n\\begin{minipage}{2.5in}\n\\begin{align*}\n&q+\\bar{q} \\rightarrow l^+l^- + q + \\bar{q}\\,,\n\\nonumber\\\\\n&g+g \\rightarrow l^+l^- + g + g\\,,\n\\nonumber\\\\\n&g+q \\rightarrow l^+l^- + g + q\\,,\n\\nonumber\\\\\n&q+q \\rightarrow l^+l^- + q + q\\,,\n\\nonumber\\\\\n&q_{1}+\\bar{q}_{2} \\rightarrow l^+l^- + q_{1} + \\bar{q}_{2}\\,,\n\\nonumber\n\\end{align*}\n\\end{minipage}\n\\begin{minipage}{3.1in}\n\\begin{align}\n\\label{eq:18}\n&q_1+\\bar{q}_1 \\rightarrow l^+l^- + q_2 + \\bar{q}_2\\,,\n\\nonumber\\\\\n&q+\\bar{q} \\rightarrow l^+l^- + g + g\\,,\n\\nonumber\\\\\n&g+g\\rightarrow l^+l^- + q + \\bar{q}\\,,\n\\nonumber\\\\\n&g+\\bar{q} \\rightarrow l^+l^- + g + \\bar{q}\\,,\n\\nonumber\\\\\n&q_{1}+q_{2} \\rightarrow l^+l^- + q_{1} + q_{2}\\,,\n\\end{align}\n\\end{minipage}\n\\\\ \n\n\\noindent\nsingle real emission at one loop, \n\n\\begin{minipage}{2.5in}\n\\begin{align*}\n&q + \\bar{q} \\rightarrow l^+l^- + g +\\text{one loop}\\,,\n\\nonumber\\\\\n&g + q \\rightarrow l^+l^- + q + \\text{one loop}\\,,\n\\nonumber\n\\end{align*} \n\\end{minipage}\n\\begin{minipage}{3.1in}\n\\begin{align}\n\\label{eq:19}\n&g+g \\rightarrow l^+l^- + g + \\text{one loop}\\,,\n\\nonumber\\\\\n&g + \\bar{q} \\rightarrow l^+l^- + \\bar{q} + \\text{one loop} \\,,\n\\end{align} \n\\end{minipage}\n\\\\\n\n\\noindent \nand the pure double virtual diagrams:\n\n\\begin{align}\n\\label{eq:20}\n&q + \\bar{q} \\rightarrow l^+l^- + \\text{two loop}\\,,\n\\nonumber\\\\\n&g + g \\rightarrow l^+l^- + \\text{two loop}\\,.\n\\end{align} \n\\\\\nThe virtual corrections at one and two loop levels are straightforward for this process, but the \nphase space integrals
are often hard to evaluate.\nIn the first computation of the NNLO QCD correction to the DY pair production\n~\\cite{Hamberg:1990np}, the phase space integrals were performed\nin three different frames to achieve the final result. \nThis method was successfully applied \nin~\\cite{Ravindran:2003um} to obtain inclusive cross section for the Higgs production at NNLO. \nIn~\\cite{Harlander:2002wh}, using a systematic expansion around threshold, all the phase space integrals were performed\nto obtain the partonic cross sections for both DY and Higgs productions at NNLO level. Later on, \nin~\\cite{Anastasiou:2002yz}, an elegant formalism was developed to compute both \nreal emissions as well as virtual corrections\napplying integration by parts \n(IBP)~\\cite{Tkachov:1981wb, Chetyrkin:1981qh} and\nLorentz invariance (LI)~\\cite{Gehrmann:1999as} identities. This approach is famously called \nthe method of reverse unitarity. \nThe resulting master integrals (MIs) were computed using the technique of differential equations.\nThe state-of-the-art result, namely, N$^3$LO QCD corrections to the inclusive Higgs boson\n production~\\cite{Anastasiou:2014vaa, Anastasiou:2015ema, Anastasiou:2016cez} uses the method of reverse unitarity. \n We have systematically used this approach \n~\\cite{Anastasiou:2002yz} to calculate the partonic cross section of\nthe DY pair production \nthrough intermediate spin-2 particle at NNLO QCD. \n\nUltraviolet (UV), soft and\ncollinear (IR) divergences do show up beyond leading order \nand they are regularised in dimensional regularisation where the space-time\ndimensions $d$ is chosen to be equal to $4+\\epsilon$. \nThe soft divergences cancel among virtual and real subprocesses processes \nthanks to Kinoshita-Lee-Nauenberg (KLN) theorem~\\cite{Kinoshita:1962ur, Lee:1964is} \nand the remaining UV divergences as well as\ninitial state collinear divergences are removed in ${\\overline {\\text{MS}}}$ scheme\nusing UV renormalisation constants and mass factorisation kernels denoted by $\\Gamma_{ab}(\\mu_F)$\nrespectively. Here, $\\mu_{F}$ is the factorisation scale.\nFor the UV renormalisation, we need to perform \nrenormalisation for strong coupling constant $a_s = g_s^2\/16 \\pi^2$ through $Z_{a_{s}}$ as well as \nrenormalisation of $\\kappa_I$ through $Z_{IJ}$ listed in\nthe previous section. For the former, we have\n\\begin{align}\n\\label{eq:21}\n{\\hat a}_{s} S_{\\epsilon} = \\left( \\frac{\\mu^{2}}{\\mu_{R}^{2}} \\right)^{\\epsilon\/2}\n Z_{a_{s}} a_{s} \\,,\n\\end{align}\nwhere,\n\\begin{align}\n\\label{eq:22}\n&Z_{a_{s}} = 1+ a_s\\left[\\frac{2}{\\epsilon} \\beta_0\\right]\n + a_s^2 \\left[\\frac{4}{\\epsilon^2 } \\beta_0^2\n + \\frac{1}{\\epsilon} \\beta_1 \\right] + \\cdot \\cdot \\cdot \\,,\n\\end{align} \n$a_{s} \\equiv a_{s}(\\mu_{R}^{2})$,\n$S_{\\epsilon} = {\\rm exp} \\left[ (\\gamma_{E} - \\ln 4\\pi)\\epsilon\/2\n\\right]\\,, \\gamma_{E} = 0.5772\\ldots\\,,$ \nand the scale $\\mu$ is introduced to keep the unrenormalised strong\ncoupling constant ${\\hat a}_{s}$ dimensionless in $n$-dimensions. The\nrenormalisation scale is denoted by $\\mu_{R}$. $\\beta_{i}$'s are the coefficients of QCD\n$\\beta$-function~\\cite{Gross:1973id, Politzer:1973fx, Caswell:1974gg, Tarasov:2013zv, Larin:1993tp}. 
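For completeness, in the normalisation adopted here the first two coefficients of the QCD $\\beta$-function take the standard form\n\\begin{align}\n\\beta_0 = \\frac{11}{3} C_A - \\frac{2}{3} n_f \\,, \\qquad\n\\beta_1 = \\frac{34}{3} C_A^2 - \\frac{10}{3} C_A n_f - 2 C_F n_f \\,,\n\\end{align}\nwhich we quote only to fix our conventions.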
\nThe mass factorised finite cross section can be obtained using \n\\begin{align}\n\\label{eq:23}\n\t2 s {d \\hat \\sigma_{ab} \\over dQ^2}(z,Q^2,1\/\\epsilon) =\n \\sum_{c,d=q,{\\bar q}, g}\\Gamma_{ca}(z,\\mu_F^2,1\/\\epsilon) \\otimes\n \\Gamma_{db}(z,\\mu_F^2,1\/\\epsilon)\\otimes \n 2 s {d \\sigma_{ab} \\over dQ^2}(z,Q^2,\\mu_{F}^2)\\,,\n\\end{align}\nwhere $\\otimes$ are nothing but Mellin convolution. The mass factorisation kernels take the following form\n\\begin{align}\n\\label{eq:24}\n\t\\Gamma_{ab}(z, \\mu_{F}^{2},1\/\\epsilon) =& \n\\delta_{ab} \\delta(1-z)\n + a_s(\\mu_F^2) \\frac{1}{\\epsilon} P^{(0)}_{ab}(z)\n\\nonumber \\\\\n\t& +a_{s}^2(\\mu_{F}^{2})\n\t\\left[\\frac{1}{\\epsilon^{2}} \\Bigg( \\frac{1}{2} P^{(0)}_{ac} \\otimes\n P^{(0)}_{cb} + \\beta_{0} P^{(0)}_{ab}\\Bigg) + \\frac{1}{\\epsilon}\n\t\\Bigg( \\frac{1}{2} P^{(1)}_{ab}\\Bigg)\\right] + \\cdot \\cdot \\cdot \\,, \n\\end{align}\nwhere $P^{(i)}_{ab}$ are the Altarelli-Parisi splitting\nfunctions~\\cite{Altarelli:1977zs, Floratos:1980hm, Floratos:1980hk, Curci:1980uw, Moch:2004pa,Vogt:2004mw}.\nAfter the mass factorisation, the finite partonic cross sections denoted by $2 s {d \\sigma_{ab}\/ dQ^2}$ can be expressed in terms\n$\\Delta^h_{ab}(z,a_s(\\mu_R^2),Q^2\/\\mu_R^2,\\mu_F^2\/\\mu_R^2)$ by factoring out some overall constants. In terms of these $\\Delta^h_{ab}$, the \nhadronic cross section can be written as\n\\begin{align}\n\t2 S{d \\sigma^{H_1H_2}_{\\rm spin-2} \\over dQ^2}(\\tau,Q^2)&=\n\t\\sum_{q,\\bar q,g}{\\cal F}_{h} \\int_0^1 {d x_1 } \\int_0^1 \n{dx_2} \\int_0^1 dz \\delta(\\tau-z x_1 x_2)\n\\times \\Bigg[ \nH_{q{\\bar q}} \n \\sum\\limits_{k=0}^{2} a_{s}^{k} \\Delta^{h, (k)}_{q{\\bar q}} \n\\nonumber\\\\&\n+\nH_{g g} \\sum\\limits_{k=0}^{2} a_{s}^{k} \\Delta^{h, (k)}_{gg} \n+ \\Big( H_{gq} + H_{qg} \\Big)\n \\sum\\limits_{k=1}^{2} a_{s}^{k} \\Delta^{h, (k)}_{gq}\n\\nonumber\\\\&+\nH_{q q} \\sum\\limits_{k=2}^{2}\n a_{s}^{k} \\Delta^{h, (k)}_{qq} \n+ H_{q_{1} q_{2}} \\sum\\limits_{k=2}^{2}\n a_{s}^{k} \\Delta^{h, (k)}_{q_{1}q_{2}} \n\\Bigg]\\,,\n\\end{align}\nwhere\n\\begin{align}\n\\label{eq:31}\n{\\cal F}_{h}=\\;{\\kappa_Q^2 Q^6 \\over 320 \\pi^2 }|{\\cal D}(Q^2)|^2\\,,\n\\quad \\quad \\quad\n\t\\Delta^{h,(k)}_{ab} ~= \\Delta^{h,(k)}_{ab} \\left(z,{Q^2\\over \\mu_R^2},{\\mu_F^2\\over \\mu_R^2}\\right) \\,.\n\\end{align}\n$\\kappa_Q$ in ${\\cal F}_{h}$ corresponds to the leptonic coupling to the spin-2, while\nthe coupling to quarks and gluons are taken in $\\Delta^{h,(k)}_{ab}$. 
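As an illustration of how the mass factorisation works order by order, expanding Eq.(\\ref{eq:23}) with the kernels of Eq.(\\ref{eq:24}) to first order in $a_s$ gives, schematically (suppressing the scale arguments and the coupling constant renormalisation),\n\\begin{align}\n2 s {d \\sigma^{(1)}_{ab} \\over dQ^2} = 2 s {d \\hat \\sigma^{(1)}_{ab} \\over dQ^2}\n- \\frac{1}{\\epsilon} \\sum_{c} \\Bigg[ P^{(0)}_{ca} \\otimes 2 s {d \\sigma^{(0)}_{cb} \\over dQ^2}\n+ P^{(0)}_{cb} \\otimes 2 s {d \\sigma^{(0)}_{ac} \\over dQ^2} \\Bigg] \\,,\n\\end{align}\nso that the initial state collinear poles of $d\\hat \\sigma^{(1)}_{ab}\/dQ^2$ are subtracted, leaving the finite coefficients $\\Delta^{h,(k)}_{ab}$ at each order.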
We have provided analytical expressions for these $\\Delta^{h,(k)}_{ab}$ in Mathematica format as an ancillary file.\n${\\cal D}(Q^2)$ is the propagator of the massive spin-2 particle, with a decay width that has\nto be estimated considering its decay to SM particles.\n$H_{ab}$ are the combinations of the mass factorised partonic distribution functions: \n\\begin{align}\n\\label{eq:32}\nH_{q \\bar q}(x_1,x_2,\\mu_F^2)&=\nf_q^{H_1}(x_1,\\mu_F^2) \nf_{\\bar q}^{H_2}(x_2,\\mu_F^2)\n+f_{\\bar q}^{H_1}(x_1,\\mu_F^2)~ \nf_q^{H_2}(x_2,\\mu_F^2)\\,,\n\\nonumber\\\\\nH_{q q}(x_1,x_2,\\mu_F^2)&=\nf_q^{H_1}(x_1,\\mu_F^2) \nf_{q}^{H_2}(x_2,\\mu_F^2)\n+f_{\\bar q}^{H_1}(x_1,\\mu_F^2)~ \nf_{\\bar q}^{H_2}(x_2,\\mu_F^2)\\,,\n\\nonumber\\\\\nH_{q_1 q_2}(x_1,x_2,\\mu_F^2)&=\nf_{q_1}^{H_1}(x_1,\\mu_F^2) \n\\Big( f_{q_2}^{H_2}(x_2,\\mu_F^2) + f_{\\bar q_2}^{H_2}(x_2,\\mu_F^2) \\Big)\n\\nonumber\\\\&+f_{\\bar q_1}^{H_1}(x_1,\\mu_F^2)~ \n\\Big( f_{q_2}^{H_2}(x_2,\\mu_F^2) + f_{\\bar q_2}^{H_2}(x_2,\\mu_F^2) \\Big)\\,,\n\\nonumber\\\\\nH_{g q}(x_1,x_2,\\mu_F^2)&=\nf_g^{H_1}(x_1,\\mu_F^2) \n\\Big(f_q^{H_2}(x_2,\\mu_F^2)\n+f_{\\bar q}^{H_2}(x_2,\\mu_F^2)\\Big)\\,,\n\\nonumber\\\\\nH_{q g}(x_1,x_2,\\mu_F^2)&=\nH_{g q}(x_2,x_1,\\mu_F^2)\\,,\n\\nonumber\\\\\nH_{g g}(x_1,x_2,\\mu_F^2)&=\nf_g^{H_1}(x_1,\\mu_F^2)~ \nf_g^{H_2}(x_2,\\mu_F^2)\\,.\n\\end{align}\nIn the next section, we study the numerical implications of NNLO QCD corrections \nto a spin-2 coupling non-universally to the SM in the DY process.\n\n\n\\begin{section}{Numerical results}\nIn this section, we present the numerical impact of our NNLO results on the\nproduction of di-leptons at the LHC. We considered a minimal scenario of\nnon-universal couplings\nof the spin-2 particle with SM fields, where the spin-2 particle couples to all SM fermions \nwith coupling $\\kappa_Q = \\sqrt{2} k_q\/\\Lambda$ and to all SM gauge bosons with a\ncoupling strength of $\\kappa_G = \\sqrt{2} k_g\/\\Lambda$. Numerical results presented\nin this section are for the default choice of model parameters, namely a spin-2 particle\nof mass $m_G=500$ GeV, the scale $\\Lambda=2$ TeV and the couplings $(k_q,k_g) = (0.5,1.0)$. Both the renormalization and factorization scales are set equal to the invariant mass of the di-lepton, i.e.,\n$\\mu_R = \\mu_F = Q$. Throughout, we use MSTW2008nnlo parton distribution functions (PDFs) with the corresponding $a_s$ \n\tprovided by {\\tt LHAPDF} unless otherwise stated. We choose $\\sqrt{S}=13$ TeV for the center of mass\nenergy of the incoming hadrons at the LHC.\n\nIn our analysis, we restricted ourselves to the situation where the spin-2 particle\ndecays only to SM fields. The spin-2 particle decay widths for non-universal\ncouplings are the same as those given in \\cite{Han:1998sg}. For the scenario taken\nup here, wherein the spin-2 couplings to all bosons are taken to be identical,\nwe note that\nthe decay width of the spin-2 particle to $Z \\gamma$ vanishes identically, \n$\\Gamma(h \\to Z\\gamma) =0$ \\cite{Falkowski:2016glr}. In fig.\\ref{subnlo}, we\npresent the NLO corrections\n(only at order $a_s$) from various subprocess contributions to the di-lepton\nproduction. For our default choice of model parameters, we find that the $gg$ subprocess \ncontribution dominates over the rest. In general, the total NLO correction is\nsmaller than the $gg$ contribution because of the negative contribution from the $qg$\nsubprocess. 
We also note that the $gg$ channel has the dominant contribution to the total \ndecay width for the couplings (0.5, 1.0).\n\nTo estimate the impact of QCD corrections, we define the K-factors as follows:\n\\begin{eqnarray}\n\\text{K}_1 = \\frac{d\\sigma^{\\text{NLO}}\/dQ}{d\\sigma^{\\text{LO}}\/dQ} \n\\qquad \\text{and} \\qquad \n\\text{K}_2 = \\frac{d\\sigma^{\\text{NNLO}}\/dQ}{d\\sigma^{\\text{LO}}\/dQ}.\n\\label{eqnkf}\n\\end{eqnarray}\nIn the left panel of fig.\\ref{nlokf}, we present di-lepton invariant mass distributions to NLO for different choices of non-universal \ncouplings $(k_q,k_g)= (1.0,0.5), (1.0,0.1)$ and $(0.5,0.1)$. It is expected for universal couplings that, in the resonance region,\nthe cross sections, i.e. the height of the peak, will be the same simply because the couplings at the matrix element level will cancel with those\nfrom the decay width of the spin-2 particle. However, for non-universal couplings this is not the case \nand hence the cross sections at the resonance for different non-universal couplings will be different.\nThus, the precision as well as the phenomenological studies of the spin-2 particle production in this model\nwill be different from those of the warped extra dimension models. The NLO K-factor ($K_1$) is presented in the right panel for various choices of $(k_q,k_g)$ and we observe that the K-factor crucially depends on the choice\nof non-universal couplings. In particular, we notice that the K-factors are larger for the choice of couplings (1.0,0.1). To understand\nthis behaviour better, it is helpful to study the percentage contribution of various subprocesses to the total correction\nat NLO level, particularly from the $qg$ subprocess due to its large flux at LHC energies. In particular, we define the percentage contribution \nof a given subprocess $ab$ as $R^{(i)}_{ab} = (d\\sigma^{H_1 H_2,(i)}_{ab}\/dQ^2) \/\n\t(d \\sigma^{H_1 H_2,(i)}\/dQ^2) \\times 100$, where the numerator is obtained by using the \ncontribution from $\\Delta^{h,(i)}_{ab}$, while for the denominator we include all the partonic channels.\n\n\nIn fig.\\ref{qgnlo-fr}, we plot $R^{(1)}_{qg}$ for different choices of non-universal couplings and we observe that the \nsign of the $qg$ subprocess contribution crucially depends on the choice of couplings. Moreover, we find that $R^{(1)}_{qg}$ is positive\nand is as large as $70\\%$ for the couplings $(1.0,0.1)$, which explains the large K-factor at the resonance\nregion. However, the sign of the contribution from the other subprocesses, $q\\bar{q}$ and $gg$, is found \nto be positive for the various couplings.\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_sub_nlo_cs_d8.pdf,width=8.0cm,height=8.0cm,angle=0}\n}\n\\caption{\\sf {First order QCD corrections from different subprocesses to di-lepton production. The choice of the\nmodel parameters is as mentioned in the text.}}\n\\label{subnlo}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_nlo_cs2.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_c0var_nlo_kf2.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Di-lepton invariant mass distributions are presented to NLO QCD for\ndifferent choices of couplings ($k_q, k_g$)} in the left panel. 
The corresponding K-factors\nare presented in the right panel.}\n\\label{nlokf}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_qg_nlo_fr.pdf,width=7.50cm,height=7.50cm,angle=0}\n}\n\\caption{\\sf {Percentage of $qg$ subprocess contribution $R^{(1)}_{qg}$ as defined in the text for different\nchoice of non-universal couplings.}}\n\\label{qgnlo-fr}\n\\end{figure}\n\nIn fig.\\ref{subnnlo}, we present the second order QCD corrections (at $(a_s^2)$) \nfrom various subprocesses to the di-lepton production for the default choice of couplings\n$(k_q, k_g) = (0.5,1.0)$. Similar to the first order QCD corrections, $gg$ \nsubprocess has the dominant contribution over the rest while $qg$ has a negative contribution\nbut is comparable in magnitude to that of $gg$. Because of this large $qg$ subprocess \ncontribution which can flip its sign for certain couplings, it is necessary to study \nthe percentage of its relative contribution $R^{(2)}_{qg}$ to the total second order \ncorrection.\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_sub_nnlo_cs_d8.pdf,width=7.50cm,height=7.50cm,angle=0}\n}\n\\caption{\\sf {Second order QCD corrections from various subprocess to the di-lepton invariant\nmass distribution.}}\n\\label{subnnlo}\n\\end{figure}\nIn fig.\\ref{fr-qgnnlo}, we present $R^{(2)}_{qg}$ for different\nchoices of couplings. As can be seen from the figure, the $qg$ contribution varies\nfrom about $-70\\%$ to about $35\\%$ for the choice of couplings considered here.\nIn particular, for the couplings $(1.0,0.1)$ and $(0.5, 0.1)$ the $qg$ contribution\nis positive while it is negative for the rest of the couplings as well as in the SM.\nThis implies large K-factors for the choice of $(1.0, 0.1)$ couplings for a wide\nrange of the invariant mass distribution.\nIt is worth mentioning here that in general $qg$ subprocess has a negative contribution\nboth in the SM as well as in the case of universal couplings, irrespective of the value\nof the latter.\n\nWe then present the di-lepton invariant mass distribution to various orders in QCD\nfor a particular choice of couplings $(1.0, 0.5)$ in fig.\\ref{c1order}. In this\ncase, the NLO QCD corrections for the signal (SM+spin-2) are as large as $60\\%$ while those\nat NNLO, they are about $80\\%$ at the resonance. Similar results are presented but for our \ndefault choice of model parameters in fig.\\ref{c2order}. 
Here, the corresponding NLO\ncorrections to the signal are about $45\\%$ while those of NNLO are about $55\\%$.\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_qg_nnlo_fr.pdf,width=7.50cm,height=7.50cm,angle=0}\n}\n\\caption{\\sf {Percentage of $qg$ contribution $R_{qg}^{(2)}$} as defined in the text.}\n\\label{fr-qgnnlo}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_order_cs1.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_order_kf1.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {cross sections at different orders (left panel) and the corresponding K-factors $K_1$ \nand $K_2$ (right panel) are presented for different couplings.}}\n\\label{c1order}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_order_cs2.pdf, width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_order_kf2.pdf, width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Same as fig.\\ref{c1order} but for a different set of couplings.}}\n\\label{c2order}\n\\end{figure}\n\nNext, we will study the invariant mass distributions of both the\nSM and the signal, in particular the impact of QCD corrections for\ndifferent couplings. In fig.\\ref{c01var},\\ref{c02var},\\ref{c03var}, we \npresent these distributions in the left panel and the corresponding NNLO K-factors \n$(K_2)$ in the right panel for 9 different set of non-universal couplings.\nThe respective K-factors for the signal at the resonance region are found\nto vary from about $1.5$ to about as large as $3.0$, owing to different\ncontributions from $qg$ subprocess to the signal as explained before.\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_cs1.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_c0var_kf1.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Di-lepton invariant mass distributions to NNLO for different choice\nof couplings (left panel) and the corresponding K-factors (right panel) are presented.}}\n\\label{c01var}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_cs2.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_c0var_kf2.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Same as fig.\\ref{c01var} but for a different set of couplings.}}\n\\label{c02var}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_c0var_cs3.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_c0var_kf3.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Same as fig.\\ref{c01var} but for a different set of couplings.}}\n\\label{c03var}\n\\end{figure}\n\nFurther, we depict the dependence of invariant mass distributions to NNLO\nin QCD on the center of mass energy E$_\\text{cm}$ of the protons at the LHC.\nWe present our results for E$_\\text{cm} = 7, 8, 13$ and $14$ TeV energies\nfor two different sets of couplings. In fig.\\ref{ecm-var1}, we present\nthe invariant mass distributions and the corresponding K-factors for\nthe universal couplings of $(1.0, 1.0)$. 
For default choice of non-universal\ncouplings $(0.5, 1.0)$, similar results are presented in fig.\\ref{ecm-var2}.\nIn both the cases, the K-factors at the resonance region \nare found to be larger for $7$ TeV case and are about $1.6$.\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc_ecmvar_cs_d1.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc_ecmvar_kf_d1.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Dependence of cross sections on the di-lepton invariant mass\ndistribution for universal couplings $(1.0,1.0)$.}}\n\\label{ecm-var1}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc_ecmvar_cs.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc_ecmvar_kf.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Same as fig.\\ref{ecm-var1} but for the default choice of\nnon-universal couplings $(0.5,1.0)$.}}\n\\label{ecm-var2}\n\\end{figure}\n\nIn what follows, we study the renormalization scale $\\mu_R$ and \nthe factorization scale $\\mu_F$ uncertainties in our predictions.\nFor this, we define the ratios $R(\\mu_R,\\mu_F)$ of the invariant mass\ndistributions computed at arbitrary scale to those computed at the\nfixed scale. These are defined as\n\\begin{eqnarray*}\n\\text{R}(\\mu_R,\\mu_F) = \\frac{d\\sigma(\\mu_R, \\mu_F)\/dQ}{d\\sigma(Q_0,Q_0)\/dQ}.\n\\end{eqnarray*}\nFor a systematic study of these scale uncertainties, we use LO (NLO and NNLO) PDFs\nfor LO (NLO and NNLO) cross sections respectively.\nFor convenience, we will study at the resonance region i.e. $Q=M=500$ GeV.\nThe fixed scale is set equal to $Q_0 = M$. In the left panel of fig.\\ref{mu-var},\nwe present $\\text{R}(\\mu_R,Q_0)$ by varying $\\mu_R$ from $0.1 Q$ to $10 Q$ and keeping\n$\\mu_F=Q_0$ fixed. At LO, there is no scale $\\mu_R$ entering the cross section.\nThe corresponding scale uncertainties at NLO and NNLO are respectively,\nabout $19\\%$ and $5\\%$.\n\nIn the right panel of fig.\\ref{mu-var}, we present $\\text{R}(Q_0,\\mu_F)$ by\nvarying $\\mu_F$ from $0.1 Q$ to $10 Q$ and keeping $\\mu_R=Q_0$ fixed. For this\nrange of factorization scale variation, the uncertainties in the distributions\nat LO, NLO and NNLO are respectively about $49\\%$, $31\\%$ and $26\\%$.\n\nFinally, we present $\\text{R}(\\mu,\\mu)$ (where $\\mu_R=\\mu_F =\\mu$) in fig.~\\ref{murf-var} by\nvarying $\\mu$ from $0.1 Q$ to $10 Q$. The corresponding scale uncertainties\nat LO, NLO and NNLO are respectively about $49\\%$, $52\\%$ and $30\\%$.\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_murvar_cs.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_mufvar_cs.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Renormalization (left) and factorization (right) scale dependence \nof the di-lepton invariant mass distribution at LO, NLO and NNLO.} }\n\\label{mu-var}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_murfvar_cs.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf Same as fig.\\ref{mu-var} but with $\\mu_R=\\mu_F=\\mu$.}\n\\label{murf-var}\n\\end{figure}\n\nBefore we summarize, we also study the uncertainties in our predictions\ndue to different choice of PDFs used in the calculation. For this analysis,\nwe make predictions using {\\tt MSTW2008, CT10, NNPDF3.0} and {\\tt ABM12}\nPDFs. The results for the invariant mass distributions for the signal at \nNNLO are presented in the left panel of fig.\\ref{pdf-var} and the corresponding \nK-factors are presented in the right panel of fig.\\ref{pdf-var}. 
The K-factors\nhere are found to vary from $1.18$ at $Q=400$ GeV to about $1.28$ at $Q=1000$ GeV,\nwhile at the resonance they are about $1.54$.\n\n\\begin{figure}[htb]\n\\centerline{\n\\epsfig{file=lhc13_pdfvar_cs.pdf,width=7.5cm,height=7.5cm,angle=0}\n\\epsfig{file=lhc13_pdfvar_kf.pdf,width=7.5cm,height=7.5cm,angle=0}\n}\n\\caption{\\sf {Di-lepton invariant mass distributions for different choice\nparton distribution functions (PDFs)}. }\n\\label{pdf-var}\n\\end{figure}\n\n\\end{section}\n\n\n\n\n\\section{Conclusion}\nIn this article, we have studied for the first time \nthe impact of NNLO QCD corrections\nto the production of a pair of leptons in the presence of a massive spin-2 \nparticle at the LHC.\nThis is done in a minimal scenario where spin-2 particles couple differently to SM\nfermions and SM bosons.\nThis task has been achieved by using the universal IR structure of QCD amplitudes\nand the additional UV renormalization that is particularly required for the\ncase of non-universal couplings, thanks to the recent computations\nof the form factors in QCD beyond leading order with non-universal couplings.\n\nUnlike the models with universal couplings, here the \nphenomenology is rich and different. For collider phenomenology at the LHC, we \npresent the results for the di-lepton production via spin-2 particle in particular \nfor the invariant mass distribution of a pair of leptons for LHC energies. Even at LO, one\ncan notice that the signal has different cross sections at the resonance region \nin contrast to the gravity mediated models where the signal has the same cross section for\ndifferent universal couplings.\nAt higher orders in QCD, say NLO onwards, the spin-2 exploits its freedom of \nbeing produced with different coupling strengths even for a given subprocess. This \nparticular aspect here makes the QCD radiative corrections crucially dependent\non the choice of the spin-2 coupling strength. Hence the impact of \nQCD corrections here is very much different from those of di-lepton or Higgs production \nin the SM.\n\nWe find from our numerical results that the QCD corrections for \n$(k_q,k_g) = (1.0,0.1)$\nare dominant over the rest of the choice of couplings, making the K-factors as large\nas 2.5 or more. For this choice of couplings, the LO gluon fusion contribution\nis very small although gluon fluxes are high for the kinematic region of producing\na 500 GeV particle. But at higher orders where the spin-2 can be emitted off from a \nquark line with large coupling strength, the large quark-gluon fluxes at LHC energies\ncan potentially enhance the spin-2 production rate, as is evident from the numerical\nresults. For di-lepton production the `sign' of $qg$ subprocess is usually negative both\nin the SM as well as in the models of universal couplings. But here we note that the \n`sign' of $qg$ subprocess contribution changes with the non-universal couplings and \nfor the above choice it is positive.\n\nWe also gave predictions for different center of mass energies of the incoming protons\nat the LHC and found that the K-factors are larger for 7 TeV case. We further quantified\nthe renormalization and factorization scale uncertainties. For the variation of the scales \n$\\mu_R$ and $\\mu_F$ between $0.1 Q$ and $10 Q$, the uncertainties are found to get reduced \nfrom about 50\\% at LO to about 30\\% at NNLO. 
For completeness, we also quantified the \nuncertainty in our predictions due to different choice of the PDFs.\n\nThese NNLO QCD predictions for the hadroproduction of a massive spin-2 with non-universal\ncouplings will augment the similar results previously computed at NLO level and compliment\nthe earlier results for NNLO QCD corrections in models with spin-2\/graviton universal couplings.\n\n\\section*{Acknowledgement}\n\nWe thank G. Das for helping us to validate a part of the numerical results. We also thank T.\\ Ahmed and N.\\ Rana for useful discussions.\nV.\\ Ravindran would like to thank M.\\ Neubert for the visit at University\nof Mainz where the last part of the work was carried out.\n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\n\n\nWe are interested in designing a receding horizon control (RHC) algorithm that is able to readily accommodate interval-wise total energy constraints (ITEC). An ITEC system has the property that the total energy of the control input during a (pre-established) periodic time interval is limited (See Fig. \\ref{description}). Examples of systems subject to ITEC include solar-powered systems that operate on battery power when sunlight is not available (e.g., at night). One can think of solar houses and solar-panel equipped mobile agents performing surveillance tasks for an extended period of time. Other examples of systems subject to ITEC can also be found in avionics and include temperature control of the aircraft cabin, for which power allocation may be limited at certain times, such as during take-off and landing.\n\nThe idea behind RHC is simple: the controller solves a finite-horizon control problem, and the first element of the computed control sequence is applied to the system. At the next time step (or sampling time for continuous-time systems), the procedure is repeated. The RHC strategy is very appealing because in most cases it allows for online tractable computation of the control input that does not suffer from the deficiencies of a pure open-loop strategy. The main drawback is that the direct implementation of RHC can be disastrous because stability is not (immediately) guaranteed.\n\nReceding horizon control (or equivalently, model predictive control) dates back to the 1960s, but not until the late 1980s was the issue of stability dealt rigorously \\cite{Mayne:2000p442}. Until then, industry proponents of RHC had to manually ``fine tune\" parameters to achieve a seemingly stable system. The first modification of RHC that guaranteed stability was the introduction of a zero final state constraint \\cite{Mayne:1990p197},\\cite{Keerthi:1988p725}, which in many cases renders the finite optimization problem unfeasible (or at the very least, it places a heavy burden on the controller). Later relaxations of the zero final state constraint included terminal cost functions (that penalize large final states) and terminal invariant constraint sets \\cite{Chisci:1996p1035}, \\cite{Scokeart:1999p1036}, \\cite{Nicolao:2000p1307}. For a detailed survey of RHC, see \\cite{Mayne:2000p442}.\n\nAny of the traditional implementations of RHC (with step wise receding horizon) cannot readily accommodate the ITEC, as we explain in the next section. To remedy this problem, we implement a strategy that utilizes an initial horizon length of $2N$ that recedes by $N$ steps every $N$ steps (where $N$ is the length for which the ITEC is active) and does not recede at any other time instances. 
The basic idea is for the optimization horizon to encompass the ITEC in its entirety at all times. To guarantee stability of the system under this strategy, we impose contractive final state constraints during the optimization intervals for which the ITEC is not imposed, along with the assumption that the system is $\\beta$-stabilizable (to be defined in section \\ref{secbeta}).\n\nThe idea of contractive constraints was first suggested by \\cite{Yang:1993p694} and has been later used in other works, such as \\cite{Oliveira:1994p553} and \\cite{Kothare:2000p452}. These papers also make use of a varying horizon strategy, which is also employed in \\cite{Michalska:1993p201}, where a dual-mode controller is employed that steers the system to an invariant set by a horizon length that is computed online, and once the state lies in the invariant set the controller changes mode to drive it to zero. In \\cite{Kothare:2000p452}, two horizons are employed: a shorter horizon used in the computation of the control sequence, and a longer horizon in which the predicted final state (propagated with constant input over the longer horizon) needs to satisfy the contractive constraint. The paper that is most similar to this one is \\cite{Oliveira:1994p553}, where a smaller, non-receding contraction horizon is employed. However, the prediction horizon moves at every step and is not suitable for problems with systems subject to ITEC. Another paper worth to mention is \\cite{Kothare:2003p686}, where a time-varying terminal constraint set is employed. Though the terminal constraint is not (necessarily) contractive and the authors employ a fixed horizon length, we feel this work is relevant to our research due to the time-varying nature of the terminal constraint set.\n\nThe paper is organized as follows: in section \\ref{secprob}, we give the problem statement and discuss the shortcomings of the traditional RHC strategy applied to systems subject to ITEC. In section \\ref{secbeta}, we introduce the idea of $\\beta$-stabilizability. In section \\ref{secvhc}, we describe the new RHC strategy, and provide proof of the stability of the system under the proposed algorithm. In section \\ref{secsim}, we give a concrete example along with simulation results that showcases the new RHC strategy. In section \\ref{seccon}, we conclude. Finally, in the appendix we provide the proof to a lemma used to prove our main result.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=12cm]{figure1.jpg}\n \\caption{ITEC systems: Total energy constraints are enforced only at certain intervals}\n \\label{description}\n\\end{figure}\n\n\n\\section{Problem Statement}\\label{secprob}\nSuppose we have a system with dynamics given by:\n\\begin{equation}\\label{dynamics}\nx(k+1)= \\phi(x(k), u(k)),\n\\end{equation}\n\\noindent where $x(k) \\in \\mathbb{R}^n$ and $u(k) \\in \\mathbb{R}^m$ are the state and the input of the system at time $k$, respectively; and $\\phi: \\mathbb{R}^n \\times \\mathbb{R}^m\\rightarrow \\mathbb{R}^n$, with $\\phi(0,0)=0$. Moreover, the system is constrained by:\n\\begin{align}\nu(k) &\\in \\mathcal{U},~~\\forall k \\label{c2} \\\\\n\\displaystyle \\sum_{k=(2p-1)N}^{(2p)N-1}||u(k)||^2& \\leq C, ~~p \\in \\mathbb{Z}_+, \\label{c3}\n\\end{align}\n\n\\noindent where $\\mathcal{U}$ is the input constraint set. Constraint (\\ref{c3}) is known as the interval-wise total energy constraint (ITEC). 
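To make the structure of (\\ref{c2}) and (\\ref{c3}) concrete, the following Python sketch checks whether a candidate control sequence is admissible; the membership test \\texttt{in\\_U} for the input set $\\mathcal{U}$ and the toy data at the end are placeholders of our own and not part of the problem formulation.\n\\begin{verbatim}\nimport numpy as np\n\ndef itec_feasible(u, N, C, in_U):\n    # Check the input constraint and the interval-wise total energy\n    # constraint for a finite control sequence u[0], u[1], ... (rows of u).\n    # The ITEC is enforced on the intervals [(2p-1)N, 2pN-1], p = 1, 2, ...\n    T = len(u)\n    if not all(in_U(u[k]) for k in range(T)):\n        return False                      # input constraint violated\n    p = 1\n    while 2 * p * N - 1 < T:\n        lo, hi = (2 * p - 1) * N, 2 * p * N\n        if sum(np.linalg.norm(u[k]) ** 2 for k in range(lo, hi)) > C:\n            return False                  # ITEC violated on the p-th interval\n        p += 1\n    return True\n\n# toy example: scalar inputs with norm bound 1, N = 3, C = 1.5\nu = np.array([[0.5]] * 12)\nprint(itec_feasible(u, N=3, C=1.5, in_U=lambda v: np.linalg.norm(v) <= 1.0))\n\\end{verbatim}\nIn practice, of course, (\\ref{c3}) enters the optimization problem as a constraint rather than as an a posteriori check. 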
In a real world application, this would be imposed during time intervals in which the system needs to rely solely on battery power. During other time intervals the system is powered by other energy sources (such as solar panels) and the total energy constraint is not imposed (see Fig. \\ref{description}).\n\nThe infinite horizon control problem is to find the control sequence that minimizes the quadratic cost:\n\\begin{equation}\n\\begin{array}{ll}\n\\mathrm{minimize}~~&\\displaystyle\\sum_{k=1}^{\\infty} ||x(k)||^2+||u(k-1)||^2\\\\\n\\mathrm{subject~to} & \\mathrm{(\\ref{dynamics}), (\\ref{c2})~and~(\\ref{c3}).}\n\\end{array}\n\\end{equation}\n \nOne way of generating control inputs in a tractable-online fashion is to employ a RHC strategy. The traditional implementation of RHC, however, causes a dilemma: how much energy should the controller allocate to the interval that partially covers the total energy constraint interval? One approach is to proportionally allocate the energy constraints. For example, with a horizon $N=3$, the traditional receding horizon approach would perform a finite optimization at $k=1$ (see Fig. \\ref{rhc}) given by:\n\\begin{equation*}\n\\begin{array}{ll}\n\\mathrm{minimize}~~&\\displaystyle\\sum_{k=2}^{4} ||x(k)||^2+||u(k-1)||^2\\\\\n\\mathrm{subject~to} & \\mathrm{(\\ref{dynamics}), (\\ref{c2})}\\\\\n& ||u(3)||^2 \\leq C\/3, \\label{c3_rhc1}\n\\end{array}\n\\end{equation*}\n\\noindent and at time $k=2$, the total energy constraint would become:\n \\begin{equation*}\n||u(3)||^2+||u(4)||^2 \\leq 2C\/3.\n\\end{equation*}\nIn this paper we propose a modified receding horizon strategy that not only provides analytical stability guarantees, but also does not require a heuristic solution to the ITEC allocation problem. It is important to note that the overlap of the constraint interval at the beginning of the optimization interval is no cause for concern. For example, at time $k=4$ in Fig. \\ref{rhc} the amount of energy spent by $u(3)$ cannot be changed, say $||u(3)||^2=\\gamma_3$. Then, the optimization will be carried out with the total energy constraint equal to: \n \\begin{equation*}\n||u(4)||^2+||u(5)||^2 \\leq C-\\gamma_3.\n\\end{equation*}\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=12cm]{figure2b.pdf}\n \\caption{Traditional RHC with $N=3$. Note that during times $k=1$ and $k=2$, the total energy constraint interval is partially covered by the optimization horizon}\n \\label{rhc}\n\\end{figure}\n\n \\section{$\\beta$-STABILIZABILITY}\\label{secbeta}\n\nThe system is said to be $\\beta$-stabilizable with $N$ steps if there exists an admissible control sequence that reduces the size of the state by a factor of $\\beta$ in $N$ steps. In this paper, we do not require the contraction to occur over the optimization interval where the ITEC is active. 
A formal definition of $\\beta$-stabilizability with ITEC is given below:\n\n\\begin{mydef} We say that a system $\\phi$ is {$\\beta$-stabilizable} with ITEC parameter $C$ and horizon $N$ in an open ball $B_N$ centered around the origin if there exists $\\sigma>0$ such that:\n\\begin{align}\n\\mathcal{U}_N(x, \\sigma, \\beta) &\\neq \\emptyset\\\\\n\\mathcal{U}^C_N(x, \\sigma) &\\neq \\emptyset\n\\end{align}\nwhere $x \\in B_N$, and $\\mathcal{U}_N(x(k), \\sigma, \\beta)$, with $0 \\leq \\beta < 1$, is the set of all finite control sequences of length $N$ such that:\n\\begin{align}\n\\sum_{j=k+1}^{k+N} ||x(j)||^2+||u(j-1)||^2 & \\leq \\sigma ||x(k)||^2,\\label{boundedcost}\\\\\nu(k) &\\in \\mathcal{U} \\label{boundedinput}\\\\\n||x(k+N)||^2 &\\leq \\beta ||x(k)||^2\n\\end{align}\nand $\\mathcal{U}^C_N(x(k), \\sigma)$ is the set of all finite control sequences of length $N$ such that (\\ref{boundedcost}) and (\\ref{boundedinput}) hold and \n\\begin{align}\n||x(k+N)||^2 &\\leq ||x(k)||^2 \\label{itecbound}\\\\\n\\displaystyle \\sum_{j=k}^{k+N-1}||u(j)||^2& \\leq C. \\label{itec}\n\\end{align}\n\\end{mydef}\\\n\nNote that (\\ref{itecbound}) does not impose contraction of the state norm if we constrain the control sequence to respect (\\ref{itec}). We only require the state not to grow. Moreover, if the system has zero-input dynamics given by $\\phi(x(k), 0) = x(k)$, then (\\ref{itecbound}) is immediately satisfied.\n\n \\section{INTERVAL-WISE HORIZON CONTROL} \\label{secvhc}\n \nIn this section, we present an interval-wise receding horizon strategy (IRHC) that uses varying optimization horizons and contractive constraints. The varying horizon will not only enable the construction of an appropriate strategy for the problem with ITEC, but also guarantee asymptotic infinite horizon stability of the system under the imposition of contractive constraints and the assumption that the system is $\\beta$-stabilizable.\n\nIn order to accommodate the total energy constraint, we employ a receding horizon strategy that entirely encompasses the constraint interval at all times (see Fig. \\ref{vhc}). To accomplish this, the horizon is initially set to $h=2N$. Traditional RHC is performed, except that the end point of the horizon stays fixed while its length ``recedes\" (i.e., $h = h-1$) at each step. When the horizon reaches $h=N+1$, it is expanded again to $h=2N$.\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=12cm]{figure3.jpg}\n \\caption{Interval-wise RHC with parameter $N=3$ based on Algorithm \\ref{algo1}}\n \\label{vhc}\n\\end{figure}\n\nWe implement a receding horizon strategy in the following way:\n\n\\begin{alg}\\label{algo1} {\\bf IRHC with contractive constraints}\n\\begin{enumerate}\n\\item Set the optimization horizon $h = 2N$, time index $k=0$, energy-used tracking parameter $\\gamma = 0$ and auxiliary indices $i=0$, $f=0$ and $p=N$.\n\\item Read in $x(k)$ and solve the constrained optimization problem:\n\\begin{equation}\n\\begin{array}{ll}\n\\mathrm{minimize}~~&\\displaystyle\\sum_{j=k+1}^{k+h} ||x(j)||^2+||u(j-1)||^2\\\\\n\\mathrm{subject~to} & \\mathrm{(\\ref{dynamics}), (\\ref{c2})}\\\\\n& \\displaystyle \\sum_{j=\\max\\{p,k\\}}^{p+N-1}||u(j)||^2 \\leq C-\\gamma,\\\\\n& ||x(k+h_c)||^2 \\leq \\beta^{(i+1)} ||x(0)||^2\n\\end{array}\n\\end{equation}\nand apply the first element in the resulting control sequence to the system, say $u_1$.\n\\item If $f = 0$ (this means the ITEC sits at the beginning of the optimization horizon), set $\\gamma=\\gamma+||u_1||^2$.\n\\item Reduce the horizon: $h = h-1$. 
\n\\item If $h=N$, set $h=2N$ (this will push the horizon $N$ steps ahead) and $\\gamma=0$ . If $k = p-1$, set $i=i+1$ (this will contract the constraint at the end of the new horizon), $f =1$; else if $k=p+N-1$, set $p=p+2N$ and $f =0$.\n\\item Set $k=k+1$ and go back to step 2.\n\\end{enumerate}\n\\end{alg}\\\n\n\n\n\nWe are now ready to state our main result:\n\n\\begin{thm}\\label{thm1} If the ITEC system $\\phi$ is $\\beta$-stabilizable with horizon $N$, then the interval-wise receding horizon strategy described in Algorithm \\ref{algo1} will drive it asymptotically stable for any initial condition in $x(0) \\in B_N$. Furthermore, the resulting dynamics will satisfy:\n\\begin{equation}\n\\sum_{j=1}^{\\infty} ||x(j)||^2+||u(j-1)||^2 \\leq \\frac{1+\\beta}{1-\\beta} \\sigma||x(0)||^2.\n\\end{equation}\n\\end{thm}\\\n\n\nWe will prove this result by constructing a sequence of control sequences $v_q(k)$ indexed by $q$ that converges to the control sequence obtained by Algorithm \\ref{algo1} and a bounded sequence $\\Gamma_q$, associated with $v_q(k)$. To proceed, we need to define $v_q(k)$ and $\\Gamma_q$:\n\\begin{mydef} $v_q(k)$ is a $q$-indexed sequence of control sequences defined by:\n\\begin{equation}\\label{defv}\nv_q(k) = \\left\\{ \\begin{array}{ll} v_{q-1}(k) & 0\\leq k< (q-1)N;\\\\\nu_{qN}^{*(q+2)N-1} & (q-1)N \\leq k< (q+1)N;\\\\\n0 & k\\geq (q+1)N\\end{array} \\right. \n\\end{equation}\n\\noindent where $u_{k_1}^{*k_2}$ is the control segment obtained by Algorithm \\ref{algo1} for $k_1 \\leq k \\leq k_2$, and $v_1(k)$ is given by:\n\\begin{equation}\nv_1(k) = \\left\\{ \\begin{array}{ll} u_{0}^{*2N-1} & 0\\leq k< 2N;\\\\\n0 & k \\geq 2N \\end{array} \\right. \n\\end{equation}\n\\end{mydef}\\\n\nIn other words, $v_q(k)$ is indexed with each time the horizon is pushed $N$ steps ahead, and it consists of the past control inputs and the current $2N$ control segment, followed by zeros.\n\n\n\\begin{mydef} $\\Gamma_q$ is defined as:\n\\begin{equation}\\label{gamma}\n\\Gamma_q = \\sum_{j=1}^{(q+1)N}||x(j)||^2+||v_q(j-1)||^2+\\sum_{j=q+1}^{\\infty}\\beta^{\\lceil \\frac{j}{2} \\rceil}\\sigma||x(0)||^2\n\\end{equation}\n\\noindent where $v_q$ is as in (\\ref{defv}) and $x(j)$ is generated by $v_q(j-1)$ and the dynamics (\\ref{dynamics}), and $\\lceil \\cdot \\rceil$ is the ceiling function.\n\\end{mydef}\n\nFinally, the following lemma will be crucial to prove the asymptotic stability of the system under Algorithm \\ref{algo1}:\n\n\\begin{lem}\\label{lemma}If the system is $\\beta$-stabilizable with horizon $N$, then:\n\\begin{equation}\\label{lem1}\n\\Gamma_q \\leq \\frac{1+\\beta}{1-\\beta} \\sigma||x(0)||^2, ~~\\forall q\\geq 1\n\\end{equation}\n\\noindent where $\\Gamma_q$ is as in (\\ref{gamma}).\n\\end{lem}\n\\begin{proof} [Proof of Lemma \\ref{lemma}] See Appendix.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm1}] Let $G$ be defined as the infinite horizon cost incurred by the system under Algorithm \\ref{algo1}, i.e.:\n\n\\begin{equation}\nG \\equiv \\sum_{k=0}^{\\infty} ||x(k+1)||^2+||u^*(k)||^2.\\\\\n\\end{equation}\n\\noindent and note that we can write:\n\\begin{align}\n||u^*(k)||^2 &= ||u^*(k)-v_q(k)+v_q(k)||^2\\\\\n&\\leq ||v_q(k)||^2+||u^*(k)-v_q(k)||^2;\n\\end{align}\n\\noindent and so, G is bounded by:\n\\begin{equation}\nG \\leq \\lim_{q\\to\\infty}\\sum_{k=0}^{\\infty} ||x(k+1)||^2+ ||v_q(k)||^2+||u^*(k)-v_q(k)||^2\n\\end{equation}\nUsing (\\ref{gamma}), the previous inequality can be written as:\n\\begin{align}\nG &\\leq \\lim_{q\\to\\infty} \\Gamma_q 
+\\lim_{q\\to\\infty}\\sum_{k=0}^{\\infty}||u^*(k)-v_q(k)||^2\\\\\n&= \\lim_{q\\to\\infty} \\Gamma_q \\label{conv} \\\\\n&\\leq \\frac{1+\\beta}{1-\\beta} \\sigma||x(0)||^2\\label{finalstep}\n\\end{align}\n\\noindent where (\\ref{conv}) comes from the fact that the sequence $v_q(k)$ converges to sequence $u^*(k)$, and (\\ref{finalstep}) comes from (\\ref{lem1}). This concludes the proof.\n\\end{proof}\\\n\nNote that the IRHC algorithm can also be applied to system without ITEC, as described in the following corollary:\n\\begin{cor}If the non-ITEC system $\\phi$ is $\\beta$-stabilizable with horizon $N$, then the IRHC strategy described in Algorithm \\ref{algo1} (modified to add contraction at every $N$ steps) will drive it asymptotically stable for any initial condition in $x(0) \\in B_N$. Furthermore, the resulting dynamics will satisfy:\n\\begin{equation}\n\\sum_{j=1}^{\\infty} ||x(j)||^2+||u(j-1)||^2 \\leq \\frac{\\sigma}{1-\\beta} ||x(0)||^2\n\\end{equation}\n\\end{cor}\\\n\nThe proof is similar to that of Theorem \\ref{thm1}.\n\nIt is important to mention that the bound on the infinite horizon cost is only used to prove stability of the system and no claims on tightness are made. \n\n\\section{SIMULATION}\\label{secsim}\nIn this section, we employ the proposed algorithm to stabilize the two-dimensional nonlinear oscillator previously discussed in \\cite{Primbs:1999p697}. The dynamics of the system are given by:\n\n\\begin{align*}\n\\dot{x}_1 = & x_2\\\\\n\\dot{x}_2 = &-x_1\\Big(\\frac{\\pi}{2} +\\tan^{-1}(5x_1)\\Big)\\\\\n& ~~~~~~~~-\\frac{5x_1^2}{2(1+25x_1^2)}+4x_2+3u\n\\end{align*}\n\nWe used Matlab to implement the proposed IRHC algorithm to the discretized version of the system with sampling period $0.05$ and ITEC parameters $C=4.8$, $N=4$, initial state $x(0)=[2; -1]$ and $\\beta=0.8$. The $\\beta$-stabilizability (with $\\beta=0.8$) was determined empirically. Figs. \\ref{fig_x} and \\ref{fig_u} show the evolution of the norms of the state and of the input, respectively, at different time instances. The solid lines (marked with x's) represent past signals while the dashed lines (marked with circles) represent the future prediction. Note that the ITEC constraints are satisfied and that the prediction horizon encompasses the constraint intervals in their entirety. Fig. \\ref{fig_x1x2} shows the plot of $x_1$ vs. $ x_2$. Though the trajectory of the states have a oscillatory nature, the algorithm stabilizes the system.\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=9.3cm]{ex5_x.jpg}\n \\caption{Evolution of the state norm for the ITEC simulation.}\n \\label{fig_x}\n \\includegraphics[width=9.3cm]{ex5_u.jpg}\n \\caption{Evolution of the input norm for the ITEC simulation.}\n \\label{fig_u}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=9.3cm]{ex5x1x2.jpg}\n \\caption{ITEC simulation: $x_1$ vs $x_2$}\n \\label{fig_x1x2}\n\\end{figure}\n\nSince this example was used in \\cite{Primbs:1999p697} to showcase the dangers of the direct application of a receding horizon strategy, we made further simulations where the ITEC is not imposed. An interesting characteristic of this system (and the reason it was discussed in \\cite{Primbs:1999p697}) is that the direct application of a RHC strategy works well for certain horizons, but is unstable for others. We ran simulations with $N=4$ and $N=5$ for the same initial condition as before. 
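Before describing these runs further, we note for reproducibility how the discrete-time model used in all of the above simulations can be generated. Only the sampling period of $0.05$ is fixed above; the forward-Euler discretization in the Python sketch below is therefore one simple choice, stated here merely for illustration.\n\\begin{verbatim}\nimport numpy as np\n\nTS = 0.05  # sampling period used in the simulations\n\ndef f_cont(x, u):\n    # continuous-time dynamics of the two-dimensional nonlinear oscillator\n    x1, x2 = x\n    dx1 = x2\n    dx2 = (-x1 * (np.pi / 2 + np.arctan(5 * x1))\n           - 5 * x1 ** 2 / (2 * (1 + 25 * x1 ** 2))\n           + 4 * x2 + 3 * u)\n    return np.array([dx1, dx2])\n\ndef phi(x, u, ts=TS):\n    # discrete-time map x(k+1) = phi(x(k), u(k)); forward Euler is an assumption\n    return x + ts * f_cont(x, u)\n\n# one open-loop step from the initial state x(0) = [2; -1]\nprint(phi(np.array([2.0, -1.0]), u=0.0))\n\\end{verbatim}\n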
The traditional RHC used a fixed horizon of $2N$ and no endpoint constraints, while the proposed IRHC algorithm used contractive constraint parameters $\\beta = 0.8$ and $\\beta = 0.2$. We also compare the receding horizon strategies to the one obtained by $u=-3x_2$ (see \\cite{Primbs:1999p697}). As we can see from Figs. \\ref{ex4_1} and \\ref{ex4_2}, the RHC performs very well for $N=4$, but the system becomes unstable when the horizon is increased to $N=5$. The proposed IRHC strategy with $N=4$ displays a similar oscillatory behavior as before, but incurs a lower cost. Table \\ref{table1} gives the simulated infinite horizon costs for all cases. Note that the system with the constraint $\\beta=0.8$ yields a lower total cost than the system with the tighter constraint $\\beta=0.2$, although the tighter constraint yields faster convergence. Moreover, it is interesting to note that when the horizon parameter is increased to $N=5$, the total cost for $\\beta = 0.8$ increases while the cost for $\\beta=0.2$ decreases. \n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=9.3cm]{N4b08a.jpg}\n \\caption{Non-ITEC simulation with parameter $N=4$, IRHC with $\\beta =0.8$. Here the traditional RHC is stable and performs well.}\n \\label{ex4_1}\n \\includegraphics[width=9.3cm]{N5b08a.jpg}\n \\caption{Non-ITEC simulation with parameter $N=5$; the traditional RHC is unstable but IRHC with $\\beta =0.8$ still stabilizes the system.}\n \\label{ex4_2}\n\\end{figure}\n\n\n\\begin{table}[tb]\n\\caption{Performance Comparison}\n\\label{table1}\n\\begin{center}\n\\begin{tabular}{|r|c|c|}\n\\hline\n& $N=4$ & $N=5$\\\\\n\\hline\n$u=-3x_2$ & \\multicolumn{2}{|c|}{$325.4$} \\\\\n\\hline\nRHC & $437.2$ & Unstable\\\\\n\\hline\nIRHC($\\beta = 0.8$) & $412.9$ & $539.4$\\\\\n\\hline\nIRHC($\\beta = 0.2$) & $3947$ & $2053$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\\section{CONCLUSION}\\label{seccon}\nWe proposed a modified receding horizon strategy that readily handles total energy constraints that are only imposed at certain pre-specified periodic time intervals. Our approach utilizes an interval-wise receding horizon for the online optimization problem and contractive constraints to guarantee boundedness of the infinite horizon cost. The new algorithm is demonstrated by an example and compared to the direct implementation of the traditional step-wise receding horizon when no constraints are imposed. Future work includes the use of imperfect state information (in a stochastic setting) and the application of this strategy to decentralized control problems. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nT-splines \\cite{SZBN:2003} have been introduced as a free-form geometric technology and were the first tool of interest in Adaptive Isogeometric Analysis (IGA). Although they are still among the most common techniques in Computer Aided Design, T-splines pose algorithmic difficulties that have motivated a wide range of alternative approaches to mesh-adaptive splines, such as hierarchical B-splines \\cite{Forsey:Bartels:1988,KVZB:2014}, THB-splines \\cite{GJS:2012}, LR splines \\cite{DLP:2013}, hierarchical T-splines \\cite{ESLT:2015}, amongst many others. \n\nOne major difficulty using T-splines for analysis has been pointed out by Buffa, Cho and Sangalli \\cite{BCS:2010}, who showed that general T-spline meshes can induce linearly dependent T-spline blending functions. 
This prohibits the use of T-splines as a basis for analytical purposes such as solving a discretized partial differential equation. \nThis insight motivated the research on T-meshes that guarantee the linear independence of the corresponding T-spline blending functions, referred to as \\emph{analysis-suitable T-meshes}.\nAnalysis-suitability has been characterized in terms of topological mesh properties \\cite{ZSHS:2012} and, in an alternative approach, through the equivalent concept of Dual-Compatibility \\cite{BBCS:2012}.\nWhile Dual-Compatibility has been characterized in arbitrary dimensions \\cite{BBSV:2014}, Analysis-Suitability as a topological criterion for linear independence of the T-spline functions is only available in the two-dimensional setting.\n\nIn this paper, we introduce analysis-suitable T-splines for those 3D meshes that are refinements of tensor-product meshes, and propose an algorithm for their local refinement, based on our preliminary work in \\cite{Morgenstern:Peterseim:2015}. In addition, we generalize the algorithm from \\cite{Morgenstern:Peterseim:2015} by introducing a grading parameter $m$ that represents the number of children in a single element's refinement. This allows the user to fully control how local the refinement shall be. Choosing $m$ large yields meshes with very local refinement, while a small $m$ causes more widespread refinement. The former yields a smaller number of degrees of freedom, while the latter reduces the overlap of the basis functions and hence provides sparser Galerkin and collocation matrices.\n\nThis paper is organized as follows. Section~\\ref{sec: refinement} defines the initial mesh and basic refinement steps and introduces our new refinement algorithm. Section~\\ref{sec: adm meshes} then characterizes the class of `admissible meshes' generated by this algorithm.\nIn Section~\\ref{sec: Tspline def} we give a brief definition of trivariate odd-degree T-splines.\nIn Section~\\ref{sec: AS} we give an abstract definition of Analysis-Suitability in the 3D setting and prove that all admissible meshes are analysis-suitable. In Section~\\ref{sec: DC} we define dual-compatible meshes, and prove that analysis-suitability and dual-compatibility are equivalent, and that all dual-compatible meshes provide linearly independent T-spline functions. (Figure~\\ref{fig: overview} illustrates this ``long way'' to linear independence.) 
Section~\\ref{sec: complexity} proves linear complexity of the refinement procedure, and conclusions and an outlook to future work are finally given in Section~\\ref{sec: conclusions}.\n\\begin{figure}[ht]\n\\centering\n\\begin{tabular}{l@{\\quad}c@{\\quad}c}\n& \\small Symbol & \\small Section \\\\\\hline\nrefinement algorithm & $\\refine$ & \\ref{sec: refinement} \\\\\nadmissible meshes & $\\A$ & \\ref{sec: adm meshes} \\\\\nanalysis-suitable meshes & $\\AS$ & \\ref{sec: AS} \\\\\ndual-compatible meshes & $\\DC$ & \\ref{sec: DC}\\\\\\hline\n\\end{tabular}\\bigskip\n\\[\n\\refine(\\A)\\stackrel{\\text{Theorem~\\ref{thm: ref works}}}\\subseteq\n\\A\\stackrel{\\text{Theorem~\\ref{thm: A in AS}}}\\subseteq\n\\AS\\stackrel{\\text{Theorem~\\ref{thm: AS in DC}}}\\EQ\n\\DC\\stackrel{\\text{Theorem~\\ref{thm: DC has dual basis}}}\\subseteq\n\\left[\\parbox{3.25cm}{\\centering \\small meshes with linearly independent T-splines}\\right]\n\\]\n\\caption{How we prove linear independence of the T-splines induced by the generated meshes.}\n\\label{fig: overview}\n\\end{figure}\n\n\\section{Adaptive mesh refinement}\\label{sec: refinement}\nThis section defines the new refinement algorithm and characterizes the class of meshes which are generated by this algorithm.\nThe algorithm is essentially a 3D version of the one introduced in \\cite{Morgenstern:Peterseim:2015}, with the additional feature of variable grading.\nThe initial mesh is assumed to have a very simple structure. In the context of IGA, the partitioned rectangular domain is referred to as \\emph{index domain}. This is, we assume that the \\emph{physical domain} (on which, e.g., a PDE is to be solved) is obtained by a continuous map from the active region (cf.\\ Section~\\ref{sec: DC}), which is a subset of the index domain. Throughout this paper, we focus on the mesh refinement only, and therefore we will only consider the index domain. For the parametrization and refinement of the T-spline blending functions, we refer to \\cite{SLSH:2012}.\n\n\\begin{df}[Initial mesh, element]\nGiven $\\tilde X,\\tilde Y,\\tilde Z\\in\\mathbb N$, the initial mesh $\\G_0$ is a tensor product mesh consisting of closed cubes (also denoted \\emph{elements}) with side length 1, i.e.,\n\\[\\G_0\\sei\\Bigl\\{[x-1,x]\\times[y-1,y]\\times[z-1,z]\\mid x\\in\\{1,\\dots,\\tilde X\\},y\\in\\{1,\\dots,\\tilde Y\\},z\\in\\{1,\\dots,\\tilde Z\\}\\Bigr\\}.\\]\nThe domain partitioned by $\\G_0$ is denoted by $\\Omega\\sei (0,\\tilde X)\\times (0,\\tilde Y)\\times(0,\\tilde Z)$.\n\\end{df}\n\nThe key property of the refinement algorithm will be that refinement of an element $K$ is allowed only if elements in a certain neighbourhood are sufficiently fine. The size of this neighbourhood, which is denoted $(\\mathbf p,m)$-patch and defined through the definitions below, depends on the size of $K$, the polynomial degree $\\mathbf p=(p_1,p_2,p_3)$ of the T-spline functions, and the grading parameter $m$. For the sake of legibility, we assume that $p_1,p_2,p_3$ are odd and greater or equal to 3. 
(For comments on even polynomial degrees, see Section~\\ref{sec: conclusions}.)\n\n\\begin{df}[Level]\nThe \\emph{level} of an element $K$ is defined by \\[\\ell(K)\\sei-\\log_m|K|,\\] where $m$ is the manually chosen grading parameter, i.e., the number of children in a single elements' refinement, and $|K|$ denotes the volume of $K$.\nThis implies that all elements of the initial mesh have level zero and that the refinement of an element $K$ yields $m$ elements of level $\\ell(K)+1$.\n\\end{df}\n\n\\begin{df}[Vector-valued distance]\\label{df: distance}\nGiven $x\\in\\barOmega$ and an element $K$, we define their distance \nas the componentwise absolute value of the difference between $x$ and the midpoint of $K$,\n\\begin{align*}\n\\Dist(K,x)&\\sei\\operatorname{abs}\\bigl(\\midp(K)-x\\bigr)\\ \\in\\reell^3,\\\\\n\\text{with}\\quad\\operatorname{abs}(y) &\\sei \\bigl(\\lvert y_1\\rvert, \\lvert y_2\\rvert, \\lvert y_3\\rvert\\bigr).\n\\end{align*}\nFor two elements $K_1,K_2$, we define the shorthand notation \n\\[\\Dist(K_1,K_2)\\sei\\operatorname{abs}\\bigl(\\midp(K_1)-\\midp(K_2)\\bigr).\\]\n\\end{df}\n\n\\begin{df}\\label{df: magic patch}\nGiven an element $K$, a grading parameter $m\\ge2$ and the polynomial degree $\\mathbf p=(p_1,p_2,p_3)$, \nwe define the open environment\n\\begin{align*}\n\\U(K)&\\sei\\{x\\in\\Omega\\mid\\Dist(K,x)<\\D(\\ell(K))\\},\n\\shortintertext{where}\n\\D(k)&\\sei\\begin{cases}m^{-k\/3}\\,\\bigl(p_1+\\tfrac32,p_2+\\tfrac32,p_3+\\tfrac32\\bigr)&\\text{if }k=0\\bmod3,\n\\\\[.7ex]\nm^{-(k-1)\/3}\\,\\bigl(\\tfrac{p_1+3\/2}m,p_2+\\tfrac32,p_3+\\tfrac32\\bigr)&\\text{if }k=1\\bmod3,\n\\\\[.7ex]\nm^{-(k-2)\/3}\\,\\bigl(\\tfrac{p_1+3\/2}m,\\tfrac{p_2+3\/2}m,p_3+\\tfrac32\\bigr)&\\text{if }k=2\\bmod3.\\end{cases}\n\\intertext{The $(\\mathbf p,m)$-patch of $K$ is defined as the set of all elements that intersect with environment of $K$,}\n\\patch\\G K &\\sei \\{K'\\in\\G\\mid K'\\cap\\U(K)\\neq\\emptyset\\}.\n\\end{align*}\nNote as a technical detail that this definition does \\emph{not} require that $K\\in\\G$. See also Figure~\\ref{fig: magic patch examples} for examples.\n\\end{df}\n\n\\begin{rem}\nBy definition, the size of the $(\\mathbf p,m)$-patch of an element $K$ scales linearly with the size of $K$ and with the polynomial degree $\\mathbf p$. Since $\\D(k)$ is decreasing in $m$, choosing $m$ large will cause small $(\\mathbf p,m)$-patches and hence more localized refinement.\n\\end{rem}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=.3\\textwidth]{refex_12}%\n\\hspace{.049\\textwidth}%\n\\includegraphics[width=.3\\textwidth]{refex_20}%\n\\hspace{.049\\textwidth}%\n\\includegraphics[width=.3\\textwidth]{refex_30}%\n\\caption{Examples for the $(\\mathbf p,m)$-patch of an element $K$, for $\\mathbf p=(3,3,3)$, $m=3$ and $\\ell(K)=2,3,4$.}\n\\label{fig: magic patch examples}\n\\end{figure}\n\n\nIn the subsequent definitions, we will give a detailed description of the elementary subdivision steps and then present the new refinement algorithm. 
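Before doing so, we remark that the quantities introduced above translate directly into code. The Python sketch below stores an element as a pair of opposite corners (a convention of ours, not part of the definitions) and evaluates the level, the vector-valued distance of Definition~\\ref{df: distance} and the $(\\mathbf p,m)$-patch membership test of Definition~\\ref{df: magic patch}.\n\\begin{verbatim}\nimport math\nimport numpy as np\n\ndef level(K, m):\n    # level of an element K = (lower, upper): -log_m of its volume\n    lo, hi = np.asarray(K[0], float), np.asarray(K[1], float)\n    return round(-math.log(np.prod(hi - lo), m))\n\ndef dist(K, x):\n    # vector-valued distance: componentwise |midpoint(K) - x|\n    lo, hi = np.asarray(K[0], float), np.asarray(K[1], float)\n    return np.abs((lo + hi) / 2.0 - np.asarray(x, float))\n\ndef D(k, p, m):\n    # the bound D(k) for polynomial degree p = (p1, p2, p3)\n    d = (np.asarray(p, float) + 1.5) * float(m) ** (-(k // 3))\n    for i in range(k % 3):   # extra factor 1/m in the first (k mod 3) components\n        d[i] /= m\n    return d\n\ndef in_patch(Kprime, K, p, m):\n    # True if K' intersects the open environment U(K), i.e. K' is in the patch of K\n    lo, hi = np.asarray(Kprime[0], float), np.asarray(Kprime[1], float)\n    mid = (np.asarray(K[0], float) + np.asarray(K[1], float)) / 2.0\n    closest = np.clip(mid, lo, hi)   # point of K' closest to the midpoint of K\n    return bool(np.all(np.abs(mid - closest) < D(level(K, m), p, m)))\n\n# example: patch test for two level-0 unit cubes, p = (3,3,3), m = 3\nK1 = ((0, 0, 0), (1, 1, 1))\nK2 = ((3, 0, 0), (4, 1, 1))\nprint(in_patch(K2, K1, p=(3, 3, 3), m=3))\n\\end{verbatim}\n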
\n\n\\begin{df}[Subdivision of an element]\nGiven an arbitrary element $K=[x,x+\\tilde x]\\times[y,y+\\tilde y]\\times[z,z+\\tilde z]$, where $x, y,z, \\tilde x,\\tilde y,\\tilde z\\in\\mathbb{R}$ and $\\tilde x,\\tilde y,\\tilde z>0$, we define the operators\n\\begin{align*}\n\\subdiv_\\mathrm x(K) &\\sei \\bigl\\{\\,[x+\\tfrac {j-1}m\\tilde x,x+\\tfrac jm\\tilde x]\\times[y,y+\\tilde y]\\times[z,z+\\tilde z]\\mid j\\in\\{1,\\dots,m\\}\\bigr\\},\\\\\n\\enspace\\subdiv_\\mathrm y(K) &\\sei \\bigl\\{\\,[x,x+\\tilde x]\\times[y+\\tfrac {j-1}m\\tilde y,y+\\tfrac jm\\tilde y]\\times[z,z+\\tilde z]\\mid j\\in\\{1,\\dots,m\\}\\bigr\\},\n\\\\\\text{and}\\enspace\n\\enspace\\subdiv_\\mathrm z(K) &\\sei \\bigl\\{\\,[x,x+\\tilde x]\\times[y,y+\\tilde y]\\times[z+\\tfrac {j-1}m\\tilde z,z+\\tfrac jm\\tilde z]\\mid j\\in\\{1,\\dots,m\\}\\bigr\\}.\n\\end{align*}\nThese operators will be used for $x$-, $y$-, and $z$-orthogonal subdivisions in the refinement procedure. Their output is illustrated in Figure~\\ref{fig: elemref}.\n\\end{df}\n\\begin{df}[Subdivision]\\label{df: subdivision\nGiven a mesh $\\G$ and an element $K\\in\\G$, we denote by $\\subdiv(\\G,K)$ the mesh that results from a level-dependent subdivision of $K$,\n\\begin{align*}\n\\subdiv(\\G,K)&\\sei\\G\\setminus\\{K\\}\\cup\\child(K),\\\\\n\\text{with}\\enspace\\child(K)&\\sei\n\\begin{cases}\\subdiv_\\mathrm x(K)&\\text{if }\\ell(K)=0\\bmod3,\\\\\\subdiv_\\mathrm y(K)&\\text{if }\\ell(K)=1\\bmod3,\\\\\\subdiv_\\mathrm z(K)&\\text{if }\\ell(K)=2\\bmod3.\\end{cases}\n\\end{align*}\n\\begin{figure}[ht]\n\\centering\n\\includegraphics{elemref_1}\\qquad\n\\includegraphics{elemref_2}\n\\includegraphics{elemref_3}\n\\caption{Elementary subdivision routines for $m=3$:\n$x$-orthogonal subdivision of an element with level 0 (left),\n$y$-orthogonal subdivision of an element with level 1 (middle), and\n$z$-orthogonal subdivision of an element with level 2 (right).}\n\\label{fig: elemref}\n\\end{figure}\n\\end{df}\n\\begin{df}[Multiple subdivisions]\nWe introduce the shorthand notation $\\subdiv(\\G,\\M)$ for the subdivision of several elements $\\M=\\{K_1,\\dots,K_J\\}\\subseteq\\G$, defined by successive subdivisions in an arbitrary order,\n\\[\\subdiv(\\G,\\M)\\sei\\subdiv(\\subdiv(\\dots\\subdiv(\\G,K_1),\\dots),K_J).\\]\n\\end{df}\n\nWe will now define the new refinement algorithm through the subdivision of a superset $\\clos(\\G,\\M)$ of the marked elements $\\M$. 
In the remaining part of this section, we characterize the class of meshes generated by this refinement algorithm.\n\n\\begin{alg}[Closure]\\label{alg: closure}\nGiven a mesh $\\G$ and a set of marked elements $\\M\\subseteq\\G$ to be refined, the \\emph{closure} $\\clos(\\G,\\M)$ of $\\M$ is computed as follows.\n\\begin{algorithmic}\n\\STATE $\\Mtilde\\sei \\M$\n\\REPEAT\n\\FORALL{$K\\in\\Mtilde$}\n\\STATE $\\Mtilde\\sei\\Mtilde\\cup\\bigl\\{K'\\in\\patch\\G K\\mid \\ell(K')<\\ell(K)\\bigr\\}$\n\\ENDFOR\n\\UNTIL{$\\Mtilde$ stops growing}\n\\RETURN $\\clos(\\G,\\M)=\\Mtilde$\n\\end{algorithmic}\n\\end{alg}\n\n\\begin{alg}[Refinement]\\label{alg: refinement}\n\nGiven a mesh $\\G$ and a set of marked elements $\\M\\subseteq\\G$ to be refined, $\\refine(\\G,\\M)$ is defined by \\[\\refine(\\G,\\M)\\sei\\subdiv(\\G,\\clos(\\G,\\M)).\\]\nAn example of this algorithm is given in Figure~\\ref{fig: refinement algorithm}.\n\\end{alg}\n\\begin{figure}[ht]\n\\centering\\Large\n\\includegraphics[width=.25\\textwidth]{refex_11}%\n\\makebox[.125\\textwidth]{\\raisebox{.125\\textwidth}{$\\stackrel{1^\\text{st}\\text{ iter.}}\\rightarrow$}}%\n\\includegraphics[width=.25\\textwidth]{refex_13}%\n\\makebox[.125\\textwidth]{\\raisebox{.125\\textwidth}{$\\stackrel{2^\\text{nd}\\text{ iter.}}\\rightarrow$}}%\n\\includegraphics[width=.25\\textwidth]{refex_15}\\\\\n\\makebox[.125\\textwidth]{\\raisebox{.125\\textwidth}{$\\stackrel{3^\\text{rd}\\text{ iter.}}\\rightarrow$}}%\n\\includegraphics[width=.25\\textwidth]{refex_15}%\n\\makebox[.125\\textwidth]{\\raisebox{.125\\textwidth}{$\\stackrel{\\text{subdiv.}}\\rightarrow$}}%\n\\includegraphics[width=.25\\textwidth]{refex_18}\n\\caption{Example for Algorithm~\\ref{alg: refinement}, with $\\mathbf p=(3,3,3)$, $m=3$ and $\\M=\\{K\\}$ with $\\ell(K)=2$. In the first iteration of the \\textbf{for}-loop, all coarser (level 1) elements in the $(\\mathbf p,m)$-patch of $K$ are marked as well. in the second iteration, all coarser (level 0) ``neighbours'' of those elements are also marked. Since there are no elements that are coarser than level 0, the third iteration does not change anything. Hence the \\textbf{for}-loop ends, and all marked elements are subdivided in the directions that correspond to their levels. }\n\\label{fig: refinement algorithm}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{.3\\textwidth}\n\\includegraphics[width=\\textwidth]{wuerfel1}\n\\caption{$m=2$}\\label{sfig: refex 2}\n\\end{subfigure}\\quad\n\\begin{subfigure}[b]{.3\\textwidth}\n\\includegraphics[width=\\textwidth]{wuerfel2}\n\\caption{$m=4$}\\label{sfig: refex 4}\n\\end{subfigure}\\quad\n\\begin{subfigure}[b]{.3\\textwidth}\n\\includegraphics[width=\\textwidth]{wuerfel3}\n\\caption{$m=16$}\\label{sfig: refex 16}\n\\end{subfigure}\n\\caption{Refinement examples for $\\mathbf p=(3,3,3)$ and different choices of $m$. In all cases, the initial mesh consists of $4\\times5\\times8$ cubes of size $1\\times1\\times1$, and is refined by marking the lower left front corner element repeatedly until it is of the size $\\tfrac1{16}\\times\\tfrac1{16}\\times\\tfrac1{16}$. 
}\n\\label{fig: refex}\n\\end{figure}\n\\begin{ex}\nConsider an initial mesh that consists of $4\\times5\\times8$ cubes of size $1\\times1\\times1$.\nWe refine the mesh by marking the lower left front corner element repeatedly until it is of the size $\\tfrac1{16}\\times\\tfrac1{16}\\times\\tfrac1{16}$.\nThe resulting meshes for different choices of $m$ are illustrated in Figure~\\ref{fig: refex}, and the results are listed below.\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\nFigure & $m$ & \\parbox{7em}{\\centering number of \\\\ refinement steps} & \\parbox{6em}{\\centering number of \\\\ new elements} \\\\\\hline\n\\ref{sfig: refex 2} & 2 & 12 & 10728 \\\\\n\\ref{sfig: refex 4} & 4 & 6 & 3175 \\\\\n\\ref{sfig: refex 16} & 16 & 3 & 1030\n\\end{tabular}\n\\end{center}\n\\end{ex}\n\n\\section{Admissible meshes}\\label{sec: adm meshes}\nIn the subsequent definitions, we introduce a class of admissible meshes. We will then prove that this class coindices with the meshes generated by Algorithm~\\ref{alg: refinement}.\n\n\\begin{df}[$(\\mathbf p,m)$-admissible subdivisions]\\label{df: adm. subdivision}%\nGiven a mesh $\\G$ and an element $K\\in\\G$, the subdivision of $K$ is called \\emph{$(\\mathbf p,m)$-admissible} if all $K'\\in\\patch\\G K$ satisfy $\\ell(K')\\ge\\ell(K)$.\n\nIn the case of several elements $\\M=\\{K_1,\\dots,K_J\\}\\subseteq\\G$, the subdivision $\\subdiv(\\G,\\M)$ is $(\\mathbf p,m)$-admissible if there is an ordering $(\\sigma(1),\\dots,\\sigma(J))$ (this is, if there is a permutation $\\sigma$ of $\\{1,\\dots,J\\}$) such that \\[\\subdiv(\\G,\\M)=\\subdiv(\\subdiv(\\dots\\subdiv(\\G,K_{\\sigma(1)}),\\dots),K_{\\sigma(J)})\\] is a concatenation of $(\\mathbf p,m)$-admissible subdivisions.\n\\end{df}\n\\begin{df}[Admissible mesh]\\label{df: admissible mesh}\nA refinement $\\G$ of $\\G_0$ is \\emph{$(\\mathbf p,m)$-admissible} if there is a sequence of meshes $\\G_1,\\dots,\\G_J=\\G$ and markings $\\M_j\\subseteq\\G_j$ for $j=0,\\dots,J-1$, such that $\\G_{j+1}=\\subdiv(\\G_j,\\M_j)$ is an $(\\mathbf p,m)$-admissible subdivision for all $j=0,\\dots,J-1$. The set of all $(\\mathbf p,m)$-admissible meshes, which is the initial mesh and its $(\\mathbf p,m)$-admissible refinements, is denoted by $\\A$. 
For the sake of legibility, we write `admissible' instead of `$(\\mathbf p,m)$-admissible' throughout the rest of this paper.\n\\end{df}\n\n\\begin{thm}\\label{thm: ref works}\nAny admissible mesh $\\G$ and any set of marked elements $\\M\\subseteq\\G$ satisfy \\[\\refine(\\G,\\M)\\in\\A.\\]\n\\end{thm}\nThe proof of Theorem~\\ref{thm: ref works} given at the end of this section relies on the subsequent results.\n\n\\begin{lma}\\label{lma: magic patches are nested}\nGiven an admissible mesh $\\G$ and two nested elements $K\\subseteq\\hat K$ with $K,\\hat K\\in\\tcup\\A$, the corresponding $(\\mathbf p,m)$-patches are nested in the sense $\\patch\\G K\\subseteq\\patch\\G{\\hat K}$.\n\\end{lma}\nThe proof is given in Appendix~\\ref{apx: magic patches are nested}.\n\n\\begin{lma}[local quasi-uniformity]\\label{lma: levels change slowly}\nGiven $K\\in\\G\\in\\A$, \nany $K'\\in\\patch\\G K$ satisfies $\\ell(K')\\ge\\ell(K)-1$.\n\\end{lma}\nThe proof is given in Appendix~\\ref{apx: levels change slowly}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm: ref works}]\nGiven the mesh $\\G\\in\\A$ and marked elements $\\M\\subseteq\\G$ to be refined, we have to show that there is a sequence of meshes that are subsequent admissible refinements, with $\\G$ being the first and $\\refine(\\G,\\M)$ the last mesh in that sequence.\n\nSet $\\Mtilde\\sei\\clos(\\G,\\M)$ and\n\\begin{alignat}{2}\n\\notag\\overline L&\\sei\\max\\ell(\\Mtilde),\\quad\\underline L\\sei\\min\\ell(\\Mtilde)&&\\\\\n\\notag\\M_j&\\sei\\bigl\\{K\\in\\Mtilde\\mid\\ell(K)=j\\bigr\\}&&\\text{for}\\enspace j=\\underline L,\\dots,\\overline L\\\\\n\\label{eq: ref works 1}\\G_{\\underline L}&\\sei\\G,\\quad \\G_{j+1}\\sei\\subdiv(\\G_j,\\M_j)&\\enspace&\\text{for}\\enspace j=\\underline L,\\dots,\\overline L.\n\\end{alignat}\nIt follows that $\\refine(\\G,\\M)=\\G_{\\overline L+1}$. We will show by induction over $j$ that all subdivisions in \\eqref{eq: ref works 1} are admissible. \n\n\nFor the first step $j=\\underline L$, we know $\\{K'\\in\\Mtilde\\mid\\ell(K')<\\underline L\\}=\\emptyset$, and by construction of $\\Mtilde$ that for each $K\\in\\Mtilde_{\\underline L}$ holds \n$\\{K'\\in\\patch\\G K\\mid\\ell(K')<\\ell(K)\\}\\subseteq\\Mtilde$.\nTogether with $\\ell(K)=\\underline L$, it follows for any $K\\in\\Mtilde_{\\underline L}$ that there is no $K'\\in\\patch\\G K$ with $\\ell(K')<\\ell(K)$. This is, the subdivisions of all $K\\in\\Mtilde_{\\underline L}$ are admissible independently of their order and hence $\\subdiv(\\G_{\\underline L},\\Mtilde_{\\underline L})$ is admissible.\n\nConsider an arbitrary step $j\\in\\{\\underline L,\\dots,\\overline L\\}$ and assume that $\\G_{\\underline L},\\dots,\\G_j$ are admissible meshes. \nAssume for contradiction that there is $K\\in\\M_j$ of which the subdivision is not admissible, i.e., there exists $K'\\in\\smash{\\patch{\\G_j}K}$ with $\\ell(K')<\\ell(K)$ and consequently $K'\\notin\\Mtilde$, because $K'$ has not been refined yet. It follows from the closure Algorithm~\\ref{alg: closure} that $K'\\notin\\G$. Hence, there is $\\hat K\\in\\G$ such that $K'\\subset\\hat K$. \nWe have $\\ell(\\hat K)<\\ell(K')<\\ell(K)$, which implies $\\ell(\\hat K)<\\ell(K)-1$. 
Note that $K\\in\\G$ because $\\M_j\\subseteq\\Mtilde\\subseteq\\G$.\nFrom $K'\\in\\patch{\\G_j}K$, it follows by definition that $K'\\cap\\U(K)\\neq\\emptyset$, and $K'\\subset\\hat K$ yields \n$\\hat K\\cap\\U(K)\\neq\\emptyset$ and hence $\\hat K\\in\\patch\\G K$.\nTogether with $\\ell(\\hat K)<\\ell(K)-1$, Lemma~\\ref{lma: levels change slowly} implies that $\\G$ is not admissible, which contradicts the assumption.\n\\end{proof}\n\n\\section{T-spline definition}\\label{sec: Tspline def}\nIn this section, we define trivariate T-spline functions corresponding to a given admissible mesh. We roughly follow the definitions from \\cite{Morgenstern:Peterseim:2015}.\n\\begin{df}[Active nodes]\nFor each element $K=[x,x+\\tilde x]\\times[y,y+\\tilde y]\\times[z,z+\\tilde z]$, the corresponding set of vertices is denoted by \\[\\N(K)\\sei\\{x,x+\\tilde x\\}\\times\\{y,y+\\tilde y\\}\\times\\{z,z+\\tilde z\\}.\\] We refer to the elements of $\\N\\sei\\bigcup_{K\\in\\G}\\N(K)$ as \\emph{nodes}.\nWe define the \\emph{active region} \\[\\AR\\sei\\Bigl[\\ceilfrac{p_1}2,\\tilde X-\\ceilfrac{p_1}2\\Bigr]\\times\\Bigl[\\ceilfrac{p_2}2,\\tilde Y-\\ceilfrac{p_2}2\\Bigr]\\times\\Bigl[\\ceilfrac{p_3}2,\\tilde Z-\\ceilfrac{p_3}2\\Bigr]\\]\nand the set of \\emph{active nodes} $\\N_A\\sei\\N\\cap\\AR$. \n\\end{df}\n\\begin{df}[Skeleton]\nGiven a mesh $\\G$, denote the union of all closed $x$-orthogonal element faces by $\\xix\\sei\\tcup_{K\\in\\G}\\xix(K)$, with \n\\begin{align*}\n\\xix(K) &\\sei \\{x,x+\\tilde x\\}\\times[y,y+\\tilde y]\\times [z,z+\\tilde z]\\\\\n\\text{for any } K &= [x,x+\\tilde x]\\times[y,y+\\tilde y]\\times [z,z+\\tilde z]\\in\\G.\n\\end{align*}\nWe call $\\xix$ the \\emph{$x$-orthogonal skeleton}. Analogously, we denote the $y$-orthogonal skeleton by $\\xiy$, and the $z$-orthogonal skeleton by $\\xiz$.\n\\end{df}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=.275\\textwidth]{Xi_1}\\hspace{.08\\textwidth}\n\\includegraphics[width=.275\\textwidth]{Xi_2}\\hspace{.08\\textwidth}\n\\includegraphics[width=.275\\textwidth]{Xi_3}\n\\caption{$x$-orthogonal, $y$-orthogonal and $z$-orthogonal skeleton of the final mesh from Figure~\\ref{fig: refinement algorithm}.}\n\\label{fig: Xi_xyz}\n\\end{figure}\n\n\n\\begin{df}[Global index sets]\nFor any $x,y,z\\in\\reell$, we define\n\\begin{alignat*}{2}\n\\XX(y,z) &\\sei \\bigl\\{\\tilde x\\in[0,&\\tilde X]&\\mid(\\tilde x,y,z)\\in\\xix\\bigr\\},\\\\\n\\YY(x,z) &\\sei \\bigl\\{\\tilde y\\in[0,&\\tilde Y]&\\mid(x,\\tilde y,z)\\in\\xiy\\bigr\\},\\\\\n\\ZZ(x,y) &\\sei \\bigl\\{\\tilde z\\in[0,&\\tilde Z]&\\mid(x,y,\\tilde z)\\in\\xiz\\bigr\\}.\n\\end{alignat*}\nNote that in an admissible mesh, the entries $\\bigl\\{ 0,\\dots,\\ceilfrac {p_1}2-1,\\enspace \\tilde X-\\ceilfrac {p_1}2+1,\\dots,\\tilde X\\bigr\\}$ are always included in $\\XX(y,z)$ (and analogously for $\\YY(x,z)$ and $\\ZZ(x,y)$).\n\\end{df}\n\n\\begin{df}[Local index vectors]\nTo each active node $v=(v_1,v_2,v_3)\\in\\N_A$, we associate a local index vector $\\xx(v)\\in\\reell^{p_1+2}$, which is obtained by taking the unique $p_1+2$ consecutive elements in $\\XX(v_2,v_3)$ having $v_1$ as their $\\tfrac{p_1+3}2$-th (this is, the middle) entry. 
We analogously define $\\yy(v)\\in\\reell^{p_2+2}$ and $\\zz(v)\\in\\reell^{p_3+2}$.\n\\end{df}\n\\begin{df}[T-spline blending function]\n We associate to each active node $v\\in\\N_A$ a trivariate B-spline function, referred to as a \\emph{T-spline blending function}, defined as the product of the B-spline functions on the corresponding local index vectors, \\[B_v(x,y,z)\\coloneqq N_{\\xx(v)}(x) \\cdot N_{\\yy(v)}(y)\\cdot N_{\\zz(v)}(z).\\]\n\\end{df}\n\n\\section{Analysis-Suitability}\\label{sec: AS}\nIn this section, we give an abstract definition of Analysis-Suitability. Instead of using T-junction extensions as in the 2D case, we define \\emph{perturbed regions} through the intersection of particular T-spline supports. Analysis-Suitability is then defined as the absence of intersections between these perturbed regions. This idea is comparable to the 2D case, where Analysis-Suitability is defined as the absence of intersections between T-junction extensions.\nAfter these definitions, we prove that all previously defined admissible meshes are analysis-suitable.\n\n\\begin{df}[Perturbed regions]\\label{df: perturbed regions}%\nFor $q,r,s\\in\\reell$ define the slices\n\\begin{align*}\n\\Sx(q) &\\sei\\left\\{(\\tilde x,\\tilde y,\\tilde z)\\in\\AR\\mid \\tilde x=q\\right\\},\\\\\n\\Sy(r) &\\sei\\left\\{(\\tilde x,\\tilde y,\\tilde z)\\in\\AR\\mid \\tilde y=r\\right\\},\\\\\n\\Sz(s) &\\sei\\left\\{(\\tilde x,\\tilde y,\\tilde z)\\in\\AR\\mid \\tilde z=s\\right\\}.\n\\intertext{Moreover, we denote by}\n\\Nx(q) &\\sei\\left\\{(v_1,v_2,v_3)\\in\\N_A\\mid(q,v_2,v_3)\\in\\xix\\right\\}\n\\intertext{the set of all nodes whose projection onto the slice $\\Sx(q)$ lies in some element face.\nDefine analogously}\n\\Ny(r) &\\sei\\left\\{(v_1,v_2,v_3)\\in\\N_A\\mid(v_1,r,v_3)\\in\\xiy\\right\\},\\\\\n\\Nz(s) &\\sei\\left\\{(v_1,v_2,v_3)\\in\\N_A\\mid(v_1,v_2,s)\\in\\xiz\\right\\}.\\\\\n\\intertext{For any $q,r,s\\in\\reell$ we define \\emph{slice perturbations}}\n\\Rx(q) &\\sei \\Sx(q)\\cap\\mbigcup{v\\in\\Nx(q)}\\supp B_v\\ \\cap \\mbigcup{v\\in\\N_A\\setminus\\Nx(q)}\\supp B_v,\\\\\n\\Ry(r) &\\sei \\Sy(r)\\cap\\mbigcup{v\\in\\Ny(r)}\\supp B_v\\ \\cap \\mbigcup{v\\in\\N_A\\setminus\\Ny(r)}\\supp B_v,\\\\\n\\Rz(s) &\\sei \\Sz(s)\\cap\\mbigcup{v\\in\\Nz(s)}\\supp B_v\\ \\cap \\mbigcup{v\\in\\N_A\\setminus\\Nz(s)}\\supp B_v.\n\\end{align*}\nThe \\emph{perturbed regions} $\\Rx$, $\\Ry$, $\\Rz$ are defined by \n\\[\\Rx\\sei\\bigcup_{q\\in\\reell}\\Rx(q),\\quad\\Ry\\sei\\bigcup_{r\\in\\reell}\\Ry(r),\\quad\\Rz\\sei\\bigcup_{s\\in\\reell}\\Rz(s).\\]\nIn a uniform mesh, the perturbed regions are empty. In a non-uniform mesh, the perturbed regions are a superset of all hanging nodes and edges (that is, all kinds of 3D T-junctions). See Figure~\\ref{fig: AS example 1} for a 2D visualization of these definitions.\n\\end{df}\n\n\\begin{df}[Analysis-suitability]\\label{df: AS}%\nA given mesh $\\G$ is \\emph{analysis-suitable} if the above-defined perturbed regions do not intersect, i.e., if\n\\[\\Rx\\cap\\Ry =\\Ry \\cap \\Rz=\\Rz\\cap \\Rx= \\emptyset.\\]\nThe set of analysis-suitable meshes is denoted by $\\AS$.\n\\end{df}\n\\begin{rem}\nWhen applied in the two-dimensional case, the above definitions may yield perturbed regions that are larger than the T-junction extensions from \\cite{ZSHS:2012, BBCS:2012} (see Fig.~\\ref{fig: AS example 2}). 
However, this occurs only in meshes that are not analysis-suitable, and \nthe 2D version of Definition~\\ref{df: AS} is, regarding refinements of tensor-product meshes, equivalent to the classical definition of analysis-suitability%\n.\n\\end{rem}\n\n\\begin{figure}[ht]\n\\centering\n\\begin{tikzpicture}[baseline=0]\n\\draw[line width=3pt, red!40] (1.5,-.5) coordinate (q) -- (1.5,5.5) coordinate (sx);\n\\node[above] at (sx) {$\\Sx(q)$};\n\\node[below] at (q) {$q$};\n\\draw[<->] (-.5,6) node [above] {$y$} |- (5,-.5) node [right] {$x$};\n\\draw (q)++(0,-.1)--++(0,.2);\n\\draw[thick] (0,0) grid (4,5)\n (0,1.5)--(3,1.5) (1.5,0)--(1.5,2);\n\\foreach \\a in {0,...,4}\n{ \\foreach \\b in {0,1,2}\n \\draw[fill=orange!40] (\\a,\\b) circle (3pt);\n \\foreach \\b in {3,4,5}\n \\draw[fill=blue!40] (\\a,\\b) circle (3pt);\n}\n\\foreach \\b in {0,1,2}\n \\draw[fill=orange!40] (1.5,\\b) circle (3pt);\n\\foreach \\b in {0,1,1.5,2,3}\n \\draw[fill=orange!40] (\\b,1.5) circle (3pt);\n\\node[right] (na) at (4.5,3.5) {$\\N_A\\setminus\\Nx(q)$};\n\\node[right] (nx) at (4.5,1.5) {$\\Nx(q)$};\n\\foreach \\a in {3,4,5}\n \\draw[->,shorten >= 7pt] (na) -- (4,\\a);\n\\foreach \\a in {0,1,2}\n \\draw[->,shorten >= 7pt] (nx) -- (4,\\a);\n\\end{tikzpicture}\n\\begin{tikzpicture}[baseline=0]\n\\draw[line width=3pt, orange!40] (-.6,-.5) |- (4.4,4) |- (-.6,-.5);\n\\draw[line width=3pt, blue!40] (4.6,5.5) |- (2,1) |- (-.4,1.5) |- (4.6,5.5);\n\\draw[line width=3pt,red] (1.5,1.5)--(1.5,4) ++(.5,-1.5) coordinate (rx);\n\\node at (rx) {$\\Rx(q)$};\n\\draw[thick] (0,0) grid (4,5)\n (0,1.5)--(3,1.5) (1.5,0)--(1.5,2);\n\\node[left] at (-.6,0) {$\\mbigcup{v\\in\\Nx(q)}\\supp B_v$};\n\\node[left] at (-.4,5) {$\\mbigcup{v\\in\\N_A\\setminus\\Nx(q)}\\supp B_v$};\n\\end{tikzpicture}\n\n\\caption{2D example for the construction of the slice perturbation $\\Rx(q)$ in an analysis-suitable mesh. The left figure illustrates the construction of $\\Nx(q)$ and its complement $\\N_A\\setminus\\Nx(q)$, and the right figure shows the resulting slice perturbation, which coincides with the corresponding classical T-junction extension. 
}\n\\label{fig: AS example 1}\n\\end{figure}\n\\begin{figure}[ht]\n\\centering\n\\begin{tikzpicture}[baseline=0]\n\\draw[line width=3pt, red!40] (1.5,-.5) coordinate (q) -- (1.5,5.5) coordinate (sx);\n\\node[above] at (sx) {$\\Sx(q)$};\n\\node[below] at (q) {$q$};\n\\draw[<->] (-.5,6) node [above] {$y$} |- (5,-.5) node [right] {$x$};\n\\draw (q)++(0,-.1)--++(0,.2);\n\\draw[thick] (0,0) grid (4,5)\n (0,1.5)--(2,1.5) (1.5,0)--(1.5,2);\n\\foreach \\a in {0,...,4}\n{ \\foreach \\b in {0,1,2}\n \\draw[fill=orange!40] (\\a,\\b) circle (3pt);\n \\foreach \\b in {3,4,5}\n \\draw[fill=blue!40] (\\a,\\b) circle (3pt);\n}\n\\foreach \\b in {0,1,2}\n \\draw[fill=orange!40] (1.5,\\b) circle (3pt);\n\\foreach \\b in {0,1,1.5,2}\n \\draw[fill=orange!40] (\\b,1.5) circle (3pt);\n\\node[right] (na) at (4.5,3.5) {$\\N_A\\setminus\\Nx(q)$};\n\\node[right] (nx) at (4.5,1.5) {$\\Nx(q)$};\n\\foreach \\a in {3,4,5}\n \\draw[->,shorten >= 7pt] (na) -- (4,\\a);\n\\foreach \\a in {0,1,2}\n \\draw[->,shorten >= 7pt] (nx) -- (4,\\a);\n\\end{tikzpicture}\n\\begin{tikzpicture}[baseline=0]\n\\draw[line width=3pt, orange!40] (-.6,-.5) |- (4.4,4) |- (-.6,-.5);\n\\draw[line width=3pt, blue!40] (4.6,5.5) |- (1,1) |- (-.4,1.5) |- (4.6,5.5);\n\\draw[line width=3pt,red] (1.5,1)--(1.5,4) ++(.5,-1.5) coordinate (rx);\n\\node at (rx) {$\\Rx(q)$};\n\\draw[thick] (0,0) grid (4,5)\n (0,1.5)--(2,1.5) (1.5,0)--(1.5,2);\n\\node[left] at (-.6,0) {$\\mbigcup{v\\in\\Nx(q)}\\supp B_v$};\n\\node[left] at (-.4,5) {$\\mbigcup{v\\in\\N_A\\setminus\\Nx(q)}\\supp B_v$};\n\\end{tikzpicture}\n\\caption{2D example for the construction of the slice perturbation $\\Rx(q)$ in a mesh that is not analysis-suitable. The left figure illustrates the construction of $\\Nx(q)$ and its complement $\\N_A\\setminus\\Nx(q)$, and the right figure shows the resulting slice perturbation, which is strictly larger than the corresponding classical T-junction extension. }\n\\label{fig: AS example 2}\n\\end{figure}\n\n\n\\begin{thm}\\label{thm: A in AS}%\n$\\A\\subseteq\\AS$ for all $m\\ge 2$.\n\\end{thm}\n\n\\begin{proof}\nWe prove the claim by induction over admissible subdivisions.\nAssume $K\\in\\G\\in\\A\\cap\\AS$ and let $\\hat\\G\\sei\\subdiv(\\G,K)\\in\\A$ be an admissible subdivision of $\\G$. \nWe have to show that $\\hat\\G\\in\\AS$. We assume without loss of generality that $\\ell(K)=0\\bmod 3$. Hence subdividing $K$ adds $m-1$ faces to the mesh, which are $x$-orthogonal. Set $K\\eqqcolon[x,x+\\tilde x]\\times[y,y+\\tilde y]\\times[z,z+\\tilde z]$ and \n$\\tXi\\sei \\{x+\\tfrac jm\\tilde x\\mid j\\in\\{1,\\dots, m-1\\}\\}$, then the skeletons of $\\hat\\G$ are given by\n\\begin{equation*}\n\\hatxix=\\xix\\cup\\tXi\\times[y,y+\\tilde y]\\times[z,z+\\tilde z] ,\\quad\n\\hatxiy=\\xiy,\\quad\\hatxiz=\\xiz.\n\\end{equation*}\nLet $\\hat v\\in\\hat\\N_A\\setminus\\N_A$ be a new active node.\nUsing the local quasi-uniformity from Lemma~\\ref{lma: levels change slowly}, it can be verified that for all $r\\in\\reell$ such that $\\hat v\\in\\hat\\Ny(r)$ follows $\\Ry(r)\\cap\\supp B_{\\hat v}=\\emptyset$. Consequently, $\\hat\\Ry=\\Ry$ and analogously $\\hat \\Rz=\\Rz$. Moreover, $\\hat\\Rx(q)=\\Rx(q)$ for all $q\\notin\\tXi$. 
It remains to characterize \n\\[\\hat\\Rx(\\xi) = \\Sx(\\xi)\\cap\\mbigcup{v\\in\\hat\\Nx(\\xi)}\\supp B_v\\cap\\mbigcup{v\\in\\hat\\N_A\\setminus\\hat\\Nx(\\xi)}\\supp B_v\\]\nfor any $\\xi\\in\\tXi$.\nWith \n\\begin{equation}\\label{eq: A in AS: active nodes}\n\\begin{aligned}\n\\hat\\Nx(\\xi)&=\\Nx(\\xi)\\cup\\hat\\N_A\\setminus\\N_A\\\\\\text{and}\\enspace\\hat\\N_A\\setminus\\hat\\Nx(\\xi)&=\\N_A\\setminus\\hat\\Nx(\\xi)=\\N_A\\setminus\\Nx(\\xi),\n\\end{aligned}\n\\end{equation}\nit follows\n\\begin{align*}\n\\hat\\Rx(\\xi) &=\n\\Sx(\\xi)\\cap\\mbigcup{v\\in\\hat\\Nx(\\xi)}\\supp B_v\\ \\cap \\mbigcup{v\\in\\hat\\N_A\\setminus\\hat\\Nx(\\xi)}\\supp B_v,\\\\\n&\\Stackrel{\\eqref{eq: A in AS: active nodes}}= \\Sx(\\xi)\\cap\\Bigl(\\mbigcup{v\\in\\Nx(\\xi)}\\supp B_v\\cup\\mbigcup{v\\in\\hat\\N_A\\setminus\\N_A}\\supp B_v\\Bigr)\\cap\\mbigcup{v\\in\\N_A\\setminus\\Nx(\\xi)}\\supp B_v\\\\\n&= \\Rx (\\xi) \\cup \\Bigl(\\underbrace{\\Sx(\\xi)\\ \\cap\\mbigcup{\\hat v\\in\\hat\\N_A\\setminus\\N_A}\\supp B_{\\hat v}}_\\Sigma\\ \\cap\\mbigcup{v\\in\\N_A\\setminus\\Nx(\\xi)}\\supp B_v\\Bigr).\n\\end{align*}\nWe will prove below that $\\Sigma\\cap\\hat\\Rz=\\Sigma\\cap\\hat\\Ry=\\emptyset$. See Figures \\ref{fig: Sx-levels} and \\ref{fig: Sx-perturbations} for an example with $\\ell(K)=3$ and $m=2$.\nAssume for contradiction that there is $s\\in\\reell$ with $\\hat\\Rz(s)\\cap\\Sigma\\neq\\emptyset$. Then there exist $v\\in\\hat\\Nz(s)$ and $w\\in\\hat\\N_A\\setminus\\hat\\Nz(s)$ such that \n\\begin{equation}\\label{eq: A in AS: eqn to contradict}\n\\Sz(s)\\cap \\supp B_v\\cap\\supp B_w\\cap\\Sigma\\neq\\emptyset.\n\\end{equation}\nSince the subdivision of $K$ is admissible, we know that all elements in $\\patch\\G K$ are at least of level $\\ell(K)$. This implies that all those elements are of equal or smaller size than $K$. \nDenote $\\midp(K)\\eqqcolon(\\sigma,\\nu,\\tau)$ and $\\varepsilon\\sei\\tfrac{m^{-\\ell(K)\/3}}2$.\nIt follows\n\\begin{equation}\\label{eq: A in AS: Sigma in patch}\n\\Sigma\\subseteq\\tcup\\patch\\G K,\n\\end{equation}\nand with \\[\\hat\\N_A\\setminus\\N_A\\ \\subset\\ [\\sigma-\\varepsilon,\\sigma+\\varepsilon]\\times[\\nu-\\varepsilon,\\nu+\\varepsilon]\\times[\\tau-\\varepsilon,\\tau+\\varepsilon],\\]\nwe get more precisely\n\\begin{equation}\\label{eq: A in AS: def Sigma}\n\\Sigma\\subseteq\\left\\{\\xi\\right\\}\\times\\bigl[\\nu-\\varepsilon(p_2+2),\\nu+\\varepsilon(p_2+2)\\bigr]\\times\\bigl[\\tau-\\varepsilon(p_3+2),\\tau+\\varepsilon(p_3+2)\\bigr].\n\\end{equation}\nThe second-order patch $\\patch\\G{\\patch\\G K}\\sei\\bigcup_{K'\\in\\patch\\G K}\\patch\\G{K'}$ consists of elements that may be larger in $z$-direction, but are of same or smaller size than $K$ in $x$- and $y$-direction. For $w=(w_1,w_2,w_3)$, Equation \\eqref{eq: A in AS: eqn to contradict} implies $\\supp B_w\\cap\\Sigma\\neq\\emptyset$, and we conclude from \\eqref{eq: A in AS: def Sigma} that \n\\begin{equation}\\label{eq: A in AS: w is near Sigma}\n(w_1,w_2)\\ \\in\\ \\bigl[\\xi-\\varepsilon(p_1+1),\\xi+\\varepsilon(p_1+1)\\bigr]\\times\\bigl[\\nu-\\varepsilon(2p_2+3),\\nu+\\varepsilon(2p_2+3)\\bigr]\n\\end{equation}\nWe assume that there is no element in $\\G$ with level higher than $\\ell(K)+1$. This is an eligible assumption, since every admissible mesh can be reproduced by a sequence of level-increasing admissible subdivisions; see \\cite[Proposition 4.3]{Morgenstern:Peterseim:2015} for a detailed construction. 
This assumption implies that the $z$-orthogonal skeleton $\\xiz$ is a subset of the $z$-orthogonal skeleton of a uniform $(\\ell(K)+1)$-leveled mesh, \n\\begin{equation}\n\\xiz(\\G)\\subseteq\\xiz(\\Guni{\\ell(K)+1}), \\label{eq: A in AS: xiz in xiz-uni} \n\\end{equation}\nand with $\\min\\ell(\\patch\\G K)=\\ell(K)$, we have even equality on the patch $\\patch\\G K$,\n\\begin{equation}\\label{eq: A in AS: xiz-patch is xiz-uni-patch}\n\\xiz\\bigl(\\patch\\G K\\bigr)=\\xiz\\bigl(\\patch{\\Guni{\\ell(K)}}K\\bigr)=\\xiz\\bigl(\\patch{\\Guni{\\ell(K)+1}}K\\bigr), \n\\end{equation}\nusing the notation $\\xiz\\bigl(\\patch\\G K\\bigr)\\sei\\xiz(\\G)\\cap\\tcup\\patch\\G K$. \nSince $v\\in\\hat\\Nz(s)$, we know that \\mbox{$\\hat\\Nz(s)\\neq\\emptyset$}, \nwhich means that there are elements in $\\G$ that have $z$-orthogonal faces at the $z$-coordinate $s$, i.e., \n\\mbox{$\\Sz(s)\\cap \\xiz(\\G) \\neq\\emptyset$}. With \\eqref{eq: A in AS: xiz in xiz-uni} we get $\\Sz(s)\\cap \\xiz(\\Guni{\\ell(K)+1}) \\neq\\emptyset$.\nSince $\\Guni{\\ell(K)+1}$ is a tensor-product mesh, its $z$-orthogonal skeleton consists of global domain slices, which yields\n$\\Sz(s)\\subseteq \\xiz(\\Guni{\\ell(K)+1}).$\nThe restriction to the patch $\\patch\\G K$ yields\n\\begin{equation}\\label{eq: A in AS: Szs-GpK in xiz-GpK}\n\\Sz(s)\\cap\\tcup\\patch\\G K\\subseteq \\xiz\\bigl(\\patch{\\Guni{\\ell(K)+1}}K\\bigr) \\stackrel{\\eqref{eq: A in AS: xiz-patch is xiz-uni-patch}}=\\xiz\\bigl(\\patch\\G K\\bigr)\\subseteq\\xiz(\\G).\n\\end{equation}\nEquation \\eqref{eq: A in AS: eqn to contradict} implies that $\\Sz(s)\\cap\\Sigma\\neq\\emptyset$, and\nwith \\eqref{eq: A in AS: Sigma in patch} we get that $\\Sz(s)\\cap\\tcup\\patch\\G K\\neq\\emptyset$. \nHence\n\\begin{align}\n\\Sz(s)\\cap\\tcup\\patch\\G K\\ &\\supseteq\\ \\Sz(s)\\cap\\U(K) \\notag \\\\\n&=\\ \\bigl[\\xi-\\varepsilon(2p_1+3),\\xi+\\varepsilon(2p_1+3)\\bigr]\\times\\bigl[\\nu-\\varepsilon(2p_2+3),\\nu+\\varepsilon(2p_2+3)\\bigr]\\times\\{s\\}.\n\\end{align}\nSince $w\\notin\\hat\\Nz(s)$, we know by definition that $(w_1,w_2,s)\\notin\\xiz$. Then it follows from \\eqref{eq: A in AS: Szs-GpK in xiz-GpK} that $(w_1,w_2,s)\\notin\\Sz(s)\\cap\\tcup\\patch\\G K$,\nand hence \n\\begin{equation}\n(w_1,w_2)\\ \\notin\\ \\bigl[\\xi-\\varepsilon(2p_1+3),\\xi+\\varepsilon(2p_1+3)\\bigr]\\times\\bigl[\\nu-\\varepsilon(2p_2+3),\\nu+\\varepsilon(2p_2+3)\\bigr] \n\\end{equation}\nin contradiction to \\eqref{eq: A in AS: w is near Sigma}. This proves that $\\hat\\Rz\\cap\\Sigma=\\emptyset$. 
Similar arguments prove that $\\Sigma\\cap\\Ry=\\emptyset$, which concludes the proof.\n\\end{proof}\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tikzpicture}[scale=.65]\n\\sf\\scriptsize\n\\fill[blue!30] (6,9.5) rectangle (10.5,12);\n\\draw[ultra thick, blue!70] (5,7) rectangle (11.5,14);\n\\draw (0,0) grid (17,21);\n\\foreach \\a in {5,...,11}\n \\draw (\\a+.5,7)--(\\a+.5,14);\n\\foreach \\a in {9.5,10.5,11.5}\n \\draw (6,\\a)--(10.5,\\a);\n\\foreach \\a in {0,...,20}\n{ \\foreach \\b in {0,1,2,14,15,16}\n \\node at (\\b+.5,\\a+.5) {0};\n}\n\\foreach \\a in {3,...,13}\n{ \\foreach \\b in {0,1,2,18,19,20}\n \\node at (\\a+.5,\\b+.5) {0};\n}\n\\foreach \\a in {3,...,17}\n{ \\foreach \\b in {3,4,12,13}\n \\node at (\\b+.5,\\a+.5) {1};\n}\n\\foreach \\a in {5,...,11}\n{ \\foreach \\b in {3,4,5,6,14,15,16,17}\n \\node at (\\a+.5,\\b+.5) {1};\n}\n\\foreach \\a in {7,...,13}\n{ \\foreach \\b in {5,5.5,10.5,11,11.5}\n \\node at (\\b+.25,\\a+.5) {2};\n}\n\\foreach \\a in {12,...,20}\n{ \\foreach \\b in {7,8,12,13}\n \\node at (\\a\/2+.25,\\b+.5) {2};\n}\n\\foreach \\a in {12,...,20}\n{ \\foreach \\b in {9,9.5,10,11,11.5}\n \\node at (\\a\/2+.25,\\b+.25) {3};\n}\n\\foreach \\a in {12,13,14,15,17,18,19,20}\n \\node at (\\a\/2+.25,10.75) {3};\n\\node at (8.25,10.75) {4};\n\\end{tikzpicture}\n\\caption{$yz$-view on the slice $\\Sx(\\xi)$. The numbers denote element levels, and the element in the center with level 4 is a child of $K$. The patch $\\patch{\\hat\\G}K$ is highlighted in blue, and the second-order patch $\\patch{\\hat\\G}{\\patch{\\hat \\G}K}$ is indicated by a thick blue line.}\n\\label{fig: Sx-levels}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tikzpicture}[scale=.5]\n\\fill[red!50] (7,9.5) rectangle (9.5,12)\n(1,1) -- (16,1) -- (16,20) -- (1,20) -- (1,1) -- (4,4) -- (4,17) -- (13,17) -- (13,4) -- (4,4) -- (1,1);\n\\foreach \\a in {5,...,11}\n \\draw[ultra thick, red!50] (\\a+.5,5)--(\\a+.5,8) (\\a+.5,13)--(\\a+.5,16);\n\\foreach \\a in {9.5,10.5,11.5}\n \\draw[ultra thick, red!50] (5,\\a)--(6.5,\\a) (10,\\a)--(11.5,\\a);\n\\draw (0,0) grid (17,21);\n\\foreach \\a in {5,...,11}\n \\draw (\\a+.5,7)--(\\a+.5,14);\n\\foreach \\a in {9.5,10.5,11.5}\n \\draw (6,\\a)--(10.5,\\a);\n\\end{tikzpicture}\n\\caption{$yz$-view on the slice $\\Sx(\\xi)$. $\\Rx$ is indicated by red areas. $\\Ry$ is depicted by horizontal red lines, $\\Rz$ are vertical red lines. At the same time, the squared red area in the center coincides with $\\Sigma$.}\n\\label{fig: Sx-perturbations}\n\\end{figure}\n\\section{Dual-Compatibility}\\label{sec: DC}%\nThis section recalls the concept of Dual-Compatibility, which is a sufficient criterion for linear independence of the T-spline functions, based on dual functionals. We follow the ideas of \\cite{BBSV:2014} for the definitions and for the proof of linear independence. 
In addition, we prove that all analysis-suitable (and hence all admissible) meshes are dual-compatible and thereby generalize a 2D result from \\cite{BBCS:2012}.\n\n\\begin{prp}[Dual functional, {\\cite[Theorem 4.41]{Schumaker:2007}}]\\label{prp:overlap}%\nGiven the local index vector $X=(x_1,\\dots,x_{p+2})$, there exists an $L^2$-functional $\\lambda_{X}$ with $\\supp\\lambda_X=\\supp N_X$ such that for any $\\tilde X=(\\tilde x_1,\\dots,\\tilde x_{p+2})$ satisfying\n\\begin{equation}\\label{eq: overlap}\n \\begin{alignedat}{4}\n \\forall\\ x&\\in\\{x_1,\\dots,x_{p+2}\\}&&:&\\quad \\tilde x_1&\\leq x\\leq \\tilde x_{p+2}&&\\Rightarrow x\\in \\{\\tilde x_1,\\dots,\\tilde x_{p+2}\\}\\\\\n \\text{and}\\quad\\forall\\ \\tilde x&\\in\\{\\tilde x_1,\\dots,\\tilde x_{p+2}\\}&&:&\\quad x_1&\\leq \\tilde x\\leq x_{p+2}&&\\Rightarrow \\tilde x\\in \\{x_1,\\dots,x_{p+2}\\},\n \\end{alignedat}\n\\end{equation}\nfollows $\\lambda_{X}(N_{\\tilde X}) = \\delta_{X\\tilde X}$.\n\\end{prp}\n\\begin{proof}\nFollowing \\cite{Schumaker:2007}, we construct a dual functional\non the same local knot vector $X$ which we denote by $\\lambda_{X}:L^2\\bigl([0,1]\\bigr)\\to\\reell$. For details, see \\cite[Theorem 4.34, 4.37, and 4.41]{Schumaker:2007}.\nLet $y_j=\\cos\\bigl(\\tfrac{p-j+1}{p+1}\\pi\\bigr)$ for $j=0,\\dots,p+1$.\nUsing divided differences, the perfect B-spline of order $p+1$ is defined by\n\\[B^*_{p+1}(x)\\sei (p+1)\\,(-1)^{p+1}\\bigl[y_0,\\dots,y_{p+1}\\bigr]\\left((x-\\bullet)_+\\right)^p\\]\nand satisfies (amongst other things) $\\int_{-1}^1B^*_{p+1}(x)\\de x=1$ as depicted in Figure~\\ref{pic: perfect B-spline}. \n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=.5\\textwidth]{perfBS}\n \\caption{Plot of the perfect B-splines $B^*_4$ (solid), $B^*_6$ (dotted), $B^*_{10}$ (dashed) and the corresponding antiderivatives.}\n \\label{pic: perfect B-spline}\n\\end{figure}\nSet \\[G_{X}(x) \\sei \\int_{-1}^{\\tfrac{2x-{x_1}-{x_{p+2}}}{{x_{p+2}}-{x_1}}}B^*_{p+1}(t)\\de t \\quad\\text{for }{x_1}\\le x\\le {x_{p+2}}\\] and\n\\[\\phi_{X}(x)=\\tfrac1{p!}\\left(x-{x_2}\\right)\\cdots\\bigl(x-{x_{p+1}}\\bigr).\\] We define the dual functional by \n\\begin{equation}\\label{eq:def_lambda}\n\\lambda_{X}(f) = \\int_{{x_1}}^{{x_{p+2}}}\\mspace{-9mu}f\\, D^{p+1}(G_{X}\\,\\phi_{X})\\de x\\quad\\text{for all }f\\in L^2\\bigl([0,1]\\bigr).\n\\end{equation}\nNote in particular that for all $f\\in L^2(\\reell)$ with $f|_{[x_1,x_{p+2}]}=0$ follows $\\lambda_{X}(f)=0$.\nIf \\eqref{eq: overlap} holds then the claim follows by construction, see \\cite[Theorem 4.41]{Schumaker:2007}.\n\\end{proof}\nWe say that two index vectors verifying \\eqref{eq: overlap} \\emph{overlap}. 
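For illustration (the numbers below are a constructed example and do not refer to any particular mesh), let $p=3$, so that local index vectors have $p+2=5$ entries, and consider\n\\[X=(0,1,2,3,4),\\qquad \\tilde X=(1,2,3,4,5),\\qquad \\hat X=(0,1,3,4,5).\\]\nThe vectors $X$ and $\\tilde X$ overlap: every entry of $X$ inside $[1,5]$ belongs to $\\tilde X$, and every entry of $\\tilde X$ inside $[0,4]$ belongs to $X$. In contrast, $X$ and $\\hat X$ do not overlap, since $2\\in X$ satisfies $\\hat x_1\\le 2\\le\\hat x_{p+2}$ but $2\\notin\\hat X$; in that case, Proposition~\\ref{prp:overlap} makes no statement about $\\lambda_{X}(N_{\\hat X})$.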
\nIn order to define the set of T-spline blending functions whose linear independence we aim to establish, we construct local index vectors for each active node.\n\n\\begin{df}\nWe define the functional $\\lambda_v$ by\n \\[\\lambda_v(B_{w})\\sei \\lambda_{\\xx(v)}(N_{\\xx(w)})\\cdot\\lambda_{\\yy(v)}(N_{\\yy(w)})\\cdot\\lambda_{\\zz(v)}(N_{\\zz(w)})\\]\n using the one-dimensional functional $\\lambda_X$ defined in \\eqref{eq:def_lambda}.\n\\end{df}\n\n\\begin{df}\\label{df: partial overlap}%\nWe say that a pair of nodes $v,w\\in\\N$ \\emph{partially overlap} if their index vectors overlap in at least two out of three dimensions; that is, if (at least) two of the pairs \\[\\bigl(\\xx(v),\\xx(w)\\bigr),\\ \\bigl(\\yy(v),\\yy(w)\\bigr),\\ \\bigl(\\zz(v),\\zz(w)\\bigr)\\] overlap in the sense of Proposition~\\ref{prp:overlap}.\\bigskip\n\\end{df}\n\n\\begin{df}\nA mesh $\\G$ is \\emph{dual-compatible (DC)} if any two active nodes $v,w\\in\\N_A$ with $\\bigl|\\supp B_v \\cap\\supp B_w\\bigr|>0$ partially overlap.\nThe set of dual-compatible meshes is denoted by $\\DC$.\n\\end{df}\n\\begin{rem}\nDefinition~\\ref{df: partial overlap} above implies the notion of \\emph{partial overlap} given in \\cite[Def.\\ 7.1]{BBSV:2014}, but the two definitions are not equivalent. The definition given in \\cite{BBSV:2014} is more general, and the corresponding mesh classes are nested in the sense \n$\\DC\\subseteq\\DC_{\\text{\\cite{BBSV:2014}}}$. However, the two definitions are equivalent in the two-dimensional setting.\n\\end{rem}\n\nThe following lemma states that the perturbed regions from Definition~\\ref{df: perturbed regions} indicate non-overlapping knot vectors, and it is applied in the proof of Theorem~\\ref{thm: AS in DC} below.\n\\begin{lma}\\label{lma: perturbed regions indicate non-overlapping}%\nLet $q\\in[0,\\tilde X]$ and $v_1,v_2\\in\\N_A$.\nIf $v_1\\in\\Nx(q)\\notni v_2$ and $\\Sx(q)\\cap\\supp B_{v_1}\\cap\\supp B_{v_2}\\neq\\emptyset$, then $\\xx(v_1)$ and $\\xx(v_2)$ do not overlap in the sense of \\eqref{eq: overlap}.\n\\end{lma}\nThis holds analogously for $\\Ny(r),r\\in[0,\\tilde Y]$ and $\\Nz(s),s\\in[0,\\tilde Z]$.\n\\begin{proof}\nLet $v_1=(x_1,y_1,z_1)$. From $v_1\\in\\Nx(q)$ and Definition~\\ref{df: perturbed regions}, we conclude that $(q,y_1,z_1)\\in\\xix$, and hence $q\\in\\XX(y_1,z_1)$. Let $\\xx(v_1)=(x_1^1,\\dots,x_1^{p_1+2})$ be the local $x$-direction knot vector associated to $v_1$; then \\mbox{$\\supp B_{v_1}\\cap \\Sx(q)\\neq\\emptyset$} implies that $x_1^1\\le q\\le x_1^{p_1+2}$. Since $\\xx(v_1)$ consists of consecutive entries of $\\XX(y_1,z_1)$, this and $q\\in\\XX(y_1,z_1)$ yield $q\\in\\xx(v_1)$.\nLet \\mbox{$v_2=(x_2,y_2,z_2)$}. From $v_2\\notin\\Nx(q)$, we get $(q,y_2,z_2)\\notin\\xix$, hence $q\\notin\\XX(y_2,z_2)$, and in particular \\mbox{$q\\notin\\xx(v_2)$}. Let \\mbox{$\\xx(v_2)=(x_2^1,\\dots,x_2^{p_1+2})$} be the local knot vector associated to $v_2$; then \\mbox{$\\supp B_{v_2}\\cap \\Sx(q)\\neq\\emptyset$} implies that $x_2^1\\le q\\le x_2^{p_1+2}$. 
Together with $\\xx(v_1)\\ni q\\notin\\xx(v_2)$, we see that $\\xx(v_1)$ and $\\xx(v_2)$ do not overlap.\n\\end{proof}\n\n\n\\begin{thm}\\label{thm: AS in DC}%\n$\\AS=\\DC$.\n\\end{thm}\n\n\\begin{proof}\n``$\\subseteq$''.\nAssume for contradiction that a mesh $\\G$ is not DC; hence there exist active nodes \\mbox{$v,w\\in\\N_A$} with \\mbox{$\\bigl|\\supp B_v\\cap\\supp B_w\\bigr|>0$} that do not partially overlap, so that there are two directions in which their index vectors do not overlap, without loss of generality $x$ and $y$.\nWe will show that there exist two slice perturbations $\\Rx(q)$ and $\\Ry(r)$ with nonempty intersection.\nWe denote $v=(v_1,v_2,v_3)$, $w=(w_1,w_2,w_3)$ and $\\xx(v)=(x^v_1,\\dots,x^v_{p_1+2})$. The elements of $\\yy(v),\\xx(w),\\yy(w)$ are denoted analogously. Moreover we define\n\\begin{alignat*}{2}\nx\\mn&\\sei\\max(x^v_1,x^w_1),&\\enspace x\\mx&\\sei\\min(x^v_{p_1+2},x^w_{p_1+2})\\\\\ny\\mn&\\sei\\max(y^v_1,y^w_1),& y\\mx&\\sei\\min(y^v_{p_2+2},y^w_{p_2+2})\\\\\nz\\mn&\\sei\\max(z^v_1,z^w_1),& z\\mx&\\sei\\min(z^v_{p_3+2},z^w_{p_3+2})\n\\end{alignat*}\nand note that\n\\begin{align*}\n\\supp B_v \\cap \\supp B_w = [x\\mn,x\\mx]\\times[y\\mn,y\\mx]\\times[z\\mn,z\\mx].\n\\end{align*}\nSince $\\xx(v)$ and $\\xx(w)$ do not overlap, there exists $q\\in[x\\mn,x\\mx]$ with either \\mbox{$\\xx(v)\\ni q\\notin\\xx(w)$}\\enspace or \\mbox{$\\xx(v)\\notni q\\in\\xx(w)$}. Without loss of generality we assume $\\xx(v)\\ni q\\notin\\xx(w)$. Since $\\xx(v)\\subseteq \\XX(v_2,v_3)$, it follows by definition that $(q,v_2,v_3)\\in\\xix$ and hence $v\\in\\Nx(q)$.\nSince $q\\notin\\xx(w)$ although $x^w_1\\le q\\le x^w_{p_1+2}$, and since $\\xx(w)$ consists of consecutive entries of $\\XX(w_2,w_3)$, we have $q\\notin\\XX(w_2,w_3)$, hence $(q,w_2,w_3)\\notin\\xix$ and $w\\notin\\Nx(q)$.\nThen\n\\begin{align*}\n\\Rx(q) &= \\Sx(q)\\ \\cap\\ \\mathbox{ \\bigcup_{v'\\in\\Nx(q)} } \\supp B_{v'}\\ \\cap\\ \\mathbox{ \\bigcup_{v'\\in\\N_A\\smallsetminus\\Nx(q)} } \\supp B_{v'}\\\\\n &\\supseteq \\Sx(q)\\ \\cap \\ \\supp B_v\\ \\cap \\ \\supp B_w\\\\\n &= \\{q\\}\\times[y\\mn,y\\mx]\\times[z\\mn,z\\mx].\n \\intertext{Analogously, since $\\yy(v)$ and $\\yy(w)$ do not overlap, there exists $r\\in[y\\mn,y\\mx]$ with}\n \\Ry(r) &\\supseteq [x\\mn,x\\mx]\\times\\{r\\}\\times[z\\mn,z\\mx]\n \\intertext{and hence}\n \\Rx(q) \\cap \\Ry(r) &\\supseteq \\{q\\}\\times\\{r\\}\\times[z\\mn,z\\mx]\\neq\\emptyset,\n \\end{align*}\nwhich means that the mesh $\\G$ is not analysis-suitable.\\bigskip\n\n``$\\supseteq$''.\nAssume for contradiction that the mesh is not analysis-suitable, and w.l.o.g.\\ that there is \\mbox{$v=(q,r,s)\\in\\reell^3$} such that $\\Rx\\cap\\Ry\\supseteq\\{v\\}\\neq\\emptyset$. 
Definition~\\ref{df: perturbed regions} implies that there exist\n$v_1,v_2,v_3,v_4\\in\\N_A$ with\n\\mbox{$v_1\\in\\Nx(q)\\notni v_2$} and \\mbox{$v_3\\in\\Ny(r)\\notni v_4$}\\enspace such that\n\\[v\\ \\in\\ \\Sx(q)\\cap\\Sy(r)\\cap\\supp B_{v_1}\\cap\\supp B_{v_2}\\cap\\supp B_{v_3}\\cap\\supp B_{v_4}.\\]\nLemma~\\ref{lma: perturbed regions indicate non-overlapping} yields that $\\xx(v_1)$ and $\\xx(v_2)$ do not overlap, and that $\\yy(v_3)$ and $\\yy(v_4)$ do not overlap.\n\n\\textit{Case 1.} If $v_1\\in\\Ny(r)\\notni v_2$, or $v_1\\notin\\Ny(r)\\ni v_2$, then $v_1$ and $v_2$ do not partially overlap.\n\n\\textit{Case 2.} If $v_1\\in\\Ny(r)$ and $v_4\\notin\\Nx(q)$, then $v_1$ and $v_4$ do not partially overlap.\n\n\\textit{Case 3.} If $v_1\\notin\\Ny(r)$ and $v_3\\notin\\Nx(q)$, then $v_1$ and $v_3$ do not partially overlap.\n\n\\textit{Case 4.} If $v_2\\in\\Ny(r)$ and $v_4\\in\\Nx(q)$, then $v_2$ and $v_4$ do not partially overlap.\n\n\\textit{Case 5.} If $v_2\\notin\\Ny(r)$ and $v_3\\in\\Nx(q)$, then $v_2$ and $v_3$ do not partially overlap.\n\nIn all cases (see the table in Fig.~\\ref{tb: cases}), the mesh is not dual-compatible. This concludes the proof.\n\\end{proof}\n\\begin{figure}[ht]\n\\renewcommand \\arraystretch 1\n\\newcommand \\Hline {\\hhline{*{3}{-|}-||-}}\n\\centering\n\\begin{tabular}{c|c|c|c||l}\n$v_1\\in\\Ny(r)$ & $v_2\\in\\Ny(r)$ & $v_3\\in\\Ny(r)$ & $v_4\\in\\Ny(r)$ & case(s) \\\\\\hhline{*{3}{=:}=::=}\n\\true & \\true & \\true & \\true & 4 \\\\\\Hline\n\\true & \\true & \\true & \\false & 2 \\\\\\Hline\n\\true & \\true & \\false & \\true & 4 \\\\\\Hline \n\\true & \\true & \\false & \\false & 2 \\\\\\Hline \n\\true & \\false & \\true & \\true & 1, 5 \\\\\\Hline \n\\true & \\false & \\true & \\false & 1, 2, 5 \\\\\\Hline \n\\true & \\false & \\false & \\true & 1 \\\\\\Hline \n\\true & \\false & \\false & \\false & 1, 2 \\\\\\Hline \n\\false & \\true & \\true & \\true & 1, 4 \\\\\\Hline \n\\false & \\true & \\true & \\false & 1 \\\\\\Hline \n\\false & \\true & \\false & \\true & 1, 3, 4 \\\\\\Hline \n\\false & \\true & \\false & \\false & 1, 3 \\\\\\Hline \n\\false & \\false & \\true & \\true & 5 \\\\\\Hline \n\\false & \\false & \\true & \\false & 5 \\\\\\Hline \n\\false & \\false & \\false & \\true & 3 \\\\\\Hline \n\\false & \\false & \\false & \\false & 3\n\\end{tabular}\n\\caption{The five cases considered in the proof of Theorem~\\ref{thm: AS in DC} cover all possible configurations.}\n\\label{tb: cases}\n\\end{figure}\n\n\n\\begin{thm}\\label{thm: DC has dual basis}\n Let $\\G$ be a DC T-mesh. Then the set of functionals $\\{\\lambda_v\\mid v\\in\\N_A\\}$ is a set of dual functionals for the set $\\{B_v\\mid v\\in\\N_A\\}$.\n\\end{thm}\nThe proof below follows the ideas of \\cite[Proposition~5.1]{BBCS:2012} and \\cite[Proposition~7.3]{BBSV:2014}.\n\\begin{proof}\nLet $v,w\\in\\N_A$. We need to show that \n\\begin{equation}\\label{eq:DC_claim}\n\\lambda_v(B_w) = \\delta_{vw},\n\\end{equation}\nwith $\\delta$ representing the Kronecker symbol.\n\nIf $\\supp B_v$ and $\\supp B_w$ are disjoint (or have an intersection of empty interior), then at least one of the pairs \n\\[\\bigl(\\supp(N_{\\xx(v)}),\\supp(N_{\\xx(w)})\\bigr),\\ \\bigl(\\supp(N_{\\yy(v)}),\\supp(N_{\\yy(w)})\\bigr),\\ \\bigl(\\supp(N_{\\zz(v)}),\\supp(N_{\\zz(w)})\\bigr)\\]\nhas an intersection with empty interior. 
Assume w.l.o.g.\\ that $\\left|\\supp(N_{\\xx(v)})\\cap\\supp(N_{\\xx(w)})\\right|=0$; then \n\\[\\lambda_v(B_w) = \\underbrace{\\lambda_{\\xx(v)}(N_{\\xx(w)})}_0\\cdot \\lambda_{\\yy(v)}(N_{\\yy(w)})\\cdot \\lambda_{\\zz(v)}(N_{\\zz(w)})=0.\\]\nAssume now that $\\supp B_v$ and $\\supp B_w$ have an intersection with nonempty interior.\nSince the mesh $\\G$ is DC, the two nodes partially overlap, i.e., their index vectors overlap in at least two dimensions. Without loss of generality we may assume that the index vectors $\\bigl(\\xx(v),\\xx(w)\\bigr)$ and $\\bigl(\\yy(v),\\yy(w)\\bigr)$ overlap. Proposition~\\ref{prp:overlap} yields \n\\[\\lambda_{\\xx(v)}(N_{\\xx(w)}) = \\delta_{v_1w_1} \\enspace\\text{and}\\enspace \\lambda_{\\yy(v)}(N_{\\yy(w)}) = \\delta_{v_2w_2} .\\]\nThe above identities immediately prove \\eqref{eq:DC_claim} if $v_1\\neq w_1$ or $v_2\\neq w_2$. If, on the contrary, $v_1=w_1$ and $v_2=w_2$, then $v$ and $w$ are aligned in the $z$-direction, that is, $\\zz(v)$ and $\\zz(w)$ are both vectors of $p_3+2$ consecutive indices from the same index set \n$\\ZZ(v_1,v_2)=\\ZZ(w_1,w_2)$. \nHence $v$ and $w$ must also overlap in the $z$-direction. Again, Proposition~\\ref{prp:overlap} yields\n\\[\\lambda_{\\zz(v)}(N_{\\zz(w)}) = \\delta_{v_3w_3},\\]\nwhich concludes the proof.\n\\end{proof}\n\n\\begin{crl}[{\\cite[Proposition~7.4]{BBSV:2014}}]\nLet $\\G$ be a DC T-mesh. Then the set $\\{B_v\\mid v\\in\\N_A\\}$ is linearly independent.\n\\end{crl}\n\\begin{proof}\nAssume \\[\\sum_{v\\in\\N_A}c_vB_v=0\\] for some coefficients $\\{c_v\\}_{v\\in\\N_A}\\subseteq\\reell$. Then, for any $w\\in\\N_A$, applying $\\lambda_w$ to the sum, using linearity and Theorem~\\ref{thm: DC has dual basis}, we get\n\\[c_w = \\lambda_w\\,\\Bigl(\\sum_{v\\in\\N_A}c_vB_v\\Bigr)=0.\\]\n\\raiseqed\n\\end{proof}\n\n\n\\section{Linear Complexity}\\label{sec: complexity}\nThis section is devoted to a complexity estimate in the style of the well-known estimate for Newest Vertex Bisection on triangular meshes given by Binev, Dahmen and DeVore \\cite{BDV:2004} and, in an alternative version, by Stevenson~\\cite{Stevenson:2007}.\nLinear Complexity of the refinement procedure is an indispensable criterion for optimal convergence rates in the Adaptive Finite Element Method (see e.g.\\ \\cite{BDV:2004,Stevenson:2007,CFPP:2014} and \\cite[Conclusions]{Buffa:Giannelli:2015}). 
The estimate and its proof follow our own work \\cite{Morgenstern:Peterseim:2015,BGMP:2016}, which we now generalize to three dimensions and $m$-graded refinement.\nThe estimate reads as follows.\n\n\\begin{thm}\\label{thm: complexity}\nAny sequence of admissible meshes $\\G_0,\\G_1,\\dots,\\G_J$ with \\[\\G_j=\\refine(\\G_{j-1},\\M_{j-1}),\\quad\\M_{j-1}\\subseteq\\G_{j-1}\\quad\\text{for}\\enspace j\\in\\{1,\\dots,J\\}\\] satisfies\n\\[\\left|\\G_J\\setminus\\G_0\\right|\\ \\le\\ C_{\\mathbf p,m}\\sum_{j=0}^{J-1}|\\M_j|\\ ,\\]\nwith $C_{\\mathbf p,m}=\\tfrac{m^{1\/3}}{1-m^{-1\/3}}\\,\\bigl(4d_1+1\\bigr)\\,\\bigl(4d_2+m^{1\/3}\\bigr)\\,\\bigl(4d_3+m^{2\/3}\\bigr)$ and $d_1,d_2,d_3$ from Lemma~\\ref{lma: K1 in refMS => K2 in S} below.\n\\end{thm}\n\n\\begin{lma}\\label{lma: K1 in refMS => K2 in S}\nGiven $\\M\\subseteq\\G\\in\\A$ and $K\\in\\refine(\\G,\\M)\\setminus\\G$, there exists $K'\\in \\M$ such that $\\ell(K)\\le\\ell(K')+1$ and \\[\\Dist(K,K')\\le m^{-\\ell(K)\/3}(d_1,d_2,d_3),\\]\nwith ``$\\le$'' understood componentwise and constants \n\\begin{align*}\nd_1&\\sei \\tfrac1{1-m^{-1\/3}} \\,\\bigl(p_1+\\tfrac{3+m^{1\/3}}2+\\tfrac{m^{1\/3}-1}{m^2}\\bigr),\\\\\nd_2&\\sei \\tfrac{m^{1\/3}}{1-m^{-1\/3}}\\,\\bigl(p_2+\\tfrac{3+m^{1\/3}}2+\\tfrac{m^{1\/3}-1}{m^2}\\bigr),\\\\\nd_3&\\sei \\tfrac{m^{2\/3}}{1-m^{-1\/3}}\\,\\bigl(p_3+\\tfrac{3+m^{1\/3}}2+\\tfrac{m^{1\/3}-1}{m^2}\\bigr).\n\\end{align*}\n\\end{lma}\nThe proof is given in Appendix~\\ref{apx: K1 in refMS => K2 in S}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm: complexity}]\\ \n\n\\pnumpx For $K\\in\\tcup\\A$ and $\\KM\\in\\M\\sei\\M_0\\cup\\dots\\cup\\M_{J-1}$, define $\\lambda(K,\\KM)$ by \\[\\lambda(K,\\KM)\\sei\\begin{cases}m^{(\\ell(K)-\\ell(\\KM))\/3}&\\text{if }\\ell(K)\\le\\ell(\\KM)+1\\text{ and }\\Dist(K,\\KM)\\le 2m^{-\\ell(K)\/3}(d_1,d_2,d_3),\\\\[.3em]0&\\text{otherwise.}\\end{cases}\\]\n\n\\pnumpx[Main idea of the proof.]\n\\begin{alignat*}{2}\n\\left|\\G_J\\setminus\\G_0\\right| &= \\mathbox[2.5em]{\\sum_{K\\in\\G_J\\setminus\\G_0}}1 &&\\Stackrel{\\numref{sum_lambda > 1}}\\le\\sum_{K\\in\\G_J\\setminus\\G_0}\\sum_{\\KM\\in\\M}\\lambda(K,\\KM) \\\\\n&\\Stackrel{\\numref{summe aller lambdas beschraenkt}}\\le\\sum_{\\KM\\in\\M} C_{\\mathbf p,m} &&=C_{\\mathbf p,m}\\,\\sum_{j=0}^{J-1} |\\M_j|.\n\\end{alignat*}\n\n\\pnumpx[Each $K\\in\\G_J\\setminus\\G_0$ satisfies \\[\\sum_{\\KM\\in\\M}\\lambda(K,\\KM)\\ \\ge\\ 1.\\]]%\n\\label{sum_lambda > 1}%\nConsider $K\\in\\G_J\\setminus\\G_0$. Set $j_1<J$ such that $K\\in\\G_{j_1+1}\\setminus\\G_{j_1}$. Lemma~\\ref{lma: K1 in refMS => K2 in S} states the existence of $K_1\\in\\M_{j_1}$ with $\\Dist(K,K_1)\\le m^{-\\ell(K)\/3}(d_1,d_2,d_3)$ and $\\ell(K)\\le\\ell(K_1)+1$. Hence $\\lambda(K,K_1)=m^{(\\ell(K)-\\ell(K_1))\/3}>0$.\nThe repeated use of Lemma~\\ref{lma: K1 in refMS => K2 in S} yields $j_1>j_2>j_3>\\dots$ and $K_2,K_3,\\dots$ with $K_{i-1}\\in\\G_{j_i+1}\\setminus\\G_{j_i}$ and $K_i\\in\\M_{j_i}$ such that \n\\begin{equation}\\label{eq: complexity -last}\n\\Dist(K_{i-1},K_i)\\le m^{-\\ell(K_{i-1})\/3}(d_1,d_2,d_3)\\enspace\\text{and}\\enspace\\ell(K_{i-1})\\le\\ell(K_i)+1.\n\\end{equation}\nWe apply Lemma~\\ref{lma: K1 in refMS => K2 in S} repeatedly as long as $\\lambda(K,K_i)>0$ and $\\ell(K_i)>0$, and we stop at the first index $L$ with $\\lambda(K,K_L)=0$ or $\\ell(K_L)=0$. 
\nIf $\\ell(K_L)=0$ and $\\lambda(K,K_L)>0$, then\n\\[\\sum_{\\KM\\in\\M}\\lambda(K,\\KM)\\ge\\lambda(K,K_L)=m^{(\\ell(K)-\\ell(K_L))\/3}\\ge m^{1\/3}.\\]\nIf $\\lambda(K,K_L)=0$ because $\\ell(K)>\\ell(K_L)+1$, then \\eqref{eq: complexity -last} yields $\\ell(K_{L-1})\\le\\ell(K_L)+1<\\ell(K)$ and hence\n\\[\\sum_{\\KM\\in\\M}\\lambda(K,\\KM)\\ge\\lambda(K,K_{L-1})=m^{(\\ell(K)-\\ell(K_{L-1}))\/3}>m^{1\/3}.\\]\nIf $\\lambda(K,K_L)=0$ because $\\Dist(K,K_L)>2m^{-\\ell(K)\/3}(d_1,d_2,d_3)$, then a triangle inequality shows\n\\begin{align*}\n2m^{-\\ell(K)\/3}(d_1,d_2,d_3) &< \\Dist(K,K_1)+\\sum_{i=1}^{L-1}\\Dist(K_i,K_{i+1}) \n\\\\&\\le\\ m^{-\\ell(K)\/3}(d_1,d_2,d_3)+\\sum_{i=1}^{L-1} m^{-\\ell(K_i)\/3}(d_1,d_2,d_3),\n\\end{align*}\nand hence $\\smash{\\displaystyle m^{-\\ell(K)\/3} \\le\\sum_{i=1}^{L-1} m^{-\\ell(K_i)\/3}}$. The proof is concluded with \n\\[ 1\\ \\le\\ \\sum_{i=1}^{L-1} m^{(\\ell(K)-\\ell(K_i))\/3}\\ =\\ \\sum_{i=1}^{L-1} \\lambda(K,K_i)\\ \\le\\ \\sum_{\\KM\\in\\M}\\lambda(K,\\KM).\\]\n\n\\pnumpx[For all $j\\in\\{0,\\dots,J-1\\}$ and $\\KM\\in\\M_j$ holds \\[\\sum_{K\\in\\G_J\\setminus\\G_0}\\lambda(K,\\KM)\\ \\le\\ \\tfrac{m^{1\/3}}{1-m^{-1\/3}}\\,\\bigl(4d_1+1\\bigr)\\,\\bigl(4d_2+m^{1\/3}\\bigr)\\,\\bigl(4d_3+m^{2\/3}\\bigr)\\ =\\ C_{\\mathbf p,m}\\ .\\]]%\n\\label{summe aller lambdas beschraenkt}%\nThis is shown as follows. By definition of $\\lambda$, we have\n\\begin{align*}\n\\mathbox[1cm]{\\sum_{K\\in\\G_J\\setminus\\G_0}}\\lambda(K,\\KM)\n&\\le \\mathbox[1cm]{\\sum_{K\\in\\bigcup\\A\\setminus\\G_0}}\\lambda(K,\\KM)\\\\\n&= \\sum_{j=1}^{\\ell(\\KM)+1}m^{(j-\\ell(\\KM))\/3}\\,\\#\\underbrace{\\bigl\\{K\\in\\tcup\\A\\mid\\ell(K)=j\\text{ and }\\Dist(K,\\KM)\\le 2m^{-j\/3}(d_1,d_2,d_3)\\bigl\\}}_B.\n\\end{align*}\nSince we know by definition of the level that $\\ell(K)=j$ implies $|K|=m^{-j}$, we know that $m^j\\left|\\tcup B\\right|$ is an upper bound of $\\#B$. The cuboidal set $\\tcup B$ is the union of all admissible elements of level $j$ having their midpoints inside a cuboid of size \n\\[4m^{-j\/3}d_1\\,\\times\\,4m^{-j\/3}d_2\\,\\times\\,4m^{-j\/3}d_3.\\]\nAn admissible element of level $j$ is not bigger than $m^{-j\/3}\\,\\times\\,m^{(1-j)\/3}\\,\\times\\,m^{(2-j)\/3}$. \nTogether, we have \\[\\bigl|\\tcup B\\bigr|\\le m^{-j}\\,\\bigl(4d_1+1\\bigr)\\,\\bigl(4d_2+m^{1\/3}\\bigr)\\,\\bigl(4d_3+m^{2\/3}\\bigr),\\] and hence $\\#B\\le \\bigl(4d_1+1\\bigr)\\,\\bigl(4d_2+m^{1\/3}\\bigr)\\,\\bigl(4d_3+m^{2\/3}\\bigr)$. An index substitution $k\\sei1-j+\\ell(\\KM)$ proves the claim with\n\\[\\sum_{j=1}^{\\ell(\\KM)+1}m^{(j-\\ell(\\KM))\/3}=\\sum_{k=0}^{\\ell(\\KM)}m^{(1-k)\/3}