diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznmth" "b/data_all_eng_slimpj/shuffled/split2/finalzznmth" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznmth" @@ -0,0 +1,5 @@ +{"text":"\\section{Supplemental material}\n\n\\begin{figure}[h!]\n \\vspace*{.3cm}\n \\centerline{\\includegraphics[width = 1 \\textwidth]{qsearch_circuit.png}}\n \\vspace*{.5cm}\n \\caption{Decomposition of a single cycle of the quantum algorithm in Fig.~\\ref{fig:circuit} in terms of single-qubit rotations (U1,3) and \\textsc{cnot} gates using the \\texttt{qsearch} compiler of Ref.~\\cite{davis2019heuristics}. Here \\texttt{q\\textsubscript{0}} corresponds to the system qubit, \\texttt{q\\textsubscript{1,2}} are the auxiliary qubits, and \\texttt{c03} represents three classical bits for the readout. The result of $P_0(t)$ from the final trace-out and measurement can be written as $P_0(t) = \\sum_{i,j=0}^1 \\langle 0 ij | \\rho(t) | 0ij\\rangle$ where $\\langle 0 ij | \\rho(t) | 0ij\\rangle$ is the measurement result for $\\texttt{q\\textsubscript{0}}=0$, $\\texttt{q\\textsubscript{1}}=i$, $\\texttt{q\\textsubscript{2}}=j$. }\n \\label{fig:circuit_qsearch}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centerline{\\includegraphics[width = 0.5 \\textwidth]{calibration.pdf}}\n \\caption{The response matrix of the qubits \\texttt{q\\textsubscript{0-2}} of the IBM~Q Vigo device~\\cite{IBMQVigo} which is used for the readout error mitigation in Fig.~\\ref{fig:device_result}. The $2^3$ states are prepared by applying $X$ gates and then the corresponding measurements are performed. 
The error mitigation uses the constrained matrix inversion approach implemented in IBM's \\texttt{qiskit-ignis} package~\\cite{Qiskit}.}\n \\label{fig:calibration}\n\\end{figure}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nEstimating the motion of soft tissues undergoing deformation is a major topic in medical imaging research. Magnetic resonance~(MR) tagging~\\cite{axel1989heart, axel1989mr} has a wide range of applications in diagnosing and characterizing coronary artery disease~\\cite{mcveigh1998imaging, edvardsen2006regional}, cardiac imaging~\\cite{kolipaka2005relationship, ibrahim2011myocardial}, speech and swallowing research~\\cite{parthasarathy2007measuring, xing2019atlas, gomez2020analysis}, and brain motion in traumatic injuries~\\cite{knutsen2014improved}. MR tagging temporarily magnetizes tissue with a spatially modulated periodic pattern, creating transient tags in the image sequence that move with the tissue, thus capturing motion information. However, MR tagging has not been widely used in clinical settings~\\cite{budinger1998cardiac} due to the long post-processing time and the problem of tag fading caused by T1 relaxation. \n\nThe harmonic phase (HARP) method~\\cite{osman1999cardiac,osman2000imaging,osman2000visualizing} addresses these issues by computing phase images from sinusoidally tagged MR images using bandpass filters in the Fourier domain.\nBased on the fact that the harmonic phases of material points do not change with motion, HARP tracking uses harmonic phase images as proxies in the image registration framework to avoid the violation of brightness consistency caused by tag fading. However, interpolating phase values during registration can be challenging, as it requires local phase unwrapping~\\cite{PVIRA}. Also, the unwrapping operation is not differentiable, making it difficult to leverage in an end-to-end learning framework. 
Global phase unwrapping, used as a preprocessing step, is one possible solution to this problem, but it has been found to be extremely difficult and error-prone~\\cite{jenkinson2003fast}. In this paper, we propose a sinusoidal transformation for harmonic phases that eliminates the need for phase interpolation or phase unwrapping, while still being resistant to imaging artifacts and tag fading. Given this transformed input, we designed an unsupervised multi-channel image registration framework for a set of 3D tagged images with different tag orientations.\n\nIn this study, we focus on estimating tongue motion. According to the American Cancer Society, approximately 48,000 people in the US are diagnosed with oral or oropharyngeal cancer annually, and 33\\% of these cases affect the tongue~\\cite{ACS2016}. Understanding the motion differences between healthy subjects and those who have undergone a glossectomy can help inform surgical decisions and assist in speech and swallowing remediation. Compared to extensively-studied cardiac motion, tongue motion has four unique properties that make motion estimation challenging: 1)~the tongue moves quickly relative to the temporal resolution of the scan during speech; 2)~the motion is aperiodic and highly variable among subjects; 3)~the tongue is highly deformable during speech due to its interdigitated muscle architecture~\\cite{abd1955part}; and 4)~the air gap present in the oral cavity can significantly affect the quality of imaging, causing severe artifacts. To address these issues, we developed an unsupervised deep learning model that utilizes phase information of tagged MR images~(MRIs) to estimate the motion field. Our model has shown superior performance on healthy subject data and has also demonstrated good generalization to patients who have had a glossectomy.\n\n \nIncompressibility is another crucial factor to consider when analyzing biological movements. 
For example, research has shown that the volume change of the myocardium during a cardiac cycle is less than 4\\%~\\cite{yin1996compressibility}, and the volume change of the tongue during speech and swallowing is even smaller~\\cite{gilbert2007anatomical}. Therefore, it is necessary to assume that muscle motion is incompressible in order to accurately represent the physical tissue properties~\\cite{kier1985tongues}. In this paper, we propose a determinant-based learning objective that allows the network to learn to estimate incompressible flow fields. To the best of our knowledge, this is the first work that learns to estimate incompressible motion fields within a deep learning-based image registration framework. \n\n\\subtitle{Contributions} \n1)~We propose a sinusoidal transformation for harmonic phases, which is resistant to noise, artifacts, and tag fading, and can be easily incorporated into the end-to-end training of modern deep learning-based registration frameworks.\n2)~We propose a determinant-based objective for learning 3D incompressible flow that better represents the motion of human biological tissue.\n3)~With the aforementioned two features, we propose a novel unsupervised deep learning-based method to directly estimate a dense 3D incompressible motion field from tagged MRI. Our approach is robust to tag fading and large motion, and can generalize well to pathological data.\n\n\n\\section{Background \\& Related Work}\n\n\n\\subtitle{Tagging MRI-based 3D motion estimation} Tracking 3D motion is generally necessary when estimating the motion of biological structures. In the past, traditional 2D MR tagging motion estimation methods have been extended to 3D~\\cite{ryf2002myocardial, abd2007three, spottiswoode20083d}. However, these methods require the acquisition of a large number of closely spaced image slices, making them impractical for routine clinical use due to the large amount of time required. 
Other approaches estimate 3D motion directly from sparse imaging geometries, such as using finite element or finite difference methods~\\cite{o1995three}, tag line tracking based on 2D images~\\cite{denney1995reconstruction}, or spline interpolation~\\cite{denney1995reconstruction, huang1999spatio}. These methods typically require a 3D tissue segmentation, which can be time-consuming and may require human intervention or automated segmentation algorithms. \nPVIRA~\\cite{PVIRA} proposed to interpolate tag images to form a finer grid for later tracking and also uses HARP magnitude images to eliminate the need for a segmentation model. However, PVIRA uses phase images for registration, requiring local phase unwrapping during interpolation, which can be error-prone and is non-differentiable. In contrast, we propose a novel sinusoidal transformation for 3D harmonic phase images, thus avoiding the need for phase unwrapping and allowing for differentiable interpolation techniques such as trilinear interpolation. By incorporating this transformation into a learning-based registration framework, we can achieve end-to-end training and improved accuracy. More recently, \\cite{ye2021deeptag} proposed a deep learning-based registration method for tracking cardiac tagged images. However, it only tackles 2D motion and does not account for tag-fading or the incompressibility of the tissue during heartbeats. In contrast, our approach is based on phase information and can directly estimate a dense 3D incompressible motion field.\n\n\n\\subtitle{Deep learning-based image registration}\nIterative optimization approaches~\\cite{ANTs, diffeoDemons, ilogdemons2011} have been successful in achieving good accuracy in intensity-based deformable image registration. However, these methods can be slow and require manual tuning for each new image pair. 
Alternatively, deep learning-based approaches can directly take a pair of moving and fixed images as input and output the corresponding estimated deformations, resulting in faster inference speeds and comparable accuracy to iterative methods~\\cite{balakrishnan2018unsupervised,voxelmorph2019,mok2020fast,qiu2021learning,im2grid2022, Transmorph2022, LKUnet2022}. Such registration frameworks learn the function $g_{\\theta}(F, M) = \\bm{\\phi}$ which gives the transformation $\\bm{\\phi}$ that is used to align the fixed $F$ and moving image $M$. The function $g$ is parametrized by learnable parameters $\\theta$. \nThe parameters are learned by optimizing the generalized objective: \n\\begin{equation}\n\t\\hat{\\theta} = \\argmin_{\\theta} \\mathcal{L}_\\mathrm{sim}(F, M \\circ g_{\\theta}(F, M)) + \\lambda \\mathcal{L}_\\mathrm{smooth}(g_{\\theta}(F, M))\\,.\n\\end{equation}\nThe first term, $\\mathcal{L}_\\mathrm{sim}$, encourages image similarity between the fixed and warped moving image. The second term, $\\mathcal{L}_\\mathrm{smooth}$, imposes a smoothness constraint on the transformation. The parameter $\\lambda$ determines the trade-off between these two terms. However, these approaches are limited by their reliance on only two scalar images---one fixed and one moving---as input. In contrast, our method allows for the input of three pairs of tagged images with different tag orientations, referred to as multi-channel registration. Additionally, these approaches are unable to preserve the incompressibility of the motion field, whereas our approach estimates an incompressible flow field whose quality is comparable to the iterative methods~\\cite{ilogdemons2011, PVIRA, NewStartPoint2023}, while being much faster. 
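As a concrete toy illustration of this generalized objective (not the implementation of any of the cited frameworks; all names and the bilinear warper are our own assumptions), the loss for a single 2D image pair with a dense displacement field can be written in NumPy:

```python
import numpy as np

def warp_bilinear(image, disp):
    """Warp a 2D image with a dense displacement field via bilinear interpolation."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling locations: identity grid plus (dy, dx) displacement channels.
    sy = np.clip(ys + disp[0], 0, H - 1)
    sx = np.clip(xs + disp[1], 0, W - 1)
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def registration_loss(fixed, moving, disp, lam=0.01):
    """L_sim + lambda * L_smooth for one pair, as in the generic objective."""
    warped = warp_bilinear(moving, disp)
    l_sim = np.mean((fixed - warped) ** 2)
    # Smoothness: squared first-order finite differences of the displacement.
    l_smooth = sum(np.sum(np.diff(disp, axis=a) ** 2) for a in (1, 2))
    return l_sim + lam * l_smooth

# With zero displacement and identical fixed/moving images, both terms vanish.
rng = np.random.default_rng(0)
f = rng.random((8, 8))
loss = registration_loss(f, f, np.zeros((2, 8, 8)))  # evaluates to 0.0 here
```

In a learning-based framework, `disp` would be the output of the network $g_\theta$ and the same two terms would be minimized over $\theta$ by backpropagation.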
\n\n\n\\subtitle{Incompressibility} \nIncompressible flow fields, also known as divergence-free vector fields or\nvolume-preserving transformations, have been a longstanding research topic in fluid dynamics~\\cite{majda2002vorticity, ma2005geometric, aris2012vectors} and image registration~\\cite{song1991computation, gorce1997estimation}. In this paper, we focus on their application to biological image registration. Previously, there have been two main types of approaches: iterative approaches~\\cite{mansi2009physically,ilogdemons2011,PVIRA, NewStartPoint2023} and determinant-based approaches~\\cite{rappoport1995volume,rohlfing2003volume, haber2004numerical, bistoquet2008myocardial}.\nIterative approaches incorporate incompressibility into diffeomorphic demons~\\cite{diffeoDemons} when updating the stationary velocity fields iteratively, making them computationally expensive. Determinant-based approaches focus on constraining the deformation field with a penalty on the Jacobian determinant of the deformation. This assumes that tissue can deform locally, but the volume (local and total) remains approximately constant.\n In the past, the Jacobian determinant constraint has been applied to achieve volume preservation during interactive deformation of volumetric models~\\cite{rappoport1995volume}.\nIt has also been used as an incompressibility regularization term to constrain the coordinate transformation during B-spline-based nonrigid image registration~\\cite{rohlfing2003volume}. However, this constraint is non-linear and requires ad-hoc numerical schemes that are computationally demanding~\\cite{haber2004numerical, ilogdemons2011}. Recent work~\\cite{mok2020fast} enforces orientation consistency of the deformation field using a determinant constraint for diffeomorphism, while leaving incompressibility unaddressed. 
In contrast, we introduce a novel Jacobian determinant-based learning objective into the unsupervised deep learning-based registration framework and show its effectiveness in preserving volume and achieving a diffeomorphism. Furthermore, our learned model takes less than a second to process a single pair of frames, compared to the tens of minutes required by previous works.\n\n\n\\section{Method} \n\n\\begin{figure}[!tb]\n\n\n\t\\floatconts\n\t{fig:pipeline}\n\t{\\caption{Top: HARP processing pipeline. Bottom: HARP phases of fixed and moving images are taken as input. They are first transformed by the sinusoidal transformation and sent into a U-Net-like multi-channel registration network. }\\vspace{-1em}}\n\t{\\includegraphics[width= 0.9\\linewidth]{figures\/pipeline.pdf}\\vspace{-1em}}\n\\end{figure}\n\n\\subtitle{HARP processing}\nThe harmonic phase~(HARP) algorithm is a well-established method for processing and analyzing tagged MRIs~\\cite{osman2000imaging}. \nSpecifically, HARP filtering involves extracting the first spectral peak in the Fourier domain of a tagged image slice to obtain a complex-valued image, where the phase part~(HARP phase) contains motion information and the magnitude part~(HARP magnitude) contains anatomical information. Since tag images are typically acquired with lower through-plane resolution, they are interpolated onto a finer grid before HARP processing~\\cite{PVIRA}. \\figureref{fig:pipeline} shows the 3D tagged image processing using interpolation and HARP. Our method applies HARP filtering to the three interpolated tag volumes \\ensuremath{I_{\\mathrm{Sh}}}\\xspace, \\ensuremath{I_{\\mathrm{Sv}}}\\xspace, and \\ensuremath{I_{\\mathrm{Av}}}\\xspace. 
For example, for the vertically-tagged axial volume $\\ensuremath{I_{\\mathrm{Av}}}\\xspace(\\bm{x})$, the complex image $J_\\mathrm{Av}(\\bm{x})$ after HARP filtering is computed as follows:\n\\begin{equation}\n\tJ_\\mathrm{Av}(\\bm{x})=\\ensuremath{D_{\\mathrm{Av}}}\\xspace(\\bm{x}) e^{j \\ensuremath{\\Psi_{\\mathrm{Av}}}\\xspace(\\bm{x})}\\,,\n \\label{eq:compleximage}\n\\end{equation}\nwhere \\ensuremath{D_{\\mathrm{Av}}}\\xspace is the HARP magnitude volume and \\ensuremath{\\Psi_{\\mathrm{Av}}}\\xspace is the HARP phase volume. The same notation applies for the horizontally- and vertically-tagged sagittal volumes, yielding \\ensuremath{D_{\\mathrm{Sh}}}\\xspace, \\ensuremath{\\Psi_{\\mathrm{Sh}}}\\xspace, \\ensuremath{D_{\\mathrm{Sv}}}\\xspace, and \\ensuremath{\\Psi_{\\mathrm{Sv}}}\\xspace. We average over the three magnitude images to obtain \\ensuremath{I_{\\mathrm{Mag}}}\\xspace. \n\n\n\\subtitle{Sinusoidal transform}\nPhase-based registration is recognized as more robust than intensity-based registration for dealing with tag fading, geometric distortions between frames, and noise~\\cite{fleet1993stability, hemmendorff2002phase}. Interpolating phase values requires local phase unwrapping when the phase difference between two points exceeds $\\pi$~\\cite{PVIRA}. However, phase unwrapping is non-differentiable, which is a problem for deep learning-based registration methods, whose training typically relies on backpropagation.\n\nTo address this, we propose a simple sinusoidal transformation for phase images. Specifically, given a phase image $\\Psi$, we apply element-wise $\\sin$ and $\\cos$ operations,\n\\begin{equation}\nI^{\\sin}(\\bm{x}) = \\sin(\\Psi (\\bm{x})) \\quad \\text{ and } \\quad I^{\\cos}(\\bm{x}) = \\cos(\\Psi (\\bm{x})),\n\\end{equation}\nto obtain two corresponding images $(I^{\\sin}, I^{\\cos})$.\nThus a phase value is uniquely associated with a $(\\sin, \\cos)$ pair and vice versa, i.e., a one-to-one mapping. 
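To make the wrapping issue concrete, the following minimal NumPy sketch (illustrative only; function names are our own) shows that linearly averaging two wrapped phases across the $\pm\pi$ boundary fails, while averaging their $(\sin,\cos)$ pairs and mapping back with $\mathrm{atan2}$ lands on the correct point of the circle:

```python
import numpy as np

def sinusoidal_transform(psi):
    """Map a wrapped phase image to the (sin, cos) pair used for registration."""
    return np.sin(psi), np.cos(psi)

def phase_from_pair(i_sin, i_cos):
    """Invert the one-to-one mapping, recovering the wrapped phase in (-pi, pi]."""
    return np.arctan2(i_sin, i_cos)

# Two phases just on either side of the +/-pi wrap; their true midpoint is pi.
psi = np.array([3.1, -3.1])
naive_mid = psi.mean()                     # 0.0 -- wrong side of the circle
s, c = sinusoidal_transform(psi)
mid = phase_from_pair(s.mean(), c.mean())  # approximately pi -- correct midpoint
```

Because any linear interpolation (e.g., trilinear) of $(\sin,\cos)$ behaves this way, no unwrapping step is needed, and both the transform and the interpolation are differentiable.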
This transformation allows for smooth interpolation while still retaining the robustness of phase-based registration to tag fading, distortions between frames, and noise. Additionally, the flat (low-contrast) regions and steep (high-contrast) regions in $\\sin$ and $\\cos$ are compensated for by each other, as demonstrated in \\figureref{fig:pipeline}.\nAn alternative way to view this sinusoidal transformation is by writing the complex image in~\\equationref{eq:compleximage} as $J(\\bm{x})= D(\\bm{x}) (\\cos(\\Psi(\\bm{x})) + j \\sin(\\Psi(\\bm{x})))$, where the subscript is omitted. Thus, $(I^{\\cos}, I^{\\sin})$ can be seen as the real and imaginary parts of the complex image $J(\\bm{x})\/D(\\bm{x})$.\n\n\n\\subtitle{Multi-channel registration} \nHarmonic phase is a material property that can be used to solve the aperture problem in optical flow by tracking the three harmonic phase values that come from three linearly independent tag directions. Instead of tracking phase values directly, however, because of the one-to-one nature of the sinusoidal transformation, we can track the patterns in the sinusoidally transformed image pairs. Our task differs from existing registration networks~\\cite{voxelmorph2019, Transmorph2022, im2grid2022}, which only take a single pair of fixed and moving images as input; here, we must match \\textit{multiple} fixed and moving sinusoidal images at the same time. 
To do this, we used the following mean squared error~(MSE) as our similarity loss during training:\n\\begin{equation}\n\t\\mathcal{L}_\\mathrm{sim} = \\sum_{k\\in \\{\\mathrm{Av}, \\mathrm{Sh}, \\mathrm{Sv} \\}} \\hspace*{1ex} \\sum_{l\\in \\{\\sin,\\cos\\}} \\mathrm{MSE}(F_k^l, M_k^l\\circ \\bm{\\phi})\\,,\n\\end{equation}\nwhere \n\\begin{equation}\n\t\\bm{\\phi} = g_\\theta \\left( \\left\\{ F_k^l, M_k^l \\mid k\\in \\{\\mathrm{Av},\\mathrm{Sh},\\mathrm{Sv}\\}, l\\in \\{\\sin, \\cos\\} \\right\\} \\right)\\,.\n\\end{equation}\n\n\\subtitle{Incompressible constraint} Volume preservation, also known as incompressibility, is an important feature for image registration in moving biological tissues. Accordingly, we introduce the following Jacobian determinant-based learning objective on the transformation field to encourage incompressibility:\n\\begin{equation}\n\t\\mathcal{L}_\\mathrm{incompress} = \\sum_{\\bm{x}} \\ensuremath{I_{\\mathrm{Mag}}}\\xspace (\\bm{x}) \\left| \\log \\max \\left( \\left |J_\\phi(\\bm{x}) \\right| ,\\epsilon \\right) \\right| - \\sum_{\\bm{x}} \\min \\left( \\left| J_\\phi(\\bm{x}) \\right| , 0 \\right)\\,,\n\t\\label{eq:incompress}\n\\end{equation}\nwhere $\\epsilon$ is a small positive number and \\ensuremath{I_{\\mathrm{Mag}}}\\xspace is the HARP magnitude image with a range of $[0,1]$. \\ensuremath{I_{\\mathrm{Mag}}}\\xspace serves as a soft mask for tissues such as the tongue. The first term penalizes the deviation of the Jacobian determinant $\\left| J_\\phi(\\bm{x}) \\right|$ from unity and is spatially weighted by \\ensuremath{I_{\\mathrm{Mag}}}\\xspace. \nWe introduce the second term in \\equationref{eq:incompress} both to prevent the degenerate solution where all determinants are negative and to encourage a diffeomorphism by directly penalizing negative determinants. 
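A NumPy sketch of $\mathcal{L}_\mathrm{incompress}$ follows; this is an illustrative reimplementation under our own assumptions (central finite differences for the Jacobian of $\bm{\phi}(\bm{x}) = \bm{x} + \bm{u}(\bm{x})$, unit voxel spacing), not necessarily the discretization used during training:

```python
import numpy as np

def jacobian_det(u):
    """Determinant of J_phi = I + grad(u) for a dense 3D displacement field.

    u has shape (3, D, H, W); spatial gradients use central differences.
    """
    grads = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)
    # J[i, j] = d phi_i / d x_j = delta_ij + d u_i / d x_j, per voxel.
    J = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))

def incompress_loss(u, mag, eps=1e-6):
    """Magnitude-weighted |log det| term plus a penalty on negative determinants."""
    det = jacobian_det(u)
    term1 = np.sum(mag * np.abs(np.log(np.maximum(det, eps))))
    term2 = -np.sum(np.minimum(det, 0.0))
    return term1 + term2

# Zero displacement: det == 1 everywhere, so the loss vanishes.
u0 = np.zeros((3, 4, 4, 4))
mag = np.ones((4, 4, 4))
loss0 = incompress_loss(u0, mag)

# Uniform 10% dilation u(x) = 0.1 * x gives det = 1.1**3 at every voxel.
u_scale = 0.1 * np.indices((4, 4, 4)).astype(float)
```

The log makes a local expansion by a factor $c$ and a compression by $1/c$ cost the same, while the second term only activates where the determinant is negative, i.e., where orientation is flipped.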
The proposed loss encourages $\\bm{\\phi}$ to be incompressible in tissue regions and diffeomorphic everywhere, including in air gaps.\nWhile the L1 and L2 penalties $|\\left| J_\\phi(\\bm{x}) \\right| - 1|_{\\{1, 2\\}}$ are also viable, they were found to be less effective than \\equationref{eq:incompress}.\n\n\n\\subtitle{Overall training objective} We encourage the spatial smoothness of the displacement $\\bm{u}$ with the smoothness loss $\\mathcal{L}_\\mathrm{smooth} = \\sum_{\\bm{x}} \\| \\nabla \\bm{u}(\\bm{x}) \\|^2$. Thus, the overall loss for training is\n$\\mathcal{L}_\\mathrm{total} = \\mathcal{L}_\\mathrm{sim} + \\lambda \\mathcal{L}_\\mathrm{smooth} + \\beta \\mathcal{L}_\\mathrm{incompress},$\nwhere $\\lambda$ and $\\beta$ are hyper-parameters. \n\n\\section{Experiments}\n\n\\subtitle{Materials} The present study includes a dataset of 25 unique (subject-phrase) pairs, consisting of 8 healthy controls~(HCs) and 2 post-partial glossectomy patients with tongue flaps. To capture tongue movement during speech, the participants were asked to say one or more phrases---``a thing'', ``a souk'', or ``a shell''---while tagged MRIs were acquired. The recorded phrases had a duration of 1 second, with 26 time frames captured during this period. The in-plane resolution of the MR images is 1.875 $\\times$ 1.875 mm, and the slice thickness is 6 mm. The tagged MR images were collected using the CSPAMM pulse sequence in both sagittal (both vertical and horizontal tags) and axial (vertical tags only) orientations.\nThe HC data was split into training, validation, and test datasets in a ratio of 0.6:0.2:0.2; patient data was reserved for testing. \nThe images are manually cropped to include the tongue region and then zero-padded to $64 \\times 64 \\times 64$. 
\n\n\\subtitle{Network architecture} As our goal is to explore the value of the proposed sinusoidal transformation and incompressible objective in a generic setting, we used the well-established 3D U-Net~\\cite{Unet} architecture with a convolutional kernel of size ($5\\times5\\times5$) to increase the effective receptive field~\\cite{LKUnet2022}. \nTo accommodate our multi-channel setting, we set the input channel of the first convolutional kernel to 12 to accept 6 channels from each of the fixed and moving sinusoidal images---$\\sin$ and $\\cos$ for each of the three acquired images. A scaling and squaring layer~\\cite{dalca2018unsupervised} is used as the final layer to encourage our models to be diffeomorphic. To test the scalability of our model, we trained a larger model, termed ``Ours-L'', by doubling the number of intermediate feature channels. \n\n\n\\subtitle{Training details} During training, fixed and moving image pairs are randomly selected from a speech sequence. We augment each pair of images with random center-crops during training. No overfitting is observed. The best loss weights (hyper-parameters) are determined by grid search for each model to ensure a fair comparison. We set $\\lambda=0.01$ and $\\beta=0.4$ for the models denoted as ``Ours'' and ``Ours-L''. We set $\\lambda = 0.08$ for the ``Ours w\/o inc.'' model where $\\mathcal{L}_\\mathrm{incompress}$ is not applied. 
In all experiments, we used the Adam optimizer with a batch size of one and a fixed learning rate of $1 \\times 10^{-4}$ throughout training.\n\n\n\\section{Results}\n\n\\begin{table}[!tb]\n\t\\floatconts\n\t{tab:main}%\n\t{\\caption{Quantitative measurement of performance on registration accuracy (w\/ RMSE), incompressibility (w\/ Det\\_AUC), diffeomorphism (w\/ NegDet), and speed (w\/~Time).\n \n The $p$-values of Wilcoxon signed-rank tests between ``Ours'' and others are reported.} \\vspace{-1.5em}}%\n\t{\\resizebox{0.95\\textwidth}{!}{%\n\t\t\\begin{tabular}{lcccccccc}\n\t\t\t\\toprule\n\t\t\t&\n\t\t\t\\multicolumn{3}{c}{\\textbf{Registration Acc: RMSE $\\downarrow$}} &\n\t\t\t\\multicolumn{3}{c}{\\textbf{Incompressibility: Det\\_AUC $\\uparrow$}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{NegDet} (\\%) $\\downarrow$} &\n\t\t\t\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}\\textbf{Time} $\\downarrow$ \\\\ (s\/pair) \\end{tabular}} \\\\\n \\cmidrule(rl){2-4} \\cmidrule(rl){5-7} \\cmidrule(rl){8-8}\n\t\t\t& mean $\\pm$ std & median & $p$ & mean $\\pm$ std & median & $p$ & mean $\\pm$ std & \\\\ \\cmidrule(rl){2-4} \\cmidrule(rl){5-7} \\cmidrule(rl){8-8} \\cmidrule(rl){9-9} \n\t\t\tPVIRA & 0.153 $\\pm$ 0.053& 0.160 & \\textless{}0.001 & 0.936 $\\pm$ 0.031& 0.935 & \\textless{}0.001 & 0.000 $\\pm$ 0.005 & 49 \\\\\n\t\t\tOurs w\/o inc. 
& 0.122 $\\pm$ 0.041& 0.126 & \\textless{}0.001 & 0.862 $\\pm$ 0.077 & 0.870 & \\textless{}0.001 & 0.019 $\\pm$ 0.039 & \\textless{}0.1 \\\\\n\t\t\tOurs & 0.132 $\\pm$ 0.045 & 0.137 & -- & \\textbf{0.950 $\\pm$ 0.038} & \\textbf{0.956} & -- & \\textbf{0.000 $\\pm$ 0.000} & \\textless{}0.1 \\\\\n\t\t\tOurs-L & \\textbf{0.122 $\\pm$ 0.038} & \\textbf{0.126} & \\textless{}0.001 & 0.950 $\\pm$ 0.039 & 0.956 & \\textless{}0.001& 0.000 $\\pm$ 0.001 & \\textless{}0.1 \\\\ \\bottomrule\n\t\t\\end{tabular}%\n\t}}\n\\end{table}\n\n\\begin{figure}[!tb]\n\t\\floatconts\n\t{fig:quali}\n\t{\\caption{\\textbf{(A)}~An example of a fixed and moving frame pair is shown, with only the sin pattern displayed (the cos pattern is omitted). The contour of the tongue region is annotated in the fixed image with a red dotted line. \\textbf{(B-D)}~Results of three different methods are shown, including the 3D motion field, the warped moving images, and a sagittal and an axial slice of the 3D Jacobian determinant map.}\\vspace{-1em}}\n\t{\\includegraphics[width=0.90\\linewidth]{figures\/quali.pdf}\\vspace{-1em}}\n\\end{figure}\n\n\n\n\n\\subtitle{Registration accuracy} It is often difficult to obtain the true dense motion field for evaluating the accuracy of registration algorithms. However, we can use the harmonic phase as a ground truth, as it is a property of tissue that moves with the tissue. The sinusoidal transformation is a one-to-one mapping between phase and sinusoidal pattern, so we use the root mean squared error~(RMSE) between the sinusoidal-transformed fixed and moved images as a measure of registration accuracy. A more accurate motion field should better match the sinusoidal patterns of the fixed and moved images, resulting in a smaller RMSE.\n\n\n\n\\subtitle{Incompressibility} A perfectly incompressible flow field has a $\\left| J_\\phi(\\bm{x}) \\right|$ of $1$ everywhere. 
To assess incompressibility, we compute the histogram of determinant errors (\\emph{i.e.}, $\\left| \\left| J_\\phi(\\bm{x}) \\right| - 1\\right|$) and weight it by \\ensuremath{I_{\\mathrm{Mag}}}\\xspace. We then calculate the area under the cumulative distribution function~(CDF) curve~(AUC) as a scalar metric. A higher AUC indicates a more incompressible motion field.\n\nAs shown in \\tableref{tab:main}, the proposed method (labeled ``Ours'') significantly outperforms PVIRA in terms of both registration accuracy and incompressibility. Without the proposed incompressibility constraint ($\\mathcal{L}_\\mathrm{incompress}$), the ``Ours w\/o inc.'' model fails to preserve incompressibility and is non-diffeomorphic in some instances. However, it still performs well in terms of registration accuracy, indicating a general trade-off between these two factors. Interestingly, our larger model (``Ours-L'') achieves the best registration accuracy of all the models while also maintaining nearly the same ability to estimate incompressible flow as the best model (``Ours''). \\figureref{fig:quali} shows an example: when the subject initiates the phrase ``a thing'' from a neutral position, the tongue moves back and downward. The example shows that our model accurately registers tags while maintaining incompressibility of the flow fields. \n\n\n\n\\begin{figure}[!tb]\n\\begin{tabular}{cc cc}\n\\includegraphics[width = 0.22\\linewidth]{figures\/RMSE_Gap.pdf} &\n\\includegraphics[width = 0.22\\linewidth]{figures\/Det_Gap.pdf} &\n\\includegraphics[width = 0.22\\linewidth]{figures\/for_patient\/R_MSE_patient.pdf} & \n\\includegraphics[width = 0.22\\linewidth]{figures\/for_patient\/R_Det_AUC_patient.pdf} \\\\\n\\textbf{(a)} & \\textbf{(b)} & \\textbf{(c)} & \\textbf{(d)}\\\\\n\\end{tabular}\n\\caption{Shown in \\textbf{(a)} and \\textbf{(b)}~is the performance of the various methods against large motion. 
Panels~\\textbf{(c)} and \\textbf{(d)}~show performance on the pathological cases.}\\vspace{-1em}\n\\label{fig:timegap}\n\\label{fig:patient}\n\\end{figure}\n\n\n\n\n\\subtitle{Degradation with large time gaps} We also evaluated the methods on pairs of frames with different time gaps, which are assumed to be proportional to the magnitude of motion (within a short time window during speech, e.g., an 8-frame window). As shown in \\figureref{fig:timegap}, our model degrades less severely as the motion becomes larger. Additionally, since the effect of tag fading accumulates with larger time gaps, these results also demonstrate the robustness of our models to tag fading.\n\n\n\n\\subtitle{Generalizability to pathological subjects} \nOur model also demonstrates better performance on patients who have undergone a glossectomy, as shown in \\figureref{fig:patient}. We note that our models were trained only on data from HCs and were then directly evaluated on patient data without any fine-tuning. This demonstrates the generalizability of our models, despite their being trained on a different population.\n\n\n\n\\section{Conclusion}\n\nWe proposed a sinusoidal transformation and a determinant-based objective for unsupervised estimation of dense 3D incompressible motion fields from tagged MRI. The method is robust to tag fading and large motion, and can generalize well to pathological data. We believe that the success of our preliminary tongue motion study indicates the potential of our proposed techniques for cardiac and brain motion tracking with tagged MRI.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nCell-free massive multiple-input multiple-output (MIMO) is a promising beyond-5G technology that aims at providing a uniform service over its coverage area \\cite{Raj20,Int19}. 
In a cell-free massive MIMO system, a large number of base stations (BSs) are distributed across the network and are connected to a central processing unit (CPU) via backhaul links to exchange the channel state information (CSI) and the user-specific data \\cite{Ngo17}. All the BSs jointly serve all the user equipments (UEs) simultaneously, which eliminates the inter-cell interference and enhances the spectral efficiency. Cell-free massive MIMO systems can outperform traditional cellular massive MIMO and small-cell networks in many practical scenarios \\cite{Int19,Ngo17,Bjo20}. Moreover, their performance can be considerably improved by allowing coordination among the BSs \\cite{Bjo20}.\n\nThe demand for multicasting transmissions is increasing on account of applications such as vehicular communications and augmented\/mixed reality \\cite{Raj20,Mur19}. In multicasting, the same data is transmitted to a group of UEs with a group-specific precoder. The multicast precoding design framework was presented in \\cite{Sid06} and extended to the multi-group setting in \\cite{Kar08}. Recently, multi-group multicasting in massive MIMO has been studied in \\cite{Sad18,Mah21}. A few works have considered multi-group multicasting via cell-free massive MIMO, adopting conjugate beamforming for the data transmission \\cite{Doa17,Zha19b,Far21,Zho21}. In \\cite{Doa17}, equal transmit powers are allocated to all the UEs to avoid any backhaul signaling for CSI exchange. On the other hand, \\cite{Zha19b,Far21,Zho21} tackle the max-min fairness problem to compute the transmit powers at the UEs and guarantee the same rate within each multicast group at the cost of extra backhaul signaling. 
While \\cite{Doa17,Zha19b,Far21} consider single-antenna UEs, \\cite{Zho21} assumes multi-antenna UEs and antenna-specific pilots for the CSI acquisition.\n\nConsidering multi-group multicasting via cell-free massive MIMO with multi-antenna UEs, we aim to cooperatively design the multi-group multicast precoders at each BS and the combiners at each UE in a distributed fashion. To this end, we initially target the minimization of the sum of the maximum mean squared errors (MSEs) over the multicast groups, which we refer to in the following as the \\textit{sum-group MSE}. While the sum-group MSE minimization achieves absolute fairness within each multicast group, the resulting distributed precoding design depends on slowly varying dual variables, which results in slow convergence. Instead, we propose to solve a simpler sum MSE minimization problem, which also provides a degree of in-built fairness among all the UEs. This approach provides a good approximation for the sum-group MSE minimization, especially at a medium-to-high signal-to-interference-plus-noise ratio (SINR). Then, we propose a novel distributed precoding design for multi-group multicasting based on iterative bi-directional training \\cite{Tol19}. In contrast to our previous works \\cite{Atz21,Gou20,Atz20} on the unicasting scenario, we introduce an additional group-specific uplink training resource that entirely eliminates the need for backhaul signaling for CSI exchange in multi-group multicasting. We also present a simpler precoding design that relies solely on group-specific pilots \\cite{Yan13}, which can be useful when the training resources are scarce. 
Numerical results show that the proposed distributed methods bring substantial gains over conventional cell-free massive MIMO precoding designs that utilize only local CSI.\n\n\n\\section{System Model} \\label{sec:SM}\n\nConsider a cell-free massive MIMO system where a set of BSs $\\mathcal{B} \\triangleq \\{1, \\ldots, B\\}$ serves a set of UEs $\\mathcal{K} \\triangleq \\{1,\\ldots,K\\}$ in the downlink. Each BS and UE is equipped with $M$ and $N$ antennas, respectively. The UEs are divided into a set of non-overlapping multicast groups $\\mathcal{G} \\triangleq\\{1,\\ldots,G\\}$. We use $\\mathcal{K}_g$ to denote the set of UEs in group~$g \\in \\mathcal{G}$, whereas $g_{k}$ represents the index of the group that contains UE~$k$. The BSs transmit a single data stream to each multicast group, i.e., all the UEs in $\\mathcal{K}_g$ are intended to receive the same data symbol~$d_{g}$. Assuming time division duplex (TDD) mode, let $\\H_{b,k} \\in \\mbox{$\\mathbb{C}$}^{M \\times N}$ denote the uplink channel matrix between UE~$k \\in \\mathcal{K}$ and BS~$b \\in \\mathcal{B}$, and let $\\mathbf{w}_{b,g} \\in \\mbox{$\\mathbb{C}$}^{M \\times 1}$ be the BS-specific precoder used by BS~$b$ for group~$g$. We use $\\H_{k} \\triangleq [\\H_{1,k}^{\\mathrm{T}}, \\ldots, \\H_{B,k}^{\\mathrm{T}}]^{\\mathrm{T}} \\in \\mbox{$\\mathbb{C}$}^{B M \\times N}$ and $\\mathbf{w}_{g} \\triangleq [\\mathbf{w}_{1,g}^{\\mathrm{T}}, \\ldots, \\mathbf{w}_{B,g}^{\\mathrm{T}}]^{\\mathrm{T}} \\in \\mbox{$\\mathbb{C}$}^{B M \\times 1}$ to denote the aggregated uplink channel matrix of UE~$k$ and the aggregated precoder used for group~$g$, respectively, which imply $\\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g} = \\sum_{b \\in \\mathcal{B}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g}$. 
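As a quick sanity check on the notation above, the stacking identity $\\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g} = \\sum_{b \\in \\mathcal{B}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g}$ can be verified numerically. The following NumPy sketch uses toy dimensions that are illustrative assumptions, not the simulation setup of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
B, M, N = 3, 4, 2  # toy numbers of BSs, BS antennas, UE antennas (assumptions)

# Per-BS uplink channels H_{b,k} (M x N) for one UE k and per-BS precoders w_{b,g} (M x 1)
H_bk = [rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)) for _ in range(B)]
w_bg = [rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1)) for _ in range(B)]

# Aggregated channel H_k (BM x N) and aggregated precoder w_g (BM x 1) obtained by stacking
H_k = np.vstack(H_bk)
w_g = np.vstack(w_bg)

# The aggregated form equals the sum of the per-BS contributions
lhs = H_k.conj().T @ w_g
rhs = sum(H.conj().T @ w for H, w in zip(H_bk, w_bg))
assert np.allclose(lhs, rhs)
```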
We assume the per-BS transmit power constraints $\\sum_{g \\in \\mathcal{G}} \\|\\mathbf{w}_{b,g}\\|^{2} \\leq \\rho_{\\textnormal{\\tiny{BS}}},~\\forall b \\in \\mathcal{B}$, where $\\rho_{\\textnormal{\\tiny{BS}}}$ denotes the maximum transmit power at each BS. Hence, the received signal at UE~$k$ is given by\n\\begin{align}\n\\label{eq:y_k} \\mathbf{y}_{k} \\triangleq \\sum_{b \\in \\mathcal{B}} \\! \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g_{k}} d_{g_{k}} \\! + \\! \\sum_{\\bar{g} \\neq g_{k}} \\sum_{b \\in \\mathcal{B}} \\! \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,\\bar{g}} d_{\\bar{g}} \\! + \\! \\mathbf{z}_{k} \\! \\in \\! \\mbox{$\\mathbb{C}$}^{N \\times 1},\n\\end{align}\nwhere $\\mathbf{z}_{k} \\sim \\mathcal{C} \\mathcal{N} (\\mathbf{0}, \\sigma_{\\textnormal{\\tiny{UE}}}^{2} \\mathbf{I}_{N})$ is the additive white Gaussian noise (AWGN). Upon receiving $\\mathbf{y}_{k}$, UE~$k$ applies the combiner $\\v_{k} \\in \\mbox{$\\mathbb{C}$}^{N \\times 1}$ to obtain a soft estimate of $d_{g_{k}}$, and the resulting SINR can be expressed as\n\\begin{align} \\label{eq:SINR_k}\n\\mathrm{SINR}_{k} & \\triangleq \\frac{|\\sum_{b \\in \\mathcal{B}} \\v_{k}^{\\mathrm{H}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g_{k}}|^{2}}{\\sum_{\\bar g \\ne g_{k}} |\\sum_{b \\in \\mathcal{B}} \\v_{k}^{\\mathrm{H}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,\\bar{g}}|^{2} + \\sigma_{\\textnormal{\\tiny{UE}}}^{2} \\| \\v_{k} \\|^{2}}.\n\\end{align}\nFinally, the sum of the rates over the multicast groups, which we refer to in the following as the \\textit{sum-group rate}, is given by\n\\begin{align} \\label{eq:R}\nR \\triangleq \\sum_{g \\in \\mathcal{G}} \\min_{k \\in \\mathcal{K}_g} \\log_{2}(1 + \\mathrm{SINR}_{k}) \\quad \\textrm{[bps\/Hz]}.\n\\end{align}\nNote that \\eqref{eq:R} represents an upper bound on the system performance that is attained in the presence of perfect global CSI.\n\nIn Section~\\ref{sec:distr}, we propose distributed 
precoding designs based on iterative bi-directional training to cooperatively~compute the multi-group multicast precoders at each BS and the combiners at each UE. We also present the centralized precoding design along with the problem formulation~in~Section~\\ref{sec:problem}.\n\n\n\\subsection{Bi-Directional Training and Channel Estimation} \\label{sec:SM_est}\n\n\\vspace{-0.5mm}\n\nThe proposed distributed precoding designs rely on iterative bi-directional training, where uplink and downlink precoded pilots are transmitted at each iteration \\cite{Tol19}. On the other hand, the centralized precoding design requires each BS to estimate the antenna-specific uplink channels. We now describe the different types of pilot-aided channel estimation that will be used in Sections~\\ref{sec:problem} and~\\ref{sec:distr}.\n\n\\smallskip\n\n\\textit{\\textbf{Antenna-specific uplink channel estimation.}} The estimation of the channel matrix $\\H_{b,k}$ involves $N$ antenna-specific uplink pilots for UE~$k$. Let $\\P_{k} \\in \\mbox{$\\mathbb{C}$}^{\\tau \\times N}$ be the pilot matrix assigned to UE~$k$, with $\\|\\P_{k}\\|_{\\mathrm{F}}^{2} = \\tau N$ and $\\P_{k}^{\\mathrm{H}} \\P_{k} = \\tau \\mathbf{I}_{N}$. Moreover, let $\\rho_{\\textnormal{\\tiny{UE}}}$ denote the maximum transmit power at each UE. Each UE~$k$ synchronously transmits its pilot matrix, i.e.,\n\\begin{align} \\label{eq:X_k_ul}\n\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL}}} \\triangleq \\sqrt{\\beta^{\\textnormal{\\tiny{UL}}}} \\P_{k}^{\\mathrm{H}} \\in \\mbox{$\\mathbb{C}$}^{N \\times \\tau},\n\\end{align}\nwhere the power scaling factor $\\beta^{\\textnormal{\\tiny{UL}}} \\triangleq \\frac{\\rho_{\\textnormal{\\tiny{UE}}}}{N}$ ensures that $\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL}}}$ complies with the UE transmit power constraint. 
Then, the received signal at BS~$b$ is given by\n\\begin{align}\n\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL}}} & \\triangleq \\sum_{k \\in \\mathcal{K}} \\H_{b,k} \\mathbf{X}_{k}^{\\textnormal{\\tiny{UL}}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL}}} \\\\\n\\label{eq:Y_b_ul} & = \\sqrt{\\beta^{\\textnormal{\\tiny{UL}}}} \\sum_{k \\in \\mathcal{K}} \\H_{b,k} \\P_{k}^{\\mathrm{H}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL}}} \\in \\mbox{$\\mathbb{C}$}^{M \\times \\tau},\n\\end{align}\nwhere $\\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL}}}$ is the AWGN with i.i.d. $\\mathcal{C} \\mathcal{N} (0, \\sigma_{\\textnormal{\\tiny{BS}}}^{2})$ elements. Finally, the least-squares (LS) estimate of $\\H_{b,k}$ is obtained as\n\\begin{align}\n\\hat{\\H}_{b,k} & \\triangleq \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL}}}}} \\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL}}} \\P_{k} \\\\\n\\label{eq:H_bk_hat_orth} & = \\H_{b,k} + \\frac{1}{\\tau} \\sum_{\\bar{k} \\ne k} \\H_{b,\\bar{k}} \\P_{\\bar{k}}^{\\mathrm{H}} \\P_{k} + \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL}}}}} \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL}}} \\P_{k}.\n\\end{align}\n\n\\smallskip\n\n\\textit{\\textbf{UE-specific effective uplink channel estimation.}} Let $\\mathbf{h}_{b,k} \\triangleq \\H_{b,k} \\v_{k} \\in \\mbox{$\\mathbb{C}$}^{M \\times 1}$ denote the effective uplink channel between UE~$k$ and BS~$b$, and let $\\mathbf{p}_{k} \\in \\mbox{$\\mathbb{C}$}^{\\tau \\times 1}$ be the pilot assigned to UE~$k$, with $\\|\\mathbf{p}_{k}\\|^{2} = \\tau$. 
Each UE~$k$ synchronously transmits its pilot $\\mathbf{p}_{k}$ using its combiner $\\v_{k}$ as precoder, i.e.,\n\\begin{align} \\label{eq:X_k_ul1}\n\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-1}}} \\triangleq \\sqrt{\\beta^{\\textnormal{\\tiny{UL-1}}}} \\v_{k} \\mathbf{p}_{k}^{\\mathrm{H}} \\in \\mbox{$\\mathbb{C}$}^{N \\times \\tau},\n\\end{align}\nwhere the power scaling factor $\\beta^{\\textnormal{\\tiny{UL-1}}}$ ensures that $\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-1}}}$ complies with the UE transmit power constraint. Then, the received signal at BS~$b$ is given by\n\\begin{align}\n\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}} & \\triangleq \\sum_{k \\in \\mathcal{K}} \\H_{b,k} \\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-1}}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-1}}} \\\\\n\\label{eq:Y_b_ul1} & = \\sqrt{\\beta^{\\textnormal{\\tiny{UL-1}}}} \\sum_{k \\in \\mathcal{K}} \\mathbf{h}_{b,k} \\mathbf{p}_{k}^{\\mathrm{H}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-1}}} \\in \\mbox{$\\mathbb{C}$}^{M \\times \\tau},\n\\end{align}\nwhere $\\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-1}}}$ is the AWGN with i.i.d. $\\mathcal{C} \\mathcal{N} (0, \\sigma_{\\textnormal{\\tiny{BS}}}^{2})$ elements. 
Finally, the LS estimate of $\\mathbf{h}_{b,k}$ is obtained as \n\\begin{align}\n\\hat{\\mathbf{h}}_{b,k} & \\triangleq \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL-1}}}}} \\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}} \\mathbf{p}_{k} \\\\\n\\label{eq:h_bk_hat} & = \\mathbf{h}_{b,k} + \\frac{1}{\\tau} \\sum_{\\bar{k} \\ne k} \\mathbf{h}_{b,\\bar{k}} \\mathbf{p}_{\\bar{k}}^{\\mathrm{H}} \\mathbf{p}_{k} + \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL-1}}}}} \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-1}}} \\mathbf{p}_{k}.\n\\end{align}\n\n\\smallskip\n\n\\textit{\\textbf{Group-specific effective uplink channel estimation.}} Let $\\mathbf{f}_{b,g} \\triangleq \\sum_{ k \\in \\mathcal{K}_g} \\H_{b,k} \\v_{k} \\in \\mbox{$\\mathbb{C}$}^{M \\times 1}$ denote the effective uplink channel between $\\mathcal{K}_{g}$ and BS~$b$, and let $\\mathbf{p}_{g} \\in \\mbox{$\\mathbb{C}$}^{\\tau \\times 1}$ be the pilot assigned to group~$g$, with $\\|\\mathbf{p}_{g}\\|^{2} = \\tau$. Each UE~$k$ synchronously transmits its pilot $\\mathbf{p}_{g_{k}}$ using its combiner $\\v_{k}$ as precoder, i.e.,\n\\begin{align} \\label{eq:X_g_ul2}\n\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-2}}} \\triangleq \\sqrt{\\beta^{\\textnormal{\\tiny{UL-2}}}} \\v_{k} \\mathbf{p}_{g_{k}}^{\\mathrm{H}} \\in \\mbox{$\\mathbb{C}$}^{N \\times \\tau},\n\\end{align}\nwhere the power scaling factor $\\beta^{\\textnormal{\\tiny{UL-2}}}$ ensures that $\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-2}}}$ complies with the UE transmit power constraint. 
Then, the received signal at BS~$b$ is given by\n\\begin{align}\n\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}} & \\triangleq \\sum_{k \\in \\mathcal{K}} \\H_{b,k} \\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-2}}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-2}}} \\\\\n\\label{eq:Y_b_ul2} & = \\sqrt{\\beta^{\\textnormal{\\tiny{UL-2}}}} \\sum_{g \\in \\mathcal{G}} \\mathbf{f}_{b,g} \\mathbf{p}_{g}^{\\mathrm{H}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-2}}} \\in \\mbox{$\\mathbb{C}$}^{M \\times \\tau},\n\\end{align}\nwhere $\\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-2}}}$ is the AWGN with i.i.d. $\\mathcal{C} \\mathcal{N} (0, \\sigma_{\\textnormal{\\tiny{BS}}}^{2})$ elements. Finally, the LS estimate of $\\mathbf{f}_{b,g}$ is obtained as\n\\begin{align}\n\\hat{\\mathbf{f}}_{b,g} & \\triangleq \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL-2}}}}} \\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}} \\mathbf{p}_{g} \\\\\n\\label{eq:f_bk_hat} & = \\mathbf{f}_{b,g} + \\frac{1}{\\tau} \\sum_{\\bar{g} \\ne g} \\mathbf{f}_{b,\\bar{g}} \\mathbf{p}_{\\bar{g}}^{\\mathrm{H}} \\mathbf{p}_{g} + \\frac{1}{\\tau \\sqrt{\\beta^{\\textnormal{\\tiny{UL-2}}}}} \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-2}}} \\mathbf{p}_{g}.\n\\end{align}\n\n\\smallskip\n\n\\textit{\\textbf{Effective downlink channel estimation.}} Let $\\mathbf{g}_{k} \\triangleq \\sum_{b \\in \\mathcal{B}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g_{k}} \\in \\mbox{$\\mathbb{C}$}^{N \\times 1}$ denote the effective downlink channel between all the BSs and UE~$k$. 
Each BS~$b$ synchronously transmits a superposition of the pilots $\\{\\mathbf{p}_{g}\\}_{g \\in \\mathcal{G}}$ after precoding them with the corresponding group-specific precoders $\\{\\mathbf{w}_{b,g}\\}_{g \\in \\mathcal{G}}$, i.e.,\n\\begin{align} \\label{eq:X_b_dl}\n\\mathbf{X}_{b}^{\\textnormal{\\tiny{DL}}} \\triangleq \\sum_{g \\in \\mathcal{G}} \\mathbf{w}_{b,g} \\mathbf{p}_{g}^{\\mathrm{H}} \\in \\mbox{$\\mathbb{C}$}^{M \\times \\tau}.\n\\end{align}\nThen, the received signal at UE~$k$ is given by \n\\begin{align}\n\\mathbf{Y}_{k}^{\\textnormal{\\tiny{DL}}} & \\triangleq \\sum_{b \\in \\mathcal{B}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{X}_{b}^{\\textnormal{\\tiny{DL}}} + \\mathbf{Z}_{k}^{\\textnormal{\\tiny{DL}}} \\\\\n\\label{eq:Y_k_dl} & = \\sum_{b \\in \\mathcal{B}} \\sum_{g \\in \\mathcal{G}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,g} \\mathbf{p}_{g}^{\\mathrm{H}} + \\mathbf{Z}_{k}^{\\textnormal{\\tiny{DL}}} \\in \\mbox{$\\mathbb{C}$}^{N \\times \\tau},\n\\end{align}\nwhere $\\mathbf{Z}_{k}^{\\textnormal{\\tiny{DL}}}$ is the AWGN with i.i.d. $\\mathcal{C} \\mathcal{N} (0, \\sigma_{\\textnormal{\\tiny{UE}}}^{2})$ elements. Finally, the LS estimate of $\\mathbf{g}_{k}$ is obtained as \n\\begin{align}\n\\hat{\\mathbf{g}}_{k} & \\triangleq \\frac{1}{\\tau} \\mathbf{Y}_{k}^{\\textnormal{\\tiny{DL}}} \\mathbf{p}_{g_{k}} \\\\\n\\label{eq:g_k_hat} & = \\mathbf{g}_{k} + \\frac{1}{\\tau} \\sum_{b \\in \\mathcal{B}} \\sum_{\\bar{g} \\ne g_{k}} \\H_{b,k}^{\\mathrm{H}} \\mathbf{w}_{b,\\bar{g}} \\mathbf{p}_{\\bar{g}}^{\\mathrm{H}} \\mathbf{p}_{g_{k}} + \\frac{1}{\\tau} \\mathbf{Z}_{k}^{\\textnormal{\\tiny{DL}}} \\mathbf{p}_{g_{k}}.\n\\end{align}\n\n\n\\section{Problem Formulation} \\label{sec:problem}\n\nIn this section, we present the problem formulation for the proposed multi-group multicast precoding design focusing on the centralized method, where the aggregated precoders are computed at the CPU. 
In doing so, we first target the sum-group MSE minimization in Section~\\ref{sec:problem_sum-groupMSE} and then propose to solve a simpler sum MSE minimization problem in Section~\\ref{sec:problem_sumMSE}.\n\n\n\\subsection{Sum-Group MSE Minimization} \\label{sec:problem_sum-groupMSE}\n\nThe sum-group MSE minimization achieves absolute fairness within each multicast group through the min-max MSE criterion and can be expressed as\n\\begin{align} \\label{eq:probForHi}\n\\begin{array}{cl}\n\\displaystyle \\underset{{\\{\\mathbf{w}_g, \\v_{k}\\}}}{\\mathrm{minimize}} & \\displaystyle \\sum_{g \\in \\mathcal{G}} \\max_{k \\in \\mathcal{K}_g} \\mathrm{MSE}_k \\\\\n\\mathrm{s.t.} & \\displaystyle \\sum_{g \\in \\mathcal{G}} \\| \\mathbf{E}_{b} \\mathbf{w}_{g}\\|^{2} \\leq \\rho_{\\textnormal{\\tiny{BS}}}, \\quad \\forall b \\in \\mathcal{B},\n\\end{array}\n\\end{align}\nwhere $\\mathrm{MSE}_k$ is defined as\n\\begin{align}\n\\mathrm{MSE}_{k} & \\triangleq \\mathbb{E}\\big[ |\\v_{k}^{\\mathrm{H}} \\mathbf{y}_{k} - d_{g_k}|^{2} \\big] \\\\\n&= \\label{eq:MSE_k}\n\\sum_{g \\in \\mathcal{G}} | \\v_{k}^{\\mathrm{H}} \\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g} |^{2} - 2 \\Re [ \\v_{k}^{\\mathrm{H}} \\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g_k}] \\nonumber \\\\\n& \\phantom{=} \\ + \\sigma_{\\textnormal{\\tiny{UE}}}^{2} \\| \\v_{k} \\|^{2} + 1,\n\\end{align}\nand $\\mathbf{E}_{b} \\in \\mbox{$\\mathbb{R}$}^{M \\times B M}$ is such that $\\mathbf{E}_b \\mathbf{w}_g = \\mathbf{w}_{b,g}$. The problem in \\eqref{eq:probForHi} is convex with respect to either the precoders or the combiners. Hence, we use \\textit{alternating optimization}, whereby the precoders are optimized for fixed combiners and vice versa in an iterative best-response fashion. 
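The closed-form expansion of $\\mathrm{MSE}_{k}$ in \\eqref{eq:MSE_k} can be checked against a Monte Carlo average of $|\\v_{k}^{\\mathrm{H}} \\mathbf{y}_{k} - d_{g_k}|^{2}$. A minimal NumPy sketch, assuming unit-power independent data symbols and toy dimensions (not the setup of this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
BM, N, G, gk = 6, 2, 2, 0  # aggregated BS antennas, UE antennas, groups, group of UE k
sigma2 = 0.1               # UE noise power (toy value)

H_k = rng.standard_normal((BM, N)) + 1j * rng.standard_normal((BM, N))
W = rng.standard_normal((BM, G)) + 1j * rng.standard_normal((BM, G))  # columns are w_g
v = rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))

# Closed-form MSE: sum_g |a_g|^2 - 2 Re[a_{gk}] + sigma2 ||v||^2 + 1, with a_g = v^H H_k^H w_g
a = (v.conj().T @ H_k.conj().T @ W).ravel()
mse_closed = np.sum(np.abs(a) ** 2) - 2 * np.real(a[gk]) + sigma2 * np.linalg.norm(v) ** 2 + 1

# Monte Carlo check with unit-power symbols d_g ~ CN(0,1) and noise z ~ CN(0, sigma2 I)
S = 200_000
d = (rng.standard_normal((G, S)) + 1j * rng.standard_normal((G, S))) / np.sqrt(2)
z = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, S)) + 1j * rng.standard_normal((N, S)))
y = H_k.conj().T @ (W @ d) + z
err = (v.conj().T @ y).ravel() - d[gk]
mse_mc = np.mean(np.abs(err) ** 2)
assert abs(mse_mc - mse_closed) / mse_closed < 0.05
```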
Before describing each step of the alternating optimization, let us define $t_g \\triangleq \\max_{k \\in \\mathcal{K}_g} \\mathrm{MSE}_k$ and rewrite \\eqref{eq:probForHi} as\n\\begin{align} \\label{eq:probFor}\n\\begin{array}{cl}\n\\displaystyle \\underset{{\\{t_g, \\mathbf{w}_g, \\v_{k}\\}}}{\\mathrm{minimize}} & \\displaystyle \\sum_{g \\in \\mathcal{G}} t_g \\\\\n\\mathrm{s.t.} & \\displaystyle \\mathrm{MSE}_{k} \\le t_{g}, \\quad \\forall k \\in \\mathcal{K}_g,~\\forall g \\in \\mathcal{G} \\\\\n& \\displaystyle \\sum_{g \\in \\mathcal{G}} \\| \\mathbf{E}_{b} \\mathbf{w}_{g}\\|^{2} \\leq \\rho_{\\textnormal{\\tiny{BS}}}, \\quad \\forall b \\in \\mathcal{B}.\n\\end{array}\n\\end{align}\n\n\\smallskip\n\n\\textit{\\textbf{Optimization of the combiners.}} For a fixed set of precoders, the combiners $\\{\\v_{k}\\}_{k \\in \\mathcal{K}}$ are obtained by solving\n\\begin{align} \\label{eq:probUE}\n\\begin{array}{cl}\n\\displaystyle \\underset{{\\{t_g, \\v_{k}\\}}}{\\mathrm{minimize}} & \\displaystyle \\sum_{g \\in \\mathcal{G}} t_g \\\\\n\\mathrm{s.t.} & \\displaystyle \\mathrm{MSE}_{k} \\le t_{g}, \\quad \\forall k \\in \\mathcal{K}_g,~\\forall g \\in \\mathcal{G}.\n\\end{array}\n\\end{align}\n\n\\noindent Specifically, the optimal $\\v_{k}$ can be obtained by computing the stationary point of the Lagrangian of \\eqref{eq:probUE}, which yields \n\\begin{align}\\label{eq:uebf}\n\\v_{k} = \\bigg( \\sum_{g \\in \\mathcal{G}}\\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g} \\mathbf{w}_{g}^{\\mathrm{H}} \\H_{k} + \\sigma_{\\textnormal{\\tiny{UE}}}^{2} \\mathbf{I}_{N} \\bigg)^{-1}\\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g_k}.\n\\end{align}\n\n\\smallskip\n\n\\textit{\\textbf{Optimization of the precoders.}} For a fixed set of combiners, the precoders $\\{\\mathbf{w}_{g}\\}_{g \\in \\mathcal{G}}$ are obtained by solving\n\\begin{align} \\label{eq:probBS}\n\\begin{array}{cl}\n\\displaystyle \\underset{{\\{t_g, \\mathbf{w}_g \\}}}{\\mathrm{minimize}} & \\displaystyle \\sum_{g \\in 
\\mathcal{G}} t_g \\\\\n\\mathrm{s.t.} & \\displaystyle \\mathrm{MSE}_{k} \\le t_{g}, \\quad \\forall k \\in \\mathcal{K}_g,~\\forall g \\in \\mathcal{G} \\\\\n& \\displaystyle \\sum_{g \\in \\mathcal{G}} \\| \\mathbf{E}_{b} \\mathbf{w}_{g}\\|^{2} \\leq \\rho_{\\textnormal{\\tiny{BS}}}, \\quad \\forall b \\in \\mathcal{B}. \n\\end{array}\n\\end{align}\nSpecifically, the optimal $\\mathbf{w}_{g}$ can be obtained by computing the stationary point of the Lagrangian of \\eqref{eq:probBS}, which yields\n\\begin{align}\\label{eq:bsbf}\n\\mathbf{w}_{g} = \\bigg( \\! \\sum_{k \\in \\mathcal{K}} \\! \\nu_{k} \\H_{k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{k}^{\\mathrm{H}} \\! + \\! \\sum_{b \\in \\mathcal{B}} \\!\\lambda_{b} \\mathbf{E}_{b}^{\\mathrm{H}} \\mathbf{E}_{b} \\! \\bigg)^{-1} \\! \\sum_{k \\in \\mathcal{K}_g} \\! \\nu_{k} \\H_{k} \\v_{k},\n\\end{align}\nwhere $\\nu_k$ and $\\lambda_b$ are the dual variables corresponding to the first and second constraints in \\eqref{eq:probBS}, respectively. Note that $\\nu_k$ and $\\lambda_b$ can be updated using the sub-gradient and ellipsoid methods, respectively.\n\n\\smallskip\n\nThe combiner in \\eqref{eq:uebf} can be computed locally at each UE. However, the local computation of the precoder in \\eqref{eq:bsbf} at each BS requires BS-specific CSI from all the other BSs \\cite{Atz21}. Moreover, the sub-gradient update of the dual variables $\\{\\nu_{k}\\}_{k \\in \\mathcal{K}}$ significantly slows down the convergence. 
To simplify the distributed precoding design, we thus propose to replace the sum-group MSE minimization with the sum MSE minimization, as described next.\n\n\n\\subsection{Sum MSE Minimization} \\label{sec:problem_sumMSE}\n\nTo circumvent the shortcomings of the sum-group MSE minimization, we propose to tackle the (weighted) sum MSE minimization, which can be expressed as\n\\begin{align} \\label{eq:EqvprobForHi}\n\\begin{array}{cl}\n\\displaystyle \\underset{{\\{\\mathbf{w}_g, \\v_{k}\\}}}{\\mathrm{minimize}} & \\displaystyle \\sum_{k \\in \\mathcal{K}} \\mu_k\\mathrm{MSE}_k \\\\\n\\mathrm{s.t.} & \\displaystyle \\sum_{g \\in \\mathcal{G}} \\| \\mathbf{E}_{b} \\mathbf{w}_{g}\\|^{2} \\leq \\rho_{\\textnormal{\\tiny{BS}}}, \\quad \\forall b \\in \\mathcal{B}, \n\\end{array}\n\\end{align}\nwhere $\\mu_k$ is the weight of UE~$k$. This choice stems from the fact that \\eqref{eq:EqvprobForHi} provides a degree of in-built fairness among all the UEs, especially at high SINR. More specifically, at high SINR, the sum MSE minimization corresponds to maximizing the minimum SINR across all the UEs and well approximates the sum-group MSE minimization in \\eqref{eq:probForHi}. This is formalized in Proposition~\\ref{pre:1}, whose proof is omitted due to space limitations and will be presented in the longer version of this paper.\n\n\\begin{proposition}\\label{pre:1}\nAt high SINR, both the sum-group MSE minimization in \\eqref{eq:probForHi} and the sum MSE minimization in \\eqref{eq:EqvprobForHi} maximize the minimum SINR across all the UEs.\n\\end{proposition}\n\nNote that, regardless of the high-SINR assumption, \\eqref{eq:EqvprobForHi} becomes equivalent to \\eqref{eq:probForHi} if $\\mu_k = \\nu_k,~\\forall k$ at each iteration of the alternating optimization. However, optimally tuning $\\{\\mu_k\\}_{k \\in \\mathcal{K}}$ at each iteration leads to the same complexity and signaling requirements as the sum-group MSE minimization. 
To simplify the distributed precoding design, we fix $\\{\\mu_k\\}_{k \\in \\mathcal{K}}$ to the same value at all the iterations. Though slightly suboptimal, this approach leads to much simpler computation and signaling as well as faster convergence. In this context, the optimal $\\mathbf{w}_{g}$ can be obtained by computing the stationary point of the Lagrangian of \\eqref{eq:EqvprobForHi} for a fixed set of combiners, which yields\n\\begin{align}\\label{eq:bsbf_mse}\n\\mathbf{w}_{g} = \\bigg( \\! \\sum_{k \\in \\mathcal{K}} \\! \\mu_{k} \\H_{k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{k} ^{\\mathrm{H}} \\! + \\! \\sum_{b \\in \\mathcal{B}} \\! \\lambda_{b} \\mathbf{E}_{b}^{\\mathrm{H}} \\mathbf{E}_{b} \\! \\bigg)^{-1} \\! \\sum_{k \\in \\mathcal{K}_g} \\! \\mu_{k} \\H_{k} \\v_{k}.\n\\end{align}\nFurthermore, the optimal $\\v_k$ of \\eqref{eq:EqvprobForHi} for a fixed set of precoders corresponds to \\eqref{eq:uebf}. Hereafter, we consider the sum MSE minimization to design the multi-group multicast precoders.\n\nAt this stage, we briefly illustrate the centralized precoding design with pilot-aided channel estimation. The algorithm begins with the antenna-specific uplink channel estimation described in Section~\\ref{sec:SM_est}, whereby each BS~$b$ obtains $\\{\\hat \\H_{b,k}\\}_{k \\in \\mathcal{K}}$ and forwards them to the CPU via backhaul signaling. Then, the CPU computes the combiners $\\{\\v_k\\}_{k \\in \\mathcal{K}}$ and the aggregated precoders $\\{\\mathbf{w}_g\\}_{g \\in \\mathcal{G}}$ via alternating optimization by replacing $\\H_k$ with $\\hat \\H_k \\triangleq [\\hat \\H_{1,k}^{\\mathrm{T}}, \\ldots, \\hat \\H_{B,k}^{\\mathrm{T}}]^{\\mathrm{T}}$ in \\eqref{eq:uebf} and \\eqref{eq:bsbf_mse}, respectively. After convergence, the resulting BS-specific precoders are fed back to the corresponding BSs via backhaul signaling. 
Finally, the effective downlink channel estimation described in Section~\\ref{sec:SM_est} is performed and each UE~$k$ computes its final combiner as\n\\begin{align}\\label{eq:rxmmse}\n\\v_{k} & = \\big(\\mathbf{Y}_k^{\\textnormal{\\tiny{DL}}} (\\mathbf{Y}_k^{\\textnormal{\\tiny{DL}}})^{\\mathrm{H}}\\big)^{-1} \\mathbf{Y}_k^{\\textnormal{\\tiny{DL}}} \\mathbf{p}_{g_{k}},\n\\end{align}\nwhich is equal to \\eqref{eq:uebf} for perfect channel estimation.\n\n\n\\section{Distributed Precoding Design} \\label{sec:distr}\n\nIn the distributed precoding design, the optimal $\\mathbf{w}_{b,g}$ can be obtained by computing the stationary point of the Lagrangian of \\eqref{eq:EqvprobForHi} with respect to $\\mathbf{w}_{b,g}$, which yields\n\\begin{align}\\label{eq:dis_bsbf}\n\\mathbf{w}_{b,g} & = \\bigg( \\! \\sum_{k \\in \\mathcal{K}} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{b,k} ^{\\mathrm{H}} \\! + \\! \\lambda_{b} \\mathbf{I}_M \\! \\bigg)^{-1} \\! \\bigg( \\! \\sum_{k \\in \\mathcal{K}_g} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\nonumber \\\\\n& \\phantom{=} \\ - \\underbrace{\\sum_{\\bar b \\ne b} \\sum_{k \\in \\mathcal{K}} \\mu_{k}\\H_{b,k}\\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{\\bar b,k} ^{\\mathrm{H}}\\mathbf{w}_{\\bar b,g}}_{\\textnormal{cross terms}}\\bigg).\n\\end{align}\nThis can be computed locally at each BS upon receiving the cross terms from all the other BSs via backhaul signaling. Furthermore, ensuring global convergence of the distributed precoding design requires an iterative best-response update as\n\\begin{align} \\label{eq:w_bk_i}\n\\mathbf{w}_{b,g}^{(i)} & = \\mathbf{w}_{b,g}^{(i-1)} + \\alpha \\underbrace{(\\mathbf{w}_{b,g}-\\mathbf{w}_{b,g}^{(i-1)})}_{\\triangleq \\mathbf{w}_{b,g}^{\\star}},\n\\end{align}\nwhere $i$ is the iteration index and $\\alpha \\in (0,1]$ determines the trade-off between convergence speed and accuracy \\cite{Atz21}. 
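In the noiseless case with orthogonal group pilots, the training-based combiner in \\eqref{eq:rxmmse} coincides with \\eqref{eq:uebf} evaluated at $\\sigma_{\\textnormal{\\tiny{UE}}}^{2} \\to 0$. A toy NumPy sketch of this equivalence (all dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
BM, N, G, tau = 8, 2, 3, 8  # toy sizes; tau >= G orthogonal group pilots
gk = 1                      # group of UE k

H_k = rng.standard_normal((BM, N)) + 1j * rng.standard_normal((BM, N))
W = rng.standard_normal((BM, G)) + 1j * rng.standard_normal((BM, G))  # columns are w_g

# Orthogonal group pilots with ||p_g||^2 = tau (scaled columns of a unitary matrix)
U, _ = np.linalg.qr(rng.standard_normal((tau, tau)) + 1j * rng.standard_normal((tau, tau)))
P = np.sqrt(tau) * U[:, :G]

A = H_k.conj().T @ W       # effective DL channels H_k^H w_g, stacked as columns
Y_dl = A @ P.conj().T      # noiseless received DL training signal

# Training-based combiner (noiseless case) vs. CSI-based MMSE combiner with sigma^2 -> 0
v_trained = np.linalg.solve(Y_dl @ Y_dl.conj().T, Y_dl @ P[:, gk])
v_csi = np.linalg.solve(A @ A.conj().T, A[:, gk])
assert np.allclose(v_trained, v_csi)
```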
Assuming a single-iteration backhaul delay to exchange the cross terms among the BSs, $\\mathbf{w}_{b,g}^{\\star}$ in \\eqref{eq:w_bk_i} can be written as\n\\begin{align} \\label{eq:w_bg_*}\n\\hspace{-3mm} \\mathbf{w}_{b,g}^{\\star} & = \\bigg( \\! \\sum_{k \\in \\mathcal{K}} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{b,k}^{\\mathrm{H}} \\! + \\! \\lambda_{b} \\mathbf{I}_M \\! \\bigg)^{-1} \\! \\bigg( \\! \\sum_{k \\in \\mathcal{K}_g} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\nonumber \\\\\n& \\phantom{=} \\ - \\sum_{\\bar b \\in \\mathcal{B}} \\sum_{k \\in \\mathcal{K}} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{\\bar b,k}^{\\mathrm{H}}\\mathbf{w}_{\\bar b,g}^{(i-1)} \\! - \\! \\lambda_{b} \\mathbf{w}_{b,g}^{(i-1)} \\! \\bigg). \\hspace{-2mm}\n\\end{align}\n\n\\begin{figure*}[ht]\n\\addtocounter{equation}{+3}\n\\begin{align}\n\\label{eq:bsbflocal} \\mathbf{w}_{b,g}^{\\textnormal{\\tiny{dis}}} & = \\bigg(\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}} \\mathbf{D}_{\\mu} ({\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}}})^{\\mathrm{H}} + \\tau({\\beta^{\\textnormal{\\tiny{UL-1}}}}\\lambda_{b} - \\sigma_{\\textnormal{\\tiny{BS}}}^2)\\mathbf{I}_M\\bigg)^{-1} \\bigg( \\sqrt{\\beta^{\\textnormal{\\tiny{UL-1}}}} \\sum_{k \\in \\mathcal{K}_g} \\mu_k\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}}\\mathbf{p}_k - \\frac{{\\beta^{\\textnormal{\\tiny{UL-1}}}}}{\\sqrt{\\beta^{\\textnormal{\\tiny{UL-3}}}}}\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}\\mathbf{p}_g - {{\\beta^{\\textnormal{\\tiny{UL-1}}}}} \\tau\\lambda_b \\mathbf{w}_{b,g}^{(i-1)}\\bigg)\n\\end{align}\n\\addtocounter{equation}{0}\n\\hrulefill \\vspace{-3mm}\n\\end{figure*}\n\\begin{figure*}[ht]\n\\addtocounter{equation}{0}\n\\begin{align}\n\\label{eq:bsbflocalgs} \\mathbf{w}_{b,g}^{\\textnormal{\\tiny{dis-gs}}} &= 
\\bigg(\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}}({\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}}})^{\\mathrm{H}} + \\tau({\\beta^{\\textnormal{\\tiny{UL-2}}}}\\lambda_{b} - \\sigma_{\\textnormal{\\tiny{BS}}}^2)\\mathbf{I}_M\\bigg)^{-1} \\bigg( \\sqrt{\\beta^{\\textnormal{\\tiny{UL-2}}}} \\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}}\\mathbf{p}_g - \\frac{{\\beta^{\\textnormal{\\tiny{UL-2}}}}}{\\sqrt{\\beta^{\\textnormal{\\tiny{UL-3}}}}}\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}\\mathbf{p}_g - {{\\beta^{\\textnormal{\\tiny{UL-2}}}}} \\tau\\lambda_b \\mathbf{w}_{b,g}^{(i-1)}\\bigg) \n\\end{align}\n\\addtocounter{equation}{-5}\n\\hrulefill \\vspace{-5mm}\n\\end{figure*}\n\\subsection{Best-Response Distributed Precoding Design} \\label{sec:distr_BR}\n\nThe proposed distributed precoding design is enabled by iterative bi-directional training \\cite{Tol19}. A key difference from our previous works \\cite{Atz21,Gou20,Atz20} on the unicasting scenario is the addition of a group-specific uplink training resource that entirely eliminates the need for backhaul signaling for CSI exchange in the multi-group multicasting scenario.\n\nAt each bi-directional training iteration, the UE-specific effective uplink channel estimation and the effective downlink channel estimation described in Section~\\ref{sec:SM_est} are performed, and each UE~$k$ computes its combiner as in \\eqref{eq:rxmmse}. The computation of the precoders at each BS requires the cross terms from all the other BSs (see \\eqref{eq:dis_bsbf}). To avoid exchanging such cross terms via backhaul signaling, we use an over-the-air signaling scheme similar to that proposed in \\cite{Atz21}. 
Accordingly, each~UE~$k$ transmits $\\mathbf{Y}_{k}^{\\textnormal{\\tiny{DL}}}$ in \\eqref{eq:Y_k_dl} after precoding it with $\\mu_{k} \\v_{k} \\v_{k}^{\\mathrm{H}}$, i.e.,\n\\begin{align}\\label{eq:Disultx3}\n\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-3}}} \\triangleq \\sqrt{\\beta^{\\textnormal{\\tiny{UL-3}}}}\\mu_{k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\mathbf{Y}_{k}^{\\textnormal{\\tiny{DL}}},\n\\end{align}\nwhere the power scaling factor $\\beta^{\\textnormal{\\tiny{UL-3}}}$ ensures that $\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-3}}}$ complies with the UE transmit power constraint. Then, the received signal at BS~$b$ is given by\n\\begin{align}\n\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}} & \\triangleq \\sum_{k \\in \\mathcal{K}} \\H_{b,k} \\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-3}}} + \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-3}}}\\\\\n\\label{eq:Disulrx3} & = \\sqrt{\\beta^{\\textnormal{\\tiny{UL-3}}}} \\! \\sum_{k \\in \\mathcal{K}} \\! \\mu_{k} \\H_{b,k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\bigg( \\! \\sum_{g \\in \\mathcal{G}} \\! \\H_{k}^{\\mathrm{H}} \\mathbf{w}_{g}\\mathbf{p}_{g}^{\\mathrm{H}} \\! + \\! \\mathbf{Z}_{k}^{\\textnormal{\\tiny{DL}}} \\! \\bigg) \\! + \\! \\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-3}}},\n\\end{align}\nwhere $\\mathbf{Z}_{b}^{\\textnormal{\\tiny{UL-3}}} \\in \\mbox{$\\mathbb{C}$}^{M \\times \\tau}$ is the AWGN with i.i.d. $\\mathcal{C} \\mathcal{N} (0, \\sigma_{\\textnormal{\\tiny{BS}}}^{2})$ elements. Consequently, the minimum number of uplink pilot symbols required to obtain $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}}$ in \\eqref{eq:Y_b_ul1} and $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ in \\eqref{eq:Disulrx3} with orthogonal pilots at each bi-directional training iteration is $K+G$. 
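The over-the-air mechanism above can be illustrated numerically: in the noiseless case with orthogonal group pilots (and all power scaling factors set to one for simplicity), correlating $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ with $\\mathbf{p}_{g}$ recovers the aggregated cross-term sum $\\sum_{\\bar b \\in \\mathcal{B}} \\sum_{k \\in \\mathcal{K}} \\mu_{k} \\H_{b,k} \\v_{k} \\v_{k}^{\\mathrm{H}} \\H_{\\bar b,k}^{\\mathrm{H}} \\mathbf{w}_{\\bar b,g}$ appearing in \\eqref{eq:w_bg_*} without any backhaul exchange. A toy NumPy sketch (dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
B, M, N, K, G, tau = 2, 3, 2, 4, 2, 4  # toy sizes
mu = np.ones(K)                        # fixed UE weights (assumption)
b, g = 0, 1                            # BS and group of interest

# H[bb][k] = H_{bb,k}, V[k] = v_k, W[bb][g] = w_{bb,g}
H = [[rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)) for _ in range(K)]
     for _ in range(B)]
V = [rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1)) for _ in range(K)]
W = [[rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1)) for _ in range(G)]
     for _ in range(B)]

# Orthogonal group pilots with ||p_g||^2 = tau
U, _ = np.linalg.qr(rng.standard_normal((tau, tau)) + 1j * rng.standard_normal((tau, tau)))
P = np.sqrt(tau) * U[:, :G]

# Noiseless DL training at each UE, retransmitted in UL-3 after precoding with mu_k v_k v_k^H
Y_ul3 = np.zeros((M, tau), dtype=complex)
for k in range(K):
    Y_dl_k = sum(H[bb][k].conj().T @ W[bb][gg] @ P[:, [gg]].conj().T
                 for bb in range(B) for gg in range(G))
    Y_ul3 += H[b][k] @ (mu[k] * V[k] @ V[k].conj().T @ Y_dl_k)

# Correlating with p_g recovers the aggregated cross terms computed directly from global CSI
recovered = Y_ul3 @ P[:, [g]] / tau
direct = sum(mu[k] * H[b][k] @ V[k] @ V[k].conj().T
             @ sum(H[bb][k].conj().T @ W[bb][g] for bb in range(B))
             for k in range(K))
assert np.allclose(recovered, direct)
```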
Finally, $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}}$ and $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ are used to compute $\\mathbf{w}_{b,g}^{\\textnormal{\\tiny{dis}}}$ in \\eqref{eq:bsbflocal} at the top of the page, with $\\mathbf{D}_{\\mu} \\triangleq \\mathrm{Diag} (\\mu_1,\\ldots,\\mu_K)$. Such a precoder replaces $\\mathbf{w}_{b,g}^{\\star}$ in \\eqref{eq:w_bk_i} to achieve global convergence. Note that $\\mathbf{w}_{b,g}^{\\textnormal{\\tiny{dis}}} \\to \\mathbf{w}_{b,g}^{\\star}$ as $\\tau \\to \\infty$. We point out that the additional uplink training resource $\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-3}}}$ conveys the group-specific effective uplink channels of the other BSs instead of the UE-specific effective uplink channels as in the unicasting scenario \\cite{Atz21}. The implementation of the best-response distributed precoding design is summarized~in~Algorithm~\\ref{alg:disNW}.\n\n\n\n\\begin{figure}[t!]\n\\begin{algorithm}[H]\n\\textbf{Data:} Pilots $\\{\\mathbf{p}_{k}\\}_{k \\in \\mathcal{K}}$ (used in UL-1 and UL-3) and $\\{\\mathbf{p}_{g}\\}_{g \\in \\mathcal{G}}$ (used in DL). \n\n\\textbf{Initialization:} Combiners $\\{\\v_{k}\\}_{k \\in \\mathcal{K}}$.\n\n\\textbf{Until} a predefined termination criterion is satisfied, \\textbf{do:}\n\\begin{itemize}\n\\item[1)] \\textbf{UL-1:} Each UE~$k$ transmits~$\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-1}}}$ in $\\eqref{eq:X_k_ul1}$; each BS~$b$ receives $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-1}}}$ in \\eqref{eq:Y_b_ul1}.\n\\item[2)] \\textbf{UL-3:} Each UE~$k$ transmits~$\\mathbf{X}_{k}^{\\textnormal{\\tiny{UL-3}}}$ in $\\eqref{eq:Disultx3}$; each BS~$b$ receives~$\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ in \\eqref{eq:Disulrx3}.\n\\item[3)] Each BS~$b$ computes the precoders $\\{\\mathbf{w}_{b,g}\\}_{g \\in \\mathcal{G}}$ in \\eqref{eq:bsbflocal} and updates them as in \\eqref{eq:w_bk_i}. 
\n\\item[4)] \\textbf{DL:} Each BS~$b$ transmits a superposition of the pilots $\\{\\mathbf{p}_{g}\\}_{g \\in \\mathcal{G}}$ after precoding them with the corresponding precoders $\\{\\mathbf{w}_{b,g}\\}_{g \\in \\mathcal{G}}$ as in \\eqref{eq:X_b_dl}.\n\\item[5)] Each UE~$k$ computes its combiner $\\v_{k}$ as in \\eqref{eq:rxmmse}.\n\\end{itemize}\n\\textbf{End}\n\\caption{(Best-Response Distributed Precoding Design)} \\label{alg:disNW}\n\\end{algorithm} \\vspace{-5mm}\n\\end{figure}\n\n\n\\subsection{Distributed Precoding Design with Group-Specific Pilots} \\label{sec:distr_GS}\n\nThe best-response distributed precoding design discussed above relies on UE-specific effective uplink channel estimation, which requires the transmission of $K$ orthogonal pilots at each bi-directional training iteration. In the case of scarce training resources, we propose a distributed precoding design based solely on group-specific pilots. This method is obtained by replacing the UE-specific effective uplink channel estimation with its group-specific counterpart described in Section~\\ref{sec:SM_est}. Consequently, the minimum number of uplink pilot symbols required to obtain $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}}$ in \\eqref{eq:Y_b_ul2} and $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ in \\eqref{eq:Disulrx3} with orthogonal pilots at each bi-directional training iteration is $2G$. In this case, $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-2}}}$ and $\\mathbf{Y}_{b}^{\\textnormal{\\tiny{UL-3}}}$ are used to compute $\\mathbf{w}_{b,g}^{\\textnormal{\\tiny{dis-gs}}}$ in \\eqref{eq:bsbflocalgs} at the top of the page, which replaces $\\mathbf{w}_{b,g}^{\\star}$ in \\eqref{eq:w_bk_i}. 