diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeytf" "b/data_all_eng_slimpj/shuffled/split2/finalzzeytf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeytf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nUnsupervised dimensionality reduction is a key step in many applications, including visualization \\cite{maaten2008visualizing} \\cite{mcinnes2018umap}, clustering \\cite{cohen2015dimensionality} \\cite{niu2011dimensionality}, and preprocessing for downstream supervised learning \\cite{pechenizkiy2004pca}. Principal Component Analysis (PCA) is one well-known technique for dimensionality reduction, which notably makes no assumptions about the ordering of the samples in the data matrix $X \\in \\RR^{N \\times D}$. Multivariate Singular Spectrum Analysis (MSSA) \\cite{hassani2013multivariate} is an extension of PCA for time series data, which been successfully applied in applications like signal decomposition and forecasting \\cite{hassani2009forecasting} \\cite{mahmoudvand2015forecasting} \\cite{patterson2011multivariate}. In MSSA, each row is read at a certain time step, and thus is influenced by the ordering of the samples. MSSA works primarily by identifying key oscillatory modes in a signal, which also makes it useful as a general-purpose signal denoiser. However, MSSA (like PCA, upon which it is based) is limited to finding the principal components that capture the maximal variance in the data. In situations where the information of interest explains little overall variance, these methods fail to reveal it. Recently, extensions like contrastive PCA (cPCA) \\cite{abid2018exploring, zou2013contrastive,ge2016rich} have shown that utilizing a background dataset $Y \\in \\RR^{M \\times D}$ can help better discover structure in the foreground (target) $X$ that is of interest to the analyst.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\columnwidth]{diagram}\n \\caption{Schematic illustrating the relations among PCA, cPCA, MSSA, and cMSSA.}\n \\label{fig:diagram}\n\\end{figure}\n\nContrastive Multivariate Singular Spectrum Analysis (cMSSA) generalizes cPCA and applies it to time series data. Figure~\\ref{fig:diagram} visualizes the relationships between the four methods. As a contrastive method, cMSSA emphasizes salient and unique sub-signals in time series data rather than just the sub-signals comprise the majority of the structure. So while standard MSSA is useful for denoising a signal, cMSSA additionally ``denoises'' signals of structured but irrelevant information.\n\n\\section{Contrastive Multivariate Singular Spectrum Analysis}\\label{cmssa}\n\n\\textbf{Standard MSSA}\nConsider a centered one-channel times series $\\mathbf{x} \\in \\RR^T$. We construct a Hankel matrix $H_\\mathbf{x} \\in \\RR^{T' \\times W}$ with window size $W$ as follows:\n\\[\nH_\\mathbf{x} = \n\\begin{pmatrix}\nx_1 & x_2 & \\ldots & x_W \\\\\nx_2 & x_3 & \\ldots & x_{W+1} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nx_{T'} & x_{T'+1} & \\ldots & x_T \\\\\n\\end{pmatrix}\n\\]\nwhere $T' = T-W+1$.\nTo extend to the multivariate case, let $X \\in \\RR^{T \\times D}$ be a $D$-channel time series that runs for $T$ steps. We construct the Hankelized matrix $H_X$ with window $W$ by horizontally concatenating the per-channel Hankel matrices into a $T'$-by-$DW$ matrix:\n$H_X = [H_{\\mathbf{x}^{(1)}} ; H_{\\mathbf{x}^{(2)}} ; \\ldots ; H_{\\mathbf{x}^{(D)}}]$. 
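As a concrete illustration of the Hankelization step, the construction of $H_X$ from a $T \times D$ array can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the authors' implementation, and the function name \texttt{hankelize} is ours.
\begin{verbatim}
import numpy as np

def hankelize(X, W):
    """Build the T'-by-DW Hankelized matrix of a D-channel series X of shape (T, D)."""
    T, D = X.shape
    Tp = T - W + 1                     # T' = T - W + 1
    blocks = [np.column_stack([X[t:t + Tp, d] for t in range(W)])   # per-channel Hankel
              for d in range(D)]
    return np.hstack(blocks)           # horizontal concatenation, shape (T', D * W)
\end{verbatim}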
Next we compute the covariance matrix $C_X \\in \\RR^{DW\\times DW}$ for $H_X$. The next step is to perform the eigendecomposition on $C_X$, yielding $DW$ eigenvectors. Of these we take the top $K$ vectors with the largest corresponding eigenvalues. We denote $\\mathbf{e}^{(k)}$ as the eigenvector with the $k$th largest eigenvalue. We collect the vectors into a matrix $E \\in \\RR^{DW \\times K}$.\n\nTo transform our original time series $X$, we have two options: (a) Project $X$ into the principal component (PC) space defined by $E$:\n$A = H_X E$ or (b) use $A$ to compute the $k$th reconstructed component (RC) $R^{(k)}$ as done in the SSA literature:\n\n\\[\nR^{(k)}_{tj} = \\frac{1}{W_t} \\sum^{U_t}_{t' = L_t} A_{t-t'+1, k} \\cdot \\mathbf{e}^{(k)}_{(j-1)W + t'}\n\\]\n\nwhere $L_t = \\max(1, t-T+W)$, $U_t = \\min(t, W)$, and $W_t = U_t - L_t + 1$. The rows of $R^{(k)}$ are indexed by time $t \\in \\{1,\\ldots,T\\}$ and the columns by channel $j \\in \\{1,\\ldots,D\\}$. Summing up the reconstructed components reproduces a denoised version of the original signal. For our purposes, we opt instead to take the horizontal concatenation of the reconstructed components as the second transform:\n$R = [R^{(1)} ; R^{(2)} ; \\ldots ; R^{(K)}].$\nTo handle multiple time series, one simply vertically stacks each Hankelized matrix. The algorithm proceeds identically from there.\n\n\\bigskip\n\\noindent \\textbf{Contrastive MSSA}\nThe modification to MSSA we introduce is via a new variable $\\alpha \\geq 0$ we call the \\emph{contrastive} hyperparameter. We construct $H_Y$ for another $D$-channel times series $Y$ (the background data) via the same process. It is not required that $X$ and $Y$ run for the same number of time steps, only that their channels are aligned. We compute a contrastive covariance matrix $C = C_X - \\alpha C_Y$ and perform the eigendecomposition on $C$ instead of $C_X$. 
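Putting the steps above together, the core cMSSA computation (Hankelization, contrastive covariance, truncated eigendecomposition, and projection onto the PC space) can be sketched as follows. This is a minimal illustration that reuses the \texttt{hankelize} sketch above rather than the authors' code; setting $\alpha = 0$ recovers standard MSSA.
\begin{verbatim}
import numpy as np

def cmssa(X, Y, W, K, alpha):
    """Return the top-K eigenvectors E and the PC projection A = H_X E."""
    HX, HY = hankelize(X, W), hankelize(Y, W)
    CX = np.cov(HX, rowvar=False)            # foreground covariance, (DW x DW)
    CY = np.cov(HY, rowvar=False)            # background covariance, (DW x DW)
    C = CX - alpha * CY                      # contrastive covariance
    vals, vecs = np.linalg.eigh(C)
    E = vecs[:, np.argsort(vals)[::-1][:K]]  # eigenvectors with the K largest eigenvalues
    A = HX @ E                               # transform (a): projection into the PC space
    return E, A
\end{verbatim}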
The intuition for this is that by subtracting out a portion of the variance in $Y$, the remaining variance in $X$ is likely to be highly specific to $X$ but not $Y$.\nThis is the key additional mechanism behind cMSSA --- if $\\alpha = 0$, then no contrast is performed, and cMSSA reduces down to just MSSA.\n\n\\begin{algorithm}[htb]\n\\caption{Spectral $\\alpha$-Search\n}\n\\label{algo:gen_alpha}\n\\begin{algorithmic}[1]\n\n\\Require Minimum $\\alpha$ to consider $\\alpha_{\\min}$, maximum $\\alpha$ to consider $\\alpha_{\\max}$, number of $\\alpha$s to consider $n$, number of $\\alpha$s to return $m$, foreground signal $X$, background signal $Y$, window $W$, and number of components $K$.\n\n\\Procedure{}{}\n \\Let{$Q$}{\\textsc{LogSpace}($\\alpha_{\\min}$, $\\alpha_{\\max}$, $n$) $\\cup \\{0\\}$}\n \\For{$\\alpha^{(i)} \\in Q$}\n \\Let{$H_X$, $H_Y$}{\\textsc{Hankel}($X$, $W$), \\textsc{Hankel}($Y$,$W$)}\n \\Let{$C_X$, $C_Y$}{\\textsc{Cov}($H_X$), \\textsc{Cov}($H_Y$)}\n \\Let{$E^{(i)}$}{\\textsc{EigenDecomp}($C_X - \\alpha^{(i)}C_Y$, $K$)}\n \\EndFor\n \\Let{$S$}{\\textsc{Empty}($\\RR^{n+1 \\times n+1}$)}\n \\For{$i \\in \\{1, \\ldots, n+1\\}$, $j \\in \\{i, \\ldots, n+1\\}$}\n \\Let{$S_{i,j}, S_{j,i}$}{$\\left\\lVert {E^{(i)}}^T E^{(j)} \\right\\rVert_*$}\n \\EndFor\n \\Let{$Z$}{\\textsc{SpectralCluster}($S$, $Q$, $m$)}\n \\Let{$Q^*$}{\\{0\\}}\n \\For{$z \\in Z$}\n \\If{$0 \\notin z$}\n \\Let{$\\alpha^*$}{\\textsc{ClusterMediod}($z$, $S$)}\n \\Let{$Q^*$}{$Q^* \\cup \\{\\alpha^*\\}$}\n \\EndIf\n \\EndFor\n \\\\\n \\Return{$Q^*$, set of $m$ best $\\alpha$s, including zero.}\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\nThe choice of $\\alpha$ is non-trivial. Algorithm~\\ref{algo:gen_alpha} outlines a routine for auto-selecting a small number of promising values for $\\alpha$. Because cMSSA is designed to assist data exploration, Algorithm~\\ref{algo:gen_alpha} uses spectral clustering to identify a diverse set of $\\alpha$ values corresponding to diverse eigenspaces and representations of $X$. The procedure works by first generating a large number of $\\alpha$s spread evenly in log-space. For each candidate $\\alpha$, we use cMSSA to compute its corresponding eigenvector matrix $E$. The procedure then performs spectral clustering, which requires a pairwise distance matrix as input. The distance metric used takes the nuclear norm of the matrix computed by multiplying the eigenvector matrices $E$ for any pair of $\\alpha$s. After specifying the number of clusters desired, we take the mediod $\\alpha$ of each cluster and return them as output. We always include 0 in this set, as the analyst may want to perform their analysis without contrast as control.\n\n\\section{Experiments}\\label{experiements}\n\n\\textbf{Synthetic example}\nTo illustrate cMSSA, we present a simple synthetic example. We generate an artificial one-channel signal $Y$ by sampling 500 sinusoids with different frequencies, amplitudes, phases, and vertical shifts. White Gaussian noise sample from $\\mathcal{N}(0,1)$ is added in as well. We generate $X$ in the same manner, but add in a very specific sub-signal (Figure~\\ref{fig:syn_sub}) that has comparatively low variance compared to the whole time series. The signals $X$ and $Y$ are generated independently as to rule out simple signal differencing as an explanation. We take $X$ as foreground and $Y$ as background.\n\nWe set $W=100$, $\\alpha=2$, and use only the top $K=2$ RCs. 
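For reference, foreground and background signals of this kind can be generated along the following lines. The series length, amplitude ranges, and the particular low-amplitude sub-signal are our own illustrative choices, since the text does not fix them.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 2000                                    # series length (illustrative choice)
t = np.arange(T)

def sinusoid_mixture(n=500):
    """Sum of n random sinusoids plus N(0, 1) white Gaussian noise."""
    freq = rng.uniform(1e-3, 0.5, n)
    amp = rng.uniform(0.5, 2.0, n)
    phase = rng.uniform(0.0, 2.0 * np.pi, n)
    shift = rng.uniform(-1.0, 1.0, n)
    s = sum(a * np.sin(2.0 * np.pi * f * t + p) + v
            for f, a, p, v in zip(freq, amp, phase, shift))
    return s + rng.normal(0.0, 1.0, T)

Y = sinusoid_mixture()                           # background signal
sub = 0.5 * np.sin(2.0 * np.pi * 0.01 * t)       # low-variance sub-signal (our choice)
X = sinusoid_mixture() + sub                     # independent foreground plus sub-signal
# X[:, None] and Y[:, None] can then be fed to the cmssa sketch above.
\end{verbatim}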
Fig.~\\ref{fig:syn_exp} displays the reconstructions computed by MSSA versus cMSSA, alongside the sub-signal that was injected into $X$. Specifically, we see that the cMSSA reconstruction shown in Fig.~\\ref{fig:syn_x_rcs_contrast} yields a noisy approximation of the sub-signal of interest, Fig.~\\ref{fig:syn_sub}. The variance of the noise here is comparable to the variance of the sub-signal---more noise would eventually overpower cMSSA's ability to extract the sub-signal.\n\n\\begin{figure}[htb]\n\\centering\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_sub}\n \\caption{Sub-signal specific to the foreground data $X$, which is of much lower amplitude than the other sinusoidal sub-signals in $X$.}\n \\label{fig:syn_sub}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_x_rcs}\n \\caption{Without contrast, the reconstructed time series consists of the high-amplitude sinusoidal sub-signals in $X$.}\n \\label{fig:syn_x_rcs}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{syn_x_rcs_contrast}\n \\caption{With contrast, the reconstructed time series is able to identify the unique sub-signal in $X$.}\n \\label{fig:syn_x_rcs_contrast}\n\\end{subfigure}\n\n\\caption{Results of a synthetic experiment that demonstrates that cMSSA is able to identify unique sub-signals in a time series, even when they are of much lower amplitude than background components.}\n\n\\label{fig:syn_exp}\n\\end{figure}\n\n\\bigskip\n\\noindent \\textbf{Clustering of electrocardiograms}\nThe data used in this experiment is taken from the public MHEALTH dataset \\cite{banos2014mhealthdroid}. In the dataset, 10 individuals were asked to perform 12 physical activities as several sensors recorded varied motion data. The researchers also collected two-lead electrocardiogram (ECG) readings, which we take as dual-channel time series data. In addition to the 12 activities, there is a 13th \\textsc{NULL} class that represents ECG signals collected between each activity but which don't have labels themselves. To increase the number of individual time series, we partition each one in half.\n\nFor our experiments, the foreground data are all time series labelled as either \\textsc{Jogging}, \\textsc{Running}, \\textsc{Jumping}, or \\textsc{Cycling}, 20 time series each for a total of 80. These four, being the more cardio-intensive of the 12, had much more signal activity that needed to be sifted through, exactly the type of environment cMSSA is intended to handle. For background data, we take all 272 time series belonging to the \\textsc{NULL} class.\n\nTo evaluate the effectiveness of cMSSA over its non-contrastive counterpart, we run both cMSSA and MSSA with a variety of hyperparameter settings. For each fitted model, we transform the foreground data to both the PC and RC spaces. Once the transformations are obtained, we perform spectral clustering into 4 clusters and compare the resulting clusters to the activity labels on the time series data, which were hitherto withheld from the algorithms. There are 3 hyperparameters: the window size $W \\in \\{8, 16, 32, 64, 128\\}$, the number of desired components $K \\in \\{1,2,4,6,8,10,12,14,16,18,20\\}$, and the contrastive parameter $\\alpha$. We use a given value of $K$ only if it is less than or equal to $DW$ (where $D=2$ in this case). 
For $\\alpha$, we used our automatic routine to compute five key values to try for each setting of $W$ and $K$. For each run of the routine, a total of 300 candidate $\\alpha$s were considered, with the minimum and maximum $\\alpha$s being $10^{-3}$ and $10^{3}$, respectively. Of the five ultimately returned, one was zero, representing standard MSSA. Altogether, we run 530 experiments, 106 of which are standard MSSA, and the remaining cMSSA.\n\nThe spectral clustering requires an affinity matrix $S \\in \\RR^{N \\times N}$ which contains the similarities between any pair of time series, where $N$ is the number of time series we wish to cluster. Let $X^{(i)}$ and $X^{(j)}$ be two time series. Using the FastDTW metric \\cite{salvador2007toward} with a Euclidean norm\\footnote{FastDTW is not a symmetric metric, so we take the minimum between the two orderings of the operands.}, we define the similarity $S_{ij}$ to be\n$\n\\frac{1}{\\textsc{FastDTW}(X^{(i)}, X^{(j)}) + 1}.\n$\nThe cluster evaluation uses the well-rounded BCubed metric \\cite{amigo2009comparison} to compute the precision, recall, and F1 harmonic mean for a particular cluster prediction. We also perform the evaluation in the model-free sense where we simply cluster the time series with no transformation as a basic baseline.\n\n\\begin{table}[htb]\n \\caption{Best cMSSA and MSSA results in terms of maximum F1 score. Model-free clustering baseline also included. For the best MSSA and cMSSA models (with $\\alpha$ automatically selected via Algorithm 1), PC transform outperformed RC transform. Best result per metric (precision, recall, or F1) is bolded.}\n \\label{tab:mhealth_best}\n \\centering\n \\begin{tabular}{l | c c c}\n \\toprule\n Model &\n $W$ &\n $K$ &\n P \/ R \/ F1 \\\\\n \\midrule\n \n Model-free & - & - & 50.49 \/ 48.82 \/ 49.54 \\\\\n MSSA & 16 & 16 & 57.67 \/ 64.63 \/ 60.95 \\\\\n cMSSA ($\\alpha = 12.41$) & 128 & 1 & \\textbf{65.44} \/ \\textbf{75.88} \/ \\textbf{70.27} \\\\\n \n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nTable~\\ref{tab:mhealth_best} reports the best representative contrastive and non-contrastive models, comparing both to the model-free baseline. We observe a number of things. First, both MSSA and cMSSA outperform the model-free baseline. Second, cMSSA posts gains of roughly 8 to 11 points over MSSA in each of precision, recall, and F1. Third, for both methods, using $A$ rather than $R$ as the transform yielded better results.\nFinally, of the $DW$ number of PCs available, MSSA gets its best performance using half (16 out of 32), while cMSSA only uses one PC out of the maximum of 256 available. This highlights an interesting efficiency of cMSSA. By filtering out unnecessary components, the remaining ones not only account for less of the overall signal variance, but also provide diminishing returns with each subsequent component used.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\columnwidth]{f1_vs_F1_A}\n \\caption{Plot of paired F1 scores. Each point is for a particular setting of $W$ and $K$. The contrastive F1 score used is the maximum of the four runs (one per automatically selected $\\alpha$) for that setting of the hyperparameters. $x=y$ line drawn as guidance. Only runs where the transform used is $A$ are shown.}\n \\label{fig:f1_vs_F1_A}\n\\end{figure}\n\nFigure~\\ref{fig:f1_vs_F1_A} shows a more granular view of the general gains to be had from using cMSSA. For a particular setting of $W$ and $K$, we plot the F1 score for the non-contrastive case vs the contrastive case. 
Due to the four values of $\\alpha$s used in the contrastive case, we take the model that had the greatest F1. Points below the diagonal line mean that the contrast was useful for a particular setting of the hyperparameters.\n\n\\begin{figure}[htb]\n\\begin{subfigure}{0.46\\columnwidth}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_cycling_k16_no_contrast_best_compare}\n \\label{fig:random_cycling_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jumping_k16_no_contrast_best_compare}\n \\label{fig:random_jumping_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jogging_k16_no_contrast_best_compare}\n \\label{fig:random_jogging_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_running_k16_no_contrast_best_compare}\n \\label{fig:random_running_k16_no_contrast_best_compare}\n\\end{subfigure}\n\\end{subfigure}\n\\qquad\n\\begin{subfigure}{0.46\\columnwidth}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_cycling_k1_contrast_best_compare}\n \\label{fig:random_cycling_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jumping_k1_contrast_best_compare}\n \\label{fig:random_jumping_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_jogging_k1_contrast_best_compare}\n \\label{fig:random_jogging_k1_contrast_best_compare}\n\\end{subfigure}\n\\begin{subfigure}{\\columnwidth}\n \\includegraphics[width=\\columnwidth]{random_running_k1_contrast_best_compare}\n \\label{fig:random_running_k1_contrast_best_compare}\n\\end{subfigure}\n\\end{subfigure}\n\n\\caption{Reconstructed time series after performing MSSA ($W=16$, $K=16$) and cMSSA ($W=128$, $K=1$, $\\alpha = 12.41$). Each row is the same random signal reconstructed using MSSA (left) and cMSSA (right), one row per activity. The two colors correspond to the dual recording channels. \n}\n\\label{fig:random_compare}\n\\end{figure}\n\nFinally, Figure~\\ref{fig:random_compare} shows a visual comparison of MSSA versus cMSSA, using their respective hyperparameters settings as shown in Table~\\ref{tab:mhealth_best}. Each row depicts how a random signal is processed with contrast on or off. We immediately see that cMSSA finds simpler signals than those found by MSSA. In the case of MSSA, the processed signals do not look substantially different from the originals. This is due to the fact that the high variance signals are shared across activities, so MSSA favors them during reconstruction. This is not the case with cMSSA, which identifies the differentiating signals that can disambiguate the activities.\n\n\\section{Conclusion}\n\nWe have developed cMSSA, a general tool for dimensionality reduction and signal decomposition of temporal data. By introducing a background dataset, we can efficiently identify sub-signals that are enhanced in one time series data relative to another. In an empirical experiment, we find that for virtually any setting of the hyperparameters, cMSSA is more effective at unsupervised clustering than MSSA, contingent on appropriate choices for the foreground and background data. It is worth emphasizing that cMSSA is an unsupervised learning technique. 
It does \\emph{not} aim to discriminate between time series signals, but rather discover structure and sub-signals within a given time series more effectively by using a second time series as background. This distinguishes it from various discriminant analysis techniques for time series that are based on spectral analysis \\cite{maharaj2014discriminant}, \\cite{krafty2016discriminant}.\n\nSome basic heuristics should be kept in mind when choosing to use cMSSA. First, the data ideally should exhibit periodic behavior, as MSSA (and by extension, cMSSA) is particularly well suited to finding oscillatory signals. Second, the data of interest $X$ and background $Y$ should not be identical, but should share common structured signal such that the contrast retains some information in the foreground. As an example, the ECG foreground data consisted of subjects performing very specific activities, whereas the background consisted of unlabelled ECG signals in which the participants performed no specific activity. We would expect a good amount of overlap in signal variance, but signals specific to the four activities would be under-represented in the background. Thus contrast is a plausible way to extract this signal.\n\nFinally, we note that the only hyper parameter of cMSSA is the contrast\nstrength, $\\alpha$. In our default algorithm, we developed an automatic\nsubroutine that selects\nthe most informative values of $\\alpha$. The experiments\nperformed used the automatically\ngenerated values. We believe that this default will be sufficient\nin many use cases of cMSSA, but the user may also\nset specific values for $\\alpha$ if more granular exploration\nis desired.\n\n\n\n\n\\vfill\\pagebreak\n\n\\nocite{*}\n\\bibliographystyle{IEEEbib}\n\\input{Template.bbl}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\IEEEPARstart{C}{ell}-free massive multiple-input multiple-output (MIMO) systems have been proposed to effectively alleviate intercell interference by coordinating a large number of distributed access points (APs), which are connected through fronthaul links to the central processing unit (CPU) \\cite{NAY+17, NAM+17, ZBM+20}. Intelligent reflecting surface (IRS), also known as reconfigurable intelligent surface (RIS), has been considered as one of the prospective multiple antenna technologies for beyond the fifth-generation (5G) networks \\cite{ZBM+20}. By adjusting phase shifts of the IRS elements, the propagation environment can be favorably manipulated.\n\nCell-free MIMO systems powered by IRSs have been recently introduced to further enhance the performance of cell-free MIMO systems at low and affordable cost and energy consumption by integrating multiple IRSs with the cell-free MIMO systems. The existing works, which studied the active and passive beamforming design, focused on the instantaneous performance metrics, i.e., instantaneous sum-rate \\cite{ZD21, ZDZ+21, HYX+21} or energy efficiency \\cite{ZDZ+21a, LND20X}. Theses works adopted alternating optimization algorithms, in which the active and passive beamformers are derived based on instantaneous channel state information (I-CSI). Thus, these algorithms would incur huge channel acquisition complexity and related pilot overhead since I-CSI is required for all AP-UE, AP-IRS, and IRS-UE links separately. 
Moreover, these algorithms would incur immense computational complexity and enormous fronthaul signaling overhead, since the active and passive beamformers need to be computed many times and transferred over fronthaul links for each coherence time. It is also challenging to control the IRSs in real-time, which requires stringent time synchronization \\cite{ZBM+20}. These disadvantages can be alleviated by designing a passive beamformer based on long-term channel statistics \\cite{HTJ+19, ZWZ+21}.\n\nTo the best of our knowledge, this paper is the first attempt for the cell-free MIMO systems powered by IRSs to consider the max-min achievable rate, where the achievable rate is a lower bound of the average rate. This metric aims to provide uniform performance and thus is widely used in the cell-free MIMO systems \\cite{NAY+17, NAM+17}. Moreover, we propose a novel non-iterative two-timescale algorithm to obtain 1) short-term active precoders for the APs that depend on I-CSI, 2) long-term power allocation for the APs using statistical CSI (S-CSI), and 3) long-term passive beamformers for the IRSs using S-CSI.\n\nThe rest of this paper is organized as follows. In Section~\\ref{sec:model}, we present the system model and formulate the max-min achievable rate optimization problem. In Section \\ref{sec:two-step}, we propose a two-step algorithm to solve the problem. Simulation results are provided to evaluate the proposed algorithm in Section~\\ref{sec:result}. Finally, we conclude the paper in Section \\ref{sec:conclusion}.\n\n\\textit{Notation:} Vectors and matrices are denoted by lower-case and upper-case boldface letters. The notations $\\otimes$ and $\\odot$ denote the Kronecker product and Hadamard product. $|\\cdot|$ and $\\angle(\\cdot)$ return the magnitude and angle of a complex argument. $\\mathbb{E}\\{\\cdot\\}$ represents the expectation operator. For a square matrix $\\bS$, $\\trace(\\bS)$ denotes the trace operation, and $\\bS \\succeq 0$ means that $\\bS$ is positive semidefinite. A random variable $x \\sim \\mathcal{CN}(m, \\sigma^2)$ is circularly symmetric complex Gaussian (CSCG) distributed with mean~$m$ and variance~$\\sigma^2$. \n\t\n\\section{System Model} \\label{sec:model}\n\nWe consider a downlink cell-free MIMO system powered by IRSs as illustrated in Fig. \\ref{fig:model}, where $L$ APs and $R$ IRSs are distributed to cooperatively serve $K$ single-antenna user equipments (UEs). All APs and IRSs are connected by wired or wireless fronthaul links to the CPU, which coordinates them. Each AP is equipped with $M$ antennas, and each IRS is comprised of $N$ passive reflecting elements. \n\n\\begin{figure}[t!] \n\t\\centering\n\t\\includegraphics[width=0.7 \\columnwidth]{fig1_model.eps}\n\t\\caption{A downlink cell-free MIMO system powered by IRSs.} \\label{fig:model}\n\\end{figure}\n\nWe assume that the data symbols for $K$ UEs $\\bs \\in \\mathbb{C}^{K \\times 1}$ are transmitted from all APs \\cite{NAY+17, NAM+17}. The transmit signal from AP~$l$ is given by \n\\begin{align}\n\\bx_l = \\textstyle\\sum_{k=1}^K \\bw_{l,k} s_k, \n\\end{align}\nwhere $\\bw_{l,k} \\in \\mathbb{C}^{M \\times 1}$ and $s_k$ are the active beamforming vector and data symbol for UE $k$. 
Assuming $\\mathbb{E}\\{|s_k|^2\\} = 1$ for all~$k$, the transmit power constraint of AP $l$ can be written as\n\\begin{align}\n\\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\bw_{l,k}}^2 \\} \\le \\bar{P}_l,\n\\end{align}\nwhere $\\bar{P}_l$ denotes the maximum transmit power of AP~$l$.\n\nThe channel between an AP and a UE consists of the direct (AP-UE) channel and the $R$ reflection (AP-IRS-UE) channels.\\footnote{Note that the signals reflected by the IRSs twice or more are weak enough to be neglected due to the harsh propagation loss of multiple hops \\cite{ZD21, ZDZ+21, HYX+21, ZDZ+21a, LND20X}.} Then, the overall channel from AP~$l$ to UE~$k$ can be expressed as\n\\begin{align}\n\\bh_{l,k}^H = \\bd_{l,k}^H + \\textstyle \\sum_{r=1}^R \\bv_{r,k}^H \\boldsymbol{\\Theta}_r \\bG_{l,r}, \\label{eq:overallCh}\n\\end{align}\nwhere $\\bd_{l,k}^H \\in \\mathbb{C}^{1 \\times M}$, $\\bG_{l,r} \\in \\mathbb{C}^{N \\times M}$, and $\\bv_{r,k}^H \\in \\mathbb{C}^{1 \\times N}$ denote the channel from AP~$l$ to UE~$k$, from AP~$l$ to IRS~$r$, and from IRS~$r$ to UE~$k$, respectively. The reflection coefficient matrix of IRS~$r$ is denoted by $\\boldsymbol{\\Theta}_r = \\diag(\\theta_{r,1}, \\cdots, \\theta_{r,N}) \\in \\mathbb{C}^{N \\times N}$, where $|\\theta_{r,n}| = 1, \\forall r,n$ represents the unit-modulus constraint on the IRSs elements.\n\nWe assume the Rician fading channel model for all channels \\cite{ZD21, ZDZ+21, ZDZ+21a, WZ19}. Specifically, the channel between AP~$l$ and UE~$k$ is given by\n\\begin{align}\n\\bd_{l,k} &= \\textstyle \\sqrt{\\xi_{l,k}^\\mathrm{d}} \\sqrt{\\frac{\\beta_\\mathrm{d}}{1+\\beta_\\mathrm{d}}}\\bar{\\bd}_{l,k}' + \\sqrt{\\xi_{l,k}^\\mathrm{d}} \\sqrt{\\frac{1}{1+\\beta_\\mathrm{d}}}\\tilde{\\bd}_{l,k}' \\notag \\\\\n&= \\bar{\\bd}_{l,k} + \\tilde{\\bd}_{l,k}, \\label{eq:ricianCh}\n\\end{align}\nwhere $\\bar{\\bd}_{l,k}'$, $\\tilde{\\bd}_{l,k}'$, and $\\beta_\\mathrm{d}$ denote the line-of-sight (LoS) component, non-line-of-sight (NLoS) component, and Rician K-factor of the channel~$\\bd_{l,k}$, respectively. The channel~$\\bd_{l,k}$ includes the distance-dependent path loss $\\xi_{l,k}^\\mathrm{d}$. The AP-IRS and IRS-UE channels follow the same model as in \\eqref{eq:ricianCh} with proper notation changes. It is assumed that the NLoS components of all AP-UE, AP-IRS, and IRS-UE channels are independent each other, and each NLoS component has independent and identically distributed (i.i.d.) $\\mathcal{CN}(0, 1)$ entries.\n\nThe received signal at UE $k$ can be written as\n\\begin{align}\ny_k &= \\textstyle \\sum_{l=1}^L \\bh_{l,k}^H \\bx_l + z_k \\notag \\\\\n\t&= \\bh_{k}^H \\bw_{k} s_k + \\textstyle \\sum_{k' \\neq k}^K \\bh_{k}^H \\bw_{k'} s_{k'} + z_k,\n\\end{align}\nwhere $\\bh_k = [\\bh_{1,k}^T, \\cdots, \\bh_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$, $\\bw_k = [\\bw_{1,k}^T, \\cdots, \\bw_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$, and $z_k \\sim \\mathcal{CN}(0, \\sigma^2)$ is the i.i.d. complex additive white Gaussian noise (AWGN). To analyze the theoretic performance gain with the IRSs, we assume that the perfect I-CSI of direct and reflection channels is available at the CPU. We also assume that the UE $k$ has the knowledge of the average of effective channel $\\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\}$ and adopt the \\emph{hardening bound}, which is widely used in the massive MIMO literature \\cite{BS20a}. 
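To make the channel model concrete, the composite channel in \eqref{eq:overallCh} for a single AP-UE pair can be assembled as in the NumPy sketch below. The LoS factors are random unit-modulus placeholders rather than physical array responses, and the path-loss values are illustrative assumptions; only the Rician mixing and the reflection structure follow the equations above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, N, R = 8, 32, 4                      # AP antennas, IRS elements, number of IRSs

def rician(shape, beta_dB, pathloss):
    """One Rician channel draw; the LoS factor is a random placeholder."""
    beta = 10.0 ** (beta_dB / 10.0)
    los = np.exp(1j * 2.0 * np.pi * rng.random(shape))
    nlos = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
    return np.sqrt(pathloss) * (np.sqrt(beta / (1 + beta)) * los
                                + np.sqrt(1.0 / (1 + beta)) * nlos)

d = rician((M,), -5.0, 1e-9)                            # AP-UE link
G = [rician((N, M), 5.0, 1e-7) for _ in range(R)]       # AP-IRS links
v = [rician((N,), 5.0, 1e-7) for _ in range(R)]         # IRS-UE links
theta = [np.exp(1j * 2.0 * np.pi * rng.random(N)) for _ in range(R)]   # IRS phases

# Composite channel h^H = d^H + sum_r v_r^H Theta_r G_r
h_H = d.conj() + sum(v[r].conj() @ np.diag(theta[r]) @ G[r] for r in range(R))
\end{verbatim}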
Then, the achievable rate of UE $k$ is $\\log_2(1+\\mathsf{SINR}_k)$, where the effective signal-to-interference-plus-noise ratio (SINR) of UE~$k$ is given by\n\\begin{align}\n\\mathsf{SINR}_k = \\frac{ | \\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\} |^2 }\n{ \\textstyle \\sum_{k'=1}^K \\mathbb{E}\\{ |\\bh_{k}^H \\bw_{k'}|^2 \\} - | \\mathbb{E}\\{ \\bh_{k}^H \\bw_{k}\\} |^2 + \\sigma^2}. \\label{eq:SINR}\n\\end{align}\n\nIn this paper, we aim to maximize the minimum achievable rate by jointly designing active and passive beamformers subject to the per AP transmit power constraint and unit-modulus constraint on the IRSs elements. This optimization problem can be formulated as\n\\begin{alignat}{3}\n& \\max_{ \\{\\bw_{k}\\}, \\{\\boldsymbol{\\Theta}_r\\} } ~ && \\min_{k} \\log_2(1+\\mathsf{SINR}_k) \\label{eq:prbA} \\\\\n&~ \\quad~~ \\mathrm{s.t.} &&~ C_1: \\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\bw_{l,k}}^2 \\} \\le \\bar{P}_l, && ~~\\forall l, \\notag \\\\\n&&&~ C_2: |\\theta_{r,n}| = 1, && ~~\\forall r, n, \\notag\n\\end{alignat}\nwhere $\\{\\bw_{k}\\}$ and $\\{\\boldsymbol{\\Theta}_r\\}$ represent the active and passive beamformers, respectively. The joint optimization of the problem \\eqref{eq:prbA} is very challenging since the active and passive beamformers are tightly coupled.\n\n\n\n\\section{Proposed Two-Step Algorithm} \\label{sec:two-step}\nIn this section, we propose a suboptimal two-step algorithm to solve the problem \\eqref{eq:prbA}. We first design an active beamforming technique, which consists of active precoding and power allocation. Then, we design a passive beamforming technique based on S-CSI. Finally, we summarize the proposed algorithm.\n\n\n\n\\subsection{Active Beamforming Design} \nWe decompose an active beamformer into a short-term active precoder and long-term power allocation to reduce computational complexity and fronthaul signaling overhead. We considered a zero-forcing (ZF) precoder since it shows better max-min rate performance than conjugate beamforming precoder in cell-free MIMO systems \\cite{NAM+17}. Moreover, the ZF precoder eliminates inter-user interference, and thus it makes the optimal power allocation simple.\n\nWe can express the received signal $\\by$ for $K$~UEs as\n\\begin{align}\n\\by = \\bH^H\\bW\\bs + \\bz,\n\\end{align}\nwhere $\\by = [y_1, \\cdots, y_K]^T \\in \\mathbb{C}^{K \\times 1}$, $\\bH = [\\bh_1, \\cdots, \\bh_K] \\in \\mathbb{C}^{LM \\times K}$, $\\bW = [\\bw_1, \\cdots, \\bw_K] \\in \\mathbb{C}^{LM \\times K}$, and $\\bz = [z_1, \\cdots, z_K]^T \\in \\mathbb{C}^{K \\times 1}$.\nThe active beamformer can be set as $\\bW = \\widetilde{\\bW} \\bP^{\\frac{1}{2}}$ with the ZF precoder $\\widetilde{\\bW} = \\bH \\left( \\bH^H\\bH \\right)^{-1}$, where the related condition $LM \\ge K$ can be easily fulfilled in the cell-free MIMO systems \\cite{NAM+17}. The long-term power allocation $\\bP = \\diag(p_1, \\cdots, p_K) \\in \\mathbb{C}^{K \\times K}$ is applied to all APs.\n\n\nWith the ZF precoder and long-term power allocation, the effective SINR \\eqref{eq:SINR} is simply reduced to $\\frac{p_k}{\\sigma^2}$. 
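The following short NumPy sketch illustrates why the ZF precoder removes inter-user interference, so that the effective SINR indeed reduces to $p_k/\sigma^2$; the aggregated channel matrix here is a random stand-in for $\mathbf{H}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
L, M, K = 4, 8, 4
LM = L * M

H = (rng.standard_normal((LM, K)) + 1j * rng.standard_normal((LM, K))) / np.sqrt(2.0)
W_tilde = H @ np.linalg.inv(H.conj().T @ H)     # ZF precoder, (LM x K)
p = np.ones(K)                                  # long-term per-UE power allocation
W = W_tilde @ np.diag(np.sqrt(p))               # W = W_tilde P^{1/2}

# Effective channel H^H W is diagonal: no inter-user interference remains,
# hence SINR_k = p_k / sigma^2 as stated above.
print(np.allclose(H.conj().T @ W, np.diag(np.sqrt(p))))   # True
\end{verbatim}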
With a fixed passive beamformer, the problem \\eqref{eq:prbA} boils down to the long-term power allocation problem as\n\\begin{alignat}{3}\n& \\cP_1: && \\max_{ \\bP } ~ && \\min_{k} \\frac{p_k}{\\sigma^2} \\label{eq:prb1} \\\\\n&&&~~ \\mathrm{s.t.} &&~ C_1: \\textstyle \\sum_{k=1}^K p_k \\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\} \\le \\bar{P}_l, ~\\forall l, \\notag\n\\end{alignat} \nwhere $\\widetilde{\\bW} = [\\tilde{\\bw}_1, \\cdots, \\tilde{\\bw}_K] \\in \\mathbb{C}^{LM \\times K}$ and $\\tilde{\\bw}_k = [\\tilde{\\bw}_{l,k}^T, \\cdots, \\tilde{\\bw}_{L,k}^T]^T \\in \\mathbb{C}^{LM \\times 1}$.\n\n\nThe objective function in the problem \\eqref{eq:prb1} forces the power allocation for all UEs to be the same\\footnote{Note that the instantaneous transmit power for UE~$k$ at AP~$l$ is equal to $p_k\\norm{\\tilde{\\bw}_{l,k}}^2$, and thus the actual transmit power is different per UE.}, i.e., $p_1 = \\cdots = p_K = p^{\\mathrm{opt}}$. Under the typical condition that $\\bar{P}_1 = \\cdots = \\bar{P}_L = \\bar{P}$ with a fixed $\\bar{P}$, the optimal power allocation $p^{\\mathrm{opt}}$ is determined by the AP that consumes the largest power for the active precoder, i.e., $\\max_l \\textstyle \\sum_{k=1}^K \\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\}$. As the largest power for the active precoder reduces, the optimal power allocation increases, and thus the minimum achievable rate improves accordingly.\n\n\n\\subsection{Passive Beamforming Design} \nBased on the proposed active beamforming design, we find that the passive beamformers are irrelevant to the objective function and only related to the transmit power constraint and unit-modulus constraint in the problem (7). Therefore, we can design a long-term passive beamformer to minimize the largest power for the active precoder. The corresponding optimization problem can be formulated as\n\\begin{alignat}{2}\n& \\min_{ \\boldsymbol{\\theta} } ~ && \\max_{l} \\textstyle \\sum_{k=1}^K\\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\} \\\\\n&~~ \\mathrm{s.t.} &&~ C_2: |\\theta_{r,n}| = 1, ~\\forall r, n, \\notag\n\\end{alignat}\nwhere $\\boldsymbol{\\theta} = \\boldsymbol{\\Theta}^H\\bone_{RN} \\in \\mathbb{C}^{RN \\times 1}$ and $\\boldsymbol{\\Theta} = \\diag(\\boldsymbol{\\Theta}_1, \\cdots, \\boldsymbol{\\Theta}_R) \\in \\mathbb{C}^{RN \\times RN}$.\n\nTo the best of our knowledge, there is no closed-form expression of $\\mathbb{E}\\{ \\norm{\\tilde{\\bw}_{l,k}}^2 \\}$ in terms of the long-term passive beamformer~$\\boldsymbol{\\theta}$. It is worth noting, however, that the transmit power reduces as the channel gain increases \\cite{WZ19}. Considering this fact, we propose a suboptimal optimization problem that maximizes the minimum average channel gain by passive beamforming at the IRSs as \n\\begin{alignat}{2}\n& \\max_{ \\boldsymbol{\\theta} } ~ && \\min_{k} \\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\} \\label{eq:prbB} \\\\\n&~~ \\mathrm{s.t.} &&~ C_2: |\\theta_{r,n}| = 1, ~\\forall r, n. \\notag\n\\end{alignat}\t\t\n\nBy exploiting S-CSI, the average channel gain of UE~$k$ can be expressed as an explicit function of~$\\boldsymbol{\\theta}$ that is given as\n\\begin{align}\n\\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\}\n= \\boldsymbol{\\theta}^H \\bA_k \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_k + \\bb_k^H \\boldsymbol{\\theta} + c_k, \\label{eq:avChGain}\n\\end{align}\nwhere $\\bA_k$, $\\bb_k$, and $c_k$ are defined in Appendix A. 
Since $\\bA_k~\\succeq~0$, the average channel gain is a convex function of~$\\boldsymbol{\\theta}$. However, the problem \\eqref{eq:prbB} is a non-convex optimization problem since the objective function is not a concave function of~$\\boldsymbol{\\theta}$, and the unit-modulus constraint is not a convex set.\n\nWe apply semidefinite relaxation (SDR) to convert the non-convex problem \\eqref{eq:prbB} to a convex problem \\cite{NAK+20}. At first, by introducing an auxiliary variable~$q$, the average channel gain of UE~$k$ can be rewritten as \n\\begin{align}\n\\bar{\\boldsymbol{\\theta}}^H \\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} + c_k,\n\\end{align}\t\nwhere $\\bar{\\boldsymbol{\\theta}} = \\begin{bmatrix} \\boldsymbol{\\theta} \\\\ q \\end{bmatrix}$ and $\\boldsymbol{\\Psi}_k = \\begin{bmatrix} \\bA_k & \\bb_k \\\\ \\bb_k^H & 0 \\end{bmatrix} \\succeq 0$. Note that $\\bar{\\boldsymbol{\\theta}}^H \\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} = \\trace(\\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\theta}} \\bar{\\boldsymbol{\\theta}}^H)$. We define $\\bar{\\boldsymbol{\\Theta}} = \\bar{\\boldsymbol{\\theta}} \\bar{\\boldsymbol{\\theta}}^H$, where $\\bar{\\boldsymbol{\\Theta}} \\succeq 0$ and $\\mathrm{rank}(\\bar{\\boldsymbol{\\Theta}})=1$. By relaxing the rank-one constraint on $\\bar{\\boldsymbol{\\Theta}}$, which is non-convex, the problem \\eqref{eq:prbB} can be reformulated as \n\\begin{alignat}{3}\n& \\cP_2: && \\max_{ \\bar{\\boldsymbol{\\Theta}} } ~ && \\min_{k} \\trace(\\boldsymbol{\\Psi}_k \\bar{\\boldsymbol{\\Theta}}) + c_k \\label{eq:prb2} \\\\\n&&&~~ \\mathrm{s.t.} &&~ [\\bar{\\boldsymbol{\\Theta}}]_{i,i} = 1 , i = 1, \\ldots, RN+1, \\notag \\\\\n&&&&&~ \\bar{\\boldsymbol{\\Theta}} \\succeq 0. \\notag\n\\end{alignat}\n\nSince the problem \\eqref{eq:prb2} is a convex semidefinite program (SDP), it can be efficiently solved by existing convex optimization solvers. If the optimal $\\bar{\\boldsymbol{\\Theta}}^{\\mathrm{opt}}$ is a rank-one matrix, then the optimal $\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ is derived by taking the eigenvector corresponding to the maximum eigenvalue of $\\bar{\\boldsymbol{\\Theta}}^{\\mathrm{opt}}$. Otherwise, Gaussian randomization is applied to find $\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ \\cite{NAK+20}. 
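A minimal sketch of solving the relaxed problem $\mathcal{P}_2$ with CVXPY is given below, assuming the matrices $\boldsymbol{\Psi}_k$ and constants $c_k$ have already been formed from the S-CSI. A solver with complex SDP support (such as SCS) is assumed, and the simple leading-eigenvector extraction stands in for the Gaussian randomization step.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_P2(Psi, c):
    """SDR sketch for P2; Psi is a list of (RN+1)x(RN+1) Hermitian matrices."""
    n = Psi[0].shape[0]
    Theta = cp.Variable((n, n), hermitian=True)
    t = cp.Variable()
    cons = [Theta >> 0, cp.diag(Theta) == 1]
    cons += [cp.real(cp.trace(P @ Theta)) + ck >= t for P, ck in zip(Psi, c)]
    cp.Problem(cp.Maximize(t), cons).solve(solver=cp.SCS)
    # Rank-one factor from the leading eigenvector; Gaussian randomization
    # would be applied here if the solution is not (close to) rank one.
    vals, vecs = np.linalg.eigh(Theta.value)
    theta_bar = np.sqrt(vals[-1]) * vecs[:, -1]
    return theta_bar
\end{verbatim}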
Finally, the optimal solution of the problem \\eqref{eq:prbB} is recovered by taking ${\\boldsymbol{\\theta}}^{\\mathrm{opt}} = \\exp\\left( j\\angle \\left( \\left[ \\frac{\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}}{\\bar{\\theta}_{RN+1}^{\\mathrm{opt}}} \\right]_{(1:RN)} \\right) \\right)$, where $[\\bx]_{(1:RN)}$ denotes the vector that contains the first $RN$ entries in $\\bx$, and $\\bar{\\theta}_{RN+1}^{\\mathrm{opt}}$ is the last element of~$\\bar{\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\n\\subsection{Overall Algorithm Description} \n\\begin{algorithm}[t]\n\t\\caption{Proposed Two-Step Algorithm}\n\t\\hspace{1mm} \\textbf{Input}: S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$ \\\\\n\t\\hspace*{1mm} \\textbf{Step 1}: Passive beamforming design\n\t\\begin{itemize} \t\t\t\n\t\t\\item Solve the problem $\\cP_2$ to obtain the optimal long-term passive beamformer ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\t\\end{itemize}\n\t\\hspace{1mm} \\textbf{Step 2}: Active beamforming design\n\t\\begin{itemize} \n\t\t\\item Apply the ZF precoder to instantaneous channels with the given ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$.\n\t\t\\item Solve the problem $\\cP_1$ to obtain the optimal long-term power allocation $\\bP^{\\mathrm{opt}}$.\n\t\\end{itemize}\n\\end{algorithm} \nIn the previous subsections, we first explained the active beamforming design and then described the passive beamforming design. The proposed algorithm, however, actually operates as summarized in Algorithm 1. First, the optimal long-term passive beamformer ${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$ is obtained by solving $\\cP_2$ based on the S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$. Then, the ZF precoder is applied to instantaneous channels with the given~${\\boldsymbol{\\theta}}^{\\mathrm{opt}}$, and the optimal long-term power allocation $\\bP^{\\mathrm{opt}}$ is derived by solving $\\cP_1$.\n\n\n\n\\section{Simulation Results} \\label{sec:result}\nIn this section, we provide simulation results to validate the minimum achievable rate performance of the proposed two-step algorithm. We consider a hotspot deployment scenario, where the UEs are placed in a hotspot, the APs are deployed a little far from the hotspot, and the IRSs are installed on a circle surrounding the hotspot in order to improve the rate performance \\cite{ZD21, ZDZ+21a}. This scenario is illustrated in Fig. \\ref{fig:hotspot}, where $L=4$ APs are located at $(0,0)$, $(D,0)$, $(D,D)$, and $(0,D)$, respectively. Up to $R=8$ IRSs are placed on a circle centered at $(d,d)$ with radius $r$, and $K=4$ UEs are uniformly distributed within the circle. We simulate three cases of $d = \\{40, 60, 120 \\}$~m with $r=30$~m and $D=300$~m.\n\n\t\n\n\n\\begin{figure}[h] \n\t\\centering\n\t\\includegraphics[width=0.5 \\columnwidth]{fig2_hotspot.eps}\n\t\\caption{Hotspot deployment scenario.} \\label{fig:hotspot}\n\\end{figure}\n\nAll APs are equipped with uniform linear arrays (ULAs) at a height of $10$~m with $M = \\{4, 8, 16\\}$ transmit antennas. Uniform planar array (UPA) is installed at a height of $5$~m for each IRS with $N = \\{8, 16, 32, 64, 128\\}$ elements. All UEs have single antenna, which is placed at a height of $1.5$~m \\cite{M.2412}. We evaluate three cases of $R = \\{2, 4, 8\\}$, where when $R=2$, only the first and fifth IRSs are present, and when $R=4$, the odd-numbered IRSs are present. It is assumed that all IRSs are deployed on building facades \\cite{ZBM+20} and look towards UEs. 
We consider an AP-IRS blockage model that when a signal from an AP arrives to the back of an IRS, then this signal is not reflected to the UEs, e.g., the signals from the first AP to the first, second, and eighth IRSs are blocked.\n\nThe distance-dependent path loss of all channels is modeled as $\\xi(d_{\\mathrm{link}}) = \\xi_0d_{\\mathrm{link}}^{-\\alpha_\\mathrm{X}}$, where $\\xi_0$ is the path loss at the reference distance $1$~m, $\\alpha_\\mathrm{X}$ denotes the path loss exponent of the channel~$\\bX$, and $d_{\\mathrm{link}}$ represents three-dimensional distance of a channel link considering vertical difference among the APs, IRSs, and UEs. We set $\\xi_0 = -30$~dB, $\\alpha_\\mathrm{d} = 3.4$, $\\alpha_\\mathrm{v} = \\alpha_\\mathrm{G} = 2.2$, $\\beta_\\mathrm{d} = -5$~dB, and $\\beta_\\mathrm{v} = \\beta_\\mathrm{G} = 5$~dB considering that the AP-UE channels would suffer from severer attenuation than the AP-IRS and IRS-UE channels \\cite{ZWZ+21}. Other system parameters are set as follows: $\\bar{P}_l = \\{20, 30, 40\\}$~dBm for all~$l$, $\\sigma^2 = -97$~dBm assuming $10$~MHz of system bandwidth, and $7$~dB of noise figure \\cite{M.2412}. We simulate $1,000$ uniform UE drops and generate $1,000$ independent instantaneous channels for each UE drop. \n\nWe consider three benchmark schemes. One is \\emph{No-IRS} and another is \\emph{Random Passive Beamforming}: the long-term passive beamformers at the IRSs are randomly selected. The active beamforming for both benchmarks is the same as that of the proposed algorithm. To the best of our knowledge, there are no other schemes that can be directly applied to the max-min achievable rate problem in the cell-free MIMO systems powered by IRSs. Instead, we compare with the third benchmark scheme of \\emph{Sum-Rate-Max} that maximizes instantaneous sum-rate by utilizing an alternating optimization algorithm to derive the active and passive beamformers [4].\n\nFig. \\ref{fig:cdf} depicts the empirical cumulative distribution functions (CDFs) of the minimum achievable rate by varying the maximum transmit power $\\bar{P}_l$. The proposed scheme provides a significant gain over No-IRS and the random passive beamforming for all values of $\\bar{P}_l$. Specifically for $\\bar{P}_l=20$~dBm, the median rate gains of the proposed scheme over No-IRS are equal to $3.4\\%$, $7.1\\%$, and $12.7\\%$ with $N=32$, $64$, and $128$, respectively. By doubling the number of IRS elements, the performance gain is almost doubled. Furthermore, the proposed scheme achieves comparable performance to the Sum-Rate-Max, which requires much higher computational complexity and signaling overhead than the proposed scheme.\n\n\\begin{figure}[t] \n\t\\centering\n\t\\includegraphics[width=0.95 \\columnwidth]{fig3_cdf.eps}\n\t\\caption{CDFs of the minimum achievable rate by varying the maximum transmit power $\\bar{P}_l$ with $d=40$~m, $M=8$, and $R=4$.} \\label{fig:cdf}\n\\end{figure}\n\nIn Fig.~\\ref{fig:rate}, we plot the minimum achievable rate versus the number of IRS elements~$N$ by varying the center of hotspot~$d$, the number of AP transmit antennas~$M$, and the number of IRSs $R$ with $\\bar{P}_l=20$~dBm. For all cases, the minimum achievable rate of the proposed scheme significantly improves with~$N$ and outperforms that of both benchmark schemes.\nFig.~\\ref{fig:rate}(a) shows that the performance of all schemes decreases as $d$ increases, i.e., the UEs move towards the center of service area. 
This is attributed to the fact that the received signal power from the first AP, which is the dominant AP to the UEs, decreases. However, the performance gain of the proposed scheme over No-IRS increases with $d$, i.e., when $N = 128$, the gains are equal to $12.5\\%$, $12.9\\%$, and $16.1\\%$ for $d=40$~m, $60$~m, and $120$~m, respectively.\nIn Fig.~\\ref{fig:rate}(b), it is seen that the smaller $M$, the lower the performance of all schemes.\nThe proposed scheme, however, provides higher performance gain over No-IRS as $M$ decreases, i.e., when $N = 128$, the gains are equal to $10.4\\%$, $12.5\\%$, and $12.8\\%$ for $M=16$, $8$, and $4$, respectively. \nFig.~\\ref{fig:rate}(c) shows that the performance of the proposed scheme increases with $R$ as expected. \nWhen the total number of IRS elements $RN$ is the same, similar performance is observed, showing that the proposed scheme is robust to IRS deployment scenarios. \n\n\\begin{figure*}[t] \n\t\\centering\n\t\\includegraphics[width=0.9 \\textwidth]{fig4_rate.eps}\n\t\\caption{The minimum achievable rate vs. the number of IRS elements $N$: (a) Varying $d$ with $R=4$, $M=8$; (b) Varying $M$ with $d=40$~m, $R=4$; (c) Varying $R$ with $d=40$~m, $M=8$.} \n\t\\label{fig:rate}\n\\end{figure*}\t\n\n\n\\section{Conclusion} \\label{sec:conclusion}\nIn this paper, we considered a joint beamforming framework in a cell-free MIMO system powered by IRSs. We formulated a maximization of minimum achievable rate problem and proposed a novel non-iterative two-timescale algorithm that derives the long-term passive beamformers and power allocation and short-term active precoders by exploiting S-CSI. Simulation results revealed that the proposed scheme can significantly improve the minimum achievable rate of the cell-free MIMO systems powered by IRSs compared to the benchmark schemes.\n\n\n\n\\begin{appendices}\n\\section{Derivation of \\eqref{eq:avChGain}}\nThe overall channel from AP~$l$ to UE~$k$ in \\eqref{eq:overallCh} can be rewritten as \n\\begin{align}\n\\bh_{l,k}^H = \\boldsymbol{\\theta}^H \\bV_k^H \\bG_l + \\bd_{l,k}^H, \n\\end{align}\nwhere, $\\bG_l = [\\bG_{l,1}^T, \\cdots, \\bG_{l,R}^T]^T \\in \\mathbb{C}^{RN \\times M}$, $\\bV_k^H = \\diag(\\bv_k^H) \\in \\mathbb{C}^{RN \\times RN}$, and $\\bv_k^H = [\\bv_{1,k}^H, \\cdots, \\bv_{R,k}^H] \\in \\mathbb{C}^{1 \\times RN}$. By decomposing the Rician fading channels into the LoS and NLoS components in \\eqref{eq:ricianCh}, the average channel gain from AP~$l$ to UE~$k$ can be expressed as\n\\begin{align}\n&\\mathbb{E} \\{ \\norm{\\bh_{l,k}}^2 \\} \\notag \\\\\n&~~= \\mathbb{E} \\left\\{ \\left\\| (\\bar{\\bG}_l^H + \\tilde{\\bG}_l^H)(\\bar{\\bV}_k + \\tilde{\\bV}_k) \\boldsymbol{\\theta} \n\t+ (\\bar{\\bd}_{l,k} + \\tilde{\\bd}_{l,k}) \\right\\|^2 \\right\\} \\notag \\\\\n&~~= \\boldsymbol{\\theta}^H \\bA_{l,k} \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_{l,k} + \\bb^H_{l,k}\\boldsymbol{\\theta} + c_{l,k},\n\\end{align}\nwhere $\\bb_{l,k} = \\bar{\\bV}_k^H \\bar{\\bG}_l \\bar{\\bd}_{l,k}$, $c_{l,k} = \\norm{\\bar\\bd_{l,k}}^2 + \\frac{M\\xi_{l,k}^\\mathrm{d}}{1+\\beta_\\mathrm{d}}$, and $\\bA_{l,k}$ is defined below in \\eqref{eq:A_lk}, where $\\boldsymbol{\\Xi}_k^\\mathrm{v} = \\diag{(\\xi_{1,k}^\\mathrm{v}, \\cdots, \\xi_{R,k}^\\mathrm{v}) \\in \\mathbb{C}^{R \\times R}}$, $\\boldsymbol{\\Xi}_l^\\mathrm{G} = \\diag{(\\xi_{l,1}^\\mathrm{G}, \\cdots, \\xi_{l,R}^\\mathrm{G}) \\in \\mathbb{C}^{R \\times R}}$, and $\\bA_{l,k} \\succeq 0$. 
All the variables $\\bA_{l,k}$, $\\bb_{l,k}$, and $c_{l,k}$ are expressed in terms of the S-CSI $\\{ \\bar{\\bd}_{l,k}, \\bar{\\bG}_l, \\bar{\\bv}_{k} \\}$ and path loss $\\{ \\xi_{l,k}^\\mathrm{d}, \\boldsymbol{\\Xi}_l^\\mathrm{G}, \\boldsymbol{\\Xi}_k^\\mathrm{v} \\}$ of all channel links \\cite{HTJ+19, ZWZ+21}. The details are omitted here due to the space limitation. Finally, the average channel gain of UE~$k$ can be written as\t\n\\begin{align}\n\\textstyle \\sum_{l=1}^L\\mathbb{E}\\{ \\norm{\\bh_{l,k}}^2 \\}\n= \\boldsymbol{\\theta}^H \\bA_k \\boldsymbol{\\theta} + \\boldsymbol{\\theta}^H \\bb_k + \\bb^H_k \\boldsymbol{\\theta} + c_k,\n\\end{align}\t\nwhere $\\bA_k = \\textstyle \\sum_{l=1}^L \\bA_{l,k}$, $\\bb_k = \\textstyle \\sum_{l=1}^L \\bb_{l,k}$, and $c_k = \\textstyle \\sum_{l=1}^L c_{l,k}$.\n\n\\begin{figure*} [b]\t\n\t\\hrule \\vspace{5mm}\t\n\t\\begin{align}\n\t\t\\bA_{l,k} = \\bar{\\bV}_k^H \\left( \\bar{\\bG}_l \\bar{\\bG}_l^H + \\frac{M}{1+\\beta_\\mathrm{G}} \\boldsymbol{\\Xi}_l^\\mathrm{G} \\otimes \\bI_N \\right) \\bar{\\bV}_k\n\t\t+ \\left( \\frac{1}{1+\\beta_\\mathrm{v}} \\boldsymbol{\\Xi}_k^\\mathrm{v} \\otimes \\bI_N \\right) \\odot \\left( \\bar{\\bG}_l \\bar{\\bG}_l^H \\right) \n\t\t+ \\frac{1}{1+\\beta_\\mathrm{v}}\\frac{M}{1+\\beta_\\mathrm{G}} \\boldsymbol{\\Xi}_k^\\mathrm{v} \\boldsymbol{\\Xi}_l^\\mathrm{G} \\otimes \\bI_N \\label{eq:A_lk}\n\t\\end{align}\n\\end{figure*}\n\n\\end{appendices}\n\t\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCondensed matter provides a platform to realize many physical objects in\nother subjects such as Majorana and Dirac Weyl fermions which are proposed\nin particle physics but not be discovered in nature \\cite{XWAN, LLU, SMH,\nHWENG, SYXU, BQLV, LLU2, VMO, SNA}. Another example is the Dirac monopole,\nwhich is a point source of a magnetic field proposed by Dirac \\cite{PAM}. It\nhas a quantum analogy in quantum physics, where the Berry curvature of\nenergy band acts as the magnetic field generated by degeneracy points as\nDirac monopoles \\cite{DXIAO}.\\ As the extension of degeneracy points, nodal\nloops as closed $1$-dimensional ($1$D) manifolds in $3$D momentum space can\nbe classified as nodal rings \\cite{AAB}, nodal chains \\cite{TB}, nodal links\n\\cite{ZYAN}, and nodal knots \\cite{RBI}. It has been extensively studied\nboth theoretically\\cite{XQSUN, SNIE, JAHN, CFANG1, TK, JYL, CFANG2, PYC,\nWCHEN, MEZAWA, YZHOU, ZYANG, MXIAO} and experimentally \\cite{QXU, RYU, QYAN,\nGCHANG, XFENG}. In the recent work, it has turned out that the relation\nbetween degeneracy lines and the corresponding polarization field in the\nparameter space is topologically isomorphic to Biot-Savart law in\nelectromagnetism \\cite{RWANG}.\n\n\\begin{figure}[tbp]\n\\includegraphics[ bb=112 370 500 623, width=0.45\\textwidth, clip]{1.eps}\n\\caption{Schematic illustration of the aim of present work. (a) We consider\na $2$D system with the Bloch Hamiltonian related to two periodic\nvector functions $\\mathbf{r}_{1}(k_{x})$\nand $\\mathbf{r}_{2}(k_{y})$ in auxiliary\nspace, which correspond to two knots. The topological index of the energy\nband is determined by the relations of two knots: The Chern number of the\nband equals to the linking number of two knots.\\ Several representative\nconfigurations of $[\\mathbf{r}_{1}(k_{x}),$ $\\mathbf{r}_{2}(k_{y})]$ is\npresented. 
Here $\\mathbf{r}_{2}$ is a trefoil knot, while three types of \n\\mathbf{r}_{1}$\\ are taken as simple loops, but at different positions,\nresulting linking numbers $\\mathcal{N}=0$, $1$, and $2$, respectively. The\ncorresponding Bloch Hamiltonians describe the systems with Chern numbers \nc=0 $, $1$, and $2$, respectively. (b) The main purpose of this work. For a \n1$D model with a fixed function $\\mathbf{r}_{2}(k_{y})$, $\\mathbf{r}_{2}(k_{y})$ is\nreferred as to the degeneracy circuit. For an\narbitrary point $\\mathbf{r}_{1}(k_{x})$, the polarization filed $\\mathbf{P}\n\\mathbf{r}_{1})$ obeys the Biot-Savart law for magnetic field arising from\nthe degeneracy loop as a current loop. Polarization field $\\mathrm{d}\\mathbf\nP}$ at point $\\mathbf{r}_{1}$\\ generated from an infinitesmall length of\ndegeneracy line $\\mathrm{d}\\mathbf{r}_{2}$ at $\\mathbf{r}_{2}$, has the\nidentical form with the Biot-Savart Law related magnetic field generated\nfrom the current loop. Finding the polarization field $\\mathbf{P}$ at\narbitrary point $\\mathbf{r}_{1} $ resulting from a degeneracy line can be\nsimply obtained by the Biot-Savart law in electromagnetism.}\n\\label{fig1}\n\\end{figure}\n\nIn this work, we provide another quantum analogy of classical\nelectromagnetism. We consider a class of Bloch Hamiltonians, which contains\ntwo periodic vector functions with respect to two independent variables,\nsuch as momentum $k_{x}$\\textbf{\\ }and\\textbf{\\ }$k_{y}$ for a $2$D lattice\nsystem, respectively. These two periodic vector functions correspond to two\nknots in $3$D auxiliary space (see Fig. \\ref{fig1}(a)). The Bloch vector is\nthe difference of two vectors. When we only consider one of two knots, the\nsystem reduces to a $1$D lattice system. The Zak phase and polarization\nfield at a fixed point in $3$D auxiliary space can be obtained.\\ We show\nexactly that the knot as a degeneracy line has a simple relation of its\ncorresponding polarization field, obeying the Biot-Savart law: The\ndegeneracy line acts as a current-carrying wire, while the polarization\nfield corresponds to the generated magnetic field. The relationship between\ntwo knots can be characterized by applying the Amp\\`{e}re's circuital law on\nthe field integral arising from one knot along another knot. For a\nnontrivial topological system, the integral is nonzero, due to the fact that\ntwo Bloch knots entangle with each other, forming a link with the linking\nnumber being the value of Chern number of the energy band. In Fig. \\ref{fig1\n, we schematically illustrate the main conclusion of this work.\n\nWe propose two lattice models to exemplify the application of our approach.\nThe first one is an extended QWZ model. We show that the Bloch Hamiltonian\nis an example of our concerned system. Two knots of the original QWZ model\nsimply reduce to two circles. The second one is a time-dependent quasi-$1$D\nmodel with magnetic flux. In this case, the Amp\\`{e}re circulation integral\nis equivalent to the topological invariant. In the aid of the Biot-Savart\nlaw, the pumping charge acts as a dynamic measure of the Chern number. We\nperform numerical simulation for several representative quasi-adiabatic\nprocesses to demonstrate this application.\n\nThe remainder of this paper is organized as follows. In Sec. \\ref{Model with\ndegeneracy loop}, we present a class of models, whose Bloch Hamiltonian\nrelates to two knots. In Sec. 
\\ref{Kitaev model on square lattice}, we propose the extended QWZ model to exemplify the application of our approach. Sec. \\ref{Ladder system} gives another example, which is a time-dependent quasi-$1$D model with magnetic flux. Sec. \\ref{Pumping charge} is devoted to a dynamic measure of the Chern number, the pumping charge, which can be computed numerically for several representative quasi-adiabatic processes to demonstrate our work. Finally, we present a summary and discussion in Sec. \\ref{Summary}.\n\n\\section{Double-knot model}\n\n\\label{Model with degeneracy loop}\n\nConsider a Bloch Hamiltonian $h_{\\mathbf{k}}$ in the form\n\\begin{eqnarray}\nh_{\\mathbf{k}} &=&\\left(\n\\begin{array}{cc}\n\\left( z_{1}-z_{2}\\right) & x_{1}-x_{2}-i\\left( y_{1}-y_{2}\\right) \\\\\nx_{1}-x_{2}+i\\left( y_{1}-y_{2}\\right) & -\\left( z_{1}-z_{2}\\right)\n\\end{array}\n\\right) \\notag \\\\\n&=&\\left[ \\mathbf{r}_{1}(k_{x})\\mathbf{-r}_{2}(k_{y})\\right] \\mathbf{\\cdot \\sigma }, \\label{hk}\n\\end{eqnarray}\nwhich is the starting point of our study. It consists of two periodic vector functions $\\mathbf{r}_{1}(k_{x})=\\mathbf{r}_{1}(2\\pi +k_{x})=x_{1}\\mathbf{i}+y_{1}\\mathbf{j}+z_{1}\\mathbf{k}$ and $\\mathbf{r}_{2}(k_{y})=\\mathbf{r}_{2}(2\\pi +k_{y})=x_{2}\\mathbf{i}+y_{2}\\mathbf{j}+z_{2}\\mathbf{k}$, representing two knots (loops) in $3$D auxiliary space. Here $\\mathbf{\\sigma =(}\\sigma _{x},\\sigma _{y},\\sigma _{z}\\mathbf{)}$ are Pauli matrices and $h_{\\mathbf{k}}$ represents a class of models, which is referred to as the double-knot (double-loop) model. The matrix $h_{\\mathbf{k}}$ can take the role of a core matrix of a crystalline system with a non-interacting Hamiltonian, or of a Kitaev Hamiltonian. We note that the spectrum of $h_{\\mathbf{k}}$ is two-band and the gap closes when the two knots have crossing points. The aim of this work is to reveal the features of the system that originate from the character of the two knots.\n\nTo this end, we first consider the case with a fixed $k_{x}$. Then the model only contains a point $\\mathbf{r}_{1}$ and a knot $\\mathbf{r}_{2}\\left( k_{y}\\right) $. The Hamiltonian reduces to\n\\begin{equation}\nh_{k_{y}}=\\left[ \\mathbf{r}_{1}\\mathbf{-r}_{2}(k_{y})\\right] \\mathbf{\\cdot \\sigma },\n\\end{equation}\nwhich is a $1$D system in real space. 
Here $\\mathbf{r}_{2}(k_{y})$\\ is a degeneracy line, at which the gap closes.\\ The solution of the equation $h_{k_{y}}\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle =\\varepsilon _{k_{y}}^{\\pm }\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle $ has the form\n\\begin{equation}\n\\left\\vert u_{+}^{k_{y}}\\right\\rangle =\\left(\n\\begin{array}{c}\n\\cos \\frac{\\theta _{k_{y}}}{2}e^{-i\\varphi _{k_{y}}} \\\\\n\\sin \\frac{\\theta _{k_{y}}}{2}\n\\end{array}\n\\right) ,\\left\\vert u_{-}^{k_{y}}\\right\\rangle =i\\left(\n\\begin{array}{c}\n-\\sin \\frac{\\theta _{k_{y}}}{2} \\\\\n\\cos \\frac{\\theta _{k_{y}}}{2}e^{i\\varphi _{k_{y}}}\n\\end{array}\n\\right)\n\\end{equation}\nwith $\\varepsilon _{k_{y}}^{\\pm }=\\pm \\left\\vert \\mathbf{r}_{1}\\mathbf{-r}_{2}\\left( k_{y}\\right) \\right\\vert $, where the azimuthal and polar angles are defined as\n\\begin{equation}\n\\cos \\theta _{k_{y}}=\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}\\mathbf{-r}_{2}\\right\\vert },\\quad \\tan \\varphi _{k_{y}}=\\frac{y_{1}-y_{2}}{x_{1}-x_{2}}.\n\\end{equation}\nFor this $1$D system, the corresponding Zak phases for the upper and lower bands are defined as\n\\begin{equation}\n\\mathcal{Z}_{\\pm }=\\frac{i}{2\\pi }\\int_{-\\pi }^{\\pi }\\left\\langle u_{\\pm }^{k_{y}}\\right\\vert \\frac{\\partial }{\\partial k_{y}}\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle \\mathrm{d}k_{y}.\n\\end{equation}\nIt is well known that the Zak phase is gauge-dependent, and the present expression of $\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle $ results in\n\\begin{equation}\n\\mathcal{Z}=\\mathcal{Z}_{+}=-\\mathcal{Z}_{-}=\\frac{1}{2\\pi }\\oint_{\\mathrm{L}}\\cos ^{2}\\frac{\\theta _{k_{y}}}{2}\\mathrm{d}\\varphi _{k_{y}},\n\\end{equation}\nwhere \\textrm{L} denotes the integral loop about the solid angle. Accordingly, the polarization vector field is defined as\n\\begin{equation}\n\\mathbf{P}=-\\mathbf{\\nabla }\\mathcal{Z},\n\\end{equation}\nwhere $\\mathbf{\\nabla }$ is the nabla operator\n\\begin{equation}\n\\mathbf{\\nabla }=\\frac{\\partial }{\\partial x_{1}}\\mathbf{i}+\\frac{\\partial }{\\partial y_{1}}\\mathbf{j}+\\frac{\\partial }{\\partial z_{1}}\\mathbf{k},\n\\end{equation}\nwith unit vectors $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ in $3$D auxiliary space. A straightforward derivation (see Appendix) shows that\n\n\\begin{equation}\n\\mathbf{P}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathrm{d}\\mathbf{r}_{2}\\times \\left( \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}, \\label{Polarization}\n\\end{equation}\nwhere $\\mathrm{L}$ denotes the integral loop along the degeneracy loop. It is clear that if we consider the degeneracy loop as a current-carrying wire with steady current strength $I=1\/\\mu _{0}$, flowing in the direction of increasing $k_{y}$ from $0$ to $2\\pi $, the field $\\mathbf{P}$ is identical to the magnetic field generated by the current loop, where $\\mu _{0}$ is the vacuum permeability of free space. Since Eq. (\\ref{Polarization}) holds for an arbitrary loop $\\mathrm{L}$, one can write its differential form\n\\begin{equation}\n\\mathrm{d}\\mathbf{P}=\\frac{1}{4\\pi }\\frac{\\mathrm{d}\\mathbf{r}_{2}\\times \\left( \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}, \\label{Biot-Savart law}\n\\end{equation}\nwhich is illustrated in Fig. \\ref{fig1}. It indicates that the relationship between $\\mathbf{P}$ and the degeneracy loop obeys the Biot-Savart law.
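As a simple consistency check (an illustrative example added here, not part of the original derivation): for a circular degeneracy loop of radius $R$ and an observation point $\\mathbf{r}_{1}$ on the loop axis at distance $d$ from its center, Eq. (\\ref{Polarization}) reduces to the familiar on-axis field of a current loop carrying $I=1\/\\mu _{0}$,\n\\begin{equation}\n\\left\\vert \\mathbf{P}\\right\\vert =\\frac{R^{2}}{2\\left( R^{2}+d^{2}\\right) ^{\\frac{3}{2}}},\n\\end{equation}\nso that at the loop center ($d=0$) the polarization field has magnitude $\\frac{1}{2R}$ and is directed along the loop axis according to the right-hand rule.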
It reveals the topological characteristics of the degeneracy lines in a clear physical picture. We will regard a degeneracy loop as a \\textit{band degeneracy circuit}. This result helps us to determine the polarization of any loop in the auxiliary space. In addition, the Amp\\`{e}re circulation integral $\\oint_{\\ell }\\mathbf{P(\\mathbf{r})\\cdot }\\mathrm{d}\\mathbf{r}$ along a loop $\\ell $ has two clear physical meanings: (i) it equals the total current through the surface spanned by the loop $\\ell $; (ii) it is the pumping charge for the adiabatic passage $\\ell $.\n\n\\begin{figure*}[tbp]\n\\includegraphics[ bb=3 414 620 765, width=0.92\\textwidth, clip]{links.eps}\n\\caption{Schematics of several representative configurations of the double knot $\\left\\{ \\mathbf{r}_{1}(k_{x}),\\mathbf{r}_{2}(k_{y})\\right\\} $ for the extended QWZ model. The plots are obtained from the parameter equations in Eq. (\\protect\\ref{XYZ}) with the parameters indicated in the panels. The arrows on the loops indicate the directions of the knots with various topologies. The corresponding Chern numbers are labeled, which match the linking numbers exactly.}\n\\label{fig2}\n\\end{figure*}\n\nNow we go back to $h_{\\mathbf{k}}$, taking the loop $\\ell $ as the knot $\\mathbf{r}_{1}(k_{x})$, which has no crossing point with the knot $\\mathbf{r}_{2}(k_{y})$. We find that the corresponding Amp\\`{e}re circulation integral is connected to the topology of the two knots and the band structure of the system,\n\\begin{equation}\n-\\oint_{\\ell }\\mathbf{P(\\mathbf{r})\\cdot }\\mathrm{d}\\mathbf{r=}c=\\mathcal{N}\\mathbf{.} \\label{CN}\n\\end{equation}\nHere the Chern number for the lower band is defined as \\cite{XLQI, GYCHO}\n\\begin{equation}\nc=\\frac{1}{4\\pi }\\int_{0}^{2\\pi }\\int_{0}^{2\\pi }\\frac{\\mathbf{r}^{\\prime }}{\\left\\vert \\mathbf{r}^{\\prime }\\right\\vert ^{3}}\\mathbf{\\cdot }\\left( \\frac{\\partial \\mathbf{r}^{\\prime }}{\\partial k_{x}}\\times \\frac{\\partial \\mathbf{r}^{\\prime }}{\\partial k_{y}}\\right) \\mathrm{d}k_{x}\\mathrm{d}k_{y},\n\\end{equation}\nwith $\\mathbf{r}^{\\prime }=\\mathbf{r}_{1}-\\mathbf{r}_{2}$, which also equals the linking number \\cite{RICCA} of the two knots $\\mathbf{r}_{1}(k_{x})$ and $\\mathbf{r}_{2}(k_{y})$,\n\\begin{equation}\n\\mathcal{N}=\\frac{1}{4\\pi }\\int_{0}^{2\\pi }\\int_{0}^{2\\pi }\\frac{\\mathbf{r}^{\\prime }}{\\left\\vert \\mathbf{r}^{\\prime }\\right\\vert ^{3}}\\mathbf{\\cdot }\\left( \\frac{\\partial \\mathbf{r}_{1}}{\\partial k_{x}}\\times \\frac{\\partial \\mathbf{r}_{2}}{\\partial k_{y}}\\right) \\mathrm{d}k_{x}\\mathrm{d}k_{y}.\n\\end{equation}\nThese relations are clear demonstrations of the system's topological features and reveal the physical significance of the Amp\\`{e}re circulation integral $\\oint_{\\ell }\\mathbf{P(\\mathbf{r})\\cdot }\\mathrm{d}\\mathbf{r}$. Furthermore, it corresponds to the jump of the Zak phase for an adiabatic passage along a knot, which can be measured by the Thouless pumping charge in a quasi-$1$D system. In the following, we present two examples to illustrate our results.\n\n\\section{Extended QWZ model}\n\n\\label{Kitaev model on square lattice}\n\nIn this section, we consider a model, which is an extension of the QWZ model introduced by Qi, Wu and Zhang \\cite{QWZ}, to illustrate our result.
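As a side illustration (not part of the original text), the linking number defined above can be evaluated numerically for any pair of smooth parametrized loops by discretizing the double integral on a uniform $(k_{x},k_{y})$ grid; a minimal Python sketch, tested here on two hypothetical linked unit circles, reads\n\\begin{verbatim}\nimport numpy as np\n\ndef linking_number(r1, r2, n=200):\n    # Gauss linking integral: N = (1/(4 pi)) * double integral of\n    #   r' . (dr1/dkx x dr2/dky) / |r'|^3,  with r' = r1(kx) - r2(ky)\n    k = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)\n    dk = 2.0 * np.pi / n\n    kx, ky = np.meshgrid(k, k, indexing='ij')\n    rp = r1(kx) - r2(ky)                            # shape (3, n, n)\n    dr1 = (r1(kx + dk) - r1(kx - dk)) / (2.0 * dk)  # central differences\n    dr2 = (r2(ky + dk) - r2(ky - dk)) / (2.0 * dk)\n    cross = np.cross(dr1, dr2, axis=0)\n    integrand = (rp * cross).sum(axis=0) / (rp ** 2).sum(axis=0) ** 1.5\n    return integrand.sum() * dk * dk / (4.0 * np.pi)\n\n# hypothetical test case: two unit circles with centers (0, 0, u) and (0, 0, 0)\nu = 1.0\nc1 = lambda t: np.array([np.sin(t), np.zeros_like(t), u + np.cos(t)])\nc2 = lambda t: -np.array([np.zeros_like(t), np.sin(t), np.cos(t)])\nprint(linking_number(c1, c2))   # magnitude 1 expected for 0 < |u| < 2\n\\end{verbatim}\nThe sign of the result depends on the chosen orientations of the two loops.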
The Bloch Hamiltonian is\n\n\\begin{equation}\nh_{\\mathbf{k}}=B_{x}\\sigma _{x}+B_{y}\\sigma _{y}+B_{z}\\sigma _{z},\n\\end{equation}\nwhere the field components are\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\nB_{x}=\\sin k_{x}+\\lambda \\sin \\left( 2k_{x}\\right) \\\\\nB_{y}=\\sin k_{y}+\\lambda \\sin \\left( 2k_{y}\\right) \\\\\nB_{z}=u+\\cos k_{x}+\\cos k_{y} \\\\\n\\quad +\\lambda \\left[ \\cos \\left( 2k_{x}\\right) +\\cos \\left( 2k_{y}\\right) \\right]\n\\end{array}\n\\right. . \\label{XYZ}\n\\end{equation}\nIt reduces to the original QWZ model when taking $\\lambda =0$.\n\nNow we rewrite it in the form\n\\begin{equation}\nh_{\\mathbf{k}}=\\left[ \\mathbf{r}_{1}(k_{x})\\mathbf{-r}_{2}(k_{y})\\right] \\mathbf{\\cdot \\sigma },\n\\end{equation}\nwhere the two vector functions are\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\n\\mathbf{r}_{1}=(\\sin k_{x}+\\lambda \\sin \\left( 2k_{x}\\right) ,0,u+\\cos k_{x}+\\lambda \\cos \\left( 2k_{x}\\right) ) \\\\\n\\mathbf{r}_{2}=-(0,\\sin k_{y}+\\lambda \\sin \\left( 2k_{y}\\right) ,\\cos k_{y}+\\lambda \\cos \\left( 2k_{y}\\right) )\n\\end{array}\n\\right. .\n\\end{equation}\nIt is clear that $\\mathbf{r}_{1}(k_{x})$ and $\\mathbf{r}_{2}(k_{y})$ represent two limacons within the $xz$ and $yz$ planes, respectively. When taking $\\left\\vert \\lambda \\right\\vert <0.5$, the crossing point of the limacon disappears. In particular, when taking $\\lambda =0$, the limacons reduce to circles. The radii of the two circles are both $1$, but the centers are $(0,0,u)$ and $(0,0,0)$, respectively. The Chern numbers can be easily obtained from the linking numbers of these two circles: $c=0$ for $\\left\\vert u\\right\\vert >2$, and $c=\\pm 1$ for $0<\\pm u<2$. When taking $\\left\\vert \\lambda \\right\\vert >0.5$, the crossing point of the limacon appears. Since limacons with crossing points cannot be classified as knots, we add perturbation terms $\\kappa \\sin 2k_{x}$ to $r_{1y}$ and $\\kappa \\sin 2k_{y}$ to $r_{2x}$ to untie the crossing points ($\\left\\vert \\kappa \\right\\vert \\ll 1$); the limacons then become knots again. The possible linking numbers of such two knots are still equal to the Chern numbers $c=0$, $\\pm 1$, $\\pm 3$, and $\\pm 4$. The absence of $c=\\pm 2$ is due to the fact that we take the identical $\\lambda $ in the expressions of $\\mathbf{r}_{1}(k_{x})$ and $\\mathbf{r}_{2}(k_{y})$. In Fig. \\ref{fig2}, we plot some representative configurations to demonstrate this point. Compared to the direct calculation of the Chern number from the Berry connection, this example shows that the Chern number can be easily obtained from the geometrical configurations hidden in the Bloch Hamiltonian.\n\n\\section{Ladder system}\n\n\\label{Ladder system}\n\nAs a simple application of our result, we consider a quasi-$1$D system with periodically time-dependent parameters. The Bloch Hamiltonian has the form\n\\begin{equation}\nh_{k}(t)=\\left[ \\mathbf{r}(t)\\mathbf{-r}_{c}(k)\\right] \\mathbf{\\cdot \\sigma },\n\\end{equation}\nwhere $\\mathbf{r}(t)=\\mathbf{r}(t+T)$ represents a loop $\\ell $ without crossing point on the degeneracy loop $\\mathbf{r}_{c}(k)$. The results obtained above still apply after replacing $(k_{x},k_{y})$ with $(t,k)$ and replacing $\\left\\vert u_{\\pm }^{\\mathbf{k}}\\right\\rangle $ with $|u_{\\pm }^{k}(t)\\rangle $ accordingly.
In this section we will demonstrate our result and its physical implications through an alternative tight-binding model, which consists of two coupled SSH chains, i.e., a ladder system with staggered magnetic flux, on-site potential and long-range hopping terms. These ingredients allow the system to support multiple types of degeneracy loops with different geometric topologies.\n\nWe consider a ladder system, illustrated in Fig. \\ref{fig3}, represented by the Hamiltonian\n\n\\begin{eqnarray}\n&&H_{\\text{L}}=\\sum_{j=1}^{N}\\{r_{\\bot }e^{i\\phi }c_{2j}^{\\dag }c_{2j-1}+\\alpha c_{2j-1}^{\\dag }c_{2\\left( j+1\\right) }+\\beta c_{2\\left( j+1\\right) -1}^{\\dag }c_{2j} \\notag \\\\\n&&+\\mu c_{2\\left( j+2\\right) -1}^{\\dag }c_{2j}+\\nu c_{2j-1}^{\\dag }c_{2\\left( j+2\\right) }+i\\kappa \\lbrack c_{2\\left( j+3\\right) -1}^{\\dag }c_{2j-1} \\notag \\\\\n&&-c_{2\\left( j+3\\right) }^{\\dag }c_{2j}]+\\text{\\textrm{H.c.}}\\}+z\\sum_{j=1}^{2N}\\left( -1\\right) ^{j+1}c_{j}^{\\dag }c_{j},\\label{ladd}\n\\end{eqnarray}\non a $2N$-site lattice. Here $c_{j}^{\\dag }$ is the creation operator of a fermion at the $j$th site with the periodic boundary condition $c_{2N+1}=c_{1}$. The inter-sublattice hopping amplitudes are $\\left( \\alpha ,\\beta ,\\mu ,\\nu \\right) $ and the intra-sublattice hopping amplitude is $\\kappa $. Besides, there are two time-dependent parameters: $2\\phi (t)$ is the staggered magnetic flux threading each plaquette and $z(t)$ is the strength of the staggered potentials. The ladder system is essentially two coupled SSH chains. As a building block of the system, the SSH model \\cite{SSH} has served as a paradigmatic example of a $1$D system supporting topological character \\cite{Zak}. It has an extremely simple form but well manifests the typical features of a topological insulating phase, and the transition between non-trivial and trivial topological phases, associated with the number of zero-energy and edge states as the topological invariant \\cite{Asboth}. It has been demonstrated that all the parameters of this model can be easily accessed within the existing technology of cold-atomic experiments \\cite{Clay,Ueda,Jo}. We schematically illustrate this model in Fig.~\\ref{fig3}. We introduce the fermionic operators in $k$ space\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\na_{k}=\\frac{1}{\\sqrt{N}}\\sum_{j=1}^{N}e^{-ikj}c_{2j-1} \\\\\nb_{k}=\\frac{1}{\\sqrt{N}}\\sum_{j=1}^{N}e^{-ikj}c_{2j}\n\\end{array}\n\\right. , \\label{Fourier 1}\n\\end{equation}\nwith the wave vector $k=\\pi (2n-N)\/N$, $(n=0,1,...,N-1)$. Then we have\n\\begin{equation}\nH_{\\text{L}}=\\sum_{k}(a_{k}^{\\dagger },b_{k}^{\\dagger })h_{k}\\left(\n\\begin{array}{c}\na_{k} \\\\\nb_{k}\n\\end{array}\n\\right) ,\n\\end{equation}\n\\begin{figure}[tbp]\n\\includegraphics[ bb=130 440 476 540, width=0.5\\textwidth, clip]{3.eps}\n\\caption{Schematics of the two coupled SSH chains with staggered flux and potential. The system consists of two sublattices A and B with on-site potentials $z$ and $-z$, indicated by filled and empty circles, respectively. Hopping amplitudes along each chain are staggered by $\\protect\\alpha $ (blue solid line) and $\\protect\\beta $ (blue dotted line). The interchain hopping amplitude is $r_{\\perp }$ (thick black line), associated with a phase factor, and the interchain diagonal hopping amplitudes are $\\protect\\mu $ (gray solid line), $\\protect\\nu $ (gray dotted line) and $i\\protect\\kappa $ (light green solid line).
The red arrows indicate the hopping directions for complex amplitudes, which are induced by the staggered flux threading each plaquette (arrow circles).}\n\\label{fig3}\n\\end{figure}\nwhere the core matrix has the form\n\\begin{equation}\nh_{k}=\\left(\n\\begin{array}{cc}\nz+2\\kappa \\sin \\left( 3k\\right) & R(\\phi ,k) \\\\\nR^{\\ast }(\\phi ,k) & -z-2\\kappa \\sin \\left( 3k\\right)\n\\end{array}\n\\right) ,\n\\end{equation}\nand the off-diagonal matrix element is\n\\begin{equation}\nR(\\phi ,k)=r_{\\bot }e^{-i\\phi }+\\alpha e^{ik}+\\beta e^{-ik}+\\mu e^{-2ik}+\\nu e^{2ik}.\n\\end{equation}\nTaking\n\n\\begin{equation}\nx+iy=r_{\\bot }e^{i\\phi },\n\\end{equation}\nthe parameter equations for the degeneracy loop are\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\nx_{c}=-\\left( \\alpha +\\beta \\right) \\cos k-\\left( \\mu +\\nu \\right) \\cos \\left( 2k\\right) \\\\\ny_{c}=-\\left( \\beta -\\alpha \\right) \\sin k-\\left( \\mu -\\nu \\right) \\sin \\left( 2k\\right) \\\\\nz_{c}=-2\\kappa \\sin \\left( 3k\\right)\n\\end{array}\n\\right. , \\label{trefoil knot}\n\\end{equation}\nwhich is plotted in Fig. \\ref{fig4} for the case with parameters $\\alpha =\\mu =0.5$, $\\beta =1$, $\\nu =1.5$, and $\\kappa =0.1$. One can see that the degeneracy curve is a trefoil knot. Intuitively, it should result in topological features with indices $2$, $1$, and $0$. We will demonstrate this point in the next section.\n\n\\begin{figure*}[tbp]\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4a.eps}\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4b.eps}\n\\includegraphics[ bb=19 8 1241 380, width=0.92\\textwidth, clip]{fig4c.eps}\n\\caption{ (a1-a3) Schematics of three adiabatic passages in $3$D auxiliary space for the pumping charge. The degeneracy curve (red) is a trefoil knot with parameters $\\protect\\alpha =\\protect\\mu =0.5$, $\\protect\\beta =1$, $\\protect\\nu =1.5$, and $\\protect\\kappa =0.1$ of the system in Eq. (\\protect\\ref{ladd}) (Fig. \\protect\\ref{fig3}). The adiabatic passages are straight lines (blue) at positions $(x,y):$ (a1) $(3.80,0)$, (a2) $(1.15,0.84)$, and (a3) $(0.40,0.01)$, respectively. (b1-b3) and (c1-c3) are plots of the current and the corresponding total charge transfer for the quasi-adiabatic processes. The results are obtained by the numerically exact diagonalization method for the system in Eq. (\\protect\\ref{ladd}) with $N=100 $. The speed of the time evolution is $\\protect\\omega =1\\times 10^{-3}$. It indicates that the topological invariant can be obtained by a dynamical process.}\n\\label{fig4}\n\\end{figure*}\n\n\\section{Pumping charge}\n\n\\label{Pumping charge}\n\nFor a $2$D system with the Bloch Hamiltonian in the form of Eq. (\\ref{hk}), the physical and geometric meanings of the Chern number are well established. For a quasi-$1$D system with the Bloch Hamiltonian in the form of Eq. (\\ref{hk}), obtained by replacing $(k_{x},k_{y})$ with $(t,k)$, the Chern number is connected to an adiabatic passage driven by the parameters from $t$ to $t+T$, or a periodic loop $\\mathbf{r=r}(t)$ in auxiliary space. In a $1$D model, it has been shown that the adiabatic particle transport over a time period takes the form of the Chern number and is quantized \\cite{DXIAO}. The pumped charge counts the net number of degeneracy points enclosed by the loop. This can be extended to the loop $\\mathbf{r=r}(t)$ in the present model.\n\nActually, one can rewrite Eq.
(\\ref{CN}) in the form\n\\begin{equation}\nc=\\mathcal{N}=-\\oint_{\\ell }\\mathbf{P(\\mathbf{r})\\cdot }\\frac{\\partial \\mathbf{\\mathbf{r}}}{\\partial t}\\mathrm{d}t,\n\\end{equation}\nwhere $\\mathbf{\\mathbf{r}}$ (or $r_{\\bot },\\phi ,$ and $z$) is a periodic function of time $t$. Furthermore, we can find the physical meaning of the Chern number through the relation\n\\begin{equation}\nc=\\int_{0}^{T}\\mathcal{J}(t)\\mathrm{d}t,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{J}=\\frac{i}{2\\pi }\\int_{0}^{2\\pi }[(\\partial _{t}\\langle u_{-}^{k}|)\\partial _{k}|u_{-}^{k}\\rangle -(\\partial _{k}\\langle u_{-}^{k}|)\\partial _{t}|u_{-}^{k}\\rangle ]\\mathrm{d}k\n\\end{equation}\nis the adiabatic current. Then $c$ is the charge pumped over all channels $k$, driven by the time-dependent Hamiltonian varying over one period, which can be measured through a quasi-adiabatic process.\n\nInspired by this analysis, we expect that the Chern number can be unveiled by the pumping charge of all the energy levels. This can be done in the single-particle subspace. The accumulated charge passing the unit cell $l$ during the period $T$ is\n\\begin{equation}\nQ_{l}=\\sum_{k}\\int_{0}^{T}j_{l}\\mathrm{d}t,\n\\end{equation}\nwhere the current across two neighboring unit cells is\n\\begin{eqnarray}\n&&j_{l}=\\frac{1}{i}\\left\\langle u_{-}^{k}\\left( t\\right) \\right\\vert [\\alpha a_{j}^{\\dag }b_{j+1}+\\beta b_{j}^{\\dag }a_{j+1}+\\mu b_{j}^{\\dag }a_{j+2}+ \\notag \\\\\n&&\\nu a_{j}^{\\dag }b_{j+2}-i\\kappa a_{j}^{\\dag }a_{j+3}+i\\kappa b_{j}^{\\dag }b_{j+3}-\\text{\\textrm{H.c.}}]\\left\\vert u_{-}^{k}\\left( t\\right) \\right\\rangle .\n\\end{eqnarray}\n\nAs we mentioned above, there are three types of adiabatic loops $\\mathbf{r=r}(t)$ in auxiliary space, with pumping charges $Q_{l}=0$, $1$, and $2$, respectively. In general, three periodic functions $r_{\\bot }(t)$, $\\phi (t)$, and $z(t)$ should be taken to measure the pumping charge. However, such a quasi-adiabatic loop is hard to realize in practice. Thanks to the Biot-Savart law for the field $\\mathbf{P}(\\mathbf{r})$, we can take the adiabatic passage along a straight line with fixed $r_{\\bot }$ and $\\phi $, since the field $\\mathbf{P}$ far from the trefoil knot $\\mathbf{r}_{c}(k)$ has a negligible contribution to the Amp\\`{e}re circulation integral, or the pumping charge.\n\nWe consider the case obtained by taking $z=\\omega t$ with $\\omega \\ll 1$. According to the analysis above, if $t$ varies from $-\\infty $ to $\\infty $, $Q_{l}$ should be $0$, $1$, and $2$, respectively. To examine how the scheme works in practice, we simulate the quasi-adiabatic process by computing the time evolution numerically for a finite system. In principle, for a given initial eigenstate $\\left\\vert u_{-}^{k}\\left( 0\\right) \\right\\rangle $, the time-evolved state under the Hamiltonian $H_{\\text{L}}\\left( t\\right) $ is\n\\begin{equation}\n\\left\\vert \\Phi \\left( t\\right) \\right\\rangle =\\mathcal{T}\\{\\exp (-i\\int_{0}^{t}H_{\\text{L}}\\left( t^{\\prime }\\right) \\mathrm{d}t^{\\prime })\\left\\vert u_{-}^{k}\\left( 0\\right) \\right\\rangle \\},\n\\end{equation}\nwhere $\\mathcal{T}$ is the time-ordering operator.
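To illustrate how such a time-ordered evolution can be approximated in practice (a minimal sketch, assuming a uniform time mesh; this is not the exact-diagonalization code used for Fig. \\ref{fig4}), one may multiply short-time propagators of the core matrix, here with the trefoil-knot parameters and the straight-line passage (a2) of Fig. \\ref{fig4}:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nsx = np.array([[0, 1], [1, 0]], dtype=complex)\nsy = np.array([[0, -1j], [1j, 0]], dtype=complex)\nsz = np.array([[1, 0], [0, -1]], dtype=complex)\n\ndef h(k, t, omega=1e-3):\n    # core matrix h_k(t) = [r(t) - r_c(k)] . sigma with alpha = mu = 0.5,\n    # beta = 1, nu = 1.5, kappa = 0.1 and the passage r(t) = (1.15, 0.84, omega*t)\n    x, y, z = 1.15, 0.84, omega * t\n    xc = -1.5 * np.cos(k) - 2.0 * np.cos(2 * k)\n    yc = -0.5 * np.sin(k) + 1.0 * np.sin(2 * k)\n    zc = -0.2 * np.sin(3 * k)\n    return (x - xc) * sx + (y - yc) * sy + (z - zc) * sz\n\ndef evolve(k, t_final, nsteps, psi0):\n    # approximate the time-ordered exponential acting on psi0 by a product\n    # of short-time propagators, with H evaluated at the midpoint of each step\n    dt = t_final / nsteps\n    psi = np.asarray(psi0, dtype=complex)\n    for m in range(nsteps):\n        psi = expm(-1j * h(k, (m + 0.5) * dt) * dt) @ psi\n    return psi\n\\end{verbatim}\nMonitoring the overlap with the instantaneous eigenstate along the way indicates whether the chosen $\\omega $ is small enough for the process to remain quasi-adiabatic.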
In the low-speed limit $\\omega \\rightarrow 0$, we have\n\\begin{equation}\nf\\left( t\\right) =\\left\\vert \\langle u_{-}^{k}\\left( t\\right) \\left\\vert \\Phi \\left( t\\right) \\right\\rangle \\right\\vert \\rightarrow 1,\n\\end{equation}\nwhere $\\left\\vert u_{-}^{k}\\left( t\\right) \\right\\rangle $ is the corresponding instantaneous eigenstate of $H_{\\text{L}}\\left( t\\right) $. The computation is performed by using a uniform mesh in the time discretization for the time-dependent Hamiltonian $H_{\\text{L}}$. In order to demonstrate a quasi-adiabatic process, we keep $f\\left( t\\right) >0.9$ during the whole process by taking sufficiently small $\\omega $. Fig. \\ref{fig4} plots the simulated particle current and the corresponding total charge transfer, which shows that the obtained dynamical quantities are in close agreement with the expected Chern numbers.\n\n\\section{Summary and discussion}\n\n\\label{Summary}\n\nWe have analyzed a family of $2$D tight-binding models with various Chern numbers, which are directly connected to the topology of two knots. When reduced to a $1$D single-knot degeneracy model, a polarization vector field can be established for a gapped band. We have shown exactly an interesting analogy between the topological feature of the band and classical electromagnetism: the polarization vector field acts as the static magnetic field generated by the degeneracy knot regarded as a current circuit. It indicates that there is a quantum analogy of the Biot-Savart law in quantum matter. Before ending this paper, we would like to point out that our findings also reveal the topological features hidden in the case with zero Chern number. In Fig. \\ref{fig2}(a) and (b), we find that although the linking numbers of these two sets of loops are zero, the configurations are different. This should imply a certain topological feature in a single direction, which will be investigated in future work. This finding extends the understanding of topological features in matter and provides a methodology and tools for the calculation and detection of Chern numbers.\n\n\\section{Appendix: Proof of the Biot-Savart law}\n\nIn this appendix, we provide the proof of Eq. (\\ref{Polarization}) in the main text. To this end, we first revisit the Biot-Savart law for a current-carrying loop, and then compare it with the polarization field in the present work.\n\n\\subsection{The magnetic field}\n\nConsider a current-carrying loop $\\mathrm{L}$ with current strength $I=1\/\\mu _{0}$, described by a periodic function $\\mathbf{r}_{2}\\left( k_{y}\\right) =x_{2}\\mathbf{i+}y_{2}\\mathbf{j+}z_{2}\\mathbf{k}$ in $3$D space. Here $\\mu _{0}$ is the vacuum permeability of free space and the current flows in the direction of increasing $k_{y}$ from $0$ to $2\\pi $. According to the Biot-Savart law, the magnetic field $\\mathbf{B}$ at position $\\mathbf{r}_{1}=x_{1}\\mathbf{i+}y_{1}\\mathbf{j+}z_{1}\\mathbf{k}$ generated by the loop $\\mathrm{L}$ is\n\n\\begin{equation}\n\\mathbf{B}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2}-\\mathbf{r}_{1}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}\\times \\mathrm{d}\\mathbf{r}_{2}.\n\\end{equation}\nFor the sake of simplicity, we only give the proof for the $x$ components of $\\mathbf{B}$ and $\\mathbf{P}$ as an example.
The explicit form of the $x$ component is\n\\begin{equation}\nB_{x}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\left( y_{2}-y_{1}\\right) \\mathrm{d}z_{2}\\mathbf{-}\\left( z_{2}-z_{1}\\right) \\mathrm{d}y_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}.\n\\end{equation}\nAccording to Stokes' theorem, the line integral of $B_{x}$ can be expressed as a double integral\n\\begin{eqnarray}\nB_{x} &=&\\frac{1}{4\\pi }\\iint\\nolimits_{\\mathrm{S}}[\\frac{3\\left( x_{2}-x_{1}\\right) ^{2}-\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}y_{2}\\mathrm{d}z_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( y_{1}-y_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}z_{2}\\mathrm{d}x_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( z_{1}-z_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}x_{2}\\mathrm{d}y_{2}],\n\\end{eqnarray}\nwhere $\\mathrm{S}$ represents a smooth surface spanned by the loop $\\mathrm{L}$.\n\n\\subsection{The polarization vector field}\n\nNow we turn to the quantum analogy of the Biot-Savart law. For a fixed $k_{x}$, $h_{\\mathbf{k}}$ reduces to a $1$D system $h_{k_{y}}$, and the corresponding Zak phases for the upper and lower bands are defined as\n\\begin{equation}\n\\mathcal{Z}_{\\pm }=\\frac{i}{2\\pi }\\int_{-\\pi }^{\\pi }\\left\\langle u_{\\pm }^{k_{y}}\\right\\vert \\frac{\\partial }{\\partial k_{y}}\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle \\mathrm{d}k_{y},\n\\end{equation}\nwhich is gauge-dependent. For the present expression of $\\left\\vert u_{\\pm }^{k_{y}}\\right\\rangle $, we have\n\\begin{equation}\n\\mathcal{Z}=\\mathcal{Z}_{+}=-\\mathcal{Z}_{-}=\\frac{1}{2\\pi }\\oint_{\\mathrm{L}}\\cos ^{2}\\frac{\\theta }{2}\\mathrm{d}\\varphi ,\n\\end{equation}\nwhere $\\mathrm{L}$ denotes the loop $\\mathbf{r}_{2}(k_{y})$ and\n\\begin{equation}\n\\cos \\theta =\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert },\\quad \\tan \\varphi =\\frac{y_{1}-y_{2}}{x_{1}-x_{2}}.\n\\end{equation}\nThe polarization vector field is defined as\n\\begin{equation}\n\\mathbf{P}=-\\mathbf{\\nabla }\\mathcal{Z}, \\label{P1}\n\\end{equation}\nwhere $\\mathbf{\\nabla }$ is the nabla operator\n\\begin{equation}\n\\mathbf{\\nabla }=\\frac{\\partial }{\\partial x_{1}}\\mathbf{i}+\\frac{\\partial }{\\partial y_{1}}\\mathbf{j}+\\frac{\\partial }{\\partial z_{1}}\\mathbf{k},\n\\end{equation}\nwith unit vectors $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ in $3$D auxiliary space. We note the fact that\n\n\\begin{equation}\n\\mathbf{k\\cdot \\lbrack }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}}\\times \\mathrm{d}\\left( \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right) ]=\\oint_{\\mathrm{L}}\\mathrm{d}\\varphi =2\\pi w,\n\\end{equation}\nwhere $w$ is the winding number of the integral loop $\\mathbf{r}_{2\\bot }(k_{y})=x_{2}\\mathbf{i}+y_{2}\\mathbf{j}$ around the point $\\mathbf{r}_{1\\bot }=x_{1}\\mathbf{i}+y_{1}\\mathbf{j}$.
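For instance (a simple illustration added here, not part of the original argument): if $\\mathbf{r}_{2\\bot }(k_{y})$ traces a circle of radius $R$ centered at the origin and $\\mathbf{r}_{1\\bot }$ lies inside it, $\\left\\vert \\mathbf{r}_{1\\bot }\\right\\vert < R$, the relative vector $\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }$ sweeps the full angle once and $w=1$; if $\\mathbf{r}_{1\\bot }$ lies outside the circle, the angle oscillates back and forth and $w=0$.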
Then the Zak phase can be rewritten as\n\\begin{equation}\n\\mathcal{Z}=\\frac{\\mathbf{k}}{4\\pi }\\mathbf{\\cdot }\\oint_{\\mathrm{L}}\\left[ 1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right] \\frac{\\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}}\\times \\mathrm{d}\\left( \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right) .\n\\end{equation}\nThe projection of the polarization vector field $\\mathbf{P}$ in the $x$ direction is represented as\n\\begin{equation}\nP_{x}=-\\frac{\\partial }{\\partial x_{1}}\\mathcal{Z}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\left( G\\mathrm{d}x_{2}+Q\\mathrm{d}y_{2}+R\\mathrm{d}z_{2}\\right) ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\nG &=&-\\frac{\\left( z_{1}-z_{2}\\right) \\left( x_{2}-x_{1}\\right) \\left( y_{2}-y_{1}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}} \\\\\n&&-\\left( 1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right) \\frac{2\\left( x_{2}-x_{1}\\right) (y_{2}-y_{1})}{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{4}}, \\notag\n\\end{eqnarray}\nand\n\\begin{eqnarray}\nQ &=&\\left( 1+\\frac{z_{1}-z_{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert }\\right) \\frac{\\left[ \\left( x_{2}-x_{1}\\right) ^{2}-\\left( y_{2}-y_{1}\\right) ^{2}\\right] }{\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{4}} \\notag \\\\\n&&+\\frac{\\left( z_{1}-z_{2}\\right) \\left( x_{2}-x_{1}\\right) ^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}\\left\\vert \\mathbf{r}_{2\\bot }-\\mathbf{r}_{1\\bot }\\right\\vert ^{2}}.\n\\end{eqnarray}\nBy Stokes' theorem, the line integral of $P_{x}$ can be expressed as a double integral\n\n\\begin{eqnarray}\nP_{x} &=&\\frac{1}{4\\pi }\\iint\\nolimits_{\\mathrm{S}}[\\frac{3\\left( x_{2}-x_{1}\\right) ^{2}-\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{2}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}y_{2}\\mathrm{d}z_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{2}-x_{1}\\right) \\left( y_{1}-y_{2}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}z_{2}\\mathrm{d}x_{2} \\notag \\\\\n&&-\\frac{3\\left( x_{1}-x_{2}\\right) \\left( z_{2}-z_{1}\\right) }{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{5}}\\mathrm{d}x_{2}\\mathrm{d}y_{2}],\n\\end{eqnarray}\nwhich results in\n\\begin{equation}\nP_{x}=B_{x}.\n\\end{equation}\nSimilarly, the projections of the polarization vector field $\\mathbf{P}$ and the magnetic field $\\mathbf{B}$ in the $y$ and $z$ directions can be calculated in the same way. Eventually, we come to the conclusion\n\\begin{equation}\n\\mathbf{P}=\\mathbf{B}=\\frac{1}{4\\pi }\\oint_{\\mathrm{L}}\\frac{\\mathbf{r}_{2}-\\mathbf{r}_{1}}{\\left\\vert \\mathbf{r}_{1}-\\mathbf{r}_{2}\\right\\vert ^{3}}\\times \\mathrm{d}\\mathbf{r}_{2}.\n\\end{equation}\n\n\\acknowledgments This work was supported by the National Natural Science Foundation of China (under Grant No.
11874225)..\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Ice model} \\label{sec:mathmodel}\n\n\\subsection{The full Stokes (FS) equations}\n\\label{sec:Stokes}\nWe use the FS equations in 2D with coordinates $\\mathbf x=(x, z)^T$ for modeling of the flow of an ice sheet \\cite{Hutter83}.\nThese nonlinear partial differential equations (PDEs) in the interior of the ice $\\Omega$ are given by \n\\begin{equation}\n\\begin{cases}\n \\nabla\\cdot\\mathbf{u}=0,\\\\\n - \\nabla\\cdot{\\mathbf{\\mathbb{\\sigma}}} =\\rho \\mathbf{g},\n \\end{cases}\n \\label{eq:FS}\n\\end{equation}\n where the stress tensor is ${\\mathbf{\\mathbb{\\sigma}}} = 2\\eta(\\mathbf{u})\\mathbf{\\mathbb{\\tau}}(\\mathbf{u})-p\\mathbb{I}$. The symmetric strain rate tensor is defined by\n\\begin{equation}\\label{eq:taudef}\n \\mathbf{\\mathbb{\\tau}}(\\mathbf{u})=\\frac{1}{2}(\\nabla\\mathbf{u}+\\nabla\\mathbf{u}^T)=\\left(\\begin{array}{cc}\\tau_{11}&\\tau_{12}\\\\\\tau_{12}&\\tau_{22}\\end{array}\\right),\n\\end{equation}\n$\\mathbb{I}$ is the identity matrix, and the viscosity is defined by Glen's flow law\n\\begin{equation}\\label{eq:visc}\n \\eta(\\mathbf{u})=\\frac{1}{2}\\left(\\mathcal{A}(T^\\prime)\\right)^{-\\frac{1}{n}}\\mathbf{\\mathbb{\\tau}}_e^{\\frac{1-n}{n}},\\qquad \\mathbf{\\mathbb{\\tau}}_e = \\sqrt{\\frac{1}{2}\\text{tr}(\\mathbf{\\mathbb{\\tau}}(\\mathbf{u})\\mathbf{\\mathbb{\\tau}}(\\mathbf{u}))}.\n\\end{equation}\n\nHere ${\\mathbf{u}}=(u, w)^T$ is the vector of velocities, $\\rho$ is the density of the ice, $p$ denotes the pressure, and the gravitational acceleration in the $z$-direction is denoted by ${\\bf g}$. The rate factor $\\mathcal{A}(T^\\prime)$ describes how the viscosity depends on the pressure melting point corrected temperature $T^\\prime$. For isothermal flow assumed here, the rate factor $\\mathcal{A}$ is constant. Finally, $n$ is usually taken to be 3.\n\n\n\\subsection{Boundary conditions}\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/GL} \n \\caption{A two dimensional schematic view of a marine ice sheet.}\n\\label{fig:ice}\n\\end{figure}\n\n\nAt the boundary $\\Gamma$ of the ice we define the normal outgoing vector $\\mathbf{n}$ and tangential vector $\\mathbf{t}$, see Figure~\\ref{fig:ice}.\nIn a 2D case considered here, $y$ is constant in the figure. The upper boundary is denoted by $\\Gamma_s$ and the lower boundary is $\\Gamma_b$.\nAt $\\Gamma_s$ and $\\Gamma_{bf}$, the floating part of $\\Gamma_b$, we have that \n\\begin{equation}\n{\\mathbf{\\mathbb{\\sigma}}}{\\mathbf{n}}=\\mathbf{f}_s.\n\\label{eq:bc_s}\n\\end{equation}\nThe ice is stress-free at $\\Gamma_s$, $\\mathbf{f}_s=0$, and $\\mathbf{f}_s=-p_w\\mathbf{n}$ at the ice\/ocean interface $\\Gamma_{bf}$ where $p_w$ is the\nwater pressure. Let\n\\[\n \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\mathbf{t}}=\\mathbf{t}\\cdot\\mathbf{\\mathbb{\\sigma}}\\mathbf{n},\\; \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}=\\mathbf{n}\\cdot\\mathbf{\\mathbb{\\sigma}}\\mathbf{n},\\; u_\\mathbf{t}=\\mathbf{t}\\cdot\\mathbf{u}.\n\\]\nThen for the slip boundary $\\Gamma_{bg}$, the part of $\\Gamma_b$ where the ice is grounded, we have a friction law for the sliding ice\n \\begin{equation}\n {\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\mathbf{t}} + \\beta(\\mathbf{u},\\mathbf x) u_\\mathbf{t}=0,\\quad u_\\mathbf{n}=\\mathbf{n}\\cdot\\mathbf{u}=0, \\quad -{\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}\\geq p_w. 
\\label{eq:BCGI}\n \\end{equation} \nThe type of friction law is determined by the friction coefficient $\\beta$.\nThere is a balance between ${\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}$ and $p_w$ at $\\Gamma_{bf}$ and the contact is friction-free, $\\beta=0$,\n \\begin{equation}\n {\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\mathbf{t}} = 0, \\qquad -{\\mathbf{\\mathbb{\\sigma}}}_{\\mathbf{n}\\bn}= p_w.\n \\label{eq:BCFI}\n \\end{equation}\nThe GL is located where the boundary condition switches from $\\beta>0$ and $u_\\mathbf{n}=0$ on $\\Gamma_{bg}$ to $\\beta=0$ and a free $u_\\mathbf{n}$ on $\\Gamma_{bf}$. In 2D,\nthe GL is the point $(x_{GL}, z_{GL})$ between $\\Gamma_{bg}$ and $\\Gamma_{bf}$. \n\nWith the ocean surface at $z=0$, $p_w=-\\rho_w g z_b$ where $\\rho_w$ is the density of sea water, $z_b$ is the $z$-coordinate of $\\Gamma_b$, and $g$ is the gravitational acceleration. \n\n\\subsection{The free surface equations}\n\\label{sec:height}\n\nThe boundaries $\\Gamma_s$ and $\\Gamma_b$ are time-dependent and move according to two free surface equations. The boundary $\\Gamma_{bg}$ follows the fixed bedrock with coordinates $(x, b(x))$.\n\nThe $z$-coordinate of the free surface position $z_s(x,t)$ at $\\Gamma_s$ (see Fig. \\ref{fig:ice}) is the solution of an advection equation\n\\begin{equation}\n \\frac{\\partial z_s}{\\partial t}+u_s \\frac{\\partial z_s}{\\partial x}-w_s=a_s,\n\\label{eq:freeSurface}\n\\end{equation}\nwhere $a_s$ denotes the net surface accumulation\/ablation of ice and ${\\mathbf{u}}_s=(u_s, w_s)^T$ the velocity at the free surface in contact with the atmosphere. Similarly, the $z$-coordinate for the lower surface $z_b$ of the floating ice at $\\Gamma_{bf}$ satisfies\n\\begin{equation}\n \\frac{\\partial z_b}{\\partial t}+u_b \\frac{\\partial z_b}{\\partial x}-w_b=a_b,\n\\label{eq:lowerSurface}\n\\end{equation}\nwhere $a_b$ is the net accumulation\/ablation at the lower surface and ${\\mathbf{u}}_b=(u_b, w_b)^T$ the velocity of the ice at $\\Gamma_{bf}$. On $\\Gamma_{bg}$, $z_b=b(x)$.\n\nThe thickness of the ice is denoted by $H=z_s-z_b$ and depends on $(x, t)$. \n\n\\subsection{The solution close to the grounding line}\n\\label{sec:GLsol}\n\nThe 2D solution of the FS equations in Eq. \\eqref{eq:FS} with a constant viscosity, $n=1$ in Eq. \\eqref{eq:visc}, is expanded in small parameters in \\cite{Schoof11}. The solutions in different regions around the GL are connected by matched asymptotics. Upstream of the GL the ice is at the bedrock, $x<x_{GL}$. If $x>x_{GL}$ then $\\chi=0$ and Eq. \\eqref{eq:flot} holds true on $\\Gamma_{bf}$.\nIn numerical experiments with the linear FS $(n=1)$ in \\cite{NoWi08}, $\\chi(x, z_b)$ in the original variables varies linearly in $x$ for $x<x_{GL}$.\n\nIf the distance between the lower surface and the bedrock, $d=z_b(x,t)-b(x)$, satisfies $d>0$ on $\\Gamma_{bf}$, then the ice is not in contact with the bedrock and $\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w=0$, and if $\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w<0$ on $\\Gamma_{bg}$ then the ice and the bedrock are in contact and $d=0$. Hence, the complementarity relation in the vertical direction is\n \\begin{equation}\\label{eq:complv}\n\\begin{array}{ll}\n z_b(x, t)-b(x)\\ge 0,\\; \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w\\le 0,\\\\ \n (z_b(x, t)-b(x))(\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w)=0\\;\\textrm{on}\\;\\Gamma_b.\n\\end{array}\n \\end{equation}\nThe contact friction law is such that $\\beta>0$ when $x<x_{GL}$ and $\\beta=0$ when $x>x_{GL}$.
The complementarity relation along the slope at $x$ is then the non-negativity of $d$ and \n \\begin{equation}\\label{eq:compls}\n \\beta\\ge 0,\\; \\beta(x, t)(z_b(x, t)-b(x))=0\\;\\textrm{on}\\;\\Gamma_b.\n \\end{equation}\nIn particular, these relations are valid at the nodes $x=x_j$, $j=0,1,\\dots,N$.\n \n\nThe complementarity condition also holds for $u_\\mathbf{n}$ and $\\sigma_{\\mathbf{n}\\bn}$ such that\n \\begin{equation}\\label{eq:complu}\n\\begin{array}{ll}\n \\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w\\le 0,\\\\ \n u_\\mathbf{n}(\\mathbf{\\mathbb{\\sigma}}_{\\mathbf{n}\\bn}+p_w)=0\\;\\textrm{on}\\;\\Gamma_b,\n\\end{array}\n \\end{equation}\nwithout any sign constraint on $u_\\mathbf{n}$ except for the retreat phase when the ice leaves the ground and $u_\\mathbf{n}<0$. \n\n\nSimilar implementations for contact problems using Nitsche's method are found in \\cite{chouly2017overview,chouly2017nitsche}, where the unknowns in the PDEs are the displacement fields\ninstead of the velocity in Eq. \\eqref{eq:FS}.\nAnalysis in \\cite{chouly2017overview} suggests that Nitsche's method for the contact problem can provide a stable numerical solution with an optimal convergence rate.\n\nThe nonlinear equations for the nodal values of $\\mathbf{u}$ and $p$ are solved by Newton iterations. The system of linear equations in every Newton iteration is solved iteratively by using the\nGeneralised Conjugate Residual (GCR) method in Elmer\/ICE. The condition on $d_j$ in a node $x_j$ is used for a so called grounded mask, which is computed at each timestep and not changed during the nonlinear iterations.\n\n\n\n\\subsection{Discretization of the advection equations}\\label{sec:updlower}\n\nThe advection equations for the moving ice boundary in Eq. \\eqref{eq:freeSurface} and \\eqref{eq:lowerSurface} are discretized in time by a finite difference method and in\nspace by FEM with linear Lagrange elements for $z_s$ and $z_b$. A stabilization term is added, making the spatial discretization behave \nlike an upwind scheme in the direction of the velocity as implemented in Elmer\/ICE.\n\n\nThe advection equations Eq. \\eqref{eq:freeSurface} and Eq. \\eqref{eq:lowerSurface} are integrated in time by a semi-implicit method of first order accuracy. \nLet $c=s$ or $b$. Then the solution is advanced\nfrom time $t^n$ to $t^{n+1}=t^n+\\Delta t$ with the timestep $\\Delta t$ by \n\\begin{equation}\\label{eq:zint}\n z_c^{n+1}=z_c^n+\\Delta t(a_c^n-u_c^n \\frac{\\partial{z_{c}^{n+1}}}{\\partial x}+w_c^n).\n\\end{equation}\nThe spatial derivative of $z_c$ is approximated by FEM. A system of linear equations is solved at $t^{n+1}$ for $z_c^{n+1}$. This time discretization and its properties are \ndiscussed in \\cite{cheng2017accurate}.\n\nA stability problem in $z_b$ is encountered in the boundary condition at $\\Gamma_{bf}$ in \\cite{Durand09b}. 
\nIt is solved by expressing $z_b$ in terms of $p_w$ at $\\Gamma_{bf}$ with a damping term in \\cite{Durand09b}.\nAn alternative interpretation of the idea in \\cite{Durand09b} and an explanation follow below.\n\n\nThe relation between $u_\\mathbf{n}$ and $u_\\mathbf{t}$ at $\\Gamma_{bf}$ and $\\mathbf{u}_b=\\mathbf{u}(x, z_b(x))$ is\n\\begin{equation}\n \\mathbf{u}_b=\\left(\\begin{array}{c} u_b \\\\ w_b\\end{array}\\right)=\\left(\\begin{array}{c} z_{bx} \\\\ -1\\end{array}\\right)\\frac{u_\\mathbf{n}}{\\sqrt{1+z_{bx}^2}}\n +\\left(\\begin{array}{c} 1 \\\\ z_{bx}\\end{array}\\right)\\frac{u_\\mathbf{t}}{\\sqrt{1+z_{bx}^2}},\n\\label{eq:udef}\n\\end{equation}\nwhere $z_{bx}$ denotes $\\partial z_b\/\\partial x$. Insert $u_b$ and $w_b$ from Eq. \\eqref{eq:udef} into Eq. \\eqref{eq:lowerSurface} to obtain\n\\begin{equation}\n \\frac{\\partial z_b}{\\partial t}=a_b-u_\\mathbf{n}\\sqrt{1+z_{bx}^2}.\n\\label{eq:zbeq2}\n\\end{equation}\nInstead of discretizing Eq. \\eqref{eq:zbeq2} explicitly at $t^n$ with $u_\\mathbf{n}^{n-1}$ to determine $p_w^n$, the base coordinate is updated implicitly\n\\begin{equation}\n z_{b}^n=z_{b}^{n-1}+\\Delta t\\left(a_b^n-u_\\mathbf{n}^n\\sqrt{1+z_{bx}^2}\\right)\n\\label{eq:zbimpl}\n\\end{equation}\nin the solution of Eq. \\eqref{eq:FSweakform}.\n\nAssume that $z_{bx}$ is small.\nThe timestep restriction in Eq. \\eqref{eq:zbimpl} is estimated by considering a 2D slab of the floating ice of width $\\Delta x$ and thickness $H$. Newton's law of motion yields\n\\[\n M \\dot{u}_\\mathbf{n}= M g-\\Delta x p_w,\n\\]\nwhere $M=\\Delta x(z_s-z_b)\\rho$ is the mass of the slab. Divide by $M$, integrate in time for $u_\\mathbf{n}(t^m)$, let $m=n$ or $n-1$, and approximate the integral by the trapezoidal rule to obtain\n\\[\n\\begin{split}\n u_\\mathbf{n}(t^m)&=\\displaystyle{\\int_0^{t^m} g+\\frac{g\\rho_w}{\\rho}\\frac{z_b}{z_s-z_b}\\,\\textrm{d}s} \\\\\n &\\approx \\displaystyle{gt^m+\\frac{g\\rho_w}{\\rho}\\sum_{i=0}^m\\alpha_i\\frac{z_b^i}{z_s^i-z_b^i}\\Delta t,}\\\\\n\\end{split}\n\\] \n\\[\n \\alpha_i=0.5,\\; i=0, m,\\quad \\alpha_i=1,\\; i=1,\\ldots,m-1. \n\\]\nThen insert $u_\\mathbf{n}^m$ into Eq. \\eqref{eq:zbimpl}. All terms in $u_\\mathbf{n}^m$ from timesteps $i<m$ are known.\n\nWhere the ice is floating, $z_b(x) > b(x)$, as the blue line in Fig. \\ref{fig:GL}, the boundary conditions are given by Eq. \\eqref{eq:BCFI}, and where the ice is in contact with the bedrock, as the red line in Fig. \\ref{fig:GL}, the boundary conditions are given by Eq. \\eqref{eq:BCGI}. \nHowever, there is another case as shown in Fig. \\ref{fig:GL2} when the net force at $x_i$ is pointing inward, namely $\\sigma_{\\mathbf{n}\\bn}(x_i)+p_w(x_i)>0$.\nThen, the floating boundary condition Eq. \\eqref{eq:BCFI} should be imposed only up to the node $x_{i-1}$.\nThis can happen at some point due to the low spatial and temporal resolutions, but the node $x_i$ will move upward as long as $\\mathbf{u}\\cdot\\mathbf{n}<0$, or the net force switches sign and the condition transforms into the case in Fig. \\ref{fig:GL} when $\\sigma_{\\mathbf{n}\\bn}(x_i)+p_w(x_i)<0$.\nDenote the situation in Fig. \\ref{fig:GL} by case {\\romannumeral 1}, and the one in Fig.
\\ref{fig:GL2} by case {\\romannumeral 2}.\nWe call a node `grounded' when it is in contact with the bedrock with the net force from the ice pointing outward ($\\sigma_{\\mathbf{n}\\bn}+p_w<0$), and `floating' when the net force is pointing inward ($\\sigma_{\\mathbf{n}\\bn}+p_w\\geq0$).\nThe element which contains both grounded and floating nodes is called the GL element; the grounded node in it is called the last grounded node and the floating one is called the first floating node.\n\nIn coarse meshes, the true position of the GL is generally not in one of the nodes, but usually between the last grounded and the first floating nodes. \nInstead of refining the mesh around the GL, which would lead to very small time steps for stability reasons, we will here introduce a subgrid model for the GL element.\n\nWe let $\\chi(x)=\\sigma_{\\mathbf{n}\\bn}(x)+p_w(x)$ and assume that it is linear as in Eq. \\eqref{eq:chidef} to determine the position of the GL, $x_{GL}$, in the GL element. \nIn case {\\romannumeral 2}, the GL is located between $x_{i-1}$ and $x_i$ even though the whole element $[x_{i-1},x_i]$ is geometrically grounded.\nThe equation $\\chi(x_{GL})=0$ is solved by linear interpolation between $\\chi(x_{i-1})<0$ and $\\chi(x_i)>0$, yielding a unique solution satisfying $x_{i-1}<x_{GL}<x_i$.\nIn case {\\romannumeral 1}, the floating boundary condition Eq. \\eqref{eq:BCFI} gives $\\chi=0$ on the floating part, so the water pressure evaluated at the bedrock, $p_b(x)=-\\rho_w g\\, b(x)$, is used instead and $\\tilde{\\chi}(x)=\\sigma_{\\mathbf{n}\\bn}(x)+p_b(x)$. For $x>x_i$, we have $b(x)<z_b(x)$ and hence $p_b(x)>p_w(x)$.\nTherefore, $\\tilde{\\chi}(x_{i+1})>\\chi(x_{i+1})=0$ and $\\tilde{\\chi}(x_i)=\\chi(x_i)<0$.\nThen, a linear interpolation between $\\tilde{\\chi}(x_i)$ and $\\tilde{\\chi}(x_{i+1})$ guarantees a unique solution of $\\tilde{\\chi}(x_{GL})=0$ in the GL element $[x_i,x_{i+1}]$, see Fig. \\ref{fig:GL}.\nIn case {\\romannumeral 2}, $p_b$ can also be used since $p_b(x)=p_w(x)$ as long as the element is on the bedrock.\n\nConceptually, the linear interpolation of the function $\\tilde{\\chi}(x)$ can be considered separately by looking at the two linear functions $\\sigma_{\\mathbf{n}\\bn}(x)$ and $p_b(x)$. \nAs the GL always rests on the bedrock, $p_b(x_{GL})=p_w(x_{GL})$ is actually an exact representation of the water pressure imposed on the ice at the GL, although geometrically $z_b(x_{GL})$ may not coincide with $b(x_{GL})$, especially on coarse meshes.\nThis also implies that the interpolated normal stress $\\sigma_{\\mathbf{n}\\bn}(x_{GL},z_b(x_{GL}))$ is a first order approximation of the normal stress at the exact GL position $(x_{GL},b(x_{GL}))$.\n\nThis correction is not necessary when the GL is advancing since the implicit treatment of the bottom surface is equivalent to additional water pressure at the stress boundary as discussed in Sect. \\ref{sec:updlower}.\n\n\n\nAfter the GL position is determined, the domains $\\Gamma_{bg}$ and $\\Gamma_{bf}$ are separated at $x_{GL}$ as in Eq. \\eqref{eq:Nitscheint} and the integrals are calculated with a high-order integration scheme as in \\cite{Seroussi14} to achieve a better resolution within the element shown in Figures \\ref{fig:GL} and \\ref{fig:GL2}.\nFor a smoother transition of $\\beta$ at the GL, the slip coefficient is multiplied by 1\/2 on the whole GL element before integrating using the high-order scheme.\n\n\n\nThe penalty term from Nitsche's method restricts the motion of the element in the normal direction. It should only be imposed on elements which are fully on the ground.\nOn the contrary, in case {\\romannumeral 1}, the GL element $[x_i,x_{i+1}]$ is not in contact with the bedrock as in Fig.
\\ref{fig:GL}, so only the floating boundary condition should be used on the element $[x_i,x_{i+1}]$.\nAdditionally, the implicit representation of the bottom surface in Eq. \\eqref{eq:zbimpl} also implies that the case {\\romannumeral 2}\\, with retreating GL should be merged to case {\\romannumeral 1}\\, since the surface is leaving the bedrock and the normal velocity should not be forced to zero.\nTo summarize, Nitsche's penalty term should be imposed on all the fully grounded elements and partially on the GL element in the advance phase.\n\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth,page=1]{.\/Figures\/subgrid} \n \\caption{Schematic figure of Grounding Line in case {\\romannumeral 1}.\n Upper panel: the last grounded and first floating nodes as defined in Elmer\/ICE. \n Lower panel: linear interpolation to compute a more accurate position of the Grounding Line.}\n \\label{fig:GL}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth,page=2]{.\/Figures\/subgrid} \n \\caption{Schematic figure of Grounding Line in case {\\romannumeral 2}.\n Upper panel: the last grounded and first floating nodes as defined in Elmer\/ICE. \n Lower panel: linear interpolation to compute a more accurate position of the Grounding Line.}\n \\label{fig:GL2}\n\\end{figure}\n\n\nEquations (\\ref{eq:FS}), (\\ref{eq:freeSurface}), and (\\ref{eq:lowerSurface}) form a system of coupled nonlinear equations. They are solved in the same manner as in Elmer\/ICE v.8.3.\nThe $x_{GL}$ position is determined dynamically within every nonlinear iteration when solving the FS equations and the high order integrations are based on the current $x_{GL}$.\nThe nonlinear FS is solved with fixed-point iterations to $10^{-5}$ relative error with a limit of maximal 25 nonlinear iterations and the grounded condition is set if the distance between of the bottom surface and the bedrock is smaller than $10^{-3}$~m.\n\n\n\n\\section{Results} \\label{sec:results}\n\nThe numerical experiments follow the MISMIP benchmark \\cite{MISMIP} and comparison is made with the results in \\cite{gagliardini2016impact}.\nUsing the experiment MISMIP 3a, the setups are exactly the same as in the advancing and retreating simulations in \\cite{gagliardini2016impact}.\nThe experiments are run with spatial resolutions of $\\Delta x=4$~km, 2~km and 1~km with 20 vertical extruded layers.\nThe timestep is $\\Delta t=0.125$~year for all the three resolutions to eliminate time discretization errors when comparing different spatial resolutions.\n\n\nThe dependence on $\\gamma_0$ for the retreating ice is shown in Fig. \\ref{fig:gammas} with $\\gamma_0$ between $10^4$ and $10^9$.\nThe estimated GL positions do not vary with different choices of $\\gamma_0$ from $10^5$ to $10^8$ which suggests a suitable range of $\\gamma_0$.\nIf $\\gamma_0$ is too small ($\\gamma_0\\ll10^4$), oscillations appear in the estimated GL positions. 
\nIf $\\gamma_0$ is too large ($\\gamma_0\\gg10^8$), then more nonlinear iterations are needed for each time step.\nThe same dependency of $\\gamma_0$ is observed for the advance experiments and for different mesh resolutions as well.\nFor the remaining experiments, we fix $\\gamma_0=10^6$.\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.4\\textwidth]{.\/Figures\/MISMIP_3_gammas.pdf} \n \\caption{The MISMIP 3a retreat experiment with $\\Delta x=1000$~m for different choices of $\\gamma_0$ in\n the time interval $[0,10000]$ years.}\n\\label{fig:gammas}\n\\end{figure}\n\n\nThe GL position during 10000 years in the advance and retreat phases are displayed in Fig. \\ref{fig:MISMIP3} for different mesh sizes.\nThe range of the results from \\cite{gagliardini2016impact} with mesh resolutions $\\Delta x=25$ and $50$~m are shown as background shaded regions with colors purple and pink.\nWe achieve similar GL migration results both for the advance and retreat experiments with at least 20 times larger mesh sizes.\n\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/MISMIP_3.pdf} \n \\caption{The MISMIP 3a experiments for the GL position when $t\\in[0, 10000]$ with $\\Delta x=4000, 2000$ and $1000$~m for the advance (solid) and retreat (dashed) phases. \n The shaded regions indicate the range of the results in \\cite{gagliardini2016impact} with $\\Delta x=50$~m in red and $\\Delta x=25$~m in blue. }\n\\label{fig:MISMIP3}\n\\end{figure}\n\n\nWe observed oscillations at the top surface near the GL in all the experiments as expected from \\cite{Durand09b, Schoof11}.\nA zoom-in plot of the surface elevation with $\\Delta x=1$~km at $t=10000$ years is shown to the left in Fig. \\ref{fig:surfOsc}, where the red dashed line indicates the estimated GL position. \nObviously, the estimated GL position does not coincide with any nodes even at the steady state.\n\\begin{figure}[htbp]\n\\center\n \\includegraphics[width=0.45\\textwidth]{.\/Figures\/retreatDetails} \n \\caption{Details of the solutions for the retreat experiment with $\\Delta x=1$~km after 10000 years. \n The solid dots represent the nodes of the elements and the vertical, red, dashed lines indicate the GL position.\n \\emph{Left panel}: The oscillations at top surface near GL.\n \\emph{Right panel}: The flotation criterion is evaluated by $H_{bw}\/H$. The ratio between $\\rho\/\\rho_w$ is drawn in a horizontal, purple, dash-dotted line. \n }\n\\label{fig:surfOsc}\n\\end{figure}\n\n\nThe ratio between the thickness below sea level $H_{bw}$ and the ice thickness $H$ is shown in Fig. \\ref{fig:surfOsc}.\nThe horizontal, purple, dash-dotted line indicates the ratio of $\\rho\/\\rho_w$ and the estimated GL is located at the red, dashed line.\nThis result confirms that the hydrostatic assumption $H\\rho=H_{bw}\\rho_w$ is not valid in the FS equations for $x>x_{GL}$ close to the GL and at the GL position, cf. \\cite{Durand09b, Schoof11}. For $x