diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjdgx" "b/data_all_eng_slimpj/shuffled/split2/finalzzjdgx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjdgx" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn recent years, there has been significant focus on explaining deep networks~ \\cite{ribeiro2016should,ScottNIPS2017,ElenbergNIPS17,netdissect2017,Zhou_2018_ECCV,ZhangICNN17,DavidSelf18}. Explainability is important for humans to trust the deep learning model, especially in crucial decision-making scenarios.\nIn the computer vision domain, one of the most important explanation techniques is the heatmap approach \\cite{MatDeconv,SimonyanVZ13,Gradcam17,zhang16excitationBP}, which focuses on generating heatmaps that highlight parts of the input image that are most important to the decision of the deep networks on a particular classification target.\nThe heatmaps can be used to diagnose deep models to understand whether the models utilize the right contents to make the classification. This diagnosis is important when deep networks malfunction in high-stake cases, e.g. autonomous driving. In medical imaging and other image domains that humans currently lack understanding, heatmaps can also be used to help humans gain further insights on which part of the images are important.\n\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=0.95\\linewidth]{IntroAB.pdf}\n\\end{center}\n\\vskip -0.15in\n\\caption{(a) Examples generated by I-GOS in the deletion and insertion tasks using VGG19 as the baseline model. Heatmaps can be verified by testing the CNN on (multiple) deletion images (column 3), which blur the most highlighted areas on the heatmap, and (multiple) insertion images (column 4), which blur areas not highlighted on the heatmap. \n(b) The first two rows show that Integrated Gradients, Mask\nmay fail on these evaluations. In the third row, I-GOS performs significiantly better since the CNN no longer classifies the image to the same category with a small deleted area (column 3), and classifies the image correctly with only few pixels revealed (column 4), showing the correlation between the I-GOS heatmap and CNN decision making. For all approaches the same amount of pixels (6.4\\% in this figure) were blurred\/revealed. (Best viewed in color)} \n\\label{fig:intr}\n\\end{figure*}\n\n\n\nSome heatmap approaches achieve good visual qualities for human understanding, such as several one-step backpropagation-based visualizations including Guided Backpropagation (GBP) \\cite{JTguided2015} and the deconvolutional network (DeconvNet) \\cite{MatDeconv}. \nThese approaches utilize the gradient or variants of the gradient and backpropagate them back to the input image, in order to decide which pixels are more relevant to the change of the deep network prediction.\nHowever, whether they are actually correlated to the decision-making of the network is not that clear~\\cite{2018Nie}.\n\\cite{2018Nie} proves that GBP and DeconvNet are essentially doing (partial) image recovery, and thus generate more human-interpretable visualizations that highlight object boundaries, which do not necessarily represent what the model has truly learned.\n\n\n\n\n\n\nAn issue with these one-step approaches is that they only reflect infinitesimal changes of the prediction of a deep network. 
In the highly nonlinear function estimated by the deep network, such infinitesimal changes are not necessarily reflective of changes large enough to alter the decision of the model. \\cite{2018RISE} proposed evaluation metrics based on masking the image with heatmaps and verifying whether the masking will indeed change deep network predictions. Ideally, if the highlighted regions for a category are removed from the image, the deep network should no longer predict that category. This is measured by the \\textit{deletion} metric. On the other hand, the network should predict a category only using the regions highlighted by the heatmap, which is measured by the \\textit{insertion} metric (Fig.~\\ref{fig:intr}). \n\n\n\n\nIf these are the goals of a heatmap, a natural idea would be to directly optimize them. \nThe mask approach proposed in \\cite{ClassicMask} generates heatmaps by solving an optimization problem, which aims to find the smallest and smoothest area that maximally decreases the output of a neural network, directly optimizing the \\textit{deletion} metric. \nIt can generate very good heatmaps, but usually takes a long time to converge, and sometimes the optimization can be stuck in a bad local optimum due to the strong nonconvexity of the solution space (Fig.~\\ref{fig:intr}). \n\n\n\n\n\n\nIn this paper, we propose a novel visualization approach I-GOS (Integrated-Gradients Optimized Saliency), which utilizes an idea called integrated gradients to improve the mask optimization approach in~\\cite{ClassicMask}. The integrated gradients approach explicitly finds a \\textit{baseline} image with very low prediction confidence -- a completely grey or highly blurred image -- and then computes a straight line between the original image and this baseline. The gradients at images along this line are then integrated\n\\cite{IntegratedGradient}. \nThe idea is that the direction provided by the integrated gradients may lead better towards the global optimum than the normal gradient, which tends to lead to local optima. However, the original integrated gradients~\\cite{IntegratedGradient} paper is a one-step visualization approach and generates diffuse heatmaps that are difficult for humans to understand (Fig.~\\ref{fig:intr}). In this paper, we replace the gradient in mask optimization with the integrated gradients. Due to the high cost of computing the integrated gradients, we employ a line-search based gradient-projection method to maximally utilize each computation of the integrated gradients. We also utilize some empirical perturbation strategies to avoid the creation of adversarial masks. 
In the end, \nour approach generates better heatmaps (Fig.~\\ref{fig:intr}) and utilizes less computational time than the original mask optimization, as line search is more efficient in finding appropriate step sizes, allowing significantly less iterations to be used.\nWe highlight our contributions as follows:\n\\begin{itemize}\n\\item[(1)]{We developed a novel heatmap visualization approach I-GOS, which optimizes a mask using the integrated gradients as descent steps.}\n\\item[(2)]{Through regularization and perturbation we better avoided generating adversarial masks at higher resolutions, enabling more detailed heatmaps that are more correlated with the decision-making of the model.}\n\\item[(3)]{Extensive evaluations show that the proposed approach performs better than the state-of-the-art approaches, especially in the \\textit{insertion} and \\textit{deletion} metrics.}\n\\end{itemize}\n\n \n\n\n\n\n\n\n\\section{Related Work}\nThere are several different types of the visualization techniques for generating heatmaps for a deep network. We classify them into one-step backpropagation-based approaches \\cite{MatDeconv,SimonyanVZ13,JTguided2015,InputGradient,IntegratedGradient,LRP15,zhang16excitationBP,Gradcam17}, and perturbation-based approaches, e.g., \\cite{Occlude15,Dabkowski2017,ClassicMask,2018RISE}.\n\nThe basic idea of one-step backpropagation-based visualizations is to backpropagate the output of a deep neural network back to the input space using the gradient or its variants. \nDeconvNet \\cite{MatDeconv}, Saliency Maps \\cite{SimonyanVZ13}, and GBP \\cite{JTguided2015} are similar approaches, with the difference among them in the way they deal with the ReLU layer. \nLRP \\cite{LRP15} and DeepLIFT \\cite{InputGradient} compute the contributions of each input feature to the prediction.\nExcitation BP \\cite{zhang16excitationBP} passes along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. GradCAM \\cite{Gradcam17} uses the gradients of a target concept, flowing only into the final convolutional layer to produce a coarse localization map. \\cite{unifyBP} analyzes various backpropagation-based methods, and provides a unified view to explore the connections among them.\n\nPerturbation-based methods first perturb parts of the input, and then run a forward pass to see which ones are most important to preserve the final decision. The earliest approach \\cite{Occlude15} utilized a grey patch to occlude part of the image. This approach is direct but very slow, usually taking hours for a single image \\cite{unifyBP}. An improvement is to introduce a mask, and solve for the optimal mask as an optimization problem \\cite{Dabkowski2017,ClassicMask}. \\cite{Dabkowski2017} develop a trainable masking model that can produce the masks in a single forward pass. However, it is difficult to train a mask model, and different models need to be trained for different networks. \\cite{ClassicMask} directly solves the optimization, and find the mask iteratively. Instead of only occluding one patch of the image, RISE \\cite{2018RISE} generates thousands of randomized input masks simultaneously, and averages them by their output scores. However, it consumes significant time and GPU memory\n\n\nAnother seemingly related but different domain is the saliency map from human fixation \\cite{eyefix}. 
Fixation Prediction \\cite{Kruthiventi_2016_CVPR,Kummerer_2017_ICCV} aims to identify the fixation points that human viewers would focus on at first glance of a given image, usually by training a network to predict those fixation points. This is different from deep explanation because deep models may use mechanisms completely different from humans to classify images; hence, human fixations should not be used to train or evaluate heatmap models.\n\n\n\n\n\n\n\n\n\\section{Model Formulation}\n\n\n\n\n\n\\subsection{Gradient and Mask Optimization}\nThe gradient and its variants are often utilized in visualization tools to demonstrate the importance of each dimension of the input. The motivation comes from the linearization of the model. Suppose a black-box deep network $f$ predicts a score $f_c(I)$ on class $c$ (usually the logits of a class before the softmax layer) from an image $I$. Assuming $f$ is smooth at the current image $I_0$, a local approximation can be obtained using the first-order Taylor expansion:\n{\\small\n\\begin{align}\nf_c(I) \\approx f_c(I_0) + \\langle \\nabla f_c(I_0), I-I_0 \\rangle,\n\\end{align}\n}The gradient $\\nabla f_c(I_0)$ is indicative of the local change that can be made to $f_c(I_0)$ if a small perturbation is added to $I_0$, and hence can be visualized as an indication of salient image regions to provide a local explanation for image $I_0$ \\cite{SimonyanVZ13}.\nIn \\cite{InputGradient}, the heatmap is computed by multiplying the gradient feature-wise with the input itself, i.e., $\\nabla f_c(I_0) \\odot I_0$, to improve the sharpness of heatmaps.\n\n\nHowever, the gradient only illustrates the infinitesimal change of the function $f_c(I)$ at $I_0$, \nwhich is not necessarily indicative of the salient regions that lead to a significant change of $f_c(I)$, especially when the function is highly nonlinear.\nWhat we would expect is that the heatmaps indicate the areas that would really change the classification result significantly.\nIn \\cite{ClassicMask}, a perturbation based approach is proposed which introduces a mask $M$ as the heatmap to perturb the input $I_0$.\n$M$ is optimized by solving the following objective function:\n{\\small\n\\begin{align}\n&\\argmin_M~ F_c(I_0, M) = f_c\\big(\\Phi(I_0, M)\\big) + g(M), \\notag\\\\\n&\\text{where~~} \n g(M) = \\lambda_1 ||{\\bf 1}-M||_1 +\\lambda_2 \\text{TV}(M), \\label{eq:classicMask}\\\\\n&~~~~~~~~\\Phi(I_0, M) = I_0 \\odot M + \\tilde{I}_0 \\odot ({\\bf 1}-M), ~~~~~{\\bf 0} \\leq M \\leq {\\bf 1}, \\notag\n\\end{align}\n}In (\\ref{eq:classicMask}), $M$ is a matrix which has the same shape as the input image $I_0$ and whose elements are all in $[0,1]$; $\\tilde{I}_0$ is a baseline image with the same shape as $I_0$, which should have a low score on the class $c$, \n$f_c\\big(\\tilde{I}_0\\big) \\approx \\min_I f_c(I)$, \nand is in practice either a constant image, random noise, or a highly blurred version of $I_0$. This optimization seeks to find a deletion mask that significantly decreases the output score\n$f_c\\big(\\Phi(I_0, M)\\big)$, i.e., $f_c\\big(I_0 \\odot M + \\tilde{I}_0 \\odot ({\\bf 1}-M)\\big) \\ll f_c(I_0)$ under the regularization of $g(M)$. 
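\n\nFor concreteness, a minimal PyTorch-style sketch of the objective in (\\ref{eq:classicMask}) is given below; the helper names and the regularization weights are illustrative assumptions rather than the exact implementation of \\cite{ClassicMask} (the two terms of $g(M)$ are detailed next):\n
\\begin{verbatim}\n
import torch\n
\n
def tv_norm(M):\n
    # Anisotropic total-variation norm of a (1, H, W) mask;\n
    # the exact TV variant used in practice may differ.\n
    dh = (M[:, 1:, :] - M[:, :-1, :]).abs().sum()\n
    dw = (M[:, :, 1:] - M[:, :, :-1]).abs().sum()\n
    return dh + dw\n
\n
def mask_objective(f_c, I0, I0_base, M, lam1=1.0, lam2=20.0):\n
    # F_c(I0, M) = f_c(Phi(I0, M)) + lam1*||1-M||_1 + lam2*TV(M),\n
    # where f_c returns the scalar class score and I0_base is the\n
    # baseline image (e.g., a highly blurred copy of I0).\n
    perturbed = I0 * M + I0_base * (1.0 - M)  # Phi(I0, M)\n
    g = lam1 * (1.0 - M).abs().sum() + lam2 * tv_norm(M)\n
    return f_c(perturbed) + g\n
\\end{verbatim}\n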
$g(M)$ contains two regularization terms, with the first term penalizing the magnitude of ${\\bf 1}-M$, and the second term a total-variation (TV) norm \\cite{ClassicMask} that makes $M$ more piecewise-smooth.\n\n\n\n\n\n\n\n\n\n\nAlthough this approach of optimizing a mask performs significantly better than the gradient method, there are inevitable drawbacks when using a traditional first-order algorithm to solve the optimization. \nFirst, it is slow, usually taking hundreds of iterations to obtain the heatmap for each image. \nSecond, since the model $f_c$ is highly nonlinear in most cases, optimizing (\\ref{eq:classicMask}) may only achieve a local optimum, with no guarantee that it indicates the right direction for a significant change related to the output class. \nFig. \\ref{fig:intr} and Fig. \\ref{fig:compare1} show some heatmaps generated by the mask approach.\n\n\n\n\\subsection{Integrated Gradients}\n\n\n\nNote that the problem of finding the mask is not a conventional non-convex optimization problem. For $F_c(I_0,M) = f_c\\big(\\Phi(I_0,M)\\big) + g(M)$, we (approximately) know the global minimum (or, at least a reasonably small value) of $f_c\\big(\\Phi(I_0, M)\\big)$ at a baseline image $\\tilde{I}_0$, which corresponds to $M = \\mathbf{0}$.\nThe integrated gradients \\cite{IntegratedGradient} consider the straight-line path from the baseline $\\tilde{I}_0$ to the input $I_0$. Instead of evaluating the gradient at the provided input $I_0$ only, the integrated gradients are obtained by accumulating all the gradients along the path:\n{\\small\n\\begin{align}\nIG_i(I_0) = (I_0^i-\\tilde{I}_0^i) \\cdot \\int_{\\alpha=0}^{1} \\frac{\\partial f_c\\big(\\tilde{I}_0 + \\alpha(I_0-\\tilde{I}_0)\\big)}{\\partial I_0^i} d\\alpha ,\n\\label{eq:IG}\n\\end{align}\n}where $IG(I_0) = \\nabla_{I_0}^{IG}f_c(I_0)$ is the integrated gradients of $f_c$ at $I_0$; $i$ represents the $i$-th pixel.\n\n\n\nIn practice, the integral in (\\ref{eq:IG}) is approximated via a summation.\nWe sum the gradients at points occurring at sufficiently small intervals along the straight-line path from the input $M$ to a baseline $\\tilde{M}=\\mathbf{0}$:\n{\\small\n\\begin{align}\n\\nabla^{IG} f_c(M) =\\frac{1}{S} \\sum_{s=1}^{S} \\frac{ \\partial f_c\\left(\\Phi\\big(I_0, \\frac{s}{S}M\\big)\\right)}{\\partial M},\n\\label{eq:bIG}\n\\end{align}\n}where $S$ is a constant, usually $20$. However, \\cite{IntegratedGradient} only proposed to use integrated gradients as a one-step visualization method,\nand the heatmaps generated by the integrated gradients are still diffuse. \nFig. \\ref{fig:intr} and Fig. \\ref{fig:compare1} show some heatmaps generated by the integrated gradients approach, where a grey zero image is utilized as the baseline.\nWe can see that the integrated gradients contain many false positives in areas where the pixels have a large value of $I_0^i-\\tilde{I}_0^i$ (either the white or the black pixels).\n\n\n\n\n\\subsection{Integrated Gradients Optimized Heatmaps}\nWe believe the above two approaches can be combined for a better heatmap approach. The integrated gradients naturally provide a better direction than the gradient in that they point more directly to the global optimum of a part of the objective function. 
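\n\nThe approximation in (\\ref{eq:bIG}) is straightforward to implement; below is a minimal PyTorch-style sketch (assuming, as above, that \\texttt{f\\_c} returns a scalar class score):\n
\\begin{verbatim}\n
import torch\n
\n
def integrated_grad_mask(f_c, I0, I0_base, M, S=20):\n
    # Eq. (4): average, over S points on the straight line from M\n
    # down to the zero mask, the gradient of f_c(Phi(I0, (s/S)M))\n
    # with respect to M.\n
    M = M.detach().clone().requires_grad_(True)\n
    total = torch.zeros_like(M)\n
    for s in range(1, S + 1):\n
        Ms = (float(s) / S) * M\n
        perturbed = I0 * Ms + I0_base * (1.0 - Ms)\n
        total += torch.autograd.grad(f_c(perturbed), M)[0]\n
    return total / S\n
\\end{verbatim}\n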
\nOne can view the convex constraint function $g(M)$ as equivalent to the Lagrangian of a constrained optimization approach with constraints $\\|{\\bf 1}-M\\|_1 \\leq B_1$ and $TV(M) \\leq B_2$, $B_1$ and $B_2$ being positive constants, and hence consider the optimization problem (\\ref{eq:classicMask}) to be a constrained minimization problem on $f_c(\\Phi(I_0, M))$. In this case, we know the unconstrained solution $M = {\\bf 0}$ is outside the constraint region. We speculate that an optimization algorithm may be better than gradient descent if it directly attempts to move to the unconstrained global optimum.\n\n\n\\begin{figure}[tb]\n\\centering\n\\includegraphics[width=0.78\\columnwidth]{introFinalNew.pdf}\n\\vskip -0.05in\n\\caption{(Best viewed in color) Suppose we are optimizing in a region with a starting point $A$, a local optimum $C$, and a baseline $B$ which is the unconstrained global optimum; the area within the black dashed line is the constraint region, which is decided by the constraint terms in $g(M)$ and the bound constraints $\\mathbf{0} \\leq M \\leq \\mathbf{1}$. We may find a better solution by always moving towards $B$ rather than following the gradient and ending up at $C$.}\n\\label{fig:1}\n\\end{figure}\n\n\nTo illustrate this, Fig.\n\\ref{fig:1} shows a 2D optimization with a starting point $A$, a local optimum $C$, and a baseline $B$.\nThe area within the black dashed line is the constraint region, which is decided by the constraint function $g(M)$ and the boundary of $M$.\nA first-order algorithm will follow the gradient descent direction (the purple line) to the local optimum $C$, \nwhile the integrated gradients computed along the path $P_B$ from $A$ to the baseline $B$ may enable the optimization to reach an area better than $C$ within the constraint region. We can see that the integrated gradients with an appropriate baseline have a global view of the space and may generate a better descent direction.\nIn practice, the baseline does not need to be the global optimum. A good baseline near the global optimum could still improve over the local optimum achieved by gradient descent.\n\n\n\nHence, we utilize the integrated gradients to substitute for the gradient of the partial objective $f_c(M)$ in optimization (\\ref{eq:classicMask}), and introduce a new visualization method called Integrated-Gradients Optimized Saliency (I-GOS). For the regularization terms $g(M)$ in optimization (\\ref{eq:classicMask}), we still compute the partial (sub)gradient with respect to $M$:\n\n{\\small\n\\begin{align}\n\\nabla g(M) = \\lambda_1 \\cdot \\frac{\\partial||{\\bf 1} -M||_1}{\\partial M} + \\lambda_2 \\cdot \\frac{\\partial \\text{TV}(M)}{\\partial M},\n\\end{align}\n}\n\nThe total (sub)gradient of the optimization for $M$ at each step is the combination of the integrated gradients for $f_c(M)$ and the gradients of the regularization terms $g(M)$:\n{\\small\n\\begin{align}\n{TG}(M) = \\nabla^{IG} f_c(M) + \\nabla g(M),\n\\label{eq:total}\n\\end{align}\n}Note that this is no longer a conventional optimization problem, since it contains two different types of gradients.\nThe integrated gradients are utilized to indicate a direction for the partial objective $f_c(M)$; the gradients of $g(M)$ are used to regularize this direction and prevent it from becoming diffuse. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.8\\columnwidth]{compareLabelALLpartnew.pdf}\n\\caption{Different heatmap approaches at $224\\times 224$ resolution. 
The red plot illustrates how the CNN predicted probability drops with more areas masked, and the blue plot illustrates how the prediction increases with more areas revealed. \nThe x axis for the red\/blue plot represents the percentage of pixels masked\/revealed;\nand the y axis represents the predicted class probability.\nOne can see with I-GOS the red curve drops earlier and the blue curve increases earlier, leading to more area under the insertion curve (insertion metric) and less area under the deletion curve (deletion metric). (Best viewed in color) }\n\\label{fig:compare1}\n\\end{figure*}\n\n\\subsection{Computing the step size}\n\\label{sec:step}\nSince the time complexity of computing $\\nabla^{IG} f_c(M)$ is high, we utilize a backtracking line search method and revise the Armijo condition \\cite{numerical2000} to help us compute the appropriate step size for the total gradient $TG(M)$ in formula (\\ref{eq:total}). \nThe Armijo condition tries to find a step size such that:\n{\\small\n\\begin{align}\nf(M_k+\\alpha_k \\cdot d_k) - f(M_k) \\leq \\alpha_k \\cdot \\beta \\cdot \\nabla f(M_k)^Td_k,\n\\label{eq:Armijo}\n\\end{align}\n}where $d_k$ is the descent direction; $\\alpha_k$ is the step size; $\\beta$ is a parameter in $(0,1)$; $\\nabla f(M_k)$ is the gradient of $f$ at point $M_k$. \n\n\n\n\nThe descent direction $d_k$ for our algorithm is set to be the opposite direction of the total gradient $TG(M_k)$.\nHowever, since $TG(M_k)$ contains the integrated gradients, it is uncertain whether $\\nabla F_c(M_k)^Td_k = -\\nabla F_c(M_k)^T TG(M_k)$ is negative or not. Hence, we replace $\\nabla F_c(M_k)$ with $TG(M_k)$ and obtain a revised Armijo condition as follows:\n{\\small\n\\begin{align}\nF_c\\bigg(M_k-\\alpha_k \\cdot TG(M_k)\\bigg) - F_c(M_k) \\leq \\notag\\\\\n-\\alpha_k \\cdot \\beta \\cdot TG(M_k)^{T}TG(M_k),\n\\label{eq:Armijo2}\n\\end{align}\n}\nThe detailed backtracking line search works as follows:\n{\\begin{itemize}\n\\item[(1)]{Initialization: set the values of the parameter $\\beta$, a decay $\\eta$, an upper bound $\\alpha_{u}$ and a lower bound $\\alpha_{l}$ for the step size; let $j=0$, and $\\alpha^0 = \\alpha_{u}$;}\n\\item[(2)]{Iteration: if $\\alpha^j$ satisfies condition (\\ref{eq:Armijo2}), or $\\alpha^j \\leq \\alpha_{l}$, end the iteration; else, let $\\alpha^{j+1}=\\alpha^j \\eta$, $j=j+1$, and test condition (\\ref{eq:Armijo2}) again with $P_{[0,1]}(M_k-\\alpha^j \\cdot TG(M_k))$, where $P_{[0,1]}(M)$ clips the mask values to the closed interval $[0,1]$;}\n\\item[(3)]{Output: if $\\alpha^j \\leq \\alpha_{l}$, the step size $\\alpha_k$ for $TG(M_k)$ equals the lower bound $\\alpha_l$; else, $\\alpha_k=\\alpha^j$.}\n\\end{itemize}}\nA projection step is needed in the iteration because the mask $M_k$ is bounded by the closed interval $[0,1]$.\nSince we have an integrated gradient in $TG(M)$, a large upper bound $\\alpha_u$ and a small $\\beta$ are needed in order to obtain a large step that satisfies condition (\\ref{eq:Armijo2}), similar to satisfying the Goldstein conditions for convergence in conventional Armijo-Goldstein line search. \n\n\n\n\n\n\nNote that we cannot prove the convergence properties of the algorithm in non-convex optimization. However, the integrated gradient reduces to a scaling of the conventional gradient in a quadratic function (see supplementary material). 
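\n\nThe line search itself is only a few lines of code; below is a minimal sketch (here $\\beta$ follows the value used in our experiments, while the decay and the step-size bounds are illustrative assumptions):\n
\\begin{verbatim}\n
import torch\n
\n
def backtracking_step(F_c, M_k, TG, beta=1e-4, eta=0.5,\n
                      alpha_u=10.0, alpha_l=1e-3):\n
    # Find a step size satisfying the revised Armijo condition,\n
    # Eq. (7); F_c evaluates the full objective and TG is the\n
    # total (sub)gradient at M_k.\n
    F0 = F_c(M_k)\n
    g2 = (TG * TG).sum()  # TG(M_k)^T TG(M_k)\n
    alpha = alpha_u\n
    while alpha > alpha_l:\n
        # Project the candidate point back onto [0, 1].\n
        M_new = torch.clamp(M_k - alpha * TG, 0.0, 1.0)\n
        if F_c(M_new) - F0 <= -alpha * beta * g2:\n
            return alpha\n
        alpha *= eta  # decay the step size and try again\n
    return alpha_l\n
\\end{verbatim}\n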
\nIn practice, it converges much faster than the original mask approach in~\\cite{ClassicMask}, and we have never observed it diverging, although in some cases we do note that even with this approach the optimization stops at a local optimum. With the line search, we usually only run the iteration for $10-20$ steps. Intuitively, the irrelevant parts of the integrated gradients are controlled gradually by the regularization function $g(M)$, and only the parts that truly correlate with the output scores would remain in the final heatmap.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \\newsavebox{\\insertionvgg}\n \\begin{lrbox}{\\insertionvgg}\n\n\\begin{tabular}{||l|c|c|c|c|c|c|c|c||}\n\\hline\n \\multirow{2}{*}{} & \\multicolumn{2}{c|}{224$\\times$224} & \\multicolumn{2}{c|}{112$\\times$112} & \\multicolumn{2}{c|}{28$\\times$28} & \\multicolumn{2}{c||}{14$\\times$14} \\\\ \\cline{2-9}\n & Deletion & Insertion & Deletion & Insertion & Deletion & Insertion & Deletion & Insertion \\\\ \\hline\\hline\nExcitation BP \\cite{zhang16excitationBP} & 0.2037 & 0.4728 & 0.2053 & 0.4966 & 0.2202 & 0.5256 & 0.2328 & 0.5452 \\\\ \\hline\nMask \\cite{ClassicMask} & 0.0482 & 0.4158 & 0.0728 & 0.4377 & 0.1056 & 0.5335 & 0.1753 & 0.5647 \\\\ \\hline\nGradCAM \\cite{Gradcam17} & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & 0.1527 & 0.5938 \\\\ \\hline\nRISE \\cite{2018RISE} & 0.1082 & 0.5139 & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\nIntegrated Gradients \\cite{IntegratedGradient} & 0.0663 & 0.2551 & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\\hline\nI-GOS (ours) & \\textbf{0.0336} & \\textbf{0.5246} & \\textbf{0.0609} & \\textbf{0.5153} & \\textbf{0.0899} & \\textbf{0.5701} & \\textbf{0.1213} & \\textbf{0.6387} \\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table*} \n\\caption{{ Evaluation in terms of deletion (lower is better) and insertion (higher is better)\nscores on the ImageNet dataset using the VGG19 model. GradCAM can only generate $14\\times 14$ heatmaps for VGG19; RISE and Integrated Gradients can only generate $224\\times 224$ heatmaps.}}\n\n \\centering \n \\scalebox{0.80}{\\usebox{\\insertionvgg}}\n\\label{tab:insertionvgg}\n\n \\end{table*} \n\n\n\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.8\\columnwidth]{GradRisepartnew.pdf}\n\\caption{Comparisons between GradCAM, RISE, and I-GOS; see the Fig.~\\ref{fig:compare1} caption for explanations of the meaning of the curves.}\n\\label{fig:GradCamRISE}\n\\end{figure*}\n\n\n\n\\subsection{Avoiding adversarial examples}\n\\label{sec:ad}\nSince the mask optimization (\\ref{eq:classicMask}) is similar to the adversarial optimization \\cite{szegedy2013,goodfellow2014} except for the TV term, it is concerning whether the solution would merely be an adversarial attack on the original image rather than explaining the relevant information.\nAn adversarial example is essentially a mask that drives the image off the natural image manifold, hence the approach in~\\cite{ClassicMask} utilizes a blurred version of the original image as the baseline to avoid creating strong adversarial gradients off the image manifold. We follow \\cite{ClassicMask} and also use a blurred image as the baseline. The total-variation constraint also defeats adversarial masks by making the mask piecewise-smooth. 
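\n\nA simple way to construct such a baseline (the \\texttt{I0\\_base} in the sketches above) is a heavy Gaussian blur of the input; a sketch assuming a recent torchvision is given below, where the kernel size and sigma are illustrative assumptions:\n
\\begin{verbatim}\n
import torchvision.transforms.functional as TF\n
\n
def blurred_baseline(I0, kernel_size=51, sigma=10.0):\n
    # A heavily blurred copy of I0 serves as the baseline image,\n
    # so masked-out regions are replaced by low-information\n
    # content that stays close to the natural image manifold.\n
    return TF.gaussian_blur(I0, kernel_size, [sigma, sigma])\n
\\end{verbatim}\n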
We also employ other strategies to avoid finding an adversarial perturbation:\n\n\\begin{algorithm}[]\n\\SetAlgoLined\n {\\small {\\bf Optimization objective}: formula (\\ref{eq:upsample})\\;\n\n {\\bf Initialization}: set $M_0 =\\mathbf{1}$\\;\n \\While{not converged and within the maximum number of steps}{\n {\\bf Add} different random noise $n_s$ to $I_0$ when computing the integrated gradient: $\\nabla^{IG} f_c(M_k) =\\frac{1}{S} \\sum_{s=1}^{S} \\partial f_c\\left(\\Phi\\big(I_0+n_s, \\frac{s}{S}\\text{up}(M_k)\\big)\\right)\/\\partial M_k$ \\;\n {\\bf Compute} the total (sub)gradient $TG(M_k)$ of the optimization for $M_k$ using formula (\\ref{eq:total})\\;\n {\\bf Compute} the step size $\\alpha_k$ using the introduced backtracking line search algorithm \\;\n {\\bf Update}: $M_{k+1} = M_k - \\alpha_k \\cdot TG(M_k)$\\;\n }}\n \\caption{\\small I-GOS}\n\\label{alg:1}\n\\end{algorithm}\n\n\n(1) When computing the integrated gradients using formula (\\ref{eq:bIG}), we add different random noise $n_s$ to $I_0$ at each point along the straight-line path.\n\n(2) When computing a mask $M$ whose resolution is smaller than that of the input image $I_0$, we upsample it before perturbing the input $I_0$, and rewrite formula (\\ref{eq:classicMask}) as:\n{\\small\n\\begin{align}\nM^* = \\argmin~ &f_c\\big(\\Phi(I_0, \\text{up}(M))\\big) + \\lambda_1 ||{\\bf 1}-M||_1 \\notag\\\\\n&+\\lambda_2 \\text{TV}(M), \n\\label{eq:upsample}\n\\end{align}\n}where up($M$) upsamples $M$ to the original resolution with bilinear upsampling. The lower the resolution of $M$, the smoother the generated heatmap.\n\n\n\nWhether a mask is adversarial can be evaluated using the \\textit{insertion metric}, detailed in the experiments section. \nWe summarize an overview of the proposed I-GOS approach in Algorithm \\ref{alg:1}.\n\n\n\n\n\n \\newsavebox{\\insertionres}\n \\begin{lrbox}{\\insertionres}\n\n \\begin{tabular}{||l|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }||}\n\\hline\n \\multirow{2}{*}{} & \\multicolumn{2}{c|}{224$\\times$224} & \\multicolumn{2}{c|}{112$\\times$112} & \\multicolumn{2}{c|}{28$\\times$28} & \\multicolumn{2}{c|}{14$\\times$14} & \\multicolumn{2}{c||}{7$\\times$7} \\\\ \\cline{2-11}\n & Deletion & Insertion & Deletion & Insertion & Deletion & Insertion & Deletion & Insertion & Deletion & Insertion \\\\ \\hline\\hline\nMask \\cite{ClassicMask} & 0.0468 & 0.4962 & 0.0746 & 0.5090 & 0.1151 & 0.5559 & 0.1557 & 0.5959 & 0.2259 & 0.6003 \\\\ \\hline\nGradCAM \\cite{Gradcam17} & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & 0.1675 & 0.6521 \\\\ \\hline\nRISE \\cite{2018RISE} & 0.1196 & 0.5637 & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\nIntegrated Gradients \\cite{IntegratedGradient} & 0.0907 & 0.2921 & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\nI-GOS (ours) & \\textbf{0.0420} & \\textbf{0.5846} & \\textbf{0.0704} & \\textbf{0.5943} & \\textbf{0.1059} & \\textbf{0.5986} & \\textbf{0.1387} & \\textbf{0.6387} & \\textbf{0.1607} & \\textbf{0.6632} \\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table*} \n \\caption{{\\small Comparative evaluation in terms of deletion (lower is better) and insertion (higher is better)\nscores on ImageNet using ResNet50 as the base model. 
GradCAM can only generate $7\\times 7$ heatmaps for ResNet50; RISE and Integrated Gradients only generate $224\\times 224$ heatmaps.}}\n\n \\centering \n \\scalebox{0.80}{\\usebox{\\insertionres}}\n \\label{tab:insertionres}\n\n \\end{table*} \n\n\n\n\n\n\n\\begin{figure*}[tbp]\n\\centering\n\\includegraphics[width=1.9\\columnwidth]{ExbpLabelnew.pdf}\n\\vskip -0.1in\n\\caption{Comparison between Excitation BP and I-GOS at resolution $28\\times 28$. See the Fig.~\\ref{fig:compare1} caption for explanations of the curves.} \n\\label{fig:exbp}\n\\end{figure*}\n\n\n\n\n\\section{Experiments}\n\\subsection{Evaluation Metrics and Parameter Settings}\nAlthough much recent work focuses on explainable machine learning, there is still no consensus about how to measure the explainability of a machine learning model. For heatmaps, one of the important issues is whether we are explaining the image from a human's understanding or from the deep model's perspective. A common pitfall is to try to use a human's understanding to explain the deep model, e.g. the pointing game \\cite{zhang16excitationBP}, which measures the ability of a heatmap to focus on the ground truth object bounding box. However, there is plenty of evidence that deep learning sometimes uses background context for object classification, which would invalidate pointing game evaluations. Many heatmap papers show appealing images which look plausible to humans, but \\cite{2018Nie} points out they could well be just doing partial image recovery and boundary detection, hence generating human-interpretable results that do not correlate with network prediction. Hence, it is important to utilize objective metrics that have causal effects on the network prediction for the evaluation.\n\n\nWe follow \\cite{2018RISE} to adopt {\\em deletion} and {\\em insertion} as better metrics to evaluate different heatmap approaches. In the {\\em deletion} metric, we remove the $N$ pixels (dependent on the resolution of the mask) most highlighted by the heatmap each time from the original image iteratively until no pixel is left, and replace the removed ones with the corresponding pixels from the baseline image.\nThe deletion score is the area under the curve (AUC) of the classification scores after softmax~\\cite{2018RISE} (red curves in Figs. \\ref{fig:compare1}-\\ref{fig:exbp}).\nFor the {\\em insertion} metric, we replace the $N$ most highlighted pixels from the baseline image with the ones from the original image iteratively until no pixel is left (blue curves in Figs.~\\ref{fig:compare1}-\\ref{nonoise}). The insertion score is also the AUC of the classification scores for all the replaced images.\nIn the experiments, we generate heatmaps with different resolutions, including $224\\times 224$, $112\\times 112$, $28\\times 28$, $14\\times 14$, and $7\\times 7$. \nWe compute the deletion and insertion scores by replacing pixels based on the generated heatmaps at their original resolutions before upsampling.\n\n\n\nThe intuition behind the {\\em deletion} metric is that the removal of the pixels most relevant to a class will cause the prediction confidence to drop sharply. This is similar to the optimization goal in eq. (\\ref{eq:classicMask}). Hence, utilizing only the {\\em deletion} metric is not satisfactory, since adversarial attacks can also achieve quite good performance on it.\nThe intuition behind the {\\em insertion} metric is that only keeping the most relevant pixels will retain the original score as much as possible. 
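\n\nFor concreteness, a minimal sketch of this evaluation protocol is given below, assuming the images are PyTorch tensors of shape (C, H, W), the heatmap is a NumPy array, and \\texttt{prob\\_c} returns the softmax probability of the target class; the exact bookkeeping in \\cite{2018RISE} may differ:\n
\\begin{verbatim}\n
import numpy as np\n
\n
def auc_curve(prob_c, I0, baseline, heatmap, N, mode='deletion'):\n
    # Replace N pixels at a time, in decreasing order of saliency:\n
    # deletion starts from I0 and copies in baseline pixels;\n
    # insertion starts from the baseline and copies in I0 pixels.\n
    order = np.argsort(-heatmap.flatten())\n
    img = I0.clone() if mode == 'deletion' else baseline.clone()\n
    src = baseline if mode == 'deletion' else I0\n
    H, W = heatmap.shape\n
    scores = [prob_c(img)]\n
    for i in range(0, order.size, N):\n
        rows, cols = np.unravel_index(order[i:i + N], (H, W))\n
        img[:, rows, cols] = src[:, rows, cols]\n
        scores.append(prob_c(img))\n
    # Area under the curve, with the x axis normalized to [0, 1].\n
    return np.trapz(scores, dx=1.0 / (len(scores) - 1))\n
\\end{verbatim}\n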
Since adversarial masks usually only optimize the deletion metric, they often use irrelevant parts of the image to drop the prediction score. Thus, if only those parts are revealed to the model, usually the model would not make a confident prediction on the original class, hence a low insertion score. \nTherefore, a good {\\em insertion} score indicates a non-adversarial mask. However, using only the insertion metric would not identify blurry masks (e.g. Fig.~\\ref{fig:GradCamRISE}), hence the {\\em deletion} and {\\em insertion} metrics should be considered jointly.\n\n\n\nFor the deletion and insertion tasks, we utilize the pretrained VGG19 \\cite{VGG19} and ResNet50 \\cite{ResNet} networks from the PyTorch model zoo to test $5,000$ randomly selected images from the validation set of ImageNet \\cite{ImageNet}.\nIn Eq. (\\ref{eq:Armijo2}), $\\beta= 0.0001$. $\\lambda_1$ and $\\lambda_2$ in Eq. (\\ref{eq:upsample}) were fixed across all experiments under the same heatmap resolution.\n\nWe downloaded and ran the code for most baselines, except for \\cite{IntegratedGradient}, which we implemented. All baselines were tuned to their best performance. For RISE, we followed \\cite{2018RISE} to generate $4,000$ $7\\times 7$ random samples for VGG, and $8,000$ $7\\times 7$ random samples for ResNet. For all experiments we used the same pre-\/post-processing with the same baseline image $\\tilde I_0$. \n\\cite{2018RISE} used a less blurred image for insertion and a grey image for deletion.\nSince we found the blurriness in \\cite{2018RISE} was not always enough to get the CNN to output $0$ confidence, we used a more blurred image for both insertion and deletion; hence the insertion and deletion scores for RISE are a bit different in our paper compared with theirs. \n\n\n\\subsection{Results and Discussions}\n\n\n{\\bf Deletion and Insertion:}\nTables \\ref{tab:insertionvgg} and \\ref{tab:insertionres} show the comparative evaluations of I-GOS against other state-of-the-art approaches in terms of the {\\em deletion} and {\\em insertion} metrics on the ImageNet dataset, using VGG19 and ResNet50 as the base models, respectively.\nFrom Tables \\ref{tab:insertionvgg} and \\ref{tab:insertionres} we observe that our proposed approach I-GOS performs better than all baselines in both deletion and insertion scores for heatmaps at all resolutions. \n\n\nIntegrated Gradients obtains the worst insertion score among all the approaches, which indicates that it indeed contains many pixels uncorrelated with the classification, as in the {\\em Cucumber} and {\\em Oboe} examples in Fig. \\ref{fig:compare1}.\nExcitation BP sometimes fires on irrelevant parts of the image, as argued in \\cite{2018Nie}.\nThus, it performs the worst in the deletion task. GradCAM and RISE also suffer on the deletion score, perhaps because of the randomness in the masks they generate, which sometimes fixate on background regions irrelevant to the classification.\nFigs. \\ref{fig:compare1}-\\ref{fig:exbp} show some visual comparisons between our approach and baselines at various resolutions. \nThe reason the insertion curve goes down and then up is that sometimes a part of the image containing features indicative of other classes is inserted, which can increase the activations for other classes, potentially driving down the softmax probability of the current class.\n\nNote that one advantage of our approach compared to the previous best RISE and GradCAM is the flexibility in terms of resolutions. 
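\n\nThis flexibility comes from Eq. (\\ref{eq:upsample}): the mask is optimized at a chosen low resolution and bilinearly upsampled to the input size before perturbing the image; a one-function sketch is:\n
\\begin{verbatim}\n
import torch.nn.functional as F\n
\n
def upsample_mask(M, size=(224, 224)):\n
    # up(M) in Eq. (8): bilinearly upsample a low-resolution\n
    # (1, h, w) mask, e.g. 28x28, to the input resolution.\n
    return F.interpolate(M.unsqueeze(0), size=size,\n
                         mode='bilinear',\n
                         align_corners=False).squeeze(0)\n
\\end{verbatim}\n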
RISE and Integrated Gradients can only generate $224 \\times 224$ heatmaps.\nGradCAM can only generate $14 \\times 14$ heatmaps on VGG19, and $7\\times 7$ heatmaps on ResNet50. Our approach is better than them at their resolutions, but also offers the flexibility to use other resolutions. High resolutions are especially necessary when the image has thin parts (e.g. Fig.~\\ref{nonoise}); however, they may be less visually appealing since the masked pixels may be sparse. Our approach is significantly better than all baselines that can operate on all resolutions. Note that the insertion metric is higher at lower resolutions, because a larger chunk of image with more complete context information is inserted at every point. Hence, an insertion metric a few percentage points lower at higher resolutions does not necessarily mean the heatmaps are any worse. In practice, $28\\times 28$ heatmaps are usually more visually appealing, but in order to capture thin parts, we sometimes need to resort to $224\\times 224$ (Fig.~\\ref{nonoise}).\n\n\n\n\n\n \n \n\n\n \\newsavebox{\\runtime}\n \\begin{lrbox}{\\runtime}\n\n\\begin{tabular}{||@{ }l@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }|@{ }c@{ }||}\n\\hline\n Running time (s) & 224$\\times$224 & 112$\\times$112 & 28$\\times$28 & 14$\\times$14 & 7$\\times$7 \\\\ \\hline\\hline\nMask & 17.03 & 14.61 & 14.66 & 14.35 & 14.24 \\\\ \\hline\nGradCAM & -- -- & -- -- & -- -- & -- -- & ${\\bm <}$\\textbf{1} \\\\ \\hline\nRISE & 61.77 & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\nIntegrated Gradients & ${\\bm <}$\\textbf{1} & -- -- & -- -- & -- -- & -- -- \\\\ \\hline\nI-GOS (ours) & \\text{6.07} & \\textbf{5.73} & \\textbf{5.70} & \\textbf{5.63}& \\text{5.62}\\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table}[t] \n\n \\caption{ Comparative evaluation in terms of runtime (averaged over $5,000$ images) on the ImageNet dataset using ResNet50 as the base model.} \n\n \\centering \n \\scalebox{0.80}{\\usebox{\\runtime}}\n \\label{tab:time}\n\n \\end{table} \n\n\n\n\n\n \n\n{\\bf Speed:} In Table \\ref{tab:time}, we summarize the average runtime for Mask, RISE, GradCAM, Integrated Gradients, and I-GOS on the ImageNet dataset using ResNet50 as the base model.\nFor each approach, we only use one Nvidia 1080Ti GPU. For I-GOS, the maximum number of iterations is $15$; for Mask, the maximum number of iterations is $500$.\nOur approach is faster than Mask and RISE. In particular, it converges quickly, with the average number of iterations to converge being $13$ and the time for each iteration being $0.38s$.\nThe average running times for the backpropagation-based methods are all less than $1$ second. However, our approach achieves much better performance than these approaches, especially at higher resolutions.\nTo the best of our knowledge, our approach I-GOS is the fastest among the perturbation-based methods, as well as the one with the best performance in deletion and insertion metrics among all heatmap approaches.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{NonoiseFinal2.pdf}\n\\vskip -0.1in\n\\caption{Examples from ablation studies (at $224 \\times 224$ resolution). 
With added noise, the heatmap successfully reveals the entire legs of the \\textit{daddy longlegs}, leading to a better insertion score, whereas without noise it is more adversarial (perhaps because merely breaking each leg already reduces the CNN confidence), leading to a worse insertion score.} \n\\label{nonoise}\n\\end{figure}\n\n\n \\newsavebox{\\ablation}\n \\begin{lrbox}{\\ablation}\n\n\\begin{tabular}{||l|l|l|l|l||}\n\\hline\n & \\multicolumn{2}{l|}{224$\\times$224} & \\multicolumn{2}{l|}{28$\\times$28} \\\\ \\hline\\hline\n I-GOS & Deletion & Insertion & Deletion & Insertion \\\\ \\hline\\hline\nOurs & 0.0336 & \\textbf{0.5246} & 0.0899 & \\textbf{0.5701} \\\\ \\hline\nNo TV term & \\textbf{0.0308} & 0.3712 & \\textbf{0.0841} & 0.5181 \\\\ \\hline\nNo noise & 0.0559 & 0.4194 & 0.0872 & 0.5634 \\\\ \\hline\nFixed step size & 0.0393 & 0.5024 & 0.0906 & 0.5403 \\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table}[] \n\n \\caption{The results of the ablation study on VGG19.} \n\n \\centering \n \\scalebox{0.80}{\\usebox{\\ablation}}\n \\label{tab:abl}\n\n \\end{table} \n\n\n\n{\\bf Ablation Studies:} We show the results of ablation studies in Table \\ref{tab:abl}. \nFrom Table \\ref{tab:abl} we observe that without the TV term, insertion scores indeed suffer significantly while deletion scores do not change much, indicating that the TV term is important for avoiding adversarial masks.\nThe random noise introduced in the section {\\em Avoiding adversarial examples} is very useful when the resolution of the mask is high (e.g., 224$\\times$224). From Fig. \\ref{nonoise} we observe that I-GOS with noise can achieve much better insertion scores than without noise for the same insertion ratio. When the resolution is low (e.g., 28$\\times$28), the noise is not that important, since the low resolution already avoids adversarial examples. When we utilize a fixed step size (the step size is $1$ in Table \\ref{tab:abl}), both deletion and insertion scores become worse, showing the utility of the line search.\n\n\n\n\n\n\n{\\bf Failure Case:} Fig. \\ref{badexample} shows one failure case, where I-GOS found an adversarial mask and the insertion score did not increase until the end. 
Our observation is that optimization-based methods such as I-GOS usually do not work well when the deep model's prediction confidence is very low (less than $0.01$), or when the deep model makes a wrong prediction.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.95\\columnwidth]{badexample2.pdf}\n\\caption{One failure case for I-GOS, where the insertion curve does not move until almost all pixels have been inserted.} \n\\label{badexample}\n\\end{figure}\n\n\n\n \\newsavebox{\\loss}\n \\begin{lrbox}{\\loss}\n\n\\begin{tabular}{||l||l|l||l|l||l|l||}\n\\hline\n & \\multicolumn{2}{c||}{$\\lambda_1 = 0.01, \\lambda_2 = 0.2$} & \\multicolumn{2}{c||}{$\\lambda_1 = 0.1, \\lambda_2 = 2$} & \\multicolumn{2}{c||}{$\\lambda_1 = 1, \\lambda_2 = 20$} \\\\ \\hline\n & I-GOS & Mask & I-GOS & Mask & I-GOS & Mask \\\\ \\hline\\hline\n\\begin{tabular}[c]{@{}l@{}}Total\\\\ loss\\end{tabular} & 0.2241 & 0.3349 & 0.3739 & 0.4857 & 0.6098 & 0.6794 \\\\ \\hline\nDeletion & 0.0825 & 0.1056 & 0.0861 & 0.1178 & 0.0899 & 0.1340 \\\\ \\hline\nInsertion & 0.5418 & 0.5335 & 0.5624 & 0.5307 & 0.5701 & 0.5207 \\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table}[t]\n\n\n \\caption{The optimization loss on VGG19 for resolution $28\\times 28$.} \n\n \\centering \n \\scalebox{0.78}{\\usebox{\\loss}}\n \\label{tab:loss}\n\n \\end{table} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n{\\bf Convergence:} For the values of the objective after convergence with Mask \\cite{ClassicMask} vs. the proposed I-GOS, \nTable \\ref{tab:loss} shows the comparison at $28\\times 28$ with different parameters. The best parameters were used for each approach in Table \\ref{tab:insertionvgg} (0.01\/0.2 for Mask and 1\/20 for I-GOS). It can be seen that at every parameter setting, I-GOS has a lower total loss than Mask (the total loss is higher with larger $\\lambda_1$ and $\\lambda_2$ since the L1+TV terms have higher weights in the total loss).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\nIn this paper, we propose a novel visualization approach I-GOS, which utilizes integrated gradients to optimize for a heatmap. We show that the integrated gradients provide a better direction than the gradient when a good baseline is known for part of the objective of the optimization. The heatmaps generated by the proposed approach are human-understandable and more correlated with the decision-making of the model. \nExtensive experiments are conducted on three benchmark datasets with four pretrained deep neural networks, which show that I-GOS\n advances state-of-the-art deletion and insertion scores at all heatmap resolutions.\n\n\n\n\n\n\\section{Acknowledgments}\nThis work was partially supported by DARPA contract N66001-17-2-4030.\n\n\n\\section*{Supplementary Material}\n\n\n\n\\subsection*{{\\uppercase\\expandafter{\\romannumeral1}. 
Properties of the Integrated Gradient in Quadratic Functions}}\n\n\\begin{proposition} The integrated gradients reduce to a scaling of the conventional gradient in a quadratic function if the baseline is the optimum.\n\\end{proposition}\n\\begin{proof}\n\nGiven a quadratic function $f({\\mathbf x})= {\\mathbf x}^TA{\\mathbf x} +b^T{\\mathbf x} +c$, we have its conventional gradient as:\n$\\nabla f({\\mathbf x}) = (A+A^T){\\mathbf x} +b$.\nConsidering a straight-line path from the current point ${\\mathbf x}_k$ to the baseline ${\\mathbf x}_0$, for a point ${\\mathbf x}_s$ along the path we have:\n${\\mathbf x}_s = {\\mathbf x}_0+\\frac{s}{S}({\\mathbf x}_k-{\\mathbf x}_0)$,\n{\n\\begin{align}\n\\nabla f({\\mathbf x}_s) &= (A+A^T){\\mathbf x}_s +b \\notag\\\\\n&= (A+A^T)\\left( {\\mathbf x}_0+\\frac{s}{S}({\\mathbf x}_k-{\\mathbf x}_0)\\right)+b \\notag\\\\\n&= \\frac{s}{S}(A+A^T){\\mathbf x}_k +\\frac{S-s}{S}(A+A^T){\\mathbf x}_0 +b \\notag\\\\\n&= \\frac{s}{S}\\nabla f({\\mathbf x}_k) + \\frac{S-s}{S} \\nabla f({\\mathbf x}_0),\n\\label{eq:Pxs}\n\\end{align}\n}Thus, we obtain the integrated gradient along the straight-line path as:\n{\n\\begin{align}\n\\nabla^{IG} f({\\mathbf x}_k) &=\\frac{1}{S} \\sum_{s=1}^{S} \\nabla f({\\mathbf x}_s) \\notag\\\\\n&=\\frac{S+1}{2S}\\nabla f({\\mathbf x}_k) + \\frac{S-1}{2S} \\nabla f({\\mathbf x}_0),\n\\label{eq:IG0}\n\\end{align}\n}When the baseline ${\\mathbf x}_0$ is the optimum of the quadratic function, $\\nabla f({\\mathbf x}_0) = 0$, and then\n{\n\\begin{align}\n\\nabla^{IG} f({\\mathbf x}_k) = \\frac{S+1}{2S}\\nabla f({\\mathbf x}_k).\n\\label{eq:IG1}\n\\end{align}\n}Hence, the integrated gradients reduce to a scaling of the conventional gradient.\n\nIn this case, the revised Armijo condition also reduces to the conventional Armijo condition up to a constant.\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\subsection*{{\\uppercase\\expandafter{\\romannumeral2}. Pointing Game}}\n\nFor the pointing game task, following \\cite{2018RISE}, if the most salient pixel lies inside the human-annotated bounding box of an object, it is counted as a hit.\nThe pointing game accuracy equals $\\frac{\\# Hits}{\\# Hits+\\# Misses}$, averaged over all categories.\nWe utilize two pretrained VGG16 models from \\cite{2018RISE} to test $2,000$ randomly selected images from the validation set of MSCOCO, and $2,000$ randomly selected images from the test set of VOC07, respectively. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTable \\ref{tab:pg} shows the comparative evaluations of I-GOS with other state-of-the-art approaches in terms of mean accuracy in the pointing game on MSCOCO and PASCAL VOC07.\nHere we utilize the same pretrained models from \\cite{2018RISE}.\nHence, we list the pointing game accuracies reported in that paper, except for Mask and our approach I-GOS.\nFrom Table \\ref{tab:pg} we observe that I-GOS beats all the other compared approaches except for RISE, and it improves significantly over Mask.\nDuring the experiments we noticed that some object labels for MSCOCO and PASCAL in the pointing game have very small output scores for the pretrained VGG16 models, which affects the optimization greatly for both Mask and I-GOS. \nRISE does not seem to suffer from this. 
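\n\nFor reference, the pointing game itself is simple to implement; a minimal sketch (assuming the heatmap is a NumPy array and the ground-truth boxes are given as corner coordinates) is:\n
\\begin{verbatim}\n
import numpy as np\n
\n
def pointing_game_hit(heatmap, boxes):\n
    # A hit is counted when the most salient pixel falls inside\n
    # any annotated box (x1, y1, x2, y2); the reported accuracy\n
    # is then hits / (hits + misses), averaged over categories.\n
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)\n
    return any(x1 <= x <= x2 and y1 <= y <= y2\n
               for (x1, y1, x2, y2) in boxes)\n
\\end{verbatim}\n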
We believe RISE may be good at the pointing game, but its randomness generally leads to a mask that is too diffuse, which significantly hurts its deletion and insertion scores (Table \\ref{tab:insertionvgg} and Table \\ref{tab:insertionres} in the paper), \nwhile our approach generates a much more concise heatmap.\n\n\n \\newsavebox{\\point}\n \\begin{lrbox}{\\point}\n\n\\begin{tabular}{||l|l|l||}\n\\hline\nMean Acc (\\%) & MSCOCO & VOC07\\\\ \\hline\\hline\nAM \\cite{SimonyanVZ13} & 37.10 & 76.00 \\\\ \\hline\nDeconv \\cite{MatDeconv} & 38.60 & 75.50 \\\\ \\hline\nMWP \\cite{zhang16excitationBP} & 39.50 & 76.90 \\\\ \\hline\nExcitation BP \\cite{zhang16excitationBP} & 49.60 & 80.00 \\\\ \\hline\nRISE \\cite{2018RISE} & \\textbf{50.71} & {\\bf 87.33} \\\\ \\hline\\hline\nMask \\cite{ClassicMask} (14$\\times$14) & 40.03 & 79.45 \\\\ \\hline\nMask \\cite{ClassicMask} (28$\\times$28) & 43.24 & 77.57 \\\\ \\hline\\hline\nI-GOS (ours) (14$\\times$14) & 47.16 & {\\bf 85.81} \\\\ \\hline\nI-GOS (ours) (28$\\times$28) & {\\bf 49.62} & 83.61 \\\\ \\hline\n\\end{tabular}\n \\end{lrbox}\n\n\n \\begin{table}[b] \n \\caption{\\small Mean accuracy (\\%) in the pointing game for VGG16 on MSCOCO and PASCAL VOC07, respectively.} \n\n \\centering \n \\scalebox{0.80}{\\usebox{\\point}}\n \\label{tab:pg}\n \\vskip -0.15in\n \\end{table} \n\n\n\n \\subsection*{{\\uppercase\\expandafter{\\romannumeral3}. Adversarial Examples}}\nFigures \\ref{fig:ad}-\\ref{fig:ad2} show some examples of using I-GOS to visualize adversarial examples. \n\n Here we utilize the MI-FGSM method \\cite{Dong_2018_CVPR} on VGG19 to generate adversarial examples. \n\n\n From Figs.~\\ref{fig:ad}-\\ref{fig:ad2} we observe that the heatmaps for the original images and for the adversarial examples generated by I-GOS are totally different.\n For the original image, I-GOS can often lead to a high classification confidence on the original class by inserting a small portion of the pixels.\n For the adversarial image though, almost the entire image needs to be inserted for the CNN to predict the adversarial category. We note that we are not presenting I-GOS as a defense against adversarial attacks, and that specific attacks may be designed targeting the salient regions in the image. However, these figures show that the I-GOS heatmap and the insertion metric are robust against these full-image based attacks and are not performing mere image reconstruction.\n \n \n \n\n \\begin{figure*}[] \n\n\n \\centering \n\\includegraphics[width=2\\columnwidth]{AD1.pdf}\n\\caption{\\small The top row are original images and their heatmaps generated by I-GOS; the bottom row are adversarial examples and their heatmaps generated by I-GOS. \nThe red plot illustrates how the CNN predicted probability drops with more areas masked, and the blue plot illustrates how the prediction increases with more areas revealed. 
The x axis for the red\/blue plot represents the percentage of pixels masked\/revealed; the y axis for the red\/blue plot represents the predicted class probability.\nOne can see that on normal images, the CNN can classify with only the highlighted parts revealed, whereas on adversarial images one would almost need to insert the entire image to make the CNN classify to the adversarial label.} \n \\label{fig:ad}\n \\vskip -0.15in\n \\end{figure*} \n \n \n \n \n \n \n\n \\begin{figure*}[] \n\n\n \\centering \n\\includegraphics[width=2\\columnwidth]{AD2.pdf}\n\\caption{\\small The top row are original images and their heatmaps generated by I-GOS; the bottom row are adversarial examples and their heatmaps generated by I-GOS; see the Fig. \\ref{fig:ad} caption for explanations of the meaning of the curves. One can see that on normal images, the CNN can classify with only the highlighted parts revealed, whereas on adversarial images one would almost need to insert the entire image to make the CNN classify to the adversarial label.} \n \\label{fig:ad2}\n \\vskip -0.15in\n \\end{figure*} \n\n\n\n\n\n\\begin{figure*}[t]\n\\vskip -0.05in\n\\centering\n\\includegraphics[width=1.9\\columnwidth]{compareLabelALLpart2new.jpg}\n\\caption{A comparison among different approaches with heatmaps of $224\\times 224$ resolution. The red plot illustrates how the CNN predicted probability drops with more areas masked, and the blue plot illustrates how the prediction increases with more areas revealed. \nThe x axis for the red\/blue plot represents the percentage of pixels masked\/revealed;\nthe y axis for the red\/blue plot represents the predicted class probability.\nOne can see with I-GOS the red curve drops earlier and the blue curve increases earlier, leading to more area under the insertion curve (insertion metric) and less area under the deletion curve (deletion metric). (Best viewed in color) }\n\\label{fig:compare1b}\n\\vskip -0.15in\n\\end{figure*}\n\n\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.9\\columnwidth]{GradRisepart2new.jpg}\n\\caption{Comparisons between GradCAM, RISE, and I-GOS; see the Fig. \\ref{fig:compare1b} caption for explanations of the meaning of the curves.}\n\\label{fig:GradCamRISEb}\n\\vskip -0.15in\n\\end{figure*}\n\n\n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=1.7\\columnwidth]{ApFig1.pdf}\n\n \\caption{Examples generated by I-GOS in the deletion and insertion tasks using VGG19 as the baseline model.}\n \\label{fig:vgg1}\n\\end{figure*}\n\n\n\n\n\n\n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=1.7\\columnwidth]{ApFig2.pdf}\n\n \\caption{Examples generated by I-GOS in the deletion and insertion tasks using VGG19 as the baseline model.}\n \\label{fig:vgg2}\n\\end{figure*}\n\n\n\n\n\n\n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=1.7\\columnwidth]{ApFig3.pdf}\n\n \\caption{Examples generated by I-GOS in the deletion and insertion tasks using ResNet50 as the baseline model.}\n \\label{fig:resnet1}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=1.7\\columnwidth]{ApFig4.pdf}\n\n \\caption{Examples generated by I-GOS in the deletion and insertion tasks using ResNet50 as the baseline model.}\n \\label{fig:resnet2}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\\subsection*{{\\uppercase\\expandafter{\\romannumeral4}. Deletion and Insertion Visualizations}}\nFig. \\ref{fig:compare1b} shows more comparison examples between different approaches on $224 \\times 224$ heatmaps.\nFig. 
\\ref{fig:GradCamRISEb} shows more visual comparisons between our approach, GradCAM, and RISE.\nFrom Fig. \\ref{fig:compare1b} we can see that Mask focuses on the person instead of the {\\em Yawl} in the left image, and focuses on the grass instead of the {\\em Impala} in the right image, indicating that sometimes the optimization can be stuck in a bad local optimum.\nFrom Fig. \\ref{fig:GradCamRISEb} we observe that sometimes GradCAM also fires on the image border, corners, or irrelevant parts of the image ({\\em Grey whale} in Fig. \\ref{fig:GradCamRISEb}), which results in bad deletion and insertion scores. \nThe randomness in the masks indeed limits the performance of RISE ({\\em West Highland white terrier} in Fig. \\ref{fig:GradCamRISEb}).\n\n\nFigs. \\ref{fig:vgg1}-\\ref{fig:vgg2} show some examples generated by our approach I-GOS in the deletion and insertion tasks using VGG19 as the baseline model.\nFigs. \\ref{fig:resnet1}-\\ref{fig:resnet2} show some examples generated by I-GOS in the deletion and insertion tasks using ResNet50 as the baseline model.\nThe deletion or insertion image is generated by $I_0 \\odot \\text{up}(M) + \\tilde{I}_0 \\odot \\left({\\bf 1}-\\text{up}(M)\\right)$, where the resolution of $M$ is $28\\times 28$.\nFor the deletion image, we initialize the mask $M$ as a matrix of ones, then set the top $N$ pixels in the mask to $0$ based on the values of the heatmap, where the deletion ratio represents the proportion of pixels that are set to $0$.\nFor the insertion image, we initialize the mask $M$ as a matrix of zeros, then set the top $N$ pixels in the mask to $1$ based on the values of the heatmap, where the insertion ratio represents the proportion of pixels that are set to $1$.\nIn Figs. \\ref{fig:vgg1}-\\ref{fig:resnet2}, the masked\/revealed regions of the images may seem a little larger than the deletion\/insertion ratios suggest. The reason is that after upsampling the mask $M$, some pixels on the border may have values between $0$ and $1$, resulting in larger regions being masked or revealed.\nThe predicted class probability is the output value after softmax for the same category using the original image, the deletion image, and the insertion image as the input, respectively.\nFrom Figs. \\ref{fig:vgg1}-\\ref{fig:resnet2} we observe that the proposed approach I-GOS can utilize a low deletion ratio to achieve a low predicted class probability in the deletion task, and a low insertion ratio to achieve a high predicted class probability in the insertion task at the same time, indicating that I-GOS truly discovers the key features of the images that the CNN is using.\nIn particular, we see that the CNN is indeed fixating on very small regions in the image and very local features in many cases to make a prediction; e.g., in \\textit{Pomeranian}, the face of the dog is of utmost importance. Without the face the prediction is reduced to almost zero, and with only the face and a rough outline of the dog, the prediction is almost perfect. The same can be said for \\textit{Eft}, \\textit{Black grouse}, \\textit{lighthouse} and \\textit{boxer}. Interestingly, for \\textit{Container ship} and \\textit{trailer truck}, their functional parts are extremely important to the classification. 
From Fig. \ref{fig:vgg1}-\ref{fig:resnet2} we observe that the proposed approach I-GOS can utilize a low deletion ratio to achieve a low predicted class probability in the deletion task, and at the same time a low insertion ratio to achieve a high predicted class probability in the insertion task, indicating that I-GOS truly discovers the key features of the images that the CNN is using.\nIn particular, we find that the CNN often fixates on very small regions in the image and on very local features when making a prediction; e.g., in \textit{Pomeranian}, the face of the dog is of utmost importance. Without the face the prediction is reduced to almost zero, and with only the face and a rough outline of the dog, the prediction is almost perfect. The same can be said for \textit{Eft}, \textit{Black grouse}, \textit{lighthouse} and \textit{boxer}. Interestingly, for \textit{Container ship} and \textit{trailer truck}, their functional parts are extremely important to the classification. \textit{Trailer truck} almost cannot be classified without the wheels (and could be classified with only the wheels), and \textit{container ship} cannot be classified without the containers (and could be classified with almost only the containers and a rough outline of the ship).\n\n\n\n\n\n{\small\n\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introduction}\nBack in the 1970s, Sm$_2$Co$_{17}$ was the strongest known hard magnetic compound. \nAs Co is an expensive element whereas Fe is relatively cheap, developing iron-based permanent magnets was required.\nSm$_{2}$Fe$_{17}$, more generally known as R$_{2}$Fe$_{17}$ where R represents rare-earth elements, was a natural choice.\nHowever, the Curie temperature $T_{c}$ of R$_{2}$Fe$_{17}$ is too low for practical applications.\nFor example, the $T_{c}$ of Nd$_{2}$Fe$_{17}$ is about 330 K~\cite{Nd2Fe17-3, Nd2Fe17-1, Nd2Fe17-2}.\nSagawa came up with the following idea in order to raise the $T_{c}$ of R$_{2}$Fe$_{17}$: the introduction of light elements, such as B and C, into the interstitial region of R$_{2}$Fe$_{17}$ would expand the volume of R$_{2}$Fe$_{17}$~\cite{Sagawa1, Sagawa-http}.\nThis volume expansion could make the ferromagnetism stronger through the magnetovolume effect; consequently, $T_{c}$ would go up. \nHe added B to Nd$_{2}$Fe$_{17}$ based on this idea, and successfully developed Nd-Fe-B magnets having a $T_{c}$ of 585 K~\cite{Sagawa1, Sagawa2}.\nIt is important to note that the main phase of Nd-Fe-B magnets is Nd$_{2}$Fe$_{14}$B, not Nd$_{2}$Fe$_{17}$B.\n\nAfter the Nd-Fe-B magnet was developed, the role of B in the magnetism of Nd$_{2}$Fe$_{14}$B was studied theoretically~\cite{Kanamori1, Kanamori2, Kanamori3}. \nKanamori pointed out the importance of the $p$-$d$ hybridization between B and Fe sites. \nThe magnetic moment of Fe in the vicinity of B should be suppressed; this is called cobaltization. \nThe cobaltized Fe (pseudo Co) can enhance the magnetic moment of the surrounding Fe due to the $d$-$d$ hybridization between pseudo Co and Fe, as in Fe-Co alloys~\cite{Hasegawa,Hamada}.\nIt has been believed for decades that, when these chemical effects owing to the hybridizations are summed up, B increases the total magnetic moment of Nd$_{2}$Fe$_{14}$B.\n\nAsano and Yamaguchi studied the magnetic properties of Y$_{2}$Fe$_{14}$B and the hypothetical compound Y$_{2}$Fe$_{14}$ through first-principles calculations~\cite{Asano1, Asano2}. \nTheir results show \n(i) the magnetic moment of Fe at the nearest neighbor of B in Y$_{2}$Fe$_{14}$B is reduced by the cobaltization, \n(ii) the magnetic moment of Fe at the nearest neighbor of pseudo Co is enhanced, and \n(iii) the effect of B on the magnetization in Y$_{2}$Fe$_{14}$B is slightly negative. \nTherefore, Y$_{2}$Fe$_{14}$B has a smaller total magnetic moment than Y$_{2}$Fe$_{14}$. \nHowever, they optimized only the lattice parameters within the atomic-sphere approximation, and the internal coordinates were fixed. \nMoreover, magnetovolume effects and chemical effects were not distinguished. \nTherefore, the effects of B on the magnetism in R$_{2}$Fe$_{14}$B still need to be clarified.\n\nIn this study, we systematically investigate the influence of B on the electronic states and magnetism of Nd$_{2}$Fe$_{14}$B through first-principles calculations. \nWe find that the total magnetic moment (per formula unit) and the magnetization (per volume) of Nd$_{2}$Fe$_{14}$B decrease by the addition of B. 
\nThis result is analyzed in terms of the chemical effect and magnetovolume effect caused by B. \nThe local magnetic moment at each Fe site is computed and is discussed in connection with cobaltization. \nWe also study Nd$_{2}$Fe$_{14}X$, where $X$ represents C, N, O and F, in order to investigate the effects of the $X$ element on the magnetization, magnetocrystalline anisotropy and stability of Nd$_{2}$Fe$_{14}X$. \n\n\\section{Computational methods}\nWe perform first-principles calculations of Nd$_{2}$Fe$_{14}X$ within density functional theory. \nWe use the computational package OpenMX, which is based on pseudopotentials and pseudo-atomic-orbital basis functions~\\cite{OpenMX}. \nThe basis set of Nd, Fe and $X$ is $s2p2d2$, and the cutoff radii of these atoms are 8.0, 6.0 and 7.0 a.u., respectively.\nFor semicore states, we treat 3$s$ and 3$p$ orbitals as valence electrons in Fe, as well as 5$s$ and 5$p$ in Nd. \nAn open-core pseudopotential is used for Nd atoms, where 4$f$ electrons are treated as spin-polarized core electrons.\nThe calculation for valence electronic states does not include the spin-orbit interaction. \nWe adopt the Perdew-Burke-Ernzerhof exchange-correlation functional in the generalized gradient approximation~\\cite{GGA}.\nThe lattice parameters and atomic positions of Nd$_{2}$Fe$_{14}X$ are all relaxed. \nFor the convergence criteria, the maximum force on each atom and the total energy are 10$^{-4}$ hartree\/bohr and 10$^{-7}$ hartree, respectively.\nSpin collinear structures are assumed in all the calculations for Nd$_{2}$Fe$_{14}X$.\nThe cutoff energy is 500 Ry, and 8 $\\times$ 8 $\\times$ 6 $k$-point meshes are adopted for all the calculations. \n\nThe magnetic moment is estimated from the spin moment of valence electrons and the contribution of Nd-4$f$ electrons $g_{J}J$ = 3.273 $\\mu_{\\text{B}}$, \nwhere $g_{J}$ is the Lande $g$-factor and $J$ is the total angular momentum of Nd-4$f$ electrons. \nThe magnetization is calculated from the total magnetic moment of Nd$_{2}$Fe$_{14}X$ by dividing volume $V$ (see Appendix~\\ref{app:latticeconstants}) and multiplying the Bohr magneton $\\mu_{\\text{B}}$.\nThe crystal-field parameters $A_{l}^{m}$ of Nd are calculated by using the computational package QMAS, which is based on \nthe projector augmented-wave method~\\cite{QMAS}.\nSee Refs.~\\cite{Miyake,Harashima2} for more details.\n\nWe note that there are different notations for the Wyckoff positions of Nd$_{2}$Fe$_{14}$B~\\cite{Herbst1,Shoemaker,Givord,Herbst2}.\nThroughout this paper, the notation given in Refs.~\\onlinecite{Herbst1,Herbst2} is used.\n\n\\section{Results and Discussion}\n\\subsection{Roles of B in Nd$_{2}$Fe$_{14}$B}\n\\begin{table}[b]\n \\caption{The magnetic moment $m$ [$\\mu_{\\text{B}}$\/f.u.] and magnetization $\\mu_{0}M$ [T] \n of Nd$_{2}$Fe$_{14}$B, Nd$_{2}$Fe$_{14}$B$_{0}$, and Nd$_{2}$Fe$_{14}$.}\n \\vspace{5pt}\n \\begin{tabular}{lcc}\n \\hline \\hline\n & $m$ [$\\mu_{\\text{B}}$\/f.u.] & $\\mu_{0}M$ [T] \n \\\\\n \\hline\n Nd$_{2}$Fe$_{14}$B & 37.42 & 1.86\n \\\\\n Nd$_{2}$Fe$_{14}$B$_{0}$ & 38.87 & 1.93\n \\\\\n Nd$_{2}$Fe$_{14}$ & 38.43 & 1.95\n \\\\ [2pt]\n \\hline \\hline\n \\end{tabular}\n \\label{table:mag_nd2fe14b}\n\\end{table}\n\n\\begin{figure}[ht]\n\\includegraphics[width=8.5cm]{1.eps}\n\\caption{(Color online) The local magnetic moments of Fe at each site $m_{\\text{Fe}}$ in Nd$_{2}$Fe$_{14}$B (closed red circles) and Nd$_{2}$Fe$_{14}$B$_{0}$ (open black circles). 
\n The site notation follows Refs.~\\onlinecite{Herbst1,Herbst2}.\n Lines are provided as a visual guide.}\n\\label{fig:localmoment_nd2fe14b}\n\\end{figure}\n\nWe calculate the magnetic moment and magnetization of Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{14}$ in order to investigate the effects of B on Nd$_{2}$Fe$_{14}$B. \nNd$_{2}$Fe$_{14}$ is a hypothetical compound in which the $X$ site is empty and all the atomic positions and lattice parameters are optimized.\nFor comparison, we also study Nd$_{2}$Fe$_{14}$B$_{0}$, \nin which the lattice parameters and atomic positions are the same as those of Nd$_{2}$Fe$_{14}$B, but B has been removed. \nThese hypothetical compounds are introduced in order to distinguish the magnetovolume and chemical effects on the magnetic moment and magnetization of Nd$_{2}$Fe$_{14}$B. \nThe results are listed in Table~\\ref{table:mag_nd2fe14b}.\nFrom the comparison between Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{14}$, we find that the introduction of B decreases both the magnetic moment and magnetization of Nd$_{2}$Fe$_{14}$B.\nThe magnetovolume effect of B can be evaluated by comparing magnetization of Nd$_{2}$Fe$_{14}$B$_{0}$ with that of Nd$_{2}$Fe$_{14}$. \nThe difference between the two comes solely from the structural change. \nThe magnetic moment of Nd$_{2}$Fe$_{14}$B$_{0}$ is enhanced by 1\\% compared to that of Nd$_{2}$Fe$_{14}$. \nWhen it is converted to magnetization, however, \nthe magnetization is reduced by 0.017 T due to the volume expansion.\nBy comparing Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{14}$B$_{0}$, \nwe can see that the chemical effect of B is negative on both the magnetic moment and magnetization. \nThe magnitude is larger in the chemical effect than in the magnetovolume effect.\nThis leads to a decrease of the magnetic moment caused by B. \nThis result gives us a different insight from that of the current belief that B enhances the magnetic moment and magnetization of Nd$_{2}$Fe$_{14}$B. \n\nThe local magnetic moments of Fe at each site in Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{14}$B$_{0}$ are shown in Fig.\\ref{fig:localmoment_nd2fe14b}. \nBy following the concept of cobaltization, one can expect the decrease of the Fe magnetic moment at the neighbors of B.\nThe local magnetic moment at the Fe sites near the pseudo Co, in turn, can be expected to be enhanced due to the presence of pseudo Co.\nThe Fe atoms at the 16k$_{1}$ and 4e sites are the first and second neighbors of B, and the distances between these Fe atoms and B are about 2.1 \\AA. \nTherefore, these Fe atoms become pseudo Co.\nIn fact, the decrease of the Fe local moments is actually seen at these two sites in Fig.\\ref{fig:localmoment_nd2fe14b}. \nAll the other Fe sites are near the pseudo Co, thus, we can see the small enhancement of the magnetic moment at these sites.\nThe reduction at the pseudo Co sites are larger than the increase at the neighbors of the pseudo Co sites in magnitude.\nTo sum up, B reduces the magnetic moment and magnetization of Nd$_{2}$Fe$_{14}$B.\n\nNext, we examine how the magnetocrystalline anisotropy of Nd$_{2}$Fe$_{14}$B is influenced by the added B. \nTo do this, we analyze the crystal-field parameter $A_{2}^{0} \\langle r^{2} \\rangle$ of Nd. 
\nThe magnetocrystalline anisotropy energy $E_{\\textrm {MAE}}$ can be expressed as, \n\\begin{equation}\n E_{\\text{MAE}} = K_{1} \\sin^{2}\\theta + K_{2} \\sin^{4}\\theta + \\cdots \\;.\n\\end{equation}\nIn crystal-field theory, $K_1$ is expressed as, \n\\begin{equation}\n K_{1} = -3 J \\left(J-\\dfrac{1}{2}\\right)\\Theta_{2}^{0} A_{2}^{0} \\langle r^{2} \\rangle\\;,\n\\end{equation}\nwhere $A_{2}^{0} \\langle r^{2} \\rangle$ is a crystal-field parameter. \nThe crystal-field parameter of Nd can be calculated from the effective potential obtained through first principles calculations.\nAlthough the magnetocrystalline anisotropy energy includes higher-order contributions,\nthe magnetocrystalline anisotropy of Nd at high temperatures can be discussed by the lowest order contribution~\\cite{Buschow,Sasaki}.\nThe higher-order crystal-field parameters are discussed in Appendix~\\ref{app:crystalfieldparameters}. \nThere are two different Wyckoff positions for Nd, labeled as Nd(4f) and Nd(4g).\nThe calculated $A_{2}^{0} \\langle r^{2} \\rangle$ are 220 K at Nd(4f) and 330 K at Nd(4g) for Nd$_{2}$Fe$_{14}$B. \nThose for Nd$_{2}$Fe$_{14}$ are 206 K at Nd(4f) and 434 K at Nd(4g).\nThis result indicates that Nd$_{2}$Fe$_{14}$ already shows uniaxial anisotropy, \nand the presence of B has a minor impact on the magnetocrystalline anisotropy.\n\nFrom the above results, the advantage of adding B cannot be seen, so far, \nsince its addition does not seem to play an important role in terms of enhancing the magnetization and magnetocrystalline anisotropy of Nd$_{2}$Fe$_{14}$B. \nWhat is the benefit of adding B in Nd$_{2}$Fe$_{14}$B? \nA probable role is to stabilize the structure of Nd$_{2}$Fe$_{14}$B.\nIn order to confirm that, we examine the stability of Nd$_{2}$Fe$_{14}$B. \nWe choose Nd$_{2}$Fe$_{17}$B as a reference system from the historical reason that \nSagawa intended to insert B to Nd$_{2}$Fe$_{17}$, as explained in Sec.~\\ref{introduction}. \nWe define the formation energy $\\Delta E$ as follows; \n{\\setlength\\arraycolsep{2pt}\n \\begin{eqnarray}\n \\Delta E &\\equiv& \\left(E[\\text{Nd}_{2}\\text{Fe}_{14}\\text{B}] + 3 \\mu[\\text{Fe}]\\right)\n - E[\\text{Nd}_{2}\\text{Fe}_{17}\\text{B}]\n \\label{eq:formationenergy_1}\n \\\\\n &=& \\left\\{\\left(E[\\text{Nd}_{2}\\text{Fe}_{14}\\text{B}] + 3 \\mu[\\text{Fe}]\\right) \n - \\left(E[\\text{Nd}_{2}\\text{Fe}_{17}] + \\mu[\\text{B}]\\right)\\right\\}\n \\nonumber\n \\\\\n &-& \\left\\{E[\\text{Nd}_{2}\\text{Fe}_{17}\\text{B}] \n - \\left(E[\\text{Nd}_{2}\\text{Fe}_{17}] + \\mu[\\text{B}]\\right)\\right\\}.\n \\label{eq:formationenergy_2}\n \\end{eqnarray}\n}\n$E$[Nd$_{2}$Fe$_{14}$B], $E$[Nd$_{2}$Fe$_{17}$B], $E$[Nd$_{2}$Fe$_{17}$] denote the total energy of Nd$_{2}$Fe$_{14}$B, Nd$_{2}$Fe$_{17}$B and Nd$_{2}$Fe$_{17}$ per formula unit, respectively, \nwhile $\\mu$[Fe] and $\\mu$[B] represent chemical potentials corresponding to the total energy of $\\alpha$-Fe and $\\alpha$-B per atom, respectively. \nEquation~\\eqref{eq:formationenergy_2} shows the difference between the formation energies of Nd$_{2}$Fe$_{14}$B and the system in which one B atom is added in Nd$_{2}$Fe$_{17}$ per formula unit. \nThe resultant formation energy is $\\Delta E = -1.208$ eV\/f.u.\nFrom this negative formation energy, we conclude that B in Nd$_{2}$Fe$_{14}$B stabilizes the structure of Nd$_{2}$Fe$_{14}$B. 
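\n\nFor readers who wish to redo this bookkeeping, Eq.~\eqref{eq:formationenergy_1} is a one-line computation once the DFT total energies are available. The short Python sketch below is our illustration; the total energies themselves are not tabulated in this paper, so the function is shown without numerical inputs.\n\begin{verbatim}\ndef formation_energy(e_nd2fe14b, e_nd2fe17b, mu_fe):\n    # Delta E = (E[Nd2Fe14B] + 3 mu[Fe]) - E[Nd2Fe17B],\n    # with all energies in eV per formula unit and mu_fe the total\n    # energy of alpha-Fe per atom.  A negative value, such as the\n    # -1.208 eV\/f.u. reported above, means that forming Nd2Fe14B\n    # (plus alpha-Fe) is favored over keeping B inside Nd2Fe17.\n    return (e_nd2fe14b + 3.0 * mu_fe) - e_nd2fe17b\n\end{verbatim}\n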
\n\n\\subsection{Comparison with Nd$_{2}$Fe$_{14}X$ ($X$ = C, N, O, F)}\nIn this section, we systematically compare the magnetization, magnetic moments, magnetocrystalline anisotropy and stability of Nd$_{2}$Fe$_{14}X$ with the results of Nd$_{2}$Fe$_{14}$B when $X$ is C, N, O and F. \nFigure~\\ref{fig:magneticmoment_bcnof} illustrates the magnetic moment of Nd$_{2}$Fe$_{14}X$.\nBy comparing the broken red line of Nd$_{2}$Fe$_{14}$ \nwith the black open circles of Nd$_{2}$Fe$_{14}X{_0}$, we can see that the magnetovolume effect works positively to increase the magnetic moment in all the cases studied here. \nThis tendency is proportional to the volume expansion caused by the $X$ elements (see Table~\\ref{table:latticeconstant_bcnof} in Appendix~\\ref{app:latticeconstants}). \nSimilarly, we can evaluate the chemical effects of the $X$ elements on the magnetic moment \nby comparing the black circles with the red circles of Nd$_{2}$Fe$_{14}X$, because the structures of Nd$_{2}$Fe$_{14}X_{0}$ are fixed to those of Nd$_{2}$Fe$_{14}X$.\nThe difference between these two systems is entirely the absence\/presence of the $X$ elements.\nWe can see that the chemical effect strongly depends on $X$. \nWhen $X$ = B, C and N, the chemical effect is negative, while it is positive when $X$ = O and F.\nSimilar trends for the magnetovolume and chemical effects have been reported before in NdFe$_{11}$Ti$X$~\\cite{Harashima2} \n(also in a simpler system Fe$_{4}X$~\\cite{Akai2}). \nA difference between Nd$_{2}$Fe$_{14}X$ and NdFe$_{11}$Ti$X$ is seen in the total magnetic moment for $X$ = N.\nThe total magnetic moment is suppressed in the former, whereas it is enhanced in the latter; namely \nNdFe$_{11}$TiN has a larger magnetic moment than NdFe$_{11}$Ti. \n\n\\begin{figure}[t]\n\\includegraphics[width=8.5cm]{2.eps}\n\\caption{(Color online) \nThe $X$ dependence of the magnetic moment $m$ [$\\mu_{\\text{B}}$].\nThe closed red circles are data for Nd$_{2}$Fe$_{14}X$ and the open black circles are data for Nd$_{2}$Fe$_{14}X_{0}$.\nThe broken red line illustrates the magnetic moment of Nd$_{2}$Fe$_{14}$.}\n\\label{fig:magneticmoment_bcnof}\n\\end{figure}\n\\begin{figure}[bh]\n\\includegraphics[width=8.5cm]{3.eps}\n\\caption{(Color online) \nThe $X$ dependence of magnetization $\\mu_{0}M$ [T].\nThe closed red circles and the open black circles represent the magnetization of Nd$_{2}$Fe$_{14}X$ and Nd$_{2}$Fe$_{14}X_{0}$, respectively. \nLines are a visual guide. \nFor reference, the broken red line is the magnetization of Nd$_{2}$Fe$_{14}$.}\n\\label{fig:magnetization_bcnof}\n\\end{figure}\n\nThe above results are converted to the magnetization in Fig.~\\ref{fig:magnetization_bcnof}. \nThe magnetization is lower in Nd$_{2}$Fe$_{14}X$ than in Nd$_{2}$Fe$_{14}$, except when $X$ = O.\nFrom the comparison between Nd$_{2}$Fe$_{14}$ and Nd$_{2}$Fe$_{14}X_{0}$, \nwe can see that the magnetovolume effect is negative in all cases, \nwhich is in contrast to the positive magnetovolume effect for NdFe$_{11}$Ti$X$~\\cite{Harashima2}. \nEspecially, the negative effect is large when $X$ is O or F.\nOn the other hand, the chemical effect is negative when $X$ is B, C or N, but positive when $X$ is O or F. \nThe chemical effect plays an important role in enhancing the magnetization of Nd$_{2}$Fe$_{14}$O. 
\n\n\\begin{figure}[b]\n\\includegraphics[width=8.5cm]{4.eps}\n\\caption{\n(Color online) The $X$ dependence of the local magnetic moment of Nd$_{2}$Fe$_{14}X$.\nThe closed red circles are data for Nd$_{2}$Fe$_{14}X$ and the open black circles are data for Nd$_{2}$Fe$_{14}X_{0}$.\nThe site notation follows Refs.~\\onlinecite{Herbst1,Herbst2}.}\n\\label{fig:localmagneticmoment_bcnof}\n\\end{figure}\nFigure~\\ref{fig:localmagneticmoment_bcnof} illustrates the local magnetic moments at each Fe site in Nd$_{2}$Fe$_{14}X$ and Nd$_{2}$Fe$_{14}X_{0}$. \nWe can see the increase of the magnetic moments at 16k$_{1}$ and 4e sites when $X$ changes from N to O.\nThis enhancement arises from the $p$-$d$ hybridization between O and Fe~\\cite{Asano1,Asano2,Harashima2}. \nFigure~\\ref{fig:dos_x4g_no} shows the projected density of states on N and O. \nIn the case of $X$ = N, there is a peak above the Fermi level in the majority-spin channel.\nIt comes from the anti-bonding state between N-2$p$ and Fe-3$d$ orbitals. \nThe corresponding peak in the minority-spin channel is observed at higher energy (2--3 eV above the Fermi level). \nIn the case of $X$ = O, the state is pulled down and is occupied in the majority-spin channel.\nThis spin-dependent occupancy \nmakes a distinction in magnetization between the cases of N and O. \nA similar increase is observed in the local moment of the Fe(8j) sites in NdFe$_{11}$Ti$X$ when $X$ = C and N~\\cite{Harashima2}.\nThe difference can be explained as follows:\nIn NdFe$_{11}$Ti$X$, the anti-bonding state can be constituted from $\\pi$ bonds with surrounding Fe atoms due to the highly symmetric environment at $X$\n(even though Ti slightly breaks the symmetry).\nIn Nd$_{2}$Fe$_{14}X$, one $X$-Fe $\\pi$ bond hybridizes with other Fe atoms via $\\sigma$ bond.\nThis $\\sigma$ bond is stronger than $\\pi$ bond and makes the level of the anti-bonding state higher. \nConsequently, the anti-bonding state is shifted down to the Fermi level and occupied for an element having deeper 2$p$ level, that is, O in Nd$_{2}$Fe$_{14}X$.\n\n\\begin{figure}[t]\n\\includegraphics[width=8.5cm]{5.eps}\n\\caption{(Color online) The local density of states of N in Nd$_{2}$Fe$_{14}$N and O in Nd$_{2}$Fe$_{14}$O.}\n\\label{fig:dos_x4g_no}\n\\end{figure}\n\n\\begin{figure}[b]\n\\includegraphics[width=8.5cm]{6.eps}\n\\caption{\\label{fig:A20}\n(Color online) $A_{2}^{0} \\langle r^{2} \\rangle$ of Nd$_{2}$Fe$_{14}X$ and Nd$_{2}$Fe$_{14}X_{0}$.\nThe closed red circles are data for Nd$_{2}$Fe$_{14}X$ and open black circles for Nd$_{2}$Fe$_{14}X_{0}$. 
\nThe horizontal broken red line is data for Nd$_{2}$Fe$_{14}$.\nLines are a guide to the eye.}\n\\end{figure}\nFigure \\ref{fig:A20} illustrates the dependence of the crystal-field parameter $A_{2}^{0} \\langle r^{2} \\rangle$ of Nd on $X$.\nThe values of both Nd(4f) and (4g) sites are shown.\nThey are positive in all cases and $A_{2}^{0} \\langle r^{2} \\rangle$ increases as the atomic number of $X$ increases (except for Nd(4g) at $X$ = O).\nThis implies that Nd$_{2}$Fe$_{14}X$ has uniaxial anisotropy at high temperatures and that the anisotropy is enhanced as the atomic number increases.\n$A_{2}^{0} \\langle r^{2} \\rangle$ of Nd(4f) in Nd$_{2}$Fe$_{14}X_{0}$ also increases from $X$ = C to F.\nThe chemical effect on Nd(4f) is large in magnitude and seems to be similar among $X$ = C, N, O and F.\nFor Nd(4g) in Nd$_{2}$Fe$_{14}X_{0}$, $A_{2}^{0} \\langle r^{2} \\rangle$ is almost constant and\n$X$ dependence in $A_{2}^{0} \\langle r^{2} \\rangle$ of Nd(4g) is mainly due to the chemical effect.\n\nFigure~\\ref{fig:formationenergy_bcnof} shows the formation energies of Nd$_{2}$Fe$_{14}X$ as a function of the $X$ elements calculated with Eq.~\\eqref{eq:formationenergy_1}, \nbut B is replaced with C, N, O and F.\nNd$_{2}$Fe$_{14}X$ is stable when $X$ is B or C, but not when $X$ is N, O or F. \nIn other words, Nd$_{2}$Fe$_{17}X$ is more stable when $X$ is N, O or F. \nThese results are in good agreement with the fact that Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{17}$N$_{x} (0 < x \\lesssim 3)$ can be stably synthesized in experiments.\n\\begin{figure}[t]\n\\includegraphics[width=8.5cm]{7.eps}\n\\caption{(Color online) The formation energy of Nd$_{2}$Fe$_{14}X$ \ndefined by Eq.(\\ref{eq:formationenergy_1}) with replacement of B with C, N, O, and F.}\n\\label{fig:formationenergy_bcnof}\n\\end{figure}\n\\section{Conclusions}\nThe roles of added B in Nd$_{2}$Fe$_{14}$B have been carefully investigated by first-principles calculations. \nAs reported in previous studies, cobaltized Fe atoms can be seen in Nd$_{2}$Fe$_{14}$B. \nHowever, magnetization is not enhanced by the added B.\nWe clarify that both the chemical and magnetovolume effects work negatively on the magnetization of Nd$_{2}$Fe$_{14}$B. \nThe magnetocrystalline anisotropy discussed by the crystal-field parameter $A_{2}^{0} \\langle r^{2} \\rangle$ has a minor effect from the addition of B.\nWe also systematically examined the effects of the $X$ elements on the magnetism in Nd$_{2}$Fe$_{14}X$ through first-principles calculations. \nThe enhancement in the magnetization is observed only for $X$ = O.\nThe crystal-field parameter $A_{2}^{0} \\langle r^{2} \\rangle$ has a tendency to increase as the atomic number of $X$ increases. \nThe stability of Nd$_{2}$Fe$_{14}X$ was also calculated by comparing the formation energies to that of Nd$_{2}$Fe$_{17}X$. \nThe formation energies of Nd$_{2}$Fe$_{14}$B and Nd$_{2}$Fe$_{14}$C are negative relative to Nd$_{2}$Fe$_{17}X$, \nwhile Nd$_{2}$Fe$_{14}$N, Nd$_{2}$Fe$_{14}$O and Nd$_{2}$Fe$_{14}$F are not energetically stable.\nThis result is consistent with experimental results. \n\n\\begin{acknowledgments}\nWe thank Dr. 
Taro Fukazawa for his fruitful and stimulating discussion.\nThis work was supported by the Elements Strategy Initiative Project under the auspices of MEXT, \nby the ``Materials research by Information Integration''\nInitiative (MI$^{2}$I) project of the Support Program for Starting Up Innovation Hub from the\nJapan Science and Technology Agency (JST),\nand also by \nMEXT as a social and scientific priority issue \n(Creation of new functional Devices and high-performance Materials to \nSupport next-generation Industries; CDMSI) to be tackled by using the post-K computer, \nas well as JSPS KAKENHI Grant No.~17K04978.\nThe calculations were partly carried out by using supercomputers at ISSP, The University of Tokyo, and TSUBAME, Tokyo Institute of Technology, the supercomputer of ACCMS, Kyoto University, and also \nby the K computer, RIKEN (Project No. hp160227, No. hp170100, and No. hp170269).\n\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTargeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over ``opinion targets'' that explicitly appear in the sentence. For example, given the sentence \textit{``I hated their service, but their food was great''}, the sentiment polarities for the targets \textit{``service''} and \textit{``food''} are negative and positive respectively.\nA target is usually an entity or an entity aspect.\n\nIn recent years, neural network models have been designed to automatically learn useful low-dimensional representations from targets and contexts and obtain promising results~\cite{dong2014adaptive,tang2016effective}.\nHowever, these neural network models are still in their infancy when dealing with the fine-grained targeted sentiment classification task.\n\nThe attention mechanism, which has been used successfully in machine translation \cite{bahdanau2014neural}, is incorporated to force the model to pay more attention to context words with closer semantic relations to the target.\nThere are already some studies that use attention to generate target-specific sentence representations \cite{wang2016attention,ma2017interactive,chen2017recurrent}\nor to transform sentence representations according to target words \cite{li2018transformation}.\nHowever, these studies depend on complex recurrent neural networks (RNNs) as sequence encoders to compute the hidden semantics of texts.\n\nThe first problem with previous works is that the modeling of text relies on RNNs.\nRNNs, such as LSTM, are very expressive, but they are hard to parallelize, and backpropagation through time (BPTT) requires large amounts of memory and computation.\nMoreover, essentially every training algorithm for RNNs is truncated BPTT, which affects the model's ability to capture dependencies over longer time scales \cite{werbos1990backpropagation}.\nAlthough LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long-distance information, this usually requires a large amount of training data.\nAnother problem that previous studies ignore is the label unreliability issue: \textit{neutral} sentiment is a fuzzy sentimental state that brings difficulty to model learning.\nAs far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.\n\nThis paper proposes an attention-based model to solve the problems above.\nSpecifically, our model eschews recurrence 
and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words.\nTo deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels.\nWe also apply pre-trained BERT \cite{devlin2018bert} to this task and show that our model enhances the performance of the basic BERT model.\nExperimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative to the best RNN-based models.\n\nThe main contributions of this work are presented as follows:\n\begin{enumerate}\n\item We design an attentional encoder network to draw the hidden states and semantic interactions between target and context words.\n\item We raise the label unreliability issue and add an effective label smoothing regularization term to the loss function to encourage the model to be less confident with the training labels.\n\item We apply pre-trained BERT to this task; our model enhances the performance of the basic BERT model and obtains new state-of-the-art results.\n\item We evaluate the model sizes of the compared models and show that the proposed model is lightweight.\n\end{enumerate}\n\n\section{Related Work}\n\nResearch approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.\n\nTraditional machine learning methods, including rule-based methods \cite{ding2008holistic} and statistic-based methods \cite{jiang2011target}, mainly focus on extracting a set of features like sentiment lexicon features and bag-of-words features to train a sentiment classifier \cite{rao2009semi}.\nThe performance of these methods highly depends on the effectiveness of the feature engineering work, which is labor-intensive.\n\nIn recent years, neural network methods have been getting more and more attention, as they do not need handcrafted features and can encode sentences with low-dimensional word vectors in which rich semantic information is retained.\nIn order to incorporate target words into a model,\nTang et al. \shortcite{tang2016effective} propose TD-LSTM, which extends LSTM by using two unidirectional LSTMs to model the left context and right context of the target word respectively.\nTang et al. \shortcite{tang2016aspect} design MemNet, which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher-level semantic information.\nWang et al. \shortcite{wang2016attention} propose ATAE-LSTM, which concatenates target embeddings with word representations and lets targets participate in computing attention weights.\nChen et al. \shortcite{chen2017recurrent} propose RAM, which adopts a multiple-attention mechanism on a memory built with a bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs).\nMa et al. 
\\shortcite{ma2017interactive} propose IAN which learns the representations of the target and context with two attention networks interactively.\n\n\\section{Proposed Methodology}\n\nGiven a context sequence $\\mathbf{w^c} = \\{w_1^c, w_2^c, ..., w_n^c\\}$\nand a target sequence $\\mathbf{w^t} = \\{w_1^t, w_2^t, ..., w_m^t\\}$,\nwhere $\\mathbf{w^t}$ is a sub-sequence of $\\mathbf{w^c}$.\nThe goal of this model is to predict the sentiment polarity of the\nsentence $\\mathbf{w^c}$ over the target $\\mathbf{w^t}$.\n\nFigure \\ref{fig:model} illustrates the overall architecture of the proposed \\textbf{A}ttentional \\textbf{E}ncoder \\textbf{N}etwork (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer.\nEmbedding layer has two types: GloVe embedding and BERT embedding.\nAccordingly, the models are named \\textbf{AEN-GloVe} and \\textbf{AEN-BERT}.\n\n\\subsection{Embedding Layer}\n\n\\subsubsection{GloVe Embedding}\n\nLet $L \\in \\mathbb{R}^{d_{emb} \\times |V|}$ to be the pre-trained GloVe \\cite{pennington2014glove} embedding matrix,\nwhere $d_{emb}$ is the dimension of word vectors and $|V|$ is the vocabulary size.\nThen we map each word $w^i \\in \\mathbb{R}^{|V|}$ to its corresponding embedding vector $e_i \\in \\mathbb{R}^{d_{emb} \\times 1}$,\nwhich is a column in the embedding matrix $L$.\n\n\\subsubsection{BERT Embedding}\n\nBERT embedding uses the pre-trained BERT to generate word vectors of sequence.\nIn order to facilitate the training and fine-tuning of BERT model,\nwe transform the given context and target to\n``[CLS] + context + [SEP]'' and ``[CLS] + target + [SEP]'' respectively.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{model.pdf}\n\\caption{Overall architecture of the proposed AEN.}\n\\label{fig:model}\n\\end{figure}\n\n\\subsection{Attentional Encoder Layer} \\label{Attentional Encoder}\n\nThe attentional encoder layer is a parallelizable and interactive alternative of LSTM\nand is applied to compute the hidden states of the input embeddings.\nThis layer consists of two submodules:\nthe \\textbf{M}ulti-\\textbf{H}ead \\textbf{A}ttention (MHA) and the \\textbf{P}oint-wise \\textbf{C}onvolution \\textbf{T}ransformation (PCT).\n\n\\subsubsection{Multi-Head Attention} \\label{sec:MHA}\n\n\\textbf{M}ulti-\\textbf{H}ead \\textbf{A}ttention (MHA) is the attention that can perform multiple attention function in parallel.\nDifferent from Transformer \\cite{vaswani2017attention}, we use \\textbf{Intra-MHA} for introspective context words modeling\nand \\textbf{Inter-MHA} for context-perceptive target words modeling, which is more lightweight and target is modeled according to a given context.\n\nAn attention function maps a key sequence $\\mathbf{k} = \\{k_1, k_2, ..., k_n\\}$ and\na query sequence $\\mathbf{q} = \\{q_1, q_2, ..., q_m\\}$ to an output sequence $\\mathbf{o}$:\n\\begin{align}\nAttention(\\mathbf{k}, \\mathbf{q}) &= softmax(f_{s}(\\mathbf{k}, \\mathbf{q})) \\mathbf{k}\n\\end{align}\nwhere $f_{s}$ denotes the alignment function which learns the semantic relevance between $q_j$ and $k_i$:\n\\begin{align}\nf_{s}(k_i, q_j) &= tanh([k_i; q_j] \\cdot W_{att})\n\\end{align}\nwhere $W_{att} \\in \\mathbb{R}^{2d_{hid}}$ are learnable weights.\n\nMHA can learn \\emph{n\\_head} different scores in parallel child spaces and is very powerful for alignments.\nThe $n_{head}$ outputs are concatenated and projected to the specified hidden dimension $d_{hid}$, 
namely,\n\\begin{align}\nMHA(\\mathbf{k}, \\mathbf{q}) &= [\\mathbf{o}^1; \\mathbf{o}^2...; \\mathbf{o}^{n_{head}}] \\cdot W_{mh} \\\\\n\\mathbf{o}^h &= Attention^h(\\mathbf{k}, \\mathbf{q})\n\\end{align}\nwhere ``$;$'' denotes vector concatenation, $W_{mh} \\in \\mathbb{R}^{d_{hid} \\times d_{hid}}$,\n$\\mathbf{o}^h = \\{o_1^h, o_2^h, ..., o_m^h\\}$ is the output of the $h$-th head attention and $h \\in [1, n_{head}]$.\n\n\n\\textbf{Intra-MHA}, or multi-head self-attention,\nis a special situation for typical attention mechanism that $\\mathbf{q} = \\mathbf{k}$.\nGiven a context embedding $\\mathbf{e^c}$, we can get the introspective context representation $\\mathbf{c^{intra}}$ by:\n\\begin{align}\n\\mathbf{c^{intra}} = MHA(\\mathbf{e^c}, \\mathbf{e^c})\n\\end{align}\nThe learned context representation\n$\\mathbf{c^{intra}}=\\{c_1^{intra}, c_2^{intra}, ..., c_n^{intra}\\}$ is aware of long-term dependencies.\n\n\\textbf{Inter-MHA} is the generally used form of attention mechanism that $\\mathbf{q}$ is different from $\\mathbf{k}$.\nGiven a context embedding $\\mathbf{e^c}$ and a target embedding $\\mathbf{e^t}$,\nwe can get the context-perceptive target representation $\\mathbf{t^{inter}}$ by:\n\\begin{align}\n\\mathbf{t^{inter}} = MHA(\\mathbf{e^c}, \\mathbf{e^t})\n\\end{align}\n\nAfter this interactive procedure,\neach given target word $e_j^t$ will have a composed representation selected from context embeddings $\\mathbf{e^{c}}$.\nThen we get the context-perceptive target words modeling $\\mathbf{t^{inter}}=\\{t_1^{inter}, t_2^{inter}, ..., t_m^{inter}\\}$.\n\n\\subsubsection{Point-wise Convolution Transformation} \\label{sec:PCT}\n\nA \\textbf{P}oint-wise \\textbf{C}onvolution \\textbf{T}\nransformation (PCT)\ncan transform contextual information gathered by the MHA.\nPoint-wise means that the kernel sizes are 1 and\nthe same transformation is applied to every single token belonging to the input.\nFormally, given a input sequence $\\mathbf{h}$, PCT is defined as:\n\\begin{align}\nPCT(\\mathbf{h}) &= \\sigma(\\mathbf{h} * W_{pc}^1 + b_{pc}^1) * W_{pc}^2 + b_{pc}^2\n\\end{align}\nwhere $\\sigma$ stands for the ELU activation,\n$*$ is the convolution operator,\n$W_{pc}^1 \\in \\mathbb{R}^{d_{hid} \\times d_{hid}}$ and $W_{pc}^2 \\in \\mathbb{R}^{d_{hid} \\times d_{hid}}$\nare the learnable weights of the two convolutional kernels,\n$b_{pc}^1 \\in \\mathbb{R}^{d_{hid}}$ and $b_{pc}^2 \\in \\mathbb{R}^{d_{hid}}$\nare biases of the two convolutional kernels.\n\nGiven $\\mathbf{c^{intra}}$ and $\\mathbf{t^{inter}}$,\nPCTs are applied to get the output hidden states of the attentional encoder layer\n$\\mathbf{h^c}=\\{h_1^c, h_2^c, ..., h_n^c\\}$\nand $\\mathbf{h^t}=\\{h_1^t, h_2^t, ..., h_m^t\\}$\nby:\n\\begin{align}\n\\mathbf{h^c} &= PCT(\\mathbf{c^{intra}}) \\\\\n\\mathbf{h^t} &= PCT(\\mathbf{t^{inter}})\n\\end{align}\n\n\n\\subsection{Target-specific Attention Layer}\n\nAfter we obtain the introspective context representation $\\mathbf{h^c}$ and\nthe context-perceptive target representation $\\mathbf{h^t}$,\nwe employ another MHA to obtain the target-specific context representation $\\mathbf{h^{tsc}}=\\{h_1^{tsc}, h_2^{tsc}, ..., h_m^{tsc}\\}$ by:\n\\begin{align}\n\\mathbf{h^{tsc}} = MHA(\\mathbf{h^c}, \\mathbf{h^t})\n\\end{align}\nThe multi-head attention function here also has its independent parameters.\n\n\\subsection{Output Layer}\n\nWe get the final representations of the previous outputs by average pooling,\nconcatenate them as the final comprehensive representation 
$\\mathbf{\\tilde{o}}$,\nand use a full connected layer to project the concatenated vector into the space of the targeted $C$ classes.\n\\begin{align}\n\\mathbf{\\tilde{o}} &= [h_{avg}^c; h_{avg}^t; h_{avg}^{tsc}] \\\\\nx &= \\tilde{W_o}^T{{\\mathbf{\\tilde{o}}}}+\\tilde{b_o} \\\\\ny &= softmax(x) \\\\\n &= \\frac{exp(x)}{\\sum_{k=1}^{C} exp(x)}\n\\end{align}\nwhere $y \\in \\mathbb{R}^{C}$ is the predicted sentiment polarity distribution,\n$\\tilde{W_o} \\in \\mathbb{R}^{1 \\times C}$ and $\\tilde{b_o} \\in \\mathbb{R}^{C}$ are learnable parameters.\n\n\\subsection{Regularization and Model Training} \\label{sec:LSR}\n\nSince \\textit{neutral} sentiment is a very fuzzy sentimental state, training samples which labeled \\textit{neutral} are unreliable.\nWe employ a \\textbf{L}abel \\textbf{S}moothing \\textbf{R}egularization (LSR) term in the loss function.\nwhich penalizes low entropy output distributions \\cite{szegedy2016rethinking}.\nLSR can reduce overfitting by preventing a network from assigning the full probability to each training example during training, replaces the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.\n\nFor a training sample $x$ with the original ground-truth label distribution $q(k|x)$,\nwe replace $q(k|x)$ with\n\\begin{align}\nq(k|x) = (1-\\epsilon) q(k|x) + \\epsilon u(k)\n\\end{align}\nwhere $u(k)$ is the prior distribution over labels ,\nand $\\epsilon$ is the smoothing parameter.\nIn this paper, we set the prior label distribution to be uniform $u(k) = 1\/C$.\n\nLSR is equivalent to the KL divergence between the prior label distribution $u(k)$ and the network's predicted distribution $p_\\theta$.\nFormally, LSR term is defined as:\n\\begin{align}\n\\mathcal{L}_{lsr} = - D_{KL}(u(k) \\| p_\\theta)\n\\end{align}\n\nThe objective function (loss function) to be optimized is the cross-entropy loss with $\\mathcal{L}_{lsr}$ and $\\mathcal{L}_2$ regularization, which is defined as:\n\n\\begin{align}\n\\mathcal{L}(\\theta) = - \\sum_{i=1}^{C} \\hat{y}^c log (y^c) + \\mathcal{L}_{lsr} + \\lambda \\sum_{\\theta \\in \\Theta} {\\theta}^2 &\n\\end{align}\nwhere $\\hat{y} \\in \\mathbb{R}^C $ is the ground truth represented as a one-hot vector,\n$y$ is the predicted sentiment distribution vector given by the output layer,\n$\\lambda$ is the coefficient for $\\mathcal{L}_2$ regularization term, and $\\Theta$ is the parameter set.\n\n\n\n\\section{Experiments}\n\n\\subsection{Datasets and Experimental Settings}\n\nWe conduct experiments on three datasets: SemEval 2014 Task 4 \\footnote{The detailed introduction of this task can be found at \\url{http:\/\/alt.qcri.org\/semeval2014\/task4}.} \\cite{pontiki2014semeval} dataset composed of \\emph{Restaurant} reviews and \\emph{Laptop} reviews, and ACL 14 \\emph{Twitter} dataset gathered by Dong et al. \\shortcite{dong2014adaptive}. 
These datasets are labeled with three sentiment polarities: \\emph{positive}, \\emph{neutral} and \\emph{negative}.\nTable \\ref{tab:stat} shows the number of training and test instances in each category.\n\nWord embeddings in AEN-GloVe do not get updated in the learning process,\nbut we fine-tune pre-trained BERT\n\\footnote{We use uncased BERT-base from \\url{https:\/\/github.com\/google-research\/bert}.} in AEN-BERT.\nEmbedding dimension $d_{dim}$ is 300 for GloVe and is 768 for pre-trained BERT.\nDimension of hidden states $d_{hid}$ is set to 300.\nThe weights of our model are initialized with Glorot initialization \\cite{glorot2010understanding}.\nDuring training, we set label smoothing parameter $\\epsilon$ to 0.2 \\cite{szegedy2016rethinking}, the coefficient $\\lambda$ of $\\mathcal{L}_2$ regularization item is $10^{-5}$ and dropout rate is 0.1.\nAdam optimizer \\cite{kingma2014adam} is applied to update all the parameters.\nWe adopt the \\emph{Accuracy} and \\emph{Macro-F1} metrics to evaluate the performance of the model.\n\n\\begin{table}[tp]\n \\small\n \\centering\n \\begin{threeparttable}\n \\caption{Statistics of the datasets.}\n \\begin{tabular}{ccccccc}\n \\toprule\n \\multirow{2}{*}{\\textbf{Dataset}}&\n \\multicolumn{2}{c}{\\textbf{Positive}}&\\multicolumn{2}{c}{\\textbf{Neural}}&\\multicolumn{2}{c}{\\textbf{Negative}}\\cr\n \\cmidrule(lr){2-3} \\cmidrule(lr){4-5} \\cmidrule(lr){6-7}\n &Train&Test&Train&Test&Train&Test \\cr\n \\midrule\n Twitter &1561 &173 &3127 &346 &1560 &173 \\cr\n Restaurant &2164 &728 &637 &196 &807 &196 \\cr\n Laptop &994 &341 &464 &169 &870 &128 \\cr\n \\bottomrule\n \\end{tabular}\n \\label{tab:stat}\n \\end{threeparttable}\n\\end{table}\n\n\\subsection{Model Comparisons}\n\nIn order to comprehensively evaluate and analysis the performance of AEN-GloVe,\nwe list 7 baseline models and design 4 ablations of AEN-GloVe.\nWe also design a basic BERT-based model to evaluate the performance of AEN-BERT.\n\n~\\\\\n\\textbf{Non-RNN based baselines:}\n\n$\\bullet$ \\textbf{Feature-based SVM} \\cite{kiritchenko2014nrc} is a traditional support vector machine based model with extensive feature engineering.\n\n$\\bullet$ \\textbf{Rec-NN} \\cite{dong2014adaptive} firstly uses rules to transform the dependency tree and put the opinion target at the root, and then\nlearns the sentence representation toward target via semantic composition using Recursive NNs.\n\n$\\bullet$ \\textbf{MemNet} \\cite{tang2016aspect} uses multi-hops of attention layers on the context word embeddings for sentence representation to explicitly captures the importance of each context word.\n\n~\\\\\n\\textbf{RNN based baselines:}\n\n$\\bullet$ \\textbf{TD-LSTM} \\cite{tang2016effective} extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. 
The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.\n\n$\\bullet$ \\textbf{ATAE-LSTM} \\cite{wang2016attention} strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.\n\n$\\bullet$ \\textbf{IAN} \\cite{ma2017interactive} learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.\n\n$\\bullet$ \\textbf{RAM} \\cite{chen2017recurrent} strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.\n\n~\\\\\n\\textbf{AEN-GloVe ablations:}\n\n$\\bullet$ \\textbf{AEN-GloVe w\/o PCT} ablates PCT module.\n\n$\\bullet$ \\textbf{AEN-GloVe w\/o MHA} ablates MHA module.\n\n$\\bullet$ \\textbf{AEN-GloVe w\/o LSR} ablates label smoothing regularization.\n\n$\\bullet$ \\textbf{AEN-GloVe-BiLSTM} replaces the attentional encoder layer with two bidirectional LSTM.\n\n\n~\\\\\n\\textbf{Basic BERT-based model:}\n\n$\\bullet$ \\textbf{BERT-SPC} feeds sequence ``[CLS] + context + [SEP] + target + [SEP]''\ninto the basic BERT model for sentence pair classification task.\n\n\n\\subsection{Main Results}\n\n\\begin{table*}[tp]\n \\small\n \\centering\n \\begin{threeparttable}\n \\caption{Main results.\n The results of baseline models are retrieved from published papers.\n ``-\" means not reported.\n Top 3 scores are in \\textbf{bold}.}\n \\begin{tabular}{cccccccc}\n \\toprule\n \\multirow{2}{*}{ }&\\multirow{2}{*}{\\textbf{Models}}&\n \\multicolumn{2}{c}{\\textbf{Twitter}}&\\multicolumn{2}{c}{\\textbf{Restaurant}}&\\multicolumn{2}{c}{\\textbf{Laptop}}\\cr\n \\cmidrule(lr){3-4} \\cmidrule(lr){5-6} \\cmidrule(lr){7-8}\n &&Accuracy&Macro-F1&Accuracy&Macro-F1&Accuracy&Macro-F1\\cr\n \\midrule\n \\multirow{4}*{\\textbf{RNN baselines}}\n &TD-LSTM &0.7080&0.6900 &0.7563&- &0.6813&- \\cr\n &ATAE-LSTM &-&- &0.7720&- &0.6870&- \\cr\n &IAN &-&- &0.7860&- &0.7210&- \\cr\n &RAM &0.6936&0.6730 &0.8023&0.7080 &\\textbf{0.7449}&\\textbf{0.7135} \\cr\n \\midrule\n \\multirow{3}*{\\textbf{Non-RNN baselines}}\n &Feature-based SVM &0.6340&0.6330 &0.8016&- &0.7049&- \\cr\n &Rec-NN &0.6630&0.6590 &-&- &-&- \\cr\n &MemNet &0.6850&0.6691 &0.7816&0.6583 &0.7033&0.6409 \\cr\n \\midrule\n \\multirow{4}*{\\textbf{AEN-GloVe ablations}}\n &AEN-GloVe w\/o PCT &0.7066&0.6907 &0.8017&0.7050 &0.7272&0.6750 \\cr\n &AEN-GloVe w\/o MHA &0.7124&0.6953 &0.7919&0.7028 &0.7178&0.6650 \\cr\n &AEN-GloVe w\/o LSR &0.7080&0.6920 &0.8000&0.7108 &0.7288&0.6869 \\cr\n &AEN-GloVe-BiLSTM &0.7210&\\textbf{0.7042} &0.7973&0.7037 &0.7312&0.6980 \\cr\n \\midrule\n \\multirow{3}*{\\textbf{Ours}}\n &AEN-GloVe &\\textbf{0.7283}&0.6981 &\\textbf{0.8098}&\\textbf{0.7214} &0.7351&0.6904 \\cr\n &BERT-SPC &\\textbf{0.7355}&\\textbf{0.7214} &\\textbf{0.8446}&\\textbf{0.7698} &\\textbf{0.7899}&\\textbf{0.7503} \\cr\n &AEN-BERT &\\textbf{0.7471}&\\textbf{0.7313} &\\textbf{0.8312}&\\textbf{0.7376} &\\textbf{0.7993}&\\textbf{0.7631} \\cr\n \\bottomrule\n \\end{tabular}\n \\label{tab:result}\n \\end{threeparttable}\n\\end{table*}\n\nTable \\ref{tab:result} shows the performance comparison of AEN with other models.\nBERT-SPC and AEN-BERT obtain substantial accuracy improvements,\nwhich shows the power of pre-trained BERT on small-data task.\nThe 
overall performance of AEN-BERT is better than BERT-SPC,\nwhich suggests that it is important to design a downstream network customized to a specific task.\nAs the prior knowledge in the pre-trained BERT is not specific to any particular domain,\nfurther fine-tuning on the specific task is necessary for releasing the true power of BERT.\n\nThe overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words.\nATAE-LSTM, IAN and RAM are attention based models, they stably exceed the TD-LSTM method on \\emph{Restaurant} and \\emph{Laptop} datasets.\nRAM is better than other RNN based models, but it does not perform well on \\emph{Twitter} dataset,\nwhich might because bidirectional LSTM is not good at modeling small and ungrammatical text.\n\nFeature-based SVM\nis still a competitive baseline,\nbut relying on manually-designed features.\nRec-NN gets the worst performances among all neural network baselines\nas dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments.\nLike AEN, MemNet also eschews recurrence, but its overall performance is not good\nsince it does not model the hidden semantic of embeddings, and the result of the last attention is essentially a linear combination of word embeddings.\n\n\n\\subsection{Model Analysis}\n\nAs shown in Table \\ref{tab:result}, the performances of AEN-GloVe ablations are incomparable\nwith AEN-GloVe in both accuracy and macro-F1 measure.\nThis result shows that all of these discarded components are crucial for a good performance.\nComparing the results of AEN-GloVe and AEN-GloVe w\/o LSR, we observe that the accuracy of AEN-GloVe w\/o LSR drops significantly on all three datasets.\nWe could attribute this phenomenon to the unreliability of the training samples with \\textit{neutral} sentiment.\nThe overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close,\nAEN-GloVe performs better on the \\emph{Restaurant} dataset.\nMore importantly, AEN-GloVe has fewer parameters and is easier to parallelize.\n\nTo figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the \\emph{Restaurant} dataset.\nStatistical results are reported in Table \\ref{tab:result2}.\nWe implement all the compared models base on the same source code infrastructure,\nuse the same hyperparameters, and run them on the same GPU\n\\footnote{NVIDIA GTX 1080ti. }.\n\nRNN-based and BERT-based models indeed have larger model size.\nATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention based RNN models,\nmemory optimization for these models will be more difficult\nas the encoded hidden states must be kept simultaneously in memory in order to perform attention mechanisms.\nMemNet has the lowest model size as it only has one shared attention layer and two linear layers, it does not calculate hidden states of word embeddings.\nAEN-GloVe's lightweight level ranks second,\nsince it takes some more parameters than MemNet in modeling hidden states of sequences.\nAs a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but does not bring any performance improvements.\n\n\\begin{table}[tp]\n \\small\n \\centering\n \\caption{Model sizes. Memory footprints are evaluated on the Restaurant dataset. 
Lowest 2 are in \\textbf{bold}.}\n \\begin{tabular}{ccc}\n \\toprule\n \\multirow{2}{*}{\\textbf{Models}}&\n \\multicolumn{2}{c}{\\textbf{Model size}}\\cr\n \\cmidrule(lr){2-3}\n &Params $\\times 10^6$ & Memory (MB) \\cr\n \\midrule\n TD-LSTM &1.44 &12.41\\\\\n ATAE-LSTM &2.53 &16.61\\\\\n IAN &2.16 &15.30\\\\\n RAM &6.13 &31.18\\\\\n MemNet &\\textbf{0.36} &\\textbf{7.82}\\\\\n \\midrule\n AEN-BERT &112.93 &451.84\\\\\n AEN-GloVe-BiLSTM &3.97 &22.52\\\\\n AEN-GloVe &\\textbf{1.16} &\\textbf{11.04}\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:result2}\n\\end{table}\n\n\\section{Conclusion}\n\nIn this work, we propose an attentional encoder network for the targeted sentiment classification task.\nwhich employs attention based encoders for the modeling between context and target.\nWe raise the the label unreliability issue add a label smoothing regularization\nto encourage the model to be less confident with fuzzy labels.\nWe also apply pre-trained BERT to this task and obtain new state-of-the-art results.\nExperiments and analysis demonstrate the effectiveness and lightweight of the proposed model.\n\n\\bibliographystyle{acl_natbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec-1} \n Let $f_1:(M_1,J)\\to G_1\/H_1$ be a pluriharmonic map from a complex manifold \n$(M_1,J)$, and let $f_2:(M_2,I)\\to G_2\/H_2$ be a para-pluriharmonic map from a \npara-complex manifold $(M_2,I)$, where $G_i\/H_i$ are affine symmetric spaces. \n Then, the loop group method enables us to obtain a pluriharmonic potential \n$(\\eta_\\lambda,\\tau_\\lambda)$ and a para-pluriharmonic potential \n$(\\eta_\\theta,\\tau_\\theta)$ from $f_1$ and $f_2$, respectively; and \nfurthermore, the method enables us to construct pluriharmonic maps and \npara-pluriharmonic maps from their potentials, respectively (see Section \n\\ref{sec-3}). \n\\begin{center}\n\\maxovaldiam=2mm\n\\unitlength=1mm\n\\begin{picture}(136,30)\n\\put(25,24){\\oval(60,12)}\n\\put(3,26){Plurharmonic maps}\n\\put(5,20){$f_1:(M_1,J)\\to G_1\/H_1$}\n\\put(25,13){$\\Updownarrow$}\n\\put(106,13){$\\Updownarrow$}\n\\put(106,24){\\oval(60,12)}\n\\put(79,26){Para-plurharmonic maps}\n\\put(81,20){$f_2:(M_2,I)\\to G_2\/H_2$}\n\\put(25,5){\\oval(60,10)}\n\\put(3,6){Pluriharmonic potentials}\n\\put(10,2){$(\\eta_\\lambda,\\tau_\\lambda)$}\n\\put(106,5){\\oval(60,10)}\n\\put(79,6){Para-pluriharmonic potentials}\n\\put(90,2){$(\\eta_\\theta,\\tau_\\theta)$}\n\\end{picture} \n\\end{center} \n The goal of this paper is to interrelate $f_1:(M_1,J)\\to G_1\/H_1$ with \n$f_2:(M_2,I)\\to G_2\/H_2$ by interrelating $(\\eta_\\lambda,\\tau_\\lambda)$ with \n$(\\eta_\\theta,\\tau_\\theta)$. \n In this paper, we demonstrate that one can indeed locally interrelate a \npluriharmonic map with a para-pluriharmonic map in the case where its potential \nsatisfies the {\\it morphing condition} \\eqref{M} (see Theorem \\ref{thm-4.3.1}).\\par \n\n The notions of a pluriharmonic map and a para-pluriharmonic map are \ngeneralized notions of a harmonic map from a Riemann surface $\\Sigma^2$ and \na Lorentz harmonic map from a Lorentz surface $\\Sigma^2_1$, respectively. \n Consequently, Theorem \\ref{thm-4.3.1} enables us to interrelate harmonic maps \nfrom $\\Sigma^2$ with Lorentz harmonic maps from $\\Sigma^2_1$. 
\n Harmonic maps $f_1$ from $\\Sigma^2$ or Lorentz harmonic maps $f_2$ from \n$\\Sigma^2_1$ into $S^2$, $H^2$ or $S^2_1$ give rise to constant mean curvature \nsurfaces (CMC-surfaces, for short) in $\\mathbb{R}^3$, spacelike CMC-surfaces \nin $\\mathbb{R}^3_1$ or timelike CMC-surfaces in $\\mathbb{R}^3_1$; and vice \nversa. \n For this reason, one can interrelate CMC-surfaces in $\\mathbb{R}^3$ or \n$\\mathbb{R}^3_1$ with other CMC-surfaces in $\\mathbb{R}^3$ or $\\mathbb{R}^3_1$ \nby means of Theorem \\ref{thm-4.3.1}. \n In the appendix, we present concrete examples of the method developed in \nthis paper; and moreover, we investigate the relation among CMC-surfaces by use \nof such maps.\\par\n\n This paper is organized as follows: \n In Section \\ref{sec-2} we recall the basic definitions and results \nconcerning para-complex manifolds, para-pluriharmonic maps and pluriharmonic \nmaps. \n In Section \\ref{sec-3} we review elementary facts and results about the \nloop group method; and we study the relation between para-pluriharmonic or \npluriharmonic maps and loop groups. \n In Section \\ref{sec-4} we prove the main Theorem \\ref{thm-4.3.1}. \n Finally, in Section \\ref{sec-5} we actually interrelate some pluriharmonic \nmaps with para-pluriharmonic maps by means of Theorem \\ref{thm-4.3.1}. \n\\begin{acknowledgments}\n Many thanks are due to the members of GeometrieWerkstatt at the \nUniversit\\\"{a}t T\\\"{u}bingen. \n The first named author would like to express his sincere gratitude to \nWayne Rossman, Hui Ma, Yoshihiro Ohnita, and David Brander for their \nencouragement; and he is grateful to Lars Sch\\\"{a}fer for his valuable advice. \n\\end{acknowledgments}\n \n \n\n\n\\section{Pluriharmonic maps and para-pluriharmonic maps}\\label{sec-2} \n\n\\subsection{Para-complex manifolds}\\label{subsec-2.1} \n We first recall the notion of a para-complex manifold, in order to \nintroduce the notion of a para-pluriharmonic map. \n\\begin{definition}[{cf.\\ Libermann \\cite{Li1}, \\cite[p.\\ 82, p.\\ 83]{Li2}}]\n\\label{def-2.1.1}\\quad\\par \n (i) Let $M$ be a $2n$-dimensional real smooth manifold, and let $\\frak{X}M$ \ndenote the Lie algebra of smooth vector fields on $M$. \n Then $M$ is called a {\\it para-complex manifold}, if there exists a smooth \n$(1,1)$-tensor field $I$ on $M$ such that \n\\begin{enumerate}\n\\item \n $I^2=\\operatorname{id}$; \n\\item \n $\\dim_\\mathbb{R}T_p^+M=n=\\dim_\\mathbb{R}T_p^-M$ for each \n$p\\in M$; \n\\item \n $[IX,IY]-I[IX,Y]-I[X,IY]+[X,Y]=0$ for any \n$X,Y\\in\\frak{X}M$, \n\\end{enumerate} \nwhere $T^\\pm_pM$ denotes the $\\pm$-eigenspace of $I_p$ ($=$ the value of $I$ \nat $p$) in $T_pM$.\\par \n (ii) Let $(M,I)$ and $(M',I')$ be two para-complex manifolds. \n Then a smooth map $f:(M,I)\\to(M',I')$ is called {\\it para-holomorphic} \n(resp.\\ {\\it para-antiholomorphic}), if it satisfies $df\\circ I=I'\\circ df$ \n(resp.\\ $df\\circ I=-I'\\circ df$). \n\\end{definition} \n\n Every para-complex manifold can be endowed with a set of special, local \ncoordinates $(x_\\alpha^1,\\cdots,x_\\alpha^n,y_\\alpha^1,\\cdots,y_\\alpha^n)$ \nwhich are called {\\it para-holomorphic coordinates}: \n\\begin{proposition}[{cf.\\ Kaneyuki-Kozai \\cite[p.\\ 83]{Ka-Ko}}]\n\\label{prop-2.1.2} \n Let $(M,I)$ be a para-complex manifold with $\\dim_\\mathbb{R}M=2n$. 
\n Then, $M$ has an atlas $\\{(U_\\alpha,\\varphi_\\alpha)\\}_{\\alpha\\in A}$ with \n$U_\\alpha$ open and \n$\\varphi_\\alpha=(x_\\alpha^1,\\cdots,x_\\alpha^n,y_\\alpha^1,\\cdots,y_\\alpha^n)$\na coordinate map satisfying \n\\begin{enumerate}\n\\item[{\\rm (1)}] \n $I(\\partial\/\\partial x_\\alpha^a)=\\partial\/\\partial x_\\alpha^a$ \nand \n $I(\\partial\/\\partial y_\\alpha^a)=-\\partial\/\\partial y_\\alpha^a$ \nfor all $1\\leq a\\leq n;$\n\\item[{\\rm (2)}]\n $\\partial y_\\beta^b\/\\partial x_\\alpha^a\n =0=\\partial x_\\beta^b\/\\partial y_\\alpha^a$ \non $U_\\alpha\\cap U_\\beta\\neq\\emptyset$ for all $1\\leq a,b\\leq n$. \n\\end{enumerate} \n\\end{proposition} \n\n A Lorentz surface and a one sheeted hyperboloid are one of the examples of \npara-complex manifold. \n\n\n\n\n\\subsection{Para-pluriharmonic maps}\\label{subsec-2.2}\n\\subsubsection{}\\label{subsec-2.2.1} \n Now, let us recall the notion of a para-pluriharmonic map: \n\\begin{definition}[{cf.\\ Sch\\\"{a}fer \\cite[p.\\ 72]{Sc1}}]\n\\label{def-2.2.1}\n Let $(M,I)$ be a para-complex manifold with $\\dim_\\mathbb{R}M=2n$, and let \n$N$ be a smooth manifold with a torsion-free affine connection $\\nabla^N$. \n Then a smooth map $f:(M,I)\\to(N,\\nabla^N)$ is called {\\it para-pluriharmonic}, \nif it satisfies \n\\begin{equation}\\label{P}\\tag{P}\n (\\nabla df)\\bigl(\\frac{\\partial}{\\partial y^a},\n \\frac{\\partial}{\\partial x^b}\\bigr)=0 \n\\quad \\mbox{for all $1\\leq a,b\\leq n$}, \n\\end{equation}\nfor any local para-holomorphic coordinate $(x^1,\\cdots,x^n,y^1,\\cdots,y^n)$ on \n$(M,I)$. \n Here $\\nabla$ denotes the connection on $\\operatorname{End}(TM,f^{-1}TN)$ \nwhich is induced from $D$ and $\\nabla^N$, where $D$ is any para-complex (i.e., \n$DI=0$) torsion-free affine connection on $(M,I)$. \n\\end{definition}\n\n\n\\begin{remark}\\label{rem-2.2.2} \n Every para-complex manifold admits a para-complex torsion-free affine \nconnection (cf.\\ \\cite[p.\\ 64]{Sc1}). \n\\end{remark}\n\n The following lemma implies that the equation \\eqref{P} in Definition \n\\ref{def-2.2.1} is independent of the choice of para-complex torsion-free \naffine connections on $(M,I)$: \n\\begin{lemma}\\label{lem-2.2.3}\n Let $(M,I)$ be a para-complex manifold with $\\dim_\\mathbb{R}M=2n$, and let \n$D$ be any para-complex torsion-free affine connection on $(M,I)$.\n Then, every local para-holomorphic coordinate \n$(x^1,\\cdots,x^n,y^1,\\cdots,y^n)$ on $(M,I)$ satisfies \n$D_{\\partial_+^a}\\partial_-^b=0=D_{\\partial_-^a}\\partial_+^b$ for all \n$1\\leq a,b\\leq n$. \n Here, $\\partial_+^a:=\\partial\/\\partial x^a$ and \n$\\partial_-^a:=\\partial\/\\partial y^a$. \n\\end{lemma}\n\\begin{proof}\n It follows from $DI=0$ that for any $1\\leq a,b\\leq n$, \n\\[\n I(D_{\\partial_+^a}\\partial_-^b)=D_{\\partial_+^a}I(\\partial_-^b)\n -(D_{\\partial_+^a}I)\\partial_-^b\n =D_{\\partial_+^a}I(\\partial_-^b)=-D_{\\partial_+^a}\\partial_-^b. \n\\] \n This yields $D_{\\partial_+^a}\\partial_-^b\\in T^-M$. \n Similarly one has $D_{\\partial_-^b}\\partial_+^a\\in T^+M$. \n Therefore we conclude \n\\[\nT^-M\\ni \n D_{\\partial_+^a}\\partial_-^b\n=D_{\\partial_-^b}\\partial_+^a+[\\partial_+^a,\\partial_-^b]\n=D_{\\partial_-^b}\\partial_+^a\n \\in T^+M \n\\]\nbecause the torsion of $D$ is free. \n Thus $D_{\\partial_+^a}\\partial_-^b=0=D_{\\partial_-^b}\\partial_+^a$. 
\n\\end{proof}\n\n\n\n\\subsubsection{}\\label{subsec-2.2.2}\n Our goal in this subsection is to show Proposition \\ref{prop-2.2.4} (below) \nwhich will play an important role in Section \\ref{sec-3}. \n First, let us fix the setting and the notation of the proposition.\\par \n\n Let $G$ be a connected matrix group, and let $\\sigma$ be an involution of \n$G$. \n We denote by $H$ the fixed point set of $\\sigma$ in $G$, and get an affine \nsymmetric space $(G\/H,\\sigma)$. \n Let $(M,I)$ be a para-complex manifold of dimension $2n$, and let $F$ be a \nsmooth map from $(M,I)$ into $G$. \n Then we consider: \n\\begin{enumerate}[(2.2.1)] \n\\item \n $\\pi$: the projection from $G$ onto $G\/H$, \n\\item \n $\\nabla^1$: the canonical affine connection on $(G\/H,\\sigma)$ \n(see \\cite[p.\\ 54]{No} for the definition of the canonical affine connection),\n\\item \n $\\alpha:=F^{-1}\\cdot dF$: the pullback of the left-invariant Maurer-Cartan \nform on $G$ along $F$, \n\\item \n $\\frak{g}:=\\operatorname{Lie}G$,\\quad \n $\\frak{h}:=\\operatorname{Fix}(\\frak{g},d\\sigma)$,\\quad \n $\\frak{m}:=\\operatorname{Fix}(\\frak{g},-d\\sigma)$,\n\\item \n $\\alpha_\\frak{h}$ (resp.\\ $\\alpha_\\frak{m}$): the $\\frak{h}$-component \n(resp.\\ the $\\frak{m}$-component) of $\\alpha$ with respect to \n$\\frak{g}=\\frak{h}\\oplus\\frak{m}$, \n\\item \n $\\alpha_\\frak{h}^\\pm:=(1\/2)\\cdot(\\alpha_\\frak{h}\\pm{}^tI(\\alpha_\\frak{h}))$, \n\\quad \n $\\alpha_\\frak{m}^\\pm:=(1\/2)\\cdot(\\alpha_\\frak{m}\\pm{}^tI(\\alpha_\\frak{m}))$,\n\\item \n $\\partial_-\\alpha_\\frak{m}^++[\\alpha_\\frak{h}^-\\wedge\\alpha_\\frak{m}^+]=0$ \nas an abbreviation for \n$\\partial_-^a(\\alpha_\\frak{m}(\\partial_+^b))\n +[\\alpha_\\frak{h}(\\partial_-^a),\\alpha_\\frak{m}(\\partial_+^b)]=0$ \nfor all $1\\leq a,b\\leq n$, where $(x^1,\\cdots,x^n,y^1,\\cdots,y^n)$ is \nany local para-holomorphic coordinate system on $(M,I)$,\n\\item \n $T^\\pm M$: the subbundle of the tangent bundle $TM$ determined by the \n$\\pm 1$-eigenspace of $I$ in $TM$,\n\\item\n $[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$ as an abbreviation of: \n$[\\alpha_\\frak{m}\\wedge\\alpha_\\frak{m}]\\equiv 0$ on $T^+ M\\times T^+ M$ and \non $T^- M\\times T^- M$,\n\\item\n$\\mathbb{C}^*:=\\mathbb{C}\\setminus\\{0\\}$. \n\\end{enumerate}\n\\setcounter{equation}{10}\n Now, we are in a position to state \n\\begin{proposition}\\label{prop-2.2.4}\n With the above setting and notation, the following statements {\\rm (a)} and \n{\\rm (b)} are equivalent$:$ \n\\begin{enumerate}\n\\item[{\\rm (a)}] \n A map $f:=\\pi\\circ F:(M,I)\\to(G\/H,\\nabla^1)$ is para-pluriharmonic and \nsatisfies \n$[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]=0\n =[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-];$ \n\\item[{\\rm (b)}] \n $d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]=0$ for any \n$\\mu\\in\\mathbb{C}^*$, where \n$\\alpha^\\mu:=\\alpha_\\frak{h}+\\mu^{-1}\\cdot\\alpha_\\frak{m}^+\n +\\mu\\cdot\\alpha_\\frak{m}^-$. \n\\end{enumerate} \n\\end{proposition} \n\n In order to prove the above proposition, we first show \n\\begin{lemma}\\label{lem-2.2.5}\n $f=\\pi\\circ F:(M,I)\\to(G\/H,\\nabla^1)$ is a para-pluriharmonic map if and \nonly if \n$\\partial_-\\alpha_\\frak{m}^++[\\alpha_\\frak{h}^-\\wedge\\alpha_\\frak{m}^+]=0$ \n$($cf.\\ $(2.2.7))$. 
\n\\end{lemma}\n\\begin{proof} \n Lemma \\ref{lem-2.2.3} allows us to reduce the equation \\eqref{P} in \nDefinition \\ref{def-2.2.1} as follows: \n$(\\nabla df)({\\partial_-^a},\\partial_+^b)\n =\\nabla^1_{\\partial_-^a}\\bigl(df(\\partial_+^b)\\bigr)$. \n This implies that \n\\[\n\\mbox{$f=\\pi\\circ F$ is para-pluriharmonic if and only if \n$\\beta\\bigl(\\nabla^1_{\\partial_-^a}\\bigl(df(\\partial_+^b)\\bigr)\\bigr)=0$}\n\\]\nbecause $\\beta:T(G\/H)\\to G\/H\\times\\frak{g}$ is injective (see \\cite{Bu-Ra} or \n\\cite[p.\\ 403]{Hg} for $\\beta$). \n Accordingly, it suffices to show that \n\\begin{equation}\\label{eq-2.2.11} \n\\mbox{\n$\\beta\\bigl(\\nabla^1_{\\partial_-^a}\\bigl(df(\\partial_+^b)\\bigr)\\bigr)=0$ \nif and only if \n$\\partial_-\\alpha_\\frak{m}^++[\\alpha_\\frak{h}^-\\wedge\\alpha_\\frak{m}^+]=0$}.\n\\end{equation}\n To prove this we note first that it is known that $\\nabla^1$ coincides with \nthe canonical affine connection of the second kind (cf.\\ \\cite[p.\\ 53]{No}). \n Therefore, Proposition 1.4 and Lemma 1.1 in \\cite[p.\\ 404, p.\\ 403]{Hg} \nassure that \n\\[\n \\beta\\bigl(\\nabla^1_{\\partial_-^a}\\bigl(df(\\partial_+^b)\\bigr)\\bigr) \n=\\partial_-^a\\bigl(\\beta\\bigl(df(\\partial_+^b)\\bigr)\\bigr)\n -\\big[\\beta\\bigl(df(\\partial_-^a)\\bigr),\n \\beta\\bigl(df(\\partial_+^b)\\bigr)\\big]. \n\\]\n Let us compute each term on the right-hand side of the above equation. \n We note that $f^*\\beta=\\operatorname{Ad}F\\cdot\\alpha_\\frak{m}$ (cf.\\ \n\\cite[p.\\ 409]{Hg}) implies \n\\begin{multline*}\n\\partial_-^a\\bigl(\\beta\\bigl(df(\\partial_+^b)\\bigr)\\bigr)\n=\\partial_-^a\\bigl((f^*\\beta)(\\partial_+^b)\\bigr) \n=\\partial_-^a\\bigl(F\\cdot\\alpha_\\frak{m}(\\partial_+^b)\\cdot F^{-1}\\bigr)\\\\\n=(\\partial_-^a F)\\cdot\\alpha_\\frak{m}(\\partial_+^b)\\cdot F^{-1}\n +F\\cdot\\partial_-^a\\bigl(\\alpha_\\frak{m}(\\partial_+^b)\\bigr)\\cdot F^{-1} \n -F\\cdot\\alpha_\\frak{m}(\\partial_+^b)\n \\cdot F^{-1}\\cdot(\\partial_-^a F)\\cdot F^{-1}\\\\\n=\\operatorname{Ad}F\\cdot\\left\\{\n \\partial_-^a\\bigl(\\alpha_\\frak{m}(\\partial_+^b)\\bigr)\n +\\bigl[F^{-1}\\cdot(\\partial_-^a F),\n \\alpha_\\frak{m}(\\partial_+^b)\\bigr] \\right\\}\\\\\n=\\operatorname{Ad}F\\cdot\n \\left\\{\\partial_-^a\\bigl(\\alpha_\\frak{m}(\\partial_+^b)\\bigr)\n +\\bigl[\\alpha(\\partial_-^a),\n \\alpha_\\frak{m}(\\partial_+^b)\\bigr]\n \\right\\}. \n\\end{multline*}\n Moreover, $f^*\\beta=\\operatorname{Ad}F\\cdot\\alpha_\\frak{m}$ yields \n\\[\n\\big[\\beta\\bigl(df(\\partial_-^a)\\bigr),\\beta\\bigl(df(\\partial_+^b)\\bigr)\\big]\n=\\big[(f^*\\beta)(\\partial_-^a),(f^*\\beta)(\\partial_+^b)\\big]\n=\\operatorname{Ad}F\\cdot\\big[\\alpha_\\frak{m}(\\partial_-^a),\n \\alpha_\\frak{m}(\\partial_+^b)\\big]. \n\\] \n Therefore we obtain \n\\[\n \\beta\\bigl(\\nabla^1_{\\partial_-^a}\\bigl(df(\\partial_+^b)\\bigr)\\bigr)\n=\\operatorname{Ad}F\\cdot\n \\left\\{\\partial_-^a\\bigl(\\alpha_\\frak{m}(\\partial_+^b)\\bigr)\n +\\bigl[\\alpha_\\frak{h}(\\partial_-^a),\\alpha_\\frak{m}(\\partial_+^b)\\bigr]\n \\right\\}.\n\\]\n Hence we have shown \\eqref{eq-2.2.11}. \n\\end{proof} \n \n \n\n\\begin{proof}[Proof of Proposition {\\rm \\ref{prop-2.2.4}}]\n First we rewrite the expression \n$d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]$. \n Since $\\alpha=F^{-1}\\cdot dF$ we have \n$d\\alpha+(1\/2)\\cdot[\\alpha\\wedge\\alpha]=0$. 
\n From \n$[\\frak{h},\\frak{h}]\\subset\\frak{h}$, $[\\frak{h},\\frak{m}]\\subset\\frak{m}$ and \n$[\\frak{m},\\frak{m}]\\subset\\frak{h}$ we obtain \n$d\\alpha_\\frak{h}\n +(1\/2)\\cdot\\bigl([\\alpha_\\frak{h}\\wedge\\alpha_\\frak{h}]\n +[\\alpha_\\frak{m}\\wedge\\alpha_\\frak{m}]\\bigr)=0=\nd\\alpha_\\frak{m}+[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}]$. \n Thus we can assert that \n\\begin{equation}\\label{eq-2.2.12}\n\\begin{split}\n&\nd\\alpha_\\frak{h}\n +\\frac{1}{2}\\cdot[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{h}]\n +[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^-]\n=-\\frac{1}{2}\\cdot\n \\left([\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]\n +[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-]\\right),\\\\\n&\nd\\alpha_\\frak{m}+[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}]=0.\n\\end{split} \n\\end{equation}\n By a direct computation we obtain \n\\[\n\\begin{split}\nd\\alpha^\\mu+\\frac{1}{2}\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]\n=& d\\alpha_\\frak{h}+\\frac{1}{2}\\cdot[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{h}]\n +[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^-]\\\\\n&+\\mu^{-1}\\cdot\\left(d\\alpha_\\frak{m}^+\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]\\right)\n +\\mu\\cdot\\left(d\\alpha_\\frak{m}^-\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^-]\\right)\\\\\n&+\\frac{1}{2}\\cdot\\mu^{-2}\\cdot[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]\n +\\frac{1}{2}\\cdot\\mu^2\\cdot[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-]. \n\\end{split} \n\\] \n Consequently, by virtue of \\eqref{eq-2.2.12} one can rewrite \n$d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]$ as follows: \n\\begin{equation}\\label{eq-2.2.13}\n\\begin{split}\nd\\alpha^\\mu+\\frac{1}{2}\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]\n=&\\mu^{-1}\\cdot\\left(d\\alpha_\\frak{m}^+\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]\\right)\n +\\mu\\cdot\\left(d\\alpha_\\frak{m}^-\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^-]\\right)\\\\\n&+\\frac{1}{2}\\cdot(\\mu^{-2}-1)\\cdot[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]\n +\\frac{1}{2}\\cdot(\\mu^2-1)\\cdot[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-].\n\\end{split}\n\\end{equation}\n\n\n (a)$\\to$(b): Suppose that $f=\\pi\\circ F$ is para-pluriharmonic and \nsatisfies \n$[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]=0\n =[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-]$. \n Then, \\eqref{eq-2.2.13} yields \n\\[\nd\\alpha^\\mu+\\frac{1}{2}\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]\n =\\mu^{-1}\\cdot\\left(d\\alpha_\\frak{m}^+\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]\\right)\n +\\mu\\cdot\\left(d\\alpha_\\frak{m}^-\n +[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^-]\\right). \n\\]\n So it suffices to show \n\\[\nd\\alpha_\\frak{m}^++[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]\n =0= \n d\\alpha_\\frak{m}^-+[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^-].\n\\] \n Equation \\eqref{P} in Definition \\ref{def-2.2.1} is symmetric with respect \nto the variables $y^a$ and $x^b$. 
\n This and Lemma \\ref{lem-2.2.5} imply that \n\\[\n\\begin{split}\n\\mbox{$f=\\pi\\circ F$ is para-pluriharmonic}\n&\\mbox{ if and only if \n $\\partial_-\\alpha_\\frak{m}^++[\\alpha_\\frak{h}^-\\wedge\\alpha_\\frak{m}^+]=0$}\\\\\n&\\mbox{ if and only if \n $\\partial_+\\alpha_\\frak{m}^-+[\\alpha_\\frak{h}^+\\wedge\\alpha_\\frak{m}^-]=0$}.\n\\end{split}\n\\]\n Therefore it follows from \n$d\\alpha_\\frak{m}+[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}]=0$ (cf.\\ \n\\eqref{eq-2.2.12}) that \n$d\\alpha_\\frak{m}^++[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]\n =0=d\\alpha_\\frak{m}^-+[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^-]$.\n\\par \n \n (b)$\\to$(a): Suppose that \n$d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]=0$ for any \n$\\mu\\in\\mathbb{C}^*$. \n We obtain $d\\alpha_\\frak{m}^++[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]=0$ \nand \n$[\\alpha_\\frak{m}^+\\wedge\\alpha_\\frak{m}^+]\n =0=[\\alpha_\\frak{m}^-\\wedge\\alpha_\\frak{m}^-]$ \nfrom \\eqref{eq-2.2.13}. \n So Lemma \\ref{lem-2.2.5} allows us to obtain the conclusion, if one has \n$\\partial_-\\alpha_\\frak{m}^++[\\alpha_\\frak{h}^-\\wedge\\alpha_\\frak{m}^+]=0$. \n But, this equation is immediate from \n$d\\alpha_\\frak{m}^++[\\alpha_\\frak{h}\\wedge\\alpha_\\frak{m}^+]=0$. \n\\end{proof}\n \n \n\n\n\\subsubsection{}\\label{subsec-2.2.3}\n We recall the notion of the extended framing of a para-pluriharmonic map \n(cf.\\ Definition \\ref{def-2.2.6}). \n One will see that the framing is an element of the loop group \n$\\widetilde{\\Lambda}G_\\sigma$ in Section \\ref{sec-3}. \n \n Let $G^\\mathbb{C}$ be a simply connected, simple, complex linear algebraic \nsubgroup of $SL(m,\\mathbb{C})$, let $\\sigma$ be a holomorphic involution of \n$G^\\mathbb{C}$, and let $\\nu$ be an antiholomorphic involution of \n$G^\\mathbb{C}$ such that $[\\sigma,\\nu]=0$ (i.e., \n$\\sigma\\circ\\nu=\\nu\\circ\\sigma$). \n Define $H^\\mathbb{C}$, $G$ and $H$ by \n\\begin{equation}\\label{eq-2.2.14}\n\\begin{array}{lll} \n H^\\mathbb{C}:=\\operatorname{Fix}(G^\\mathbb{C},\\sigma), \n& G:=\\operatorname{Fix}(G^\\mathbb{C},\\nu), \n& H:=\\operatorname{Fix}(G,\\sigma)=\\operatorname{Fix}(H^\\mathbb{C},\\nu). \n\\end{array} \n\\end{equation}\n Note that $(G\/H,\\sigma|_G)$ is an affine symmetric space. \n Now, let $p_o$ be a base point in a simply connected para-complex manifold \n$(M,I)$. \n Then, Proposition \\ref{prop-2.2.4} assures that for any para-pluriharmonic \nmap $f=\\pi\\circ F:(M,I)\\to (G\/H,\\nabla^1)$ with $F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$, the \n$\\frak{g}^\\mathbb{C}$-valued $1$-form \n$\\alpha^\\mu=\\alpha_\\frak{h}+\\mu^{-1}\\cdot\\alpha_\\frak{m}^+\n +\\mu\\cdot\\alpha_\\frak{m}^-$ \non $(M,I)$, parameterized by $\\mu\\in\\mathbb{C}^*$, is integrable; and \nfurthermore, one can obtain a smooth map \n\\[\n\\begin{array}{ll}\n F:M\\times\\mathbb{C}^*\\to G^\\mathbb{C}, & (p,\\mu)\\mapsto F_\\mu(p),\n\\end{array}\n\\]\nfrom the integrability condition \n$d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]=0$ and \n$F_\\mu^{-1}\\cdot dF_\\mu=\\alpha^\\mu$ with $F_\\mu(p_o)\\equiv\\operatorname{id}$. 
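\n For later use we record the following elementary computation: since \n$\\frak{h}$ and $\\frak{m}$ are the $(\\pm 1)$-eigenspaces of $d\\sigma$ in \n$\\frak{g}$ (cf.\\ (2.2.4)), one has $d\\sigma(\\alpha_\\frak{h})=\\alpha_\\frak{h}$ \nand $d\\sigma(\\alpha_\\frak{m}^\\pm)=-\\alpha_\\frak{m}^\\pm$, whence \n\\[\n d\\sigma(\\alpha^\\mu)\n =\\alpha_\\frak{h}-\\mu^{-1}\\cdot\\alpha_\\frak{m}^+-\\mu\\cdot\\alpha_\\frak{m}^-\n =\\alpha_\\frak{h}+(-\\mu)^{-1}\\cdot\\alpha_\\frak{m}^+\n +(-\\mu)\\cdot\\alpha_\\frak{m}^-\n =\\alpha^{-\\mu}. \n\\]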
\n The above map $F=F_\\mu:\\mathbb{C}^*\\to G^\\mathbb{C}$ satisfies \n\\begin{enumerate} \n\\item[(2.2.15)] \n $\\sigma(F_\\mu)=F_{-\\mu}$ for all $\\mu\\in\\mathbb{C}^*$, \n\\item[(2.2.16)] \n $F_\\lambda:=F|_{S^1}:S^1\\to G^\\mathbb{C}$, where \n$S^1:=\\{\\lambda\\in\\mathbb{C}^*\\,|\\,|\\lambda|=1\\}$, \n\\item[(2.2.17)] \n $F_\\theta:=F|_{\\mathbb{R}^+}:\\mathbb{R}^+\\to \nG=\\operatorname{Fix}(G^\\mathbb{C},\\nu)$ ($\\subset G^\\mathbb{C}$), \nwhere $\\mathbb{R}^+:=\\{\\theta\\in\\mathbb{R}\\,|\\,\\theta>0\\}$. \n\\end{enumerate} \n\\setcounter{equation}{17}\n Indeed, (2.2.15) follows from the computation $d\\sigma(\\alpha^\\mu)=\\alpha^{-\\mu}$ \nrecorded above and $\\sigma(F_\\mu(p_o))=F_{-\\mu}(p_o)$; (2.2.16) is obvious; \nand (2.2.17) follows from $\\alpha^\\theta$ being $\\frak{g}$-valued for any \n$\\theta\\in\\mathbb{R}^+$. \n\n\\begin{definition}\\label{def-2.2.6} \n The map $F_\\theta$ is called the {\\it extended framing} of the \npara-pluriharmonic map $f=\\pi\\circ F:(M,I)\\to(G\/H,\\nabla^1)$; and \n$\\{f_\\theta\\}_{\\theta\\in\\mathbb{R}^+}$ is called an {\\it associated family} of \n$f$, where $f_\\theta:=\\pi\\circ F_\\theta$. \n Here, we remark that $f_1=f$ and $F_1=F$ are immediate from \n$\\alpha^1=\\alpha$ and $F_1(p_o)=F(p_o)$. \n\\end{definition}\n \n\\begin{remark}\\label{rem-2.2.7}\n Throughout this paper we consider that for the extended framing $F_\\theta$ \nof a para-pluriharmonic map, its variable $\\theta$ varies in the whole of \n$\\mathbb{C}^*$, which contains not only $\\mathbb{R}^+$ but also $S^1$. \n\\end{remark}\n\n\n\n\n\n\n\\subsection{Pluriharmonic maps}\\label{subsec-2.3} \n\\subsubsection{}\\label{subsec-2.3.1} \n In this subsection we will survey some basic facts and results about \npluriharmonic maps. \n First, let us recall the notion of a pluriharmonic map: \n\\begin{definition}\\label{def-2.3.1}\n Let $(M,J)$ be a real $2n$-dimensional complex manifold, and let $N$ be a \nsmooth manifold with a torsion-free affine connection $\\nabla^N$. \n Then a smooth map $f:(M,J)\\to(N,\\nabla^N)$ is called {\\it pluriharmonic}, \nif it satisfies \n\\begin{equation}\\label{H}\\tag{H}\n (\\nabla df)\\bigl(\\frac{\\partial}{\\partial \\bar{z}^a}, \n \\frac{\\partial}{\\partial z^b}\\bigr)=0 \n\\quad \\mbox{for all $1\\leq a,b\\leq n$}, \n\\end{equation}\nfor any local holomorphic coordinate \n$(z^1,\\cdots,z^n,\\bar{z}^1,\\cdots,\\bar{z}^n)$ on $(M,J)$. \n Here $\\nabla$ denotes the connection on $\\operatorname{End}(TM,f^{-1}TN)$ \nwhich is induced from $D$ and $\\nabla^N$, where $D$ is any complex torsion-free \naffine connection on $(M,J)$. \n\\end{definition} \n\n\n\\begin{remark}\\label{rem-2.3.2}\n (i) We utilize the terminology ``pluriharmonic map'' in a sense that is \nmore general than the one originally given by Siu \\cite{Si}. \\par \n (ii) Any complex manifold admits a complex torsion-free affine connection \n(cf.\\ \\cite[p.\\ 145]{Ko-No}).\\par \n (iii) The equation \\eqref{H} in Definition \\ref{def-2.3.1} is independent \nof the choice of complex torsion-free affine connections $D$ on $(M,J)$ (cf.\\ \nthe proof of Lemma \\ref{lem-2.2.3}). \n\\end{remark}\n \n\n\n\\subsubsection{}\\label{subsec-2.3.2} \n In Section \\ref{sec-3} we will study the relation between pluriharmonic maps \nand the loop group method. \n For this we will use a result of Ohnita \\cite{Oh} about pluriharmonic maps \n(see Proposition \\ref{prop-2.3.3}). 
\n First, let us fix the setting for Proposition \\ref{prop-2.3.3}.\\par \n \n Let $(G\/H,\\sigma)$ denote the affine symmetric space defined in \nSubsection \\ref{subsec-2.2.2}, and let $F$ be a smooth map from a real \n$2n$-dimensional complex manifold $(M,J)$ into $G$. \n Then, we consider: \n\\begin{enumerate}[(2.3.1)] \n\\item \n $\\pi$: the same as in (2.2.1), \n\\item \n $\\nabla^1$: the same as in (2.2.2),\n\\item \n $\\alpha$: the same as in (2.2.3), \n\\item \n $\\frak{g}$, $\\frak{h}$, $\\frak{m}$: the same as in (2.2.4),\n\\item \n $\\alpha_\\frak{h}$, $\\alpha_\\frak{m}$: the same as in (2.2.5), \n\\item \n $\\alpha_X':=(-i\/2)\\cdot(i\\alpha_X+{}^tJ(\\alpha_X))$,\n\\quad \n $\\alpha_X'':=(-i\/2)\\cdot(i\\alpha_X-{}^tJ(\\alpha_X))$ for \n$X=\\frak{h}$, $\\frak{m}$, \n\\item \n $\\overline{\\partial}\\alpha_\\frak{m}'\n +[\\alpha_\\frak{h}''\\wedge\\alpha_\\frak{m}']=0$ \nas an abbreviation for \n$\\overline{\\partial}^a(\\alpha_\\frak{m}(\\partial^b))\n +[\\alpha_\\frak{h}(\\overline{\\partial}^a),\\alpha_\\frak{m}(\\partial^b)]=0$ \nfor all $1\\leq a,b\\leq n$, where $(z^1,\\cdots,z^n,\\bar{z}^1,\\cdots,\\bar{z}^n)$ \nis any local holomorphic coordinate system on $(M,J)$, and where \n$\\partial^b:=\\partial\/\\partial z^b$ and \n$\\overline{\\partial}^a:=\\partial\/\\partial\\bar{z}^a$, \n\\item\n $[\\alpha_\\frak{m}'\\wedge\\alpha_\\frak{m}']=0$ as an abbreviation for \n$[\\alpha_\\frak{m}(\\partial^a),\\alpha_\\frak{m}(\\partial^b)]=0$ \nfor all $1\\leq a,b\\leq n$, where $(z^1,\\cdots,z^n,\\bar{z}^1,\\cdots,\\bar{z}^n)$ \nis any local holomorphic coordinate system on $(M,J)$. \n\\end{enumerate}\n\\setcounter{equation}{8} \n\n\\begin{proposition}[{cf.\\ Ohnita \\cite{Oh}}]\\label{prop-2.3.3}\n With the above notation, a map $f:=\\pi\\circ F:(M,J)\\to(G\/H,\\nabla^1)$ \nis pluriharmonic if and only if \n$\\overline{\\partial}\\alpha_\\frak{m}'\n +[\\alpha_\\frak{h}''\\wedge\\alpha_\\frak{m}']=0$. \n Moreover, the following statements {\\rm (a)} and {\\rm (b)} are equivalent$:$ \n\\begin{enumerate}\n\\item[{\\rm (a)}] \n $f=\\pi\\circ F$ is pluriharmonic and satisfies \n$[\\alpha_\\frak{m}'\\wedge\\alpha_\\frak{m}']=0;$ \n\\item[{\\rm (b)}] \n $d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]=0$ \nfor any $\\mu\\in\\mathbb{C}^*$, where \n$\\alpha^\\mu:=\\alpha_\\frak{h}+\\mu^{-1}\\cdot\\alpha_\\frak{m}'\n +\\mu\\cdot\\alpha_\\frak{m}''$. \n\\end{enumerate} \n\\end{proposition} \n\n\n\n\n\\subsubsection{}\\label{subsec-2.3.3} \n We will first recall the notion of the extended framing of a pluriharmonic \nmap (cf.\\ Definition \\ref{def-2.3.4}), and afterwards point out a crucial \ndifference between the extended framings of pluriharmonic maps and \npara-pluriharmonic maps in view of the loop group method (cf.\\ Remark \n\\ref{rem-2.3.5}).\\par\n\n The arguments below will be similar to those in Subsection \n\\ref{subsec-2.2.3}. \n Let $G^\\mathbb{C}$, $H^\\mathbb{C}$, $G$ and $H$ denote the same Lie groups \nas in \\eqref{eq-2.2.14}. \n Fix a base point $p_o$ in a simply connected complex manifold $(M,J)$. \n For a pluriharmonic map $f=\\pi\\circ F:(M,J)\\to(G\/H,\\nabla^1)$ with \n$F(p_o)=\\operatorname{id}$ and $[\\alpha_\\frak{m}'\\wedge\\alpha_\\frak{m}']=0$, \nProposition \\ref{prop-2.3.3} shows that the $\\frak{g}^\\mathbb{C}$-valued \n$1$-form \n$\\alpha^\\mu=\\alpha_\\frak{h}+\\mu^{-1}\\cdot\\alpha_\\frak{m}'\n +\\mu\\cdot\\alpha_\\frak{m}''$ \non $(M,J)$ parameterized by $\\mu\\in\\mathbb{C}^*$ is integrable. 
\n Then there exists a unique map \n\\[ \n\\begin{array}{ll} \n F:M\\times\\mathbb{C}^*\\to G^\\mathbb{C}, \n& (p,\\mu)\\mapsto F_\\mu(p),\n\\end{array}\n\\]\nsuch that $F_\\mu^{-1}\\cdot dF_\\mu=\\alpha^\\mu$ and \n$F_\\mu(p_o)\\equiv\\operatorname{id}$, by virtue of the integrability condition \n$d\\alpha^\\mu+(1\/2)\\cdot[\\alpha^\\mu\\wedge\\alpha^\\mu]=0$. \n Here we remark that $F=F_\\mu:\\mathbb{C}^*\\to G^\\mathbb{C}$ satisfies \n\\begin{enumerate}\n\\item[(2.3.9)] \n $\\sigma(F_\\mu)=F_{-\\mu}$ for all $\\mu\\in\\mathbb{C}^*$, \n\\item[(2.3.10)] \n $F_\\lambda:=F|_{S^1}:S^1\\to G=\\operatorname{Fix}(G^\\mathbb{C},\\nu)$ \n($\\subset G^\\mathbb{C}$). \n\\end{enumerate} \n\\setcounter{equation}{10}\n Indeed, (2.3.10) follows from $\\alpha^\\lambda$ being $\\frak{g}$-valued for \nany $\\lambda\\in S^1$.\n\\begin{definition}\\label{def-2.3.4} \n The map $F_\\lambda$ is called the {\\it extended framing} of the \npluriharmonic map $f=\\pi\\circ F:(M,J)\\to(G\/H,\\nabla^1)$; and \n$\\{f_\\lambda\\}_{\\lambda\\in S^1}$ is called an {\\it associated family} of $f$, \nwhere $f_\\lambda(p):=\\pi\\circ F_\\lambda(p)$ for $(p,\\lambda)\\in M\\times S^1$. \n\\end{definition}\n\n\\begin{remark}\\label{rem-2.3.5}\n The map $F=F_\\mu:\\mathbb{C}^*\\to G^\\mathbb{C}$ defined above becomes \n$G$-valued if its variable $\\mu$ varies in $S^1$; and $F_\\lambda=F|_{S^1}$ is \nthe extended framing of a pluriharmonic map. \n By contrast, the map $F=F_\\mu:\\mathbb{C}^*\\to G^\\mathbb{C}$ in \nSubsection \\ref{subsec-2.2.3} becomes $G$-valued if its variable $\\mu$ varies \nin $\\mathbb{R}^+$; and $F_\\theta=F|_{\\mathbb{R}^+}$ is the extended framing of \na para-pluriharmonic map. \n\\end{remark}\n\n\n\n\n\n\n\\section{The loop group method}\\label{sec-3}\n First, we introduce three kinds of loop groups $\\Lambda G^\\mathbb{C}_\\sigma$, \n$\\Lambda G_\\sigma$ and $\\widetilde{\\Lambda} G_\\sigma$, and review their \ndecomposition theorems. \n Next, we explain the relation between para-pluriharmonic maps and the loop \ngroup method, and interrelate para-pluriharmonic maps with para-pluriharmonic \npotentials. \n Finally, we treat the pluriharmonic case. \n\n\n\\subsection{Decomposition theorems of loop groups}\\label{subsec-3.1}\n\\subsubsection{}\\label{subsec-3.1.1} \n Let $G^\\mathbb{C}$ be a simply connected, simple, complex linear algebraic \nsubgroup of $SL(m,\\mathbb{C})$, and let $\\sigma$ be a holomorphic involution \nof $G^\\mathbb{C}$. \n In this case the twisted loop group $\\Lambda G^\\mathbb{C}_\\sigma$ is \ndefined as follows: \n\\[\n\\Lambda G^\\mathbb{C}_\\sigma:=\\left\\{\n \\begin{array}{@{\\,}l|r@{\\,}} \n A_\\lambda:S^1\\to G^\\mathbb{C} \n & A_\\lambda=\\sum_{k\\in\\mathbb{Z}}A_k\\lambda^k, \\, \n \\sum||A_k||<\\infty,\\\\\n \n & \\mbox{$\\sigma(A_\\lambda)=A_{-\\lambda}$ for all \n $\\lambda\\in S^1$} \n \\end{array}\\right\\}, \n\\]\nwhere $||\\cdot||$ denotes some matrix norm satisfying \n$||A\\cdot B||\\leq||A||\\cdot||B||$ and $||\\operatorname{id}||=1$. \n Then $\\Lambda G^\\mathbb{C}_\\sigma$, with this norm \n$||A_\\lambda||=\\sum||A_k||$, is a complex Banach Lie group (see \\cite{Ba-Do}, \n\\cite{Go-Wa} and \\cite{Pr-Se} for more details). 
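\n By way of illustration (this concrete choice of group and involution \nserves only as an example and will not be used later): take \n$G^\\mathbb{C}=SL(2,\\mathbb{C})$ and \n$\\sigma=\\operatorname{Ad}\\bigl(\\operatorname{diag}(1,-1)\\bigr)$; then, for any \nconstants $a,b,c,d\\in\\mathbb{C}$ with $ad-bc=1$, the Laurent polynomial loop \n\\[\n A_\\lambda=\\begin{pmatrix} a & \\lambda\\cdot b\\\\ \\lambda^{-1}\\cdot c & d \n \\end{pmatrix}\n\\]\nsatisfies $\\sigma(A_\\lambda)=A_{-\\lambda}$ and $\\sum||A_k||<\\infty$, and hence \nbelongs to $\\Lambda G^\\mathbb{C}_\\sigma$. \n Note also that at the Lie algebra level the twisting condition \n$d\\sigma(X_\\lambda)=X_{-\\lambda}$ (see \\eqref{eq-3.1.1} below) amounts to \n$d\\sigma(X_k)=(-1)^k\\cdot X_k$ for the Fourier coefficients, i.e., the even \ncoefficients lie in the $(+1)$-eigenspace of $d\\sigma$ and the odd ones in \nthe $(-1)$-eigenspace. 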
\n Here, the Lie algebra $\\Lambda\\frak{g}^\\mathbb{C}_\\sigma$ of \n$\\Lambda G^\\mathbb{C}_\\sigma$ is given by \n\\begin{equation}\\label{eq-3.1.1}\n\\Lambda\\frak{g}^\\mathbb{C}_\\sigma\n:=\\left\\{\n \\begin{array}{@{\\,}l|r@{\\,}} \n X_\\lambda:S^1\\to\\frak{g}^\\mathbb{C} \n & X_\\lambda=\\sum_{k\\in\\mathbb{Z}}X_k\\lambda^k, \\, \n \\sum||X_k||<\\infty,\\\\\n \n & \\mbox{$d\\sigma(X_\\lambda)=X_{-\\lambda}$ for all \n $\\lambda\\in S^1$} \n \\end{array}\\right\\}. \n\\end{equation}\n Define four subgroups $\\Lambda^\\pm G^\\mathbb{C}_\\sigma$ and \n$\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma$ of $\\Lambda G^\\mathbb{C}_\\sigma$ by \n\\[\n\\begin{array}{l}\n\\Lambda^\\pm G^\\mathbb{C}_\\sigma\n :=\\{ A_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma \\,|\\, \n \\mbox{$A_\\lambda$ has a holomorphic extension \n $\\widehat{A}_z:\\mathbb{D}_\\pm\\to G^\\mathbb{C}$} \\},\\\\\n\\Lambda^+_* G^\\mathbb{C}_\\sigma\n :=\\{ A_\\lambda\\in\\Lambda^+ G^\\mathbb{C}_\\sigma \\,|\\, \n \\widehat{A}_0=\\operatorname{id} \\},\\quad \n\\Lambda^-_* G^\\mathbb{C}_\\sigma\n :=\\{ A_\\lambda\\in\\Lambda^- G^\\mathbb{C}_\\sigma \\,|\\, \n \\widehat{A}_\\infty=\\operatorname{id} \\}, \n\\end{array}\n\\]\nwhere $\\mathbb{D}_+:=\\{z\\in\\mathbb{C}\\,|\\,|z|<1\\}$ and \n$\\mathbb{D}_-:=\\{z\\in\\mathbb{C}\\,|\\,|z|>1\\}\\cup\\{\\infty\\}$. \n With this notation, we can state the following two Theorems \n\\ref{thm-3.1.1} and \\ref{thm-3.1.2}, which are called the {\\it Iwasawa \ndecomposition} of \n$\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma$ and the \n{\\it Birkhoff decomposition} of $\\Lambda G^\\mathbb{C}_\\sigma$, respectively \n(see \\cite{Ba-Do}, \\cite{Go-Wa}, \\cite{Pr-Se}): \n\\begin{theorem}[Iwasawa decomposition of \n$\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma$]\n\\label{thm-3.1.1} \n The multiplication maps \n\\[\n\\begin{array}{l}\n \\triangle(\\Lambda G^\\mathbb{C}_\\sigma\n \\times\\Lambda G^\\mathbb{C}_\\sigma)\n \\times \n (\\Lambda^-_* G^\\mathbb{C}_\\sigma\\times\n \\Lambda^+ G^\\mathbb{C}_\\sigma) \n \\to \n \\Lambda G^\\mathbb{C}_\\sigma\n \\times\\Lambda G^\\mathbb{C}_\\sigma,\\\\\n \\triangle(\\Lambda G^\\mathbb{C}_\\sigma\n \\times\\Lambda G^\\mathbb{C}_\\sigma)\n \\times \n (\\Lambda^+_* G^\\mathbb{C}_\\sigma\\times\n \\Lambda^- G^\\mathbb{C}_\\sigma) \n \\to \n \\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma\n\\end{array} \n\\] \nare holomorphic diffeomorphisms onto open subsets of \n$\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma$, \nrespectively. \n Here \n$\\triangle(\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma)$ \ndenotes the diagonal subgroup of \n$\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma$. \n\\end{theorem}\n \n \n\\begin{theorem}[Birkhoff decomposition of $\\Lambda G^\\mathbb{C}_\\sigma$]\n\\label{thm-3.1.2} \n The multiplication maps \n\\[\n\\begin{array}{ll}\n \\Lambda^-_*G^\\mathbb{C}_\\sigma\\times\\Lambda^+G^\\mathbb{C}_\\sigma \n \\to\\Lambda G^\\mathbb{C}_\\sigma, \n& \n \\Lambda^+_*G^\\mathbb{C}_\\sigma\\times\\Lambda^-G^\\mathbb{C}_\\sigma \n \\to\\Lambda G^\\mathbb{C}_\\sigma\n\\end{array}\n\\] \nare holomorphic diffeomorphisms onto the open subsets \n$\\mathcal{B}_\\mp^\\mathbb{C}:=\n \\Lambda^\\mp_*G^\\mathbb{C}_\\sigma\\cdot\\Lambda^\\pm G^\\mathbb{C}_\\sigma$ \nof $\\Lambda G^\\mathbb{C}_\\sigma$, respectively. 
\n In particular, each element \n$A_\\lambda\\in\\mathcal{B}^\\mathbb{C}:=\\mathcal{B}_-^\\mathbb{C}\n \\cap\\mathcal{B}_+^\\mathbb{C}$ \ncan be uniquely factorized$:$ \n\\[\n\\begin{array}{lrr}\n A_\\lambda=A^-_\\lambda\\cdot B^+_\\lambda=A^+_\\lambda\\cdot B^-_\\lambda, \n& A^\\pm_\\lambda\\in\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma, \n& B^\\pm_\\lambda\\in\\Lambda^\\pm G^\\mathbb{C}_\\sigma. \n\\end{array}\n\\] \n\\end{theorem}\n\n\n\n \n \n\n\\subsubsection{Almost split real forms of $\\Lambda G^\\mathbb{C}_\\sigma$}\n\\label{subsec-3.1.2} \n Now, let $\\nu$ be an antiholomorphic involution of $G^\\mathbb{C}$ such that \n$[\\sigma,\\nu]=0$ (i.e., $\\sigma\\circ\\nu=\\nu\\circ\\sigma$). \n Then one can define an antiholomorphic involution $\\nu_S$ of \n$\\Lambda G^\\mathbb{C}_\\sigma$ by setting \n\\begin{equation}\\label{eq-3.1.2}\n\\begin{array}{ll}\n \\nu_S(A_\\lambda):=\\nu(A_{\\overline{\\lambda}}) \n& \\mbox{for $A_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma$}. \n\\end{array} \n\\end{equation} \n This involution $\\nu_S$ is said to be of {\\it the first kind}, and its \nfixed point set \n$\\Lambda G_\\sigma:=\\operatorname{Fix}(\\Lambda G^\\mathbb{C}_\\sigma,\\nu_S)$ \nis called an {\\it almost split real form} of $\\Lambda G^\\mathbb{C}_\\sigma$. \n Note that $\\nu_S$ satisfies \n$\\nu_S(\\Lambda^\\pm G^\\mathbb{C}_\\sigma)=\\Lambda^\\pm G^\\mathbb{C}_\\sigma$ and \n$\\nu_S(\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma)=\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma$. \n That allows us to define four subgroups $\\Lambda^\\pm G_\\sigma$ and \n$\\Lambda^\\pm_*G_\\sigma$ as follows: \n\\[\n\\begin{array}{ll}\n\\Lambda^\\pm G_\\sigma\n :=\\operatorname{Fix}(\\Lambda^\\pm G^\\mathbb{C}_\\sigma,\\nu_S), \n&\n\\Lambda^\\pm_* G_\\sigma\n :=\\operatorname{Fix}(\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma,\\nu_S). \n\\end{array}\n\\] \n With this notation, one can state the following theorems (see \\cite{Br}, \n\\cite{Br-Do}): \n\\begin{theorem}[Iwasawa decomposition of \n$\\Lambda G_\\sigma\\times\\Lambda G_\\sigma$]\\label{thm-3.1.3} \n The multiplication maps \n\\[\n\\begin{array}{l}\n \\triangle(\\Lambda G_\\sigma\\times\\Lambda G_\\sigma)\n \\times(\\Lambda^-_* G_\\sigma\\times\\Lambda^+ G_\\sigma) \n \\to \n \\Lambda G_\\sigma\\times\\Lambda G_\\sigma,\\\\\n \\triangle(\\Lambda G_\\sigma\\times\\Lambda G_\\sigma)\n \\times(\\Lambda ^+_* G_\\sigma\\times\\Lambda^- G_\\sigma) \n \\to \n \\Lambda G_\\sigma\\times\\Lambda G_\\sigma\n\\end{array} \n\\] \nare holomorphic diffeomorphisms onto open subsets of \n$\\Lambda G_\\sigma\\times\\Lambda G_\\sigma$, respectively. \n\\end{theorem}\n\n\n\\begin{theorem}[Birkhoff decomposition of $\\Lambda G_\\sigma$]\\label{thm-3.1.4} \n The multiplication maps \n\\[\n\\begin{array}{ll}\n \\Lambda^-_*G_\\sigma\\times\\Lambda^+G_\\sigma\\to\\Lambda G_\\sigma, \n& \n \\Lambda^+_*G_\\sigma\\times\\Lambda^-G_\\sigma\\to\\Lambda G_\\sigma\n\\end{array}\n\\] \nare holomorphic diffeomorphisms onto the open subsets \n$\\mathcal{B}_\\mp:=\\Lambda^\\mp_*G_\\sigma\\cdot\\Lambda^\\pm G_\\sigma$ \nof $\\Lambda G_\\sigma$, respectively. \n In particular, each element \n$A_\\lambda\\in\\mathcal{B}:=\\mathcal{B}_-\\cap\\mathcal{B}_+$ can be uniquely \nfactorized$:$ \n\\[\n\\begin{array}{lrr}\n A_\\lambda=A^-_\\lambda\\cdot B^+_\\lambda=A^+_\\lambda\\cdot B^-_\\lambda, \n& A^\\pm_\\lambda\\in\\Lambda^\\pm_* G_\\sigma, \n& B^\\pm_\\lambda\\in\\Lambda^\\pm G_\\sigma. 
\n\\end{array}\n\\] \n\\end{theorem}\n\n\n\n\\subsubsection{}\\label{subsec-3.1.3}\n For a general element $A_\\lambda\\in\\Lambda G_\\sigma^\\mathbb{C}$, its \nvariable $\\lambda$ only varies in $S^1$. \n However, for the framing $F_\\lambda$ of a para-pluriharmonic map, the \nvariable $\\lambda$ of $F_\\lambda$ can vary in the whole $\\mathbb{C}^*$ \n(cf.\\ Subsection \\ref{subsec-2.2.3}). \n Toda \\cite{To} has addressed this relevant point, since in her work \n$\\lambda$ is for all geometric purposes a positive real number. \n She proposed to consider the following subgroup \n$\\widetilde{\\Lambda}G_\\sigma$ of $\\Lambda G_\\sigma$: \n\\begin{equation}\\label{eq-3.1.3}\n \\widetilde{\\Lambda}G_\\sigma\n :=\\{A_\\lambda\\in\\Lambda G_\\sigma \\,|\\, \n \\mbox{$A_\\lambda$ has an analytic extension \n $\\widetilde{A}_\\mu:\\mathbb{C}^*\\to G^\\mathbb{C}$} \\}.\n\\end{equation}\n One equips $\\widetilde{\\Lambda}G_\\sigma$ with the induced topology from \n$\\Lambda G_\\sigma$, where $\\Lambda G_\\sigma$ is considered as a loop group \nwith $\\lambda\\in S^1$; and in a similar way, one defines four subgroups \n$\\widetilde{\\Lambda}^\\pm G_\\sigma$ and $\\widetilde{\\Lambda}^\\pm_*G_\\sigma$ of \n$\\Lambda^\\pm G_\\sigma$ and $\\Lambda^\\pm_*G_\\sigma$, respectively. \n Then, the following two decomposition theorems hold (cf.\\ \\cite{Br}, \n\\cite{Do-In-To}, \\cite{To}): \n\\begin{theorem}[Iwasawa decomposition of \n$\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma$]\\label{thm-3.1.5} \n The multiplication maps \n\\[\n\\begin{array}{l}\n \\triangle(\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma)\n \\times(\\widetilde{\\Lambda}^-_* G_\\sigma\\times\\widetilde{\\Lambda}^+ G_\\sigma) \n \\to \n \\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma,\\\\\n \\triangle(\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma)\n \\times(\\widetilde{\\Lambda}^+_* G_\\sigma\\times\\widetilde{\\Lambda}^- G_\\sigma) \n \\to \n \\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma\n\\end{array} \n\\] \nare real analytic diffeomorphisms onto open subsets of \n$\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma$, respectively. \n\\end{theorem}\n\n\n\\begin{theorem}[Birkhoff decomposition of $\\widetilde{\\Lambda}G_\\sigma$]\n\\label{thm-3.1.6} \n The multiplication maps \n\\[\n\\begin{array}{ll}\n \\widetilde{\\Lambda}^-_*G_\\sigma\\times\\widetilde{\\Lambda}^+G_\\sigma \n \\to\\widetilde{\\Lambda}G_\\sigma, \n& \n \\widetilde{\\Lambda}^+_*G_\\sigma\\times\\widetilde{\\Lambda}^-G_\\sigma \n \\to\\widetilde{\\Lambda}G_\\sigma\n\\end{array}\n\\] \nare real analytic diffeomorphisms onto the open subsets \n$\\widetilde{\\mathcal{B}}_\\mp:=\n \\widetilde{\\Lambda}^\\mp_*G_\\sigma\\cdot\\widetilde{\\Lambda}^\\pm G_\\sigma$ \nof $\\widetilde{\\Lambda}G_\\sigma$, respectively. \n In particular, each element \n$A_\\lambda\\in\\widetilde{\\mathcal{B}}:=\\widetilde{\\mathcal{B}}_-\n \\cap\\widetilde{\\mathcal{B}}_+$ \ncan be uniquely factorized$:$ \n\\[\n\\begin{array}{lrr}\n A_\\lambda=A^-_\\lambda\\cdot B^+_\\lambda=A^+_\\lambda\\cdot B^-_\\lambda, \n& A^\\pm_\\lambda\\in\\widetilde{\\Lambda}^\\pm_* G_\\sigma, \n& B^\\pm_\\lambda\\in\\widetilde{\\Lambda}^\\pm G_\\sigma. 
\n\\end{array}\n\\] \n\\end{theorem}\n\n\n\\begin{remark}\\label{rem-3.1.7}\n Throughout this paper, we consider that for \n$A_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma$, its variable $\\lambda$ varies \nnot only in $S^1$ but also in $\\mathbb{R}^+$ (or more generally in \n$\\mathbb{C}^*$). \n\\end{remark}\n\n\n We end this subsection by showing the following lemma:\n\\begin{lemma}\\label{lem-3.1.8} \n Each element $C_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma$ satisfies \n$C_\\theta\\in G:=\\operatorname{Fix}(G^\\mathbb{C},\\nu)$ for all \n$\\theta\\in\\mathbb{R}^+$. \n\\end{lemma}\n\\begin{proof} \n Since \n$C_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma\\subset\\Lambda G_\\sigma\n =\\operatorname{Fix}(\\Lambda G^\\mathbb{C}_\\sigma,\\nu_S)$, \nit satisfies $\\nu(C_{\\overline{\\lambda}})=\\nu_S(C_\\lambda)=C_\\lambda$ for all \n$\\lambda\\in S^1$. \n Hence, by analytic continuation (both $\\mu\\mapsto\\nu(C_{\\overline{\\mu}})$ \nand $\\mu\\mapsto C_\\mu$ are holomorphic on $\\mathbb{C}^*$ and agree on $S^1$), \none has $\\nu(C_{\\overline{\\mu}})=C_\\mu$ for all $\\mu\\in\\mathbb{C}^*$; \nand therefore $\\nu(C_\\theta)=C_\\theta$ for all $\\theta\\in\\mathbb{R}^+$. \n\\end{proof} \n\n\n\n\n\\subsection{Para-pluriharmonic maps and the loop group method}\n\\label{subsec-3.2}\n In this subsection, we will study the relation between para-pluriharmonic \nmaps and the loop group method. \n\n\\subsubsection{}\\label{subsec-3.2.1} \n Let $G^\\mathbb{C}$ be a simply connected, simple, complex linear algebraic \nsubgroup of $SL(m,\\mathbb{C})$, let $\\sigma$ be a holomorphic involution of \n$G^\\mathbb{C}$, and let $\\nu$ be an antiholomorphic involution of \n$G^\\mathbb{C}$ such that $[\\sigma,\\nu]=0$. \n Define subgroups $H^\\mathbb{C}$, $G$ and $H$ by the same conditions as in \nSubsection \\ref{subsec-2.2.3}, respectively---that is, \n\\[\n\\begin{array}{lll} \n H^\\mathbb{C}:=\\operatorname{Fix}(G^\\mathbb{C},\\sigma), \n& G:=\\operatorname{Fix}(G^\\mathbb{C},\\nu), \n& H:=\\operatorname{Fix}(G,\\sigma)=\\operatorname{Fix}(H^\\mathbb{C},\\nu). \n\\end{array} \n\\]\n We will conclude that the extended framing $F_\\theta$ of a \npara-pluriharmonic map belongs to the loop group $\\widetilde{\\Lambda}G_\\sigma$ \n(see \\eqref{eq-3.1.3} for $\\widetilde{\\Lambda}G_\\sigma$). \n Let $(M,I)$ be a simply connected para-complex manifold, and let $F_\\theta$ \nbe the extended framing of a para-pluriharmonic map \n$f=\\pi\\circ F:(M,I)\\to(G\/H,\\nabla^1)$ with $F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$, where $p_o$ is a base point \nin $(M,I)$. \n Then it follows from (2.2.15) and (2.2.16) that $F_\\lambda$ belongs to \n$\\Lambda G^\\mathbb{C}_\\sigma$. \n Moreover, the variable $\\lambda$ of $F_\\lambda$ can vary in all of \n$\\mathbb{C}^*$ (cf.\\ Subsection \\ref{subsec-2.2.3}). \n Accordingly one can assert that the framing $F_\\lambda$ belongs to \n$\\widetilde{\\Lambda}G_\\sigma$, if it satisfies \n\\begin{equation}\\label{eq-3.2.1}\n \\nu_S(F_\\lambda)=F_\\lambda\n\\end{equation}\n(see \\eqref{eq-3.1.2} for $\\nu_S$). \n Let us show \\eqref{eq-3.2.1}. \n From (2.2.17) we know $\\nu(F_\\theta)=F_\\theta$ for any \n$\\theta\\in\\mathbb{R}^+$. \n By analytic continuation, this implies $\\nu(F_{\\overline{\\mu}})=F_\\mu$ for \nany $\\mu\\in\\mathbb{C}^*$; in particular, \n$\\nu_S(F_\\lambda)=\\nu(F_{\\overline{\\lambda}})=F_\\lambda$ for any \n$\\lambda\\in S^1$. \n Hence, we have shown \\eqref{eq-3.2.1}. \n Consequently the framing $F_\\lambda$ belongs to \n$\\widetilde{\\Lambda}G_\\sigma$. 
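\n To illustrate what membership in $\\widetilde{\\Lambda}G_\\sigma$ means \nconcretely, let us return to the example loop of Subsection \n\\ref{subsec-3.1.1} (again only for illustration): choose $\\nu(g):=\\overline{g}$, \nso that $G=SL(2,\\mathbb{R})$ and $[\\sigma,\\nu]=0$. \n Then the loop $A_\\lambda$ considered there extends analytically to \n$\\mathbb{C}^*$ and satisfies \n\\[\n \\nu_S(A_\\lambda)=\\overline{A_{\\overline{\\lambda}}}\n =\\begin{pmatrix} \\overline{a} & \\lambda\\cdot\\overline{b}\\\\ \n \\lambda^{-1}\\cdot\\overline{c} & \\overline{d} \\end{pmatrix},\n\\]\nso that $A_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma$ if and only if \n$a,b,c,d\\in\\mathbb{R}$; in that case $A_\\theta\\in SL(2,\\mathbb{R})$ for every \n$\\theta\\in\\mathbb{R}^+$, in accordance with Lemma \\ref{lem-3.1.8}. 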
\n\n\n\\subsubsection{Para-pluriharmonic potentials}\\label{subsec-3.2.2} \n We have just shown that $F_\\lambda$ belongs to $\\widetilde{\\Lambda}G_\\sigma$, \nwhere $F_\\lambda$ is the extended framing of a para-pluriharmonic map \n$f=\\pi\\circ F:(M,I)\\to(G\/H,\\nabla^1)$ with $F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$. \n To $F_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma$, one can apply the Birkhoff \ndecomposition theorem (cf.\\ Theorem \\ref{thm-3.1.6}). \n From the framing $F_\\theta$ we will obtain a pair of $\\frak{m}$-valued \n$1$-forms $\\eta_\\theta$ and $\\tau_\\theta$, parameterized by \n$\\theta\\in\\mathbb{R}^+$ and defined near $p_o$ in $(M,I)$.\\par \n\n Since $F_\\lambda(p_o)\\equiv\\operatorname{id}\\in\\widetilde{\\mathcal{B}}$, \none can perform a Birkhoff decomposition of the framing \n$F_\\lambda\\in\\widetilde{\\Lambda}G_\\sigma$: \n\\[\n\\begin{array}{lrr}\n F_\\lambda=F^-_\\lambda\\cdot L^+_\\lambda=F^+_\\lambda\\cdot L^-_\\lambda, \n& F^\\pm_\\lambda\\in\\widetilde{\\Lambda}^\\pm_* G_\\sigma, \n& L^\\pm_\\lambda\\in\\widetilde{\\Lambda}^\\pm G_\\sigma, \n\\end{array}\n\\]\non an open neighborhood $U$ of $M$ at $p_o$ (cf.\\ Theorem \\ref{thm-3.1.6}). \n Define $\\eta_\\theta$ and $\\tau_\\theta$ by \n\\[\n\\begin{array}{ll}\n \\eta_\\theta:=(F^-_\\theta)^{-1}\\cdot dF^-_\\theta, \n& \\tau_\\theta:=(F^+_\\theta)^{-1}\\cdot dF^+_\\theta,\n\\end{array}\n\\]\nrespectively. \n Then for any $\\theta\\in\\mathbb{R}^+$, both $\\eta_\\theta$ and $\\tau_\\theta$ \nbecome $\\frak{m}$-valued $1$-forms on the para-complex manifold $(U,I)$; and \nfurthermore, $\\eta_\\theta$ is para-holomorphic and $\\tau_\\theta$ is \npara-antiholomorphic. \n Indeed, it is immediate from $F_\\theta^{-1}\\cdot dF_\\theta=\\alpha^\\theta$ \nthat \n\\[\n\\begin{split}\n \\alpha_\\frak{h}+\\theta^{-1}\\cdot\\alpha_\\frak{m}^+\n +\\theta\\cdot\\alpha_\\frak{m}^- \n =\\alpha^\\theta\n&=(L^+_\\theta)^{-1}\\cdot((F^-_\\theta)^{-1}\\cdot dF^-_\\theta)\\cdot L^+_\\theta \n +(L^+_\\theta)^{-1}\\cdot dL^+_\\theta\\\\\n&=(L^-_\\theta)^{-1}\\cdot((F^+_\\theta)^{-1}\\cdot dF^+_\\theta)\\cdot L^-_\\theta \n +(L^-_\\theta)^{-1}\\cdot dL^-_\\theta,\n\\end{split} \n\\]\nand that $\\eta_\\theta=\\theta^{-1}\\cdot\\operatorname{Ad}(L^+_0)\\alpha_\\frak{m}^+$ \nand $\\tau_\\theta=\\theta\\cdot\\operatorname{Ad}(L^-_0)\\alpha_\\frak{m}^-$, \nwhere $L^\\pm_\\lambda=\\sum_{\\pm k\\geq 0}L_k^\\pm\\lambda^k$. \n Here, we remark that $L^\\pm_0\\in H$ by Lemma \\ref{lem-3.1.8}.\\par \n\n From the extended framing $F_\\theta$, we have obtained the pair \n$(\\eta_\\theta,\\tau_\\theta)$ of an $\\frak{m}$-valued para-holomorphic $1$-form \nand an $\\frak{m}$-valued para-antiholomorphic $1$-form on $(U,I)$ parameterized \nby $\\theta\\in\\mathbb{R}^+$. \n In the next subsection, we will see that the pair \n$(\\eta_\\theta,\\tau_\\theta)$ is a {\\it para-pluriharmonic potential} (cf.\\ \nDefinition \\ref{def-3.2.1}). \n\n\n\\subsubsection{}\\label{subsec-3.2.3}\n We are going to introduce the notion of a para-pluriharmonic potential. 
\n Consider two linear subspaces \n$\\widetilde{\\Lambda}_{-1,\\infty}\\frak{g}_\\sigma$ and \n$\\widetilde{\\Lambda}_{-\\infty,1}\\frak{g}_\\sigma$ of \n$\\widetilde{\\Lambda}\\frak{g}_\\sigma$: \n\\[\n\\begin{array}{l}\n \\widetilde{\\Lambda}_{-1,\\infty}\\frak{g}_\\sigma\n :=\\{ X_\\lambda\\in\\widetilde{\\Lambda}\\frak{g}_\\sigma\n \\,|\\, X_\\lambda=\\sum_{i=-1}^\\infty X_i\\lambda^i \\},\\\\\n \\widetilde{\\Lambda}_{-\\infty,1}\\frak{g}_\\sigma\n :=\\{ Y_\\lambda\\in\\widetilde{\\Lambda}\\frak{g}_\\sigma\n \\,|\\, Y_\\lambda=\\sum_{j=-\\infty}^1 Y_j\\lambda^j \\}, \n\\end{array}\n\\]\nwhere $\\widetilde{\\Lambda}\\frak{g}_\\sigma$ denotes the Lie algebra of \n$\\widetilde{\\Lambda}G_\\sigma$ (see \\eqref{eq-3.1.3} for \n$\\widetilde{\\Lambda}G_\\sigma$). \n Let $\\widetilde{\\mathcal{P}}_+=\\widetilde{\\mathcal{P}}_+(\\frak{g})$ and \n$\\widetilde{\\mathcal{P}}_-=\\widetilde{\\mathcal{P}}_-(\\frak{g})$ denote the \nsets of all $\\widetilde{\\Lambda}_{-1,\\infty}\\frak{g}_\\sigma$-valued \npara-holomorphic and $\\widetilde{\\Lambda}_{-\\infty,1}\\frak{g}_\\sigma$-valued \npara-antiholomorphic $1$-forms on a simply connected para-complex manifold \n$(M,I)$, respectively. \n\\begin{definition}\\label{def-3.2.1} \n An element \n$(\\eta_\\lambda,\\tau_\\lambda)\n \\in\\widetilde{\\mathcal{P}}_+\\times\\widetilde{\\mathcal{P}}_-$ \nis called a {\\it para-pluriharmonic potential} (or a {\\it potential}, \nfor short) on $(M,I)$. \n\\end{definition}\n \n \n\\begin{remark}\\label{rem-3.2.2} \n (1) For each potential \n$(\\eta_\\lambda,\\tau_\\lambda)\n \\in\\widetilde{\\mathcal{P}}_+\\times\\widetilde{\\mathcal{P}}_-$, \none may assume that the variable $\\lambda$ of $\\eta_\\lambda$ (resp.\\ \n$\\tau_\\lambda$) varies in $\\mathbb{R}^+$, by virtue of $\\eta_\\lambda$ (resp.\\ \n$\\tau_\\lambda$) taking values in $\\widetilde{\\Lambda}\\frak{g}_\\sigma$.\\par\n\n (2) Note that we have just obtained a para-pluriharmonic potential \n$(\\eta_\\theta,\\tau_\\theta)$ from the extended framing $F_\\theta$ of a \npara-pluriharmonic map $f:(M,I)\\to(G\/H,\\nabla^1)$ with \n$F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$.\\par\n\n (3) Unfortunately, the condition \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$, which is necessary for the \napplicability of the loop group method, is not always satisfied, as shown by \nKrahe \\cite{Kr}. \n The condition holds automatically for surfaces, and also whenever the \npseudo-metric of the target space is positive definite. \n Other natural conditions ensuring \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$ are not known. \n\\end{remark}\n\n We have thus obtained a para-pluriharmonic potential \n$(\\eta_\\theta,\\tau_\\theta)$ from the extended framing $F_\\theta$ of a \npara-pluriharmonic map $f:(M,I)\\to(G\/H,\\nabla^1)$ with \n$F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$. \n The converse statement is also true---that is, one can obtain a \npara-pluriharmonic map and its extended framing from any para-pluriharmonic \npotential, and this framing satisfies \n$[\\alpha_\\frak{m}^\\pm\\wedge\\alpha_\\frak{m}^\\pm]=0$: \n\\begin{proposition}\\label{prop-3.2.3} \n Let \n$(\\eta_\\theta,\\tau_\\theta)\n \\in\\widetilde{\\mathcal{P}}_+(\\frak{g})\n \\times\\widetilde{\\mathcal{P}}_-(\\frak{g})$ \nbe a para-pluriharmonic potential on the para-complex manifold $(M,I)$. 
\n Then, the following steps provide an $\\mathbb{R}^+$-family \n$\\{f_\\theta\\}_{\\theta\\in\\mathbb{R}^+}$ of para-pluriharmonic maps$:$ \n\\begin{enumerate}\n\\item[{\\rm (S1)}] \n Solve the two initial value problems$:$ \n$(A^-_\\theta)^{-1}\\cdot dA^-_\\theta=\\eta_\\theta$ and \n$(A^+_\\theta)^{-1}\\cdot dA^+_\\theta=\\tau_\\theta$ with \n$A^\\pm_\\theta(p_o)\\equiv\\operatorname{id}$, where $p_o$ is a base point in $(M,I)$. \n\\item[{\\rm (S2)}] \n Factorize \n$(A^-_\\theta,A^+_\\theta)\n \\in\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma$ \nin the Iwasawa decomposition $($cf.\\ Theorem {\\rm \\ref{thm-3.1.5}}$):$ \n$(A^-_\\theta,A^+_\\theta)=(C_\\theta,C_\\theta)\\cdot(B^+_\\theta,B^-_\\theta)$, \nwhere $C_\\theta\\in\\widetilde{\\Lambda}G_\\sigma$, \n$B^+_\\theta\\in\\widetilde{\\Lambda}^+_*G_\\sigma$ and \n$B^-_\\theta\\in\\widetilde{\\Lambda}^- G_\\sigma$. \n\\item[{\\rm (S3)}] \n Then, $f_\\theta:=\\pi\\circ C_\\theta:(W,I)\\to(G\/H,\\nabla^1)$ becomes a \npara-pluriharmonic map for every $\\theta\\in\\mathbb{R}^+$. \n Here, $W$ is any open neighborhood of $M$ at $p_o$ such that both {\\rm (S1)} \nand {\\rm (S2)} are solved on $W$. \n\\end{enumerate}\n In particular, $C_\\theta(p_o)\\equiv\\operatorname{id}$ and $C_\\theta$ is \nthe extended framing of the para-pluriharmonic map \n$f_1=\\pi\\circ C_1:(W,I)\\to(G\/H,\\nabla^1)$. \n\\end{proposition}\n\\begin{proof}\n (S1), (S2): The solution $(A_\\theta^-,A_\\theta^+)$ to (S1) satisfies \n$A_\\theta^\\mp\\in\\widetilde{\\Lambda}G_\\sigma$ and \n$A_\\theta^\\mp(p_o)\\equiv\\operatorname{id}$. \n Therefore, near $p_o$ the pair $(A_\\theta^-,A_\\theta^+)$ takes values in \nthe open subset of $\\widetilde{\\Lambda}G_\\sigma\\times\\widetilde{\\Lambda}G_\\sigma$ \non which the Iwasawa decomposition of Theorem \\ref{thm-3.1.5} is valid. \n Hence, one can factorize $(A_\\theta^-,A_\\theta^+)$ by means of (S2).\\par\n\n (S3): Let $W$ be any open neighborhood of $M$ at $p_o$ such that both (S1) \nand (S2) are solved on $W$. \n First, let us show $C_\\theta(p_o)\\equiv\\operatorname{id}$. \n Since $A_\\theta^\\mp(p_o)\\equiv\\operatorname{id}$ we have \n$\\widetilde{\\Lambda}^+_* G_\\sigma\\ni B^+_\\theta(p_o)=C_\\theta(p_o)^{-1}\n =B^-_\\theta(p_o)\\in\\widetilde{\\Lambda}^- G_\\sigma$. \n Hence, \n$C_\\theta(p_o)\\in(\\widetilde{\\Lambda}^+_* G_\\sigma\n \\cap\\widetilde{\\Lambda}^- G_\\sigma)=\\{\\operatorname{id}\\}$. \n Now, let $\\beta^\\mu:=C_\\mu^{-1}\\cdot dC_\\mu$ for $\\mu\\in\\mathbb{C}^*$. \n Lemma \\ref{lem-3.1.8} implies that $\\beta^\\theta$ is a $\\frak{g}$-valued \n$1$-form on $W$ for any $\\theta\\in\\mathbb{R}^+$.\n Therefore, one can express it as \n$\\beta^\\theta=(\\beta^\\theta)_\\frak{h}+(\\beta^\\theta)_\\frak{m}\n=(\\beta^\\theta)_\\frak{h}+(\\beta^\\theta)_\\frak{m}^++(\\beta^\\theta)_\\frak{m}^-$ \nby taking $\\frak{g}=\\frak{h}\\oplus\\frak{m}$ into consideration (see (2.2.5) \nand (2.2.6) for $(\\beta^\\theta)_\\frak{h}$ and $(\\beta^\\theta)_\\frak{m}^\\pm$). \n Then we obtain the conclusion once we have shown \n\\begin{equation}\\label{eq-3.2.2}\n \\beta^\\theta=(\\beta^1)_\\frak{h}+\\theta^{-1}\\cdot(\\beta^1)_\\frak{m}^+\n +\\theta\\cdot(\\beta^1)_\\frak{m}^-.\n\\end{equation}\n Indeed, $\\beta^\\theta=C_\\theta^{-1}\\cdot dC_\\theta$ satisfies \n$d\\beta^\\theta+(1\/2)\\cdot[\\beta^\\theta\\wedge\\beta^\\theta]=0$ \nfor any $\\theta\\in\\mathbb{R}^+$; thus \\eqref{eq-3.2.2}, combined with the \nproof of Proposition \\ref{prop-2.2.4}, allows us to conclude that \n$f_\\theta=\\pi\\circ C_\\theta:(W,I)\\to(G\/H,\\nabla^1)$ is a para-pluriharmonic \nmap for every $\\theta\\in\\mathbb{R}^+$. 
\n Hence, it suffices to prove \\eqref{eq-3.2.2}. \n Direct computation, together with \n$C_\\theta=A_\\theta^-\\cdot(B^+_\\theta)^{-1}=A_\\theta^+\\cdot(B^-_\\theta)^{-1}$, \ngives us \n\\[\n\\begin{split}\n (\\beta^\\theta,\\beta^\\theta)\n&=\\bigl(C_\\theta^{-1}\\cdot dC_\\theta,\\,\\,\n C_\\theta^{-1}\\cdot dC_\\theta\\bigr)\\\\\n&=\\bigl(B^+_\\theta\\cdot\\eta_\\theta\\cdot(B^+_\\theta)^{-1}\n +B^+_\\theta\\cdot d(B^+_\\theta)^{-1},\\,\\, \n B^-_\\theta\\cdot\\tau_\\theta\\cdot(B^-_\\theta)^{-1}\n +B^-_\\theta\\cdot d(B^-_\\theta)^{-1}\\bigr). \n\\end{split}\n\\]\n Therefore, the Fourier series \n$\\beta^\\lambda=\\sum_{k\\in\\mathbb{Z}}\\beta_k\\lambda^k$ actually has the simple \nform:\n\\[\\label{a}\\tag{a}\n \\beta^\\lambda\n=\\lambda^{-1}\\cdot\\beta_{-1}+\\beta_0+\\lambda\\cdot\\beta_{+1}\n\\] \nbecause the $n$-th and $m$-th Fourier coefficients of \n$B^+_\\lambda\\cdot\\eta_\\lambda\\cdot(B^+_\\lambda)^{-1}\n +B^+_\\lambda\\cdot d(B^+_\\lambda)^{-1}$ \nand \n$B^-_\\lambda\\cdot\\tau_\\lambda\\cdot(B^-_\\lambda)^{-1}\n +B^-_\\lambda\\cdot d(B^-_\\lambda)^{-1}$ \nare zero for all $n\\leq -2$ and $2\\leq m$, respectively. \n Let us denote by $(\\beta_j)^+$ and $(\\beta_j)^-$ the para-holomorphic \ncomponent and the para-antiholomorphic component of $\\beta_j$, respectively \n(i.e., $(\\beta_j)^\\pm:=(1\/2)\\cdot(\\beta_j\\pm{}^tI(\\beta_j))$) for $j=\\pm 1$, and \nrewrite the above \\eqref{a} as \n\\[\\label{a'}\\tag{a$'$}\n \\beta^\\lambda\n=\\lambda^{-1}\\cdot((\\beta_{-1})^++(\\beta_{-1})^-)\n +\\beta_0+\\lambda\\cdot((\\beta_{+1})^++(\\beta_{+1})^-).\n\\] \n Then, \\eqref{a'} simplifies to \n\\[\\label{a''}\\tag{a$''$}\n \\beta^\\lambda\n=\\lambda^{-1}\\cdot(\\beta_{-1})^++\\beta_0+\\lambda\\cdot(\\beta_{+1})^-\n\\]\nbecause the $-1$st and $+1$st Fourier coefficients of \n$B^+_\\lambda\\cdot\\eta_\\lambda\\cdot(B^+_\\lambda)^{-1}\n +B^+_\\lambda\\cdot d(B^+_\\lambda)^{-1}$ \nand \n$B^-_\\lambda\\cdot\\tau_\\lambda\\cdot(B^-_\\lambda)^{-1}\n +B^-_\\lambda\\cdot d(B^-_\\lambda)^{-1}$ \nare para-holomorphic and para-antiholomorphic, respectively. \n From \\eqref{a''} and $\\beta^\\lambda\\in\\widetilde{\\Lambda}\\frak{g}_\\sigma$ \nwe see that $(\\beta^1)_\\frak{h}=\\beta_0$ and \n$(\\beta^1)_\\frak{m}=(\\beta_{-1})^++(\\beta_{+1})^-$. \n This implies $(\\beta^1)_\\frak{m}^+=(\\beta_{-1})^+$, \n$(\\beta^1)_\\frak{m}^-=(\\beta_{+1})^-$ and \n$\\beta^\\lambda=\\lambda^{-1}\\cdot(\\beta^1)_\\frak{m}^++(\\beta^1)_\\frak{h}\n +\\lambda\\cdot(\\beta^1)_\\frak{m}^-$; \nand \\eqref{eq-3.2.2} follows. \n\\end{proof}\n \n\n\n\n\\subsection{Pluriharmonic maps and the loop group method}\\label{subsec-3.3} \n We have explained the relation between para-pluriharmonic maps and the loop \ngroup method in Subsection \\ref{subsec-3.2}. \n In this subsection, we will explain the relation between pluriharmonic maps \nand the loop group method. \n The arguments below will be similar to those in Subsection \\ref{subsec-3.2}. \n \n\n\\subsubsection{}\\label{subsec-3.3.1} \n In Subsection \\ref{subsec-3.2.1} we have learned that the extended \nframing $F_\\theta'$ of a para-pluriharmonic map belongs to the almost split \nreal form $\\Lambda G_\\sigma$---that is, it satisfies \n$\\nu_S(F_\\lambda')=F_\\lambda'$ for the involution $\\nu_S$ of {\\it the first \nkind} (cf.\\ \\eqref{eq-3.1.2} for $\\nu_S$). 
\n In this subsection, we will first confirm that the extended framing \n$F_\\lambda$ of a pluriharmonic map satisfies $\\nu_C(F_\\lambda)=F_\\lambda$ for \nthe involution $\\nu_C$ of {\\it the second kind} defined below.\\par\n \n \n Let $G^\\mathbb{C}$ be a simply connected, simple, complex linear algebraic \nsubgroup of $SL(m,\\mathbb{C})$, let $\\sigma$ be a holomorphic involution of \n$G^\\mathbb{C}$, and let $\\nu$ be an antiholomorphic involution of \n$G^\\mathbb{C}$ such that $[\\sigma,\\nu]=0$. \n Denote by $H^\\mathbb{C}$, $G$ and $H$ the subgroups defined in Subsection \n\\ref{subsec-2.2.3} (cf.\\ \\eqref{eq-2.2.14}). \n Now, let us define an antiholomorphic involution $\\nu_C$ of \n$\\Lambda G^\\mathbb{C}_\\sigma$ by \n\\begin{equation}\\label{eq-3.3.1}\n\\begin{array}{ll}\n \\nu_C(A_\\lambda):=\\nu(A_{1\/\\overline{\\lambda}}) \n& \\mbox{for $A_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma$}. \n\\end{array} \n\\end{equation} \n This involution $\\nu_C$ is said to be of {\\it the second kind}, and \nsatisfies the following: \n\\begin{equation}\\label{eq-3.3.2}\n\\begin{array}{ll}\n \\nu_C(\\Lambda^\\pm G^\\mathbb{C}_\\sigma)=\\Lambda^\\mp G^\\mathbb{C}_\\sigma, \n& \\nu_C(\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma)=\\Lambda^\\mp_* G^\\mathbb{C}_\\sigma. \n\\end{array} \n\\end{equation} \n Let $p_o$ be a base point in a simply connected complex manifold $(M,J)$, \nand let $F_\\lambda$ be the extended framing of a pluriharmonic map \n$f=\\pi\\circ F:(M,J)\\to(G\/H,\\nabla^1)$ with $F(p_o)=\\operatorname{id}$ and \n$[\\alpha_\\frak{m}'\\wedge\\alpha_\\frak{m}']=0$. \n From (2.3.9) it follows that $F_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma$. \n In particular, since $1\/\\overline{\\lambda}=\\lambda$ for $\\lambda\\in S^1$, \n(2.3.10) implies that $F_\\lambda$ satisfies \n\\begin{equation}\\label{eq-3.3.3}\n \\nu_C(F_\\lambda)=F_\\lambda. \n\\end{equation} \n\n\n\n\\subsubsection{Pluriharmonic potentials}\\label{subsec-3.3.2}\n Since $F_\\lambda(p_o)\\equiv\\operatorname{id}$, we can perform a Birkhoff \ndecomposition of the framing $F_\\lambda$; thereby we obtain a pair of \n$\\frak{m}^\\mathbb{C}$-valued $1$-forms $\\eta_\\lambda$ and $\\tau_\\lambda$, \ndefined near $p_o$ in $(M,J)$ and parameterized by $\\lambda\\in S^1$. \n Here \n$\\frak{m}^\\mathbb{C}:=\\operatorname{Fix}(\\frak{g}^\\mathbb{C},-d\\sigma)$. \n We will see later that the pair $(\\eta_\\lambda,\\tau_\\lambda)$ is a \npluriharmonic potential (cf.\\ Definition \\ref{def-3.3.1}). \n\n Since $F_\\lambda(p_o)\\equiv\\operatorname{id}\\in\\mathcal{B}^\\mathbb{C}$, \nwe factorize the framing $F_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma$ in the \nBirkhoff decomposition: \n\\[\n\\begin{array}{lrr}\n F_\\lambda=F^-_\\lambda\\cdot L^+_\\lambda=F^+_\\lambda\\cdot L^-_\\lambda, \n& F^\\pm_\\lambda\\in\\Lambda^\\pm_* G^\\mathbb{C}_\\sigma, \n& L^\\pm_\\lambda\\in\\Lambda^\\pm G^\\mathbb{C}_\\sigma, \n\\end{array}\n\\]\non an open neighborhood $U$ of $M$ at $p_o$ (cf.\\ Theorem \\ref{thm-3.1.2}). \n Define $\\eta_\\lambda$ and $\\tau_\\lambda$ by \n\\[\n\\begin{array}{ll}\n \\eta_\\lambda:=(F^-_\\lambda)^{-1}\\cdot dF^-_\\lambda, \n& \\tau_\\lambda:=(F^+_\\lambda)^{-1}\\cdot dF^+_\\lambda,\n\\end{array}\n\\]\nrespectively. \n Then for any $\\lambda\\in S^1$, both $\\eta_\\lambda$ and $\\tau_\\lambda$ \nbecome $\\frak{m}^\\mathbb{C}$-valued $1$-forms on the complex manifold $(U,J)$. \n In addition, $\\eta_\\lambda$ is holomorphic and $\\tau_\\lambda$ is \nantiholomorphic. 
\n Indeed, $F_\\lambda^{-1}\\cdot dF_\\lambda=\\alpha^\\lambda$ yields \n\\[\n\\begin{split}\n \\alpha_\\frak{h}+\\lambda^{-1}\\cdot\\alpha_\\frak{m}'\n +\\lambda\\cdot\\alpha_\\frak{m}'' \n =\\alpha^\\lambda\n&=(L^+_\\lambda)^{-1}\\cdot((F^-_\\lambda)^{-1}\\cdot dF^-_\\lambda)\\cdot L^+_\\lambda \n +(L^+_\\lambda)^{-1}\\cdot dL^+_\\lambda\\\\\n&=(L^-_\\lambda)^{-1}\\cdot((F^+_\\lambda)^{-1}\\cdot dF^+_\\lambda)\\cdot L^-_\\lambda \n +(L^-_\\lambda)^{-1}\\cdot dL^-_\\lambda,\n\\end{split} \n\\]\nand \n$\\eta_\\lambda=\\lambda^{-1}\\cdot\\operatorname{Ad}(L^+_0)\\alpha_\\frak{m}'$ \nand $\\tau_\\lambda=\\lambda\\cdot\\operatorname{Ad}(L^-_0)\\alpha_\\frak{m}''$, \nwhere $L^\\pm_\\lambda=\\sum_{\\pm k\\geq 0}L_k^\\pm\\lambda^k$. \n Now, it follows from \\eqref{eq-3.3.2} and \\eqref{eq-3.3.3} that \n$\\nu_C(F^-_\\lambda)=F^+_\\lambda$. \n This implies that $\\eta_\\lambda$ is related to $\\tau_\\lambda$ by \nthe formula $d\\nu_C(\\eta_\\lambda)=\\tau_\\lambda$. \n Consequently, from the extended framing $F_\\lambda$ of a pluriharmonic map \nwe obtain the pair $(\\eta_\\lambda,\\tau_\\lambda)$ of \nan $\\frak{m}^\\mathbb{C}$-valued holomorphic $1$-form and an \n$\\frak{m}^\\mathbb{C}$-valued antiholomorphic $1$-form on $(U,J)$ satisfying \n$d\\nu_C(\\eta_\\lambda)=\\tau_\\lambda$.\n\n \n\\subsubsection{}\\label{subsec-3.3.3} \n Let us introduce the following subspaces \n$\\Lambda_{-1,\\infty}\\frak{g}^\\mathbb{C}_\\sigma$ and \n$\\Lambda_{-\\infty,1}\\frak{g}^\\mathbb{C}_\\sigma$ of \n$\\Lambda\\frak{g}^\\mathbb{C}_\\sigma$, in order to recall the notion of a \npluriharmonic potential: \n\\[\n\\begin{array}{l}\n \\Lambda_{-1,\\infty}\\frak{g}^\\mathbb{C}_\\sigma\n :=\\{ X_\\lambda\\in\\Lambda\\frak{g}^\\mathbb{C}_\\sigma\n \\,|\\, X_\\lambda=\\sum_{i=-1}^\\infty X_i\\lambda^i \\},\\\\\n \\Lambda_{-\\infty,1}\\frak{g}^\\mathbb{C}_\\sigma\n :=\\{ Y_\\lambda\\in\\Lambda\\frak{g}^\\mathbb{C}_\\sigma\n \\,|\\, Y_\\lambda=\\sum_{j=-\\infty}^1 Y_j\\lambda^j \\} \n\\end{array}\n\\] \n(cf.\\ \\eqref{eq-3.1.1} for $\\Lambda\\frak{g}^\\mathbb{C}_\\sigma$). \n Let $\\mathcal{P}'=\\mathcal{P}'(\\frak{g}^\\mathbb{C})$ and \n$\\mathcal{P}''=\\mathcal{P}''(\\frak{g}^\\mathbb{C})$ denote the sets of all \n$\\Lambda_{-1,\\infty}\\frak{g}^\\mathbb{C}_\\sigma$-valued holomorphic \nand $\\Lambda_{-\\infty,1}\\frak{g}^\\mathbb{C}_\\sigma$-valued antiholomorphic \n$1$-forms on a simply connected complex manifold $(M,J)$, respectively. \n\n\\begin{definition}\\label{def-3.3.1}\n An element $(\\eta_\\lambda,\\tau_\\lambda)\\in\\mathcal{P}'\\times\\mathcal{P}''$ \nis called a {\\it pluriharmonic potential} (or a {\\it potential}, for short) on \n$(M,J)$, if it satisfies $d\\nu_C(\\eta_\\lambda)=\\tau_\\lambda$ (cf.\\ \n\\eqref{eq-3.3.1} for $\\nu_C$). \n\\end{definition}\n\n In Subsection \\ref{subsec-3.3.2} we have obtained a pluriharmonic potential \n$(\\eta_\\lambda,\\tau_\\lambda)$ from the extended framing $F_\\lambda$ of a \npluriharmonic map $f=\\pi\\circ F:(M,J)\\to(G\/H,\\nabla^1)$ with \n$F(p_o)=\\operatorname{id}$ and $[\\alpha_\\frak{m}'\\wedge\\alpha_\\frak{m}']=0$. \n Next we recall from \\cite{Do-Es} that one can obtain a pluriharmonic map \nand its extended framing from a pluriharmonic potential: \n\\begin{proposition}\\label{prop-3.3.2}\n Let \n$(\\eta_\\lambda,\\tau_\\lambda)=(\\eta_\\lambda,d\\nu_C(\\eta_\\lambda))\n \\in\\mathcal{P}'(\\frak{g}^\\mathbb{C})\\times\\mathcal{P}''(\\frak{g}^\\mathbb{C})$ \nbe any pluriharmonic potential on the complex manifold $(M,J)$. 
\n Then, the following steps provide an $S^1$-family \n$\\{f_\\lambda\\}_{\\lambda\\in S^1}$ of pluriharmonic maps$:$ \n\\begin{enumerate}\n\\item[{\\rm (S1)}] \n Solve the two initial value problems$:$ \n$A_\\lambda^{-1}\\cdot dA_\\lambda=\\eta_\\lambda$, \n$B_\\lambda^{-1}\\cdot dB_\\lambda=\\tau_\\lambda$ with \n$A_\\lambda(p_o)\\equiv\\operatorname{id}\\equiv B_\\lambda(p_o)$, where $p_o$ is \na base point in $(M,J)$. \n\\item[{\\rm (S2)}] \n Factorize \n$(A_\\lambda,B_\\lambda)\\in\\Lambda G^\\mathbb{C}_\\sigma\\times\n \\Lambda G^\\mathbb{C}_\\sigma$ \nin the Iwasawa decomposition $($cf.\\ Theorem $\\ref{thm-3.1.1}):$ \n$(A_\\lambda,B_\\lambda)\n =(C_\\lambda,C_\\lambda)\\cdot(B^+_\\lambda,B^-_\\lambda)$, \nwhere $C_\\lambda\\in\\Lambda G^\\mathbb{C}_\\sigma$, \n$B^+_\\lambda\\in\\Lambda^+_*G^\\mathbb{C}_\\sigma$ and \n$B^-_\\lambda\\in\\Lambda^-G^\\mathbb{C}_\\sigma$. \n\\item[{\\rm (S3)}] \n Take an open neighborhood $V$ of $M$ at $p_o$ and a smooth map \n$h^\\mathbb{C}=h^\\mathbb{C}(p): V\\to H^\\mathbb{C}$ such that \n \\begin{enumerate}\n \\item[{\\rm (1)}] \n $C_\\lambda'(p)\\in G$ for all $(p,\\lambda)\\in V\\times S^1$,\n \\item[{\\rm (2)}] \n $C_\\lambda'(p_o)\\equiv\\operatorname{id}$, where \n $C_\\lambda':=C_\\lambda\\cdot h^\\mathbb{C}$.\n \\end{enumerate} \n\\item[{\\rm (S4)}] \n Then, $f_\\lambda:=\\pi\\circ C_\\lambda':(V,J)\\to(G\/H,\\nabla^1)$ \nbecomes an $S^1$-family of pluriharmonic maps. \n\\end{enumerate} \n\\end{proposition}\n\\begin{proof}\n (S1), (S2): For the solutions $A_\\lambda$ and $B_\\lambda$ to (S1), we \ndeduce that they satisfy \n\\begin{equation}\\label{eq-3.3.4}\n \\nu_C(A_\\lambda)=B_\\lambda\n\\end{equation}\nin terms of $d\\nu_C(\\eta_\\lambda)=\\tau_\\lambda$. \n Since $A_\\lambda(p_o)\\equiv\\operatorname{id}\\equiv B_\\lambda(p_o)$ and \n$(A_\\lambda(p_o),B_\\lambda(p_o))$ belongs to a suitable open subset of \n$\\Lambda G^\\mathbb{C}_\\sigma\\times\\Lambda G^\\mathbb{C}_\\sigma$, one can \nfactorize $(A_\\lambda,B_\\lambda)$ by means of (S2).\\par\n\n (S3): Let us assume that both (S1) and (S2) hold on an open neighborhood \n$W$ of $M$ at $p_o$. \n We will confirm that there exist an open neighborhood $V$ ($\\subset W$) of \n$M$ at $p_o$ and a smooth map \n$h^\\mathbb{C}=h^\\mathbb{C}(p):V\\to H^\\mathbb{C}$ such that \n\\begin{enumerate}\n\\item \n $C_\\lambda(p)\\cdot h^\\mathbb{C}(p)\\in G=\\operatorname{Fix}(G^\\mathbb{C},\\nu)$ \nfor all $(p,\\lambda)\\in V\\times S^1$; \n\\item \n $C_\\lambda(p_o)\\cdot h^\\mathbb{C}(p_o)\\equiv\\operatorname{id}$\n\\end{enumerate}\n---that is, we want to assert that (S3) holds. \n First, let us verify \n\\[\n C_\\lambda(p_o)\\equiv\\operatorname{id}.\n\\] \n By $A_\\lambda(p_o)\\equiv\\operatorname{id}\\equiv B_\\lambda(p_o)$ we conclude \n$\\Lambda^+_* G^\\mathbb{C}_\\sigma\\ni\n B^+_\\lambda(p_o)=C_\\lambda(p_o)^{-1}\n =B^-_\\lambda(p_o)\\in\\Lambda^- G^\\mathbb{C}_\\sigma$; \nso that \n$C_\\lambda(p_o)\n \\in(\\Lambda^+_*G^\\mathbb{C}_\\sigma\\cap\\Lambda^- G^\\mathbb{C}_\\sigma)\n =\\{\\operatorname{id}\\}$, \nand $C_\\lambda(p_o)\\equiv\\operatorname{id}$. \n Next, we will deduce that \n\\begin{equation}\\label{eq-3.3.5}\n\\begin{array}{ll}\n (C_\\lambda(q))^{-1}\\cdot\\nu(C_\\lambda(q))\\in H^\\mathbb{C} \n& \\mbox{for any point $(q,\\lambda)\\in W\\times S^1$}. 
\n\\end{array}\n\\end{equation}\n Since \\eqref{eq-3.3.4}, \\eqref{eq-3.3.2} and \n$C_\\lambda=A_\\lambda\\cdot(B^+_\\lambda)^{-1}=B_\\lambda\\cdot(B^-_\\lambda)^{-1}$, \nwe obtain \n\\[\n (C_\\lambda)^{-1}\\cdot\\nu_C(C_\\lambda)\n=\\bigl(B_\\lambda\\cdot(B^-_\\lambda)^{-1}\\bigr)^{-1}\\cdot \n \\nu_C\\bigl(A_\\lambda\\cdot(B^+_\\lambda)^{-1}\\bigr)\n=B^-_\\lambda\\cdot\\nu_C((B^+_\\lambda)^{-1})\\in\\Lambda^- G^\\mathbb{C}_\\sigma. \n\\] \n The above also leads to \n$(C_\\lambda)^{-1}\\cdot\\nu_C(C_\\lambda)\n =\\nu_C\\bigl(\\{ (C_\\lambda)^{-1}\\cdot\\nu_C(C_\\lambda) \\}^{-1} \\bigr)\n\\in\\nu_C(\\Lambda^- G^\\mathbb{C}_\\sigma)=\\Lambda^+ G^\\mathbb{C}_\\sigma$. \n Therefore we have \n$(C_\\lambda)^{-1}\\cdot\\nu_C(C_\\lambda)\n \\in(\\Lambda^- G^\\mathbb{C}_\\sigma\\cap\\Lambda^+ G^\\mathbb{C}_\\sigma)=H^\\mathbb{C}$, \nand so \\eqref{eq-3.3.5} follows. \n It remains to show that there exist an open neighborhood $V$ of $M$ at $p_o$ \nand a smooth map $h^\\mathbb{C}=h^\\mathbb{C}(p): V\\to H^\\mathbb{C}$ satisfying \nthe equations (1) and (2) above. \n Let $U_H$ and $O_\\frak{h}$ denote open neighborhoods of $H^\\mathbb{C}$ at \n$\\operatorname{id}$ and of $\\frak{h}^\\mathbb{C}$ at $0$ such that \n$\\exp:O_\\frak{h}\\to U_H$ is a diffeomorphism and $\\nu(U_H)\\subset U_H$. \n Since \\eqref{eq-3.3.5} and \n$(C_\\lambda(p_o))^{-1}\\cdot\\nu(C_\\lambda(p_o))=\\operatorname{id}\\in U_H$, there \nexists an open neighborhood $V$ ($\\subset W$) of $p_o$ in $M$ such that \n$(C_\\lambda(p))^{-1}\\cdot\\nu(C_\\lambda(p))\\in U_H$ for all $p\\in V$. \n Hence, \n\\begin{equation}\\label{eq-3.3.6}\n\\begin{array}{ll}\n (C_\\lambda(p))^{-1}\\cdot\\nu(C_\\lambda(p))=\\exp X(p) \n& \\mbox{on $V$}, \n\\end{array}\n\\end{equation}\nwhere $X=X(p):V\\to O_\\frak{h}$ is a smooth map with $X(p_o)=0$. \n This yields \n\\[\n \\exp d\\nu(X(p))\n=\\nu\\bigl((C_\\lambda(p))^{-1}\\cdot\\nu(C_\\lambda(p))\\bigr)\\\\\n=\\nu(C_\\lambda(p))^{-1}\\cdot(C_\\lambda(p)) \n=\\exp(-X(p)) \n\\] \nand $d\\nu(X(p))=-X(p)$. \n Accordingly we conclude that (1)\n$\\nu(C_\\lambda(p)\\cdot h^\\mathbb{C}(p))=C_\\lambda(p)\\cdot h^\\mathbb{C}(p)$ \nfor all $(p,\\lambda)\\in V\\times S^1$ and (2) \n$C_\\lambda(p_o)\\cdot h^\\mathbb{C}(p_o)\\equiv\\operatorname{id}$, by setting \n$h^\\mathbb{C}(p):=\\exp((1\/2)\\cdot X(p))$ (cf.\\ \\eqref{eq-3.3.6}).\\par\n \n (S4): The arguments below will be similar to those of the proof of (S3) in \nProposition \\ref{prop-3.2.3}. \n Define a $\\frak{g}$-valued $1$-form $\\beta^\\lambda$ on $(V,J)$ by \n$\\beta^\\lambda:=(C_\\lambda')^{-1}\\cdot dC_\\lambda'$, and express it as \n$\\beta^\\lambda=(\\beta^\\lambda)_\\frak{h}+(\\beta^\\lambda)_\\frak{m}\n=(\\beta^\\lambda)_\\frak{h}+(\\beta^\\lambda)_\\frak{m}'+(\\beta^\\lambda)_\\frak{m}''$, \nwhere $\\frak{g}=\\frak{h}\\oplus\\frak{m}$ (see (2.3.6) for \n$(\\beta^\\lambda)_\\frak{m}'$ and $(\\beta^\\lambda)_\\frak{m}''$). \n Then, it suffices to verify \\eqref{eq-3.3.7}: \n\\begin{equation}\\label{eq-3.3.7}\n \\beta^\\lambda=(\\beta^1)_\\frak{h}+\\lambda^{-1}\\cdot(\\beta^1)_\\frak{m}'\n +\\lambda\\cdot(\\beta^1)_\\frak{m}''. \n\\end{equation}\n Indeed, $\\beta^\\lambda=(C_\\lambda')^{-1}\\cdot dC_\\lambda'$ satisfies \n$d\\beta^\\lambda+(1\/2)\\cdot[\\beta^\\lambda\\wedge\\beta^\\lambda]=0$ for any \n$\\lambda\\in S^1$, and so Proposition \\ref{prop-2.3.3} and \\eqref{eq-3.3.7} \nallow us to conclude that \n$f_\\lambda=\\pi\\circ C_\\lambda':(V,J)\\to(G\/H,\\nabla^1)$ is a pluriharmonic map \nfor every $\\lambda\\in S^1$. 
\n Direct computation, together with \n$C_\\lambda=A_\\lambda\\cdot(B^+_\\lambda)^{-1}=B_\\lambda\\cdot(B^-_\\lambda)^{-1}$ \nand $C_\\lambda'=C_\\lambda\\cdot h^\\mathbb{C}$, gives us \n\\[\n\\begin{split}\n (\\beta^\\lambda,\\beta^\\lambda)\n&=\\bigl((C_\\lambda')^{-1}\\cdot dC_\\lambda',\\,\\,\n (C_\\lambda')^{-1}\\cdot dC_\\lambda'\\bigr)\\\\\n&=\\bigl(D^+_\\lambda\\cdot\\eta_\\lambda\\cdot(D^+_\\lambda)^{-1}\n +D^+_\\lambda\\cdot d(D^+_\\lambda)^{-1},\\,\\, \n D^-_\\lambda\\cdot\\tau_\\lambda\\cdot(D^-_\\lambda)^{-1}\n +D^-_\\lambda\\cdot d(D^-_\\lambda)^{-1}\\bigr), \n\\end{split}\n\\]\nwhere $(D^\\pm_\\lambda)^{-1}:=(B^\\pm_\\lambda)^{-1}\\cdot h^\\mathbb{C}$. \n It follows from $h^\\mathbb{C}\\in H^\\mathbb{C}$ that \n$D^\\pm_\\lambda\\in\\Lambda^\\pm G^\\mathbb{C}_\\sigma$. \n Therefore, the Fourier series \n$\\beta^\\lambda=\\sum_{k\\in\\mathbb{Z}}\\beta_k\\lambda^k$ is actually a Laurent \npolynomial of the form \n\\[\\label{a}\\tag{a}\n \\beta^\\lambda\n=\\lambda^{-1}\\cdot\\beta_{-1}+\\beta_0+\\lambda\\cdot\\beta_{+1}\n=\\lambda^{-1}\\cdot((\\beta_{-1})'+(\\beta_{-1})'')\n +\\beta_0+\\lambda\\cdot((\\beta_{+1})'+(\\beta_{+1})'')\n\\] \nbecause the $n$-th and $m$-th Fourier coefficients of \n$D^+_\\lambda\\cdot\\eta_\\lambda\\cdot(D^+_\\lambda)^{-1}\n +D^+_\\lambda\\cdot d(D^+_\\lambda)^{-1}$ \nand \n$D^-_\\lambda\\cdot\\tau_\\lambda\\cdot(D^-_\\lambda)^{-1}\n +D^-_\\lambda\\cdot d(D^-_\\lambda)^{-1}$ \nare zero for all $n\\leq -2$ and $2\\leq m$, respectively. \n Moreover, \\eqref{a} simplifies to \n\\[\\label{a'}\\tag{a$'$}\n \\beta^\\lambda\n=\\lambda^{-1}\\cdot(\\beta_{-1})'+\\beta_0+\\lambda\\cdot(\\beta_{+1})''\n\\]\nbecause the $-1$st and $+1$st Fourier coefficients of \n$D^+_\\lambda\\cdot\\eta_\\lambda\\cdot(D^+_\\lambda)^{-1}\n +D^+_\\lambda\\cdot d(D^+_\\lambda)^{-1}$ \nand \n$D^-_\\lambda\\cdot\\tau_\\lambda\\cdot(D^-_\\lambda)^{-1}\n +D^-_\\lambda\\cdot d(D^-_\\lambda)^{-1}$ \nare holomorphic and antiholomorphic, respectively. \n In view of \\eqref{a'} and \n$\\beta^\\lambda\\in\\Lambda\\frak{g}^\\mathbb{C}_\\sigma$ it turns out that \n$(\\beta^1)_\\frak{h}=\\beta_0$ and \n$(\\beta^1)_\\frak{m}=(\\beta_{-1})'+(\\beta_{+1})''$. \n Therefore \\eqref{eq-3.3.7} follows from $(\\beta^1)_\\frak{m}'=(\\beta_{-1})'$ \nand $(\\beta^1)_\\frak{m}''=(\\beta_{+1})''$. \n\\end{proof}\n \n \n\n\n\n\n\n\\section{{\\bf Relation between pluriharmonic maps and para-pluriharmonic maps}}\n\\label{sec-4} \n In this section, by utilizing the loop group method, we interrelate \npluriharmonic maps with para-pluriharmonic maps. \n We consider two real subspaces $\\mathbb{A}^{2n}$ and \n$\\mathbb{B}^{2n}$ of $\\mathbb{C}^{2n}$ (cf.\\ Subsection \\ref{subsec-4.1}), \nand two symmetric closed subspaces $G_1\/H_1$ and $G_2\/H_2$ of \n$G^\\mathbb{C}\/H^\\mathbb{C}$ (cf.\\ Subsection \\ref{subsec-4.2}), and we \ninvestigate the relation between certain pluriharmonic maps \n$f_1:\\mathbb{A}^{2n}\\to G_1\/H_1$ and certain para-pluriharmonic maps \n$f_2:\\mathbb{B}^{2n}\\to G_2\/H_2$ (cf.\\ Subsection \\ref{subsec-4.3}). 
\n\n\\begin{center}\n\\unitlength=1mm\n\\begin{picture}(75,21)\n\\put(1,17){$f_1:\\mathbb{A}^{2n}\\longrightarrow G_1\/H_1$, pluriharmonic}\n\\put(7,13){$\\cap$}\n\\put(25,13){$\\cap$}\n\\put(7,9){$\\mathbb{C}^{2n}$}\n\\put(21,9){$G^\\mathbb{C}\/H^\\mathbb{C}$}\n\\put(7,5){$\\cup$}\n\\put(25,5){$\\cup$}\n\\put(1,1){$f_2:\\mathbb{B}^{2n}\\longrightarrow G_2\/H_2$, para-pluriharmonic}\n\\end{picture} \n\\end{center} \n\n \n\\subsection{The real subspaces $\\mathbb{A}^{2n}$ and $\\mathbb{B}^{2n}$ of \n$\\mathbb{C}^{2n}$}\\label{subsec-4.1} \n Let $\\mathbb{A}^{2n}$ and $\\mathbb{B}^{2n}$ be the real subspaces of \n$\\mathbb{C}^{2n}$ given by \n\\[\n\\begin{split}\n \\mathbb{A}^{2n}:\n&=\\{(z^1,\\cdots,z^n,w^1,\\cdots,w^n)\\in\\mathbb{C}^n\\times\\mathbb{C}^n \n \\,|\\, \\mbox{$\\bar{z}^a=w^a$ for all $1\\leq a\\leq n$} \\}\\\\\n&=\\{(z,w)\\in\\mathbb{C}^n\\times\\mathbb{C}^n \\,|\\, w=\\bar{z}\\},\\\\ \n \\mathbb{B}^{2n}:\n&=\\{(z^1,\\cdots,z^n,w^1,\\cdots,w^n)\\in\\mathbb{C}^n\\times\\mathbb{C}^n \n \\,|\\, \\mbox{$z^a=\\bar{z}^a$ and $w^a=\\bar{w}^a$ for all $1\\leq a\\leq n$}\\}\\\\\n&=\\mathbb{R}^n\\times\\mathbb{R}^n. \n\\end{split} \n\\] \n Let $(x^1,\\cdots,x^n,y^1,\\cdots,y^n)$ denote the global coordinate system \non $\\mathbb{B}^{2n}$ defined by $x^a:={\\rm Re}(z^a)$ and \n$y^a:={\\rm Re}(w^a)$ for $1\\leq a\\leq n$. \n Define smooth $(1,1)$-tensor fields $J$ on $\\mathbb{A}^{2n}$ and $I$ on \n$\\mathbb{B}^{2n}$ by \n\\[\n J\\Big(\\frac{\\partial}{\\partial z^a}\\Big)\n :=i\\frac{\\partial}{\\partial z^a},\\,\n J\\Big(\\frac{\\partial}{\\partial\\bar{z}^a}\\Big)\n :=-i\\frac{\\partial}{\\partial\\bar{z}^a}\n\\mbox{ and } \n I\\Big(\\frac{\\partial}{\\partial x^a}\\Big)\n :=\\frac{\\partial}{\\partial x^a},\\, \n I\\Big(\\frac{\\partial}{\\partial y^a}\\Big)\n :=-\\frac{\\partial}{\\partial y^a}. \n\\] \n Then $(\\mathbb{A}^{2n},J)$ and $(\\mathbb{B}^{2n},I)$ are simply connected \ncomplex and para-complex manifolds, respectively. \n Henceforth, for the natural coordinate systems \n$(z^1,\\cdots,z^n,\\bar{z}^1,\\cdots,\\bar{z}^n)$ on $\\mathbb{A}^{2n}$, \n$(x^1,\\cdots,x^n,y^1,\\cdots,y^n)$ on $\\mathbb{B}^{2n}$ and \n$(z^1,\\cdots,z^n,w^1,\\cdots,w^n)$ on $\\mathbb{C}^{2n}$, we will use the \nnotation $({\\bf z},{\\bf \\bar{z}})$, $({\\bf x},{\\bf y})$ and $({\\bf z},{\\bf w})$, \nrespectively. \n\n\n\n\\subsection{The symmetric subspaces $G_1\/H_1$ and $G_2\/H_2$ of \n$G^\\mathbb{C}\/H^\\mathbb{C}$}\\label{subsec-4.2} \n In this subsection, we introduce two symmetric subspaces $G_1\/H_1$ and \n$G_2\/H_2$ of $G^\\mathbb{C}\/H^\\mathbb{C}$. \n Let $G^\\mathbb{C}$ be a simply connected, simple, complex linear algebraic \nsubgroup of $SL(m,\\mathbb{C})$, let $\\sigma$ be a holomorphic involution of \n$G^\\mathbb{C}$, and let $\\nu_1$ and $\\nu_2$ be antiholomorphic involutions of \n$G^\\mathbb{C}$ satisfying $[\\sigma,\\nu_1]=[\\sigma,\\nu_2]=[\\nu_1,\\nu_2]=0$. \n Then we define $H^\\mathbb{C}$, $G_i$, $H_i$, $\\pi_i$ ($i=1,2$) and \n$\\frak{g}_2$ as follows: \n\\begin{enumerate}[(4.2.1)] \n\\item \n $H^\\mathbb{C}:=\\operatorname{Fix}(G^\\mathbb{C},\\sigma)$, \n\\item \n $G_i:=\\operatorname{Fix}(G^\\mathbb{C},\\nu_i)$, \n\\item \n $H_i:=\\operatorname{Fix}(G_i,\\sigma)=\\operatorname{Fix}(H^\\mathbb{C},\\nu_i)$, \n\\item \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$, \n\\item \n $\\frak{g}_2:=\\operatorname{Lie}G_2$. 
\n\\end{enumerate} \n Clearly, $(G^\\mathbb{C}\/H^\\mathbb{C},\\sigma)$ is an affine symmetric space, \nand both $G_1\/H_1$ and $G_2\/H_2$ are symmetric closed subspaces of \n$(G^\\mathbb{C}\/H^\\mathbb{C},\\sigma)$ (ref.\\ \\cite[p.\\ 227]{Ko-No} for the \ndefinition of symmetric closed subspace). \n In particular, $(G_i\/H_i,\\sigma|_{G_i})$, $i=1,2$, are affine symmetric \nspaces. \n\n\n\n\\subsection{The main result}\\label{subsec-4.3} \n With the notation in Subsections \\ref{subsec-4.1} and \\ref{subsec-4.2} we \nassert the following (see \\eqref{eq-3.3.1} for $(\\nu_1)_C$): \n\\begin{theorem}\\label{thm-4.3.1} \n Let \n$(\\eta_\\theta,\\tau_\\theta)=(\\eta_\\theta({\\bf x}),\\tau_\\theta({\\bf y}))\n \\in\\widetilde{\\mathcal{P}}_+(\\frak{g}_2)\\times\\widetilde{\\mathcal{P}}_-(\\frak{g}_2)$ \nbe a real analytic, para-pluriharmonic potential on $(\\mathbb{B}^{2n},I)$, \nand let \n$(f_2)_\\theta=\\pi_2\\circ C_\\theta({\\bf x},{\\bf y}):(W,I)\\to(G_2\/H_2,\\nabla^1)$ \ndenote the $\\mathbb{R}^+$-family of para-pluriharmonic maps constructed \nfrom $(\\eta_\\theta,\\tau_\\theta)$ in the neighborhood $W$ of \n$\\mathbb{B}^{2n}$ at $({\\bf 0},{\\bf 0})$ in Proposition $\\ref{prop-3.2.3}$. \n Suppose that $(\\eta_\\theta,\\tau_\\theta)$ satisfies the morphing condition \n\\[\\label{M}\\tag{M}\n d(\\nu_1)_C(\\eta_\\lambda({\\bf z}))=\\tau_\\lambda({\\bf \\bar{z}}). \n\\]\n Then, there exist an open neighborhood $V$ of $\\mathbb{A}^{2n}$ at \n$({\\bf 0},{\\bf 0})$ and a smooth map \n$h^\\mathbb{C}({\\bf z},{\\bf \\bar{z}}):V\\to H^\\mathbb{C}$ such that \n\\begin{enumerate} \n\\item[{\\rm (1)}] \n $C_\\lambda'({\\bf z},{\\bf \\bar{z}})\\in G_1$ for all \n$({\\bf z},{\\bf \\bar{z}};\\lambda)\\in V\\times S^1;$ \n\\item[{\\rm (2)}] \n $(f_1)_\\lambda:=\\pi_1\\circ C_\\lambda'({\\bf z},{\\bf \\bar{z}}):\n(V,J)\\to(G_1\/H_1,\\nabla^1)$ \nis an $S^1$-family of pluriharmonic maps with \n$C_\\lambda'({\\bf 0},{\\bf 0})\\equiv\\operatorname{id}$, where \n$C_\\lambda'({\\bf z},{\\bf \\bar{z}})\n :=C_\\lambda({\\bf z},{\\bf \\bar{z}})\\cdot h^\\mathbb{C}({\\bf z},{\\bf \\bar{z}})$. \n\\end{enumerate} \n\\end{theorem}\n \n \n\\begin{remark}\\label{rem-4.3.2} \n (i) Since both $\\eta_\\theta({\\bf x})$ and $\\tau_\\theta({\\bf y})$ are analytic on \n$\\mathbb{B}^{2n}$ and $\\mathbb{B}^{2n}$ is a totally real submanifold of \n$\\mathbb{C}^{2n}$, one can uniquely extend them as holomorphic $1$-forms \n$\\eta_\\theta({\\bf z})$ and $\\tau_\\theta({\\bf w})$ to an open subset \n$\\widetilde{W}$ of $\\mathbb{C}^{2n}$ such that \n$\\mathbb{B}^{2n}\\subset\\widetilde{W}$. \n For this reason, the notation $\\eta_\\lambda({\\bf z})$ and \n$\\tau_\\lambda({\\bf \\bar{z}})$ in Theorem \\ref{thm-4.3.1} makes sense.\\par\n \n (ii) Similarly, one can verify that the notation \n$C_\\lambda({\\bf z},{\\bf \\bar{z}})$, used in Theorem \\ref{thm-4.3.1}, makes \nsense. \n\\end{remark}\n\n\n\n\\begin{proof}[Proof of Theorem {\\rm \\ref{thm-4.3.1}}] \n Let \n$(A_\\lambda({\\bf x}),B_\\lambda({\\bf y}))\n =(C_\\lambda({\\bf x},{\\bf y}),C_\\lambda({\\bf x},{\\bf y}))\n \\cdot(B^+_\\lambda({\\bf x},{\\bf y}),B^-_\\lambda({\\bf x},{\\bf y}))$ \ndenote the Iwasawa decomposition in (S2) of Proposition \\ref{prop-3.2.3}. 
\n Note that $A_\\lambda({\\bf x})$ and $B_\\lambda({\\bf y})$ satisfy \n\\[ \n\\begin{array}{lll}\n (A_\\lambda^{-1}\\cdot dA_\\lambda)({\\bf x})=\\eta_\\lambda({\\bf x}), \n& (B_\\lambda^{-1}\\cdot dB_\\lambda)({\\bf y})=\\tau_\\lambda({\\bf y}), \n& A_\\lambda({\\bf 0})\\equiv\\operatorname{id}\\equiv B_\\lambda({\\bf 0}).\n\\end{array}\n\\] \n Since $(\\eta_\\lambda({\\bf x}),\\tau_\\lambda({\\bf y}))$ is analytic, we deduce \nthat $A_\\lambda({\\bf x})$, $B_\\lambda({\\bf y})$, $C_\\lambda({\\bf x},{\\bf y})$ \nand $B^\\pm_\\lambda({\\bf x},{\\bf y})$ are analytic with respect to the variables \n${\\bf x}$ and ${\\bf y}$. \n Therefore these matrices have unique analytic extensions \n$A_\\lambda({\\bf z})$, $B_\\lambda({\\bf w})$, $C_\\lambda({\\bf z},{\\bf w})$ and \n$B^\\pm_\\lambda({\\bf z},{\\bf w})$ to an open neighborhood $\\widetilde{W}$ of \n$\\mathbb{C}^{2n}$ at $({\\bf 0},{\\bf 0})$, respectively, because \n$\\mathbb{B}^{2n}$ is a totally real submanifold of $\\mathbb{C}^{2n}$. \n Then on the neighborhood $\\widetilde{W}\\cap\\mathbb{A}^{2n}$ of \n$\\mathbb{A}^{2n}$ at $({\\bf 0},{\\bf 0})$, we confirm that \n$A_\\lambda({\\bf z})$ and $B_\\lambda({\\bf \\bar{z}})$ satisfy \n$(A_\\lambda^{-1}\\cdot dA_\\lambda)({\\bf z})=\\eta_\\lambda({\\bf z})$, \n$(B_\\lambda^{-1}\\cdot dB_\\lambda)({\\bf \\bar{z}})=\\tau_\\lambda({\\bf \\bar{z}})$ \nand \n$A_\\lambda({\\bf 0})\\equiv\\operatorname{id}\\equiv B_\\lambda({\\bf 0})$; \nand furthermore, \n$(A_\\lambda({\\bf z}),B_\\lambda({\\bf \\bar{z}}))\n =(C_\\lambda({\\bf z},{\\bf \\bar{z}}),C_\\lambda({\\bf z},{\\bf \\bar{z}}))\n \\cdot(B^+_\\lambda({\\bf z},{\\bf \\bar{z}}),\n B^-_\\lambda({\\bf z},{\\bf \\bar{z}}))$ \nbecomes the Iwasawa decomposition in (S2) of Proposition \\ref{prop-3.3.2}, \nwhere we remark that $(\\eta_\\lambda({\\bf z}),\\tau_\\lambda({\\bf \\bar{z}}))$ \nsatisfy \n$(\\eta_\\lambda({\\bf z}),\\tau_\\lambda({\\bf \\bar{z}}))\n \\in\\mathcal{P}'(\\frak{g}^\\mathbb{C})\\times\\mathcal{P}''(\\frak{g}^\\mathbb{C})$ \nand $d(\\nu_1)_C(\\eta_\\lambda({\\bf z}))=\\tau_\\lambda({\\bf \\bar{z}})$. \n Consequently, the proof of Proposition \\ref{prop-3.3.2} assures that there \nexist an open neighborhood $V$ $\\subset\\widetilde{W}\\cap\\mathbb{A}^{2n}$ of \n$\\mathbb{A}^{2n}$ at $({\\bf 0},{\\bf 0})$ and a smooth map \n$h^\\mathbb{C}({\\bf z},{\\bf \\bar{z}}):V\\to H^\\mathbb{C}$ satisfying the \nconditions {\\rm (1)} and {\\rm (2)}. \n\\end{proof}\n\n\n\n\n\n\n\n\\section{Appendix}\\label{sec-5}\n We will interrelate concretely some pluriharmonic maps with para-pluriharmonic \nmaps by means of Theorem \\ref{thm-4.3.1}. \n In Subsection \\ref{subsec-5.2} we will focus on harmonic maps and Lorentz \nharmonic maps. \n This will yield a relation between CMC-surfaces in $\\mathbb{R}^3$ and \nCMC-surface in $\\mathbb{R}^3_1$. 
\n\n\\subsection{A relation between certain pluriharmonic maps and certain \npara-pluriharmonic maps}\\label{subsec-5.1} \n\n\\subsubsection{$f_1:\\mathbb{A}^4\\to Gr_{2,4}(\\mathbb{C})$ $\\Longleftrightarrow$ \n$f_2:\\mathbb{B}^4\\to Gr_{2,4}(\\mathbb{C}')$}\\label{subsec-5.1.1} \n Following the main result of this paper, we construct in this subsection a \npluriharmonic map \n$f_1(z^1,z^2,\\bar{z}^1,\\bar{z}^2):\\mathbb{A}^4\\to Gr_{2,4}(\\mathbb{C})$ and \na para-pluriharmonic map \n$f_2(x^1,x^2,y^1,y^2):\\mathbb{B}^4\\to Gr_{2,4}(\\mathbb{C}')$ from one potential \n\\eqref{eq-5.1.1} below, where $Gr_{2,4}(\\mathbb{C})$ (resp.\\ \n$Gr_{2,4}(\\mathbb{C}')$) denotes a complex (resp.\\ para-complex) Grassmann \nmanifold. \n In this subsection, we will use the following notation: \n\\begin{enumerate}\n\\item[(5.1.1)] \n $G^\\mathbb{C}=SL(4,\\mathbb{C})$, \n\\item[(5.1.2)] \n $\\sigma(A):=I_{2,2}\\cdot A\\cdot I_{2,2}$ for $A\\in G^\\mathbb{C}$, \nwhere $I_{2,2}:=\\operatorname{diag}(-1,-1,1,1)$, \n\\item[(5.1.3)] \n $\\nu_1(A):={}^t(\\overline{A})^{-1}$ for $A\\in G^\\mathbb{C}$, \n\\item[(5.1.4)] \n $\\nu_2(A):=\\overline{A}$ for $A\\in G^\\mathbb{C}$, \n\\item[(5.1.5)] \n $G^\\mathbb{C}\/H^\\mathbb{C}\n =SL(4,\\mathbb{C})\/S(GL(2,\\mathbb{C})\\times GL(2,\\mathbb{C}))$, \n\\item[(5.1.6)] \n $G_1\/H_1=SU(4)\/S(U(2)\\times U(2))\\simeq Gr_{2,4}(\\mathbb{C})$, \n\\item[(5.1.7)] \n $G_2\/H_2=SL(4,\\mathbb{R})\/S(GL(2,\\mathbb{R})\\times GL(2,\\mathbb{R}))\n \\simeq Gr_{2,4}(\\mathbb{C}')$, \n\\item[(5.1.8)] \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$ ($i=1,2$), \n\\item[(5.1.9)] \n $\\frak{g}_2:=\\operatorname{Lie}G_2=\\frak{sl}(4,\\mathbb{R})$. \n\\end{enumerate}\\par \n\\setcounter{equation}{9} \n\n First, we define a \n$\\widetilde{\\Lambda}_{-1,\\infty}(\\frak{g}_2)_\\sigma$-valued, real analytic \npara-holomorphic $1$-form $\\eta_\\theta(x^1,x^2)$ on $(\\mathbb{B}^4,I)$ by \n\\begin{equation}\\label{eq-5.1.1}\n\\eta_\\theta(x^1,x^2):=\n \\theta^{-1}\\begin{pmatrix}\n 0 & 0 & 0 & 1 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n -1 & 0 & 0 & 0 \n \\end{pmatrix}dx^1\n+\\theta^{-1}\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \n \\end{pmatrix}dx^2. \n\\end{equation}\n Taking the morphing condition \\eqref{M} of Theorem \\ref{thm-4.3.1} into \nconsideration, we define a \n$\\widetilde{\\Lambda}_{-\\infty,1}(\\frak{g}_2)_\\sigma$-valued, real analytic \npara-antiholomorphic $1$-form $\\tau_\\theta(y^1,y^2)$ on $(\\mathbb{B}^4,I)$ by \nsetting \n\\[\n\\tau_\\theta(y^1,y^2):=\n \\theta\\begin{pmatrix}\n 0 & 0 & 0 & 1 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n -1 & 0 & 0 & 0 \n \\end{pmatrix}dy^1\n+\\theta\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & -1 & 0 \\\\\n 0 & -1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \n \\end{pmatrix}dy^2. 
\n\\]\n Hence we obtain the real analytic, para-pluriharmonic potential \n$(\\eta_\\theta(x^1,x^2),\\tau_\\theta(y^1,y^2))$.\\par\n\n From Proposition \\ref{prop-3.2.3} we obtain a para-pluriharmonic map \n$f_2:(\\mathbb{B}^4,I)\\to G_2\/H_2\\simeq Gr_{2,4}(\\mathbb{C}')$.\\par\n \n (S1): Solve the two initial value problems: \n\\[\n\\begin{array}{lll}\n (A^-_\\theta)^{-1}\\cdot dA^-_\\theta=\\eta_\\theta, \n& (A^+_\\theta)^{-1}\\cdot dA^+_\\theta=\\tau_\\theta, \n& A^\\pm_\\theta(0,0)\\equiv\\operatorname{id}.\n\\end{array}\n\\] \n The solutions are \n\\[\n\\begin{array}{l}\n A^-_\\theta(x^1,x^2)\n=\\begin{pmatrix}\n \\cos(x^1\/\\theta) & 0 & 0 & \\sin(x^1\/\\theta) \\\\\n 0 & \\cosh(x^2\/\\theta) & \\sinh(x^2\/\\theta) & 0 \\\\\n 0 & \\sinh(x^2\/\\theta) & \\cosh(x^2\/\\theta) & 0 \\\\\n -\\sin(x^1\/\\theta) & 0 & 0 & \\cos(x^1\/\\theta) \\\\ \n \\end{pmatrix},\\\\\n A^+_\\theta(y^1,y^2)\n=\\begin{pmatrix}\n \\cos(\\theta y^1) & 0 & 0 & \\sin(\\theta y^1) \\\\\n 0 & \\cosh(-\\theta y^2) & \\sinh(-\\theta y^2) & 0 \\\\\n 0 & \\sinh(-\\theta y^2) & \\cosh(-\\theta y^2) & 0 \\\\\n -\\sin(\\theta y^1) & 0 & 0 & \\cos(\\theta y^1) \\\\ \n \\end{pmatrix}.\n\\end{array} \n\\]\n (S2): Factorize \n$(A^-_\\theta,A^+_\\theta)\n \\in\\widetilde{\\Lambda}^-_*(G_2)_\\sigma\n \\times\\widetilde{\\Lambda}^+_*(G_2)_\\sigma$ \nin the Iwasawa decomposition Theorem \\ref{thm-3.1.5}: \n\\[\n (A^-_\\theta,A^+_\\theta)=(C_\\theta,C_\\theta)\\cdot(B^+_\\theta,B^-_\\theta),\n\\] \nwhere $C_\\theta\\in\\widetilde{\\Lambda}(G_2)_\\sigma$, \n$B^+_\\theta\\in\\widetilde{\\Lambda}^+_*(G_2)_\\sigma$ and \n$B^-_\\theta\\in\\widetilde{\\Lambda}^-(G_2)_\\sigma$. \n Here, $B^\\pm_\\theta$ and $C_\\theta$ are given by \n$B^\\pm_\\theta=(A^\\pm_\\theta)^{-1}$ and \n\\begin{multline*}\n C_\\theta(x^1,x^2,y^1,y^2)\\\\\n=\\begin{pmatrix}\n \\cos(x^1\/\\theta+\\theta y^1) & 0 & 0 & \\sin(x^1\/\\theta+\\theta y^1) \\\\\n 0 & \\cosh(x^2\/\\theta-\\theta y^2) & \\sinh(x^2\/\\theta-\\theta y^2) & 0 \\\\\n 0 & \\sinh(x^2\/\\theta-\\theta y^2) & \\cosh(x^2\/\\theta-\\theta y^2) & 0 \\\\\n -\\sin(x^1\/\\theta+\\theta y^1) & 0 & 0 & \\cos(x^1\/\\theta+\\theta y^1) \\\\ \n \\end{pmatrix}.\n\\end{multline*} \n (S3): The last step of Proposition \\ref{prop-3.2.3} assures \n\\[\n\\mbox{\n $(f_2)_\\theta:=\\pi_2\\circ C_\\theta(x^1,x^2,y^1,y^2): \n (\\mathbb{B}^4,I)\\to Gr_{2,4}(\\mathbb{C}')$ \n is para-pluriharmonic}\n\\]\nfor every $\\theta\\in\\mathbb{R}^+$.\\par\n\n We will construct a pluriharmonic map \n$f_1:(\\mathbb{A}^4,J)\\to G_1\/H_1\\simeq Gr_{2,4}(\\mathbb{C})$ from \n$C_\\theta(x^1,x^2,y^1,y^2)$ given above. \n Substituting $\\lambda$, $z^i$ and $\\bar{z}^i$ for $\\theta$, $x^i$ and $y^i$, \nrespectively ($i=1,2$) we obtain \n\\begin{multline*}\n C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2)\\\\\n=\\begin{pmatrix}\n \\cos(z^1\/\\lambda+\\lambda\\bar{z}^1) & 0 & 0 & \\sin(z^1\/\\lambda+\\lambda\\bar{z}^1) \\\\\n 0 & \\cosh(z^2\/\\lambda-\\lambda\\bar{z}^2) & \\sinh(z^2\/\\lambda-\\lambda\\bar{z}^2) & 0 \\\\\n 0 & \\sinh(z^2\/\\lambda-\\lambda\\bar{z}^2) & \\cosh(z^2\/\\lambda-\\lambda\\bar{z}^2) & 0 \\\\\n -\\sin(z^1\/\\lambda+\\lambda\\bar{z}^1) & 0 & 0 & \\cos(z^1\/\\lambda+\\lambda\\bar{z}^1) \\\\ \n \\end{pmatrix}\n\\end{multline*}\nfor $C_\\theta(x^1,x^2,y^1,y^2)$. 
\n Then $C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2)\\in G_1=SU(4)$ for all \n$(z^1,z^2,\\bar{z}^1,\\bar{z}^2;\\lambda)\\in\\mathbb{A}^4\\times S^1$ because \n$z^1\/\\lambda+\\lambda\\bar{z}^1$ is a real number and \n$z^2\/\\lambda-\\lambda\\bar{z}^2$ is a purely imaginary number. \n Hence, we conclude that \n\\[\n\\mbox{\n $(f_1)_\\lambda:=\\pi_1\\circ C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2): \n (\\mathbb{A}^4,J)\\to Gr_{2,4}(\\mathbb{C})$ \n is a pluriharmonic map}\n\\]\nfor every $\\lambda\\in S^1$. \n Consequently, we have constructed a pluriharmonic map \n$f_1:\\mathbb{A}^4\\to Gr_{2,4}(\\mathbb{C})$ and a para-pluriharmonic map \n$f_2:\\mathbb{B}^4\\to Gr_{2,4}(\\mathbb{C}')$ from the potential \\eqref{eq-5.1.1}.\n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{130mm}\n$(f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2): \n (\\mathbb{A}^4,J)\\to Gr_{2,4}(\\mathbb{C})$ \nis pluriharmonic\\par\n\\centerline{$\\Updownarrow$} \n$(f_2)_\\theta=\\pi_2\\circ C_\\theta(x^1,x^2,y^1,y^2): \n (\\mathbb{B}^4,I)\\to Gr_{2,4}(\\mathbb{C}')$ \n is para-pluriharmonic \n\\end{minipage}\n}\n\\end{center}\n \n\n\n\n\n\\subsubsection{$f_1:\\mathbb{A}^4\\to S^4$ $\\Longleftrightarrow$ \n$f_2:\\mathbb{B}^4\\to H^4$}\\label{subsec-5.1.2} \n In this subsection, we will construct a pluriharmonic map \n$f_1(z^1,z^2,\\bar{z}^1,\\bar{z}^2):\\mathbb{A}^4\\to S^4$ and a para-pluriharmonic \nmap $f_2(x^1,x^2,y^1,y^2):\\mathbb{B}^4\\to H^4$ by arguments similar to those in \nSubsection \\ref{subsec-5.1.1}. \n Here $S^4$ and $H^4$ denote a sphere and a upper half space of dimension \n$4$, respectively. \n Henceforth, we will use the following notation: \n\\begin{enumerate} \n\\item[(5.1.11)] \n $G^\\mathbb{C}=Sp(2,\\mathbb{C})$ (see \\cite[p.\\ 445]{He} for \n$Sp(2,\\mathbb{C})$), \n\\item[(5.1.12)] \n $\\sigma(A):=K_{1,1}\\cdot A\\cdot K_{1,1}$ for $A\\in G^\\mathbb{C}$, \nwhere $K_{1,1}:=\\operatorname{diag}(-1,1,-1,1)$, \n\\item[(5.1.13)] \n $\\nu_1(A):={}^t(\\overline{A})^{-1}$ for $A\\in G^\\mathbb{C}$, \n\\item[(5.1.14)] \n $\\nu_2(A):=K_{1,1}\\cdot{}^t(\\overline{A})^{-1}\\cdot K_{1,1}$ for \n$A\\in G^\\mathbb{C}$, \n\\item[(5.1.15)] \n $G^\\mathbb{C}\/H^\\mathbb{C}\n =Sp(2,\\mathbb{C})\/(Sp(1,\\mathbb{C})\\times Sp(1,\\mathbb{C}))$, \n\\item[(5.1.16)] \n $G_1\/H_1=Sp(2)\/(Sp(1)\\times Sp(1))\\simeq S^4$, \n\\item[(5.1.17)] \n $G_2\/H_2=Sp(1,1)\/(Sp(1)\\times Sp(1))\\simeq H^4$, \n\\item[(5.1.18)] \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$ ($i=1,2$), \n\\item[(5.1.19)] \n $\\frak{g}_2:=\\operatorname{Lie}G_2=\\frak{sp}(1,1)$. \n\\end{enumerate}\n\\setcounter{equation}{19}\n\n Define a $\\widetilde{\\Lambda}_{-1,\\infty}(\\frak{g}_2)_\\sigma$-valued \npara-holomorphic $1$-form $\\eta_\\theta(x^1,x^2)$ on $(\\mathbb{B}^4,I)$ by \n\\[\n \\eta_\\theta(x^1,x^2):=\\theta^{-1}\n \\begin{pmatrix} \n 0 & 1 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 \\\\\n 0 & 0 & -1 & 0 \\\\\n \\end{pmatrix}dx^1 +\\theta\n \\begin{pmatrix} \n 0 & -1 & 0 & 0 \\\\\n -1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n 0 & 0 & 1 & 0 \\\\\n \\end{pmatrix}dx^2. 
\n\\]\n In view of the morphing condition \\eqref{M}, it is natural that one defines \na $\\widetilde{\\Lambda}_{-\\infty,1}(\\frak{g}_2)_\\sigma$-valued \npara-antiholomorphic $1$-form $\\tau_\\theta(y^1,y^2)$ as follows: \n\\[\n \\tau_\\theta(y^1,y^2):=\\theta\n \\begin{pmatrix} \n 0 & -1 & 0 & 0 \\\\\n -1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n 0 & 0 & 1 & 0 \\\\\n \\end{pmatrix}dy^1 +\\theta^{-1}\n \\begin{pmatrix} \n 0 & 1 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 \\\\\n 0 & 0 & -1 & 0 \\\\\n \\end{pmatrix}dy^2. \n\\]\n Let us solve the two initial value problems: \n$(A^-_\\theta)^{-1}\\cdot dA^-_\\theta=\\eta_\\theta$ and \n$(A^+_\\theta)^{-1}\\cdot dA^+_\\theta=\\tau_\\theta$ with \n$A^\\pm_\\theta(0,0)\\equiv\\operatorname{id}$, and factorize \n$(A^-_\\theta,A^+_\\theta)\\in\\widetilde{\\Lambda}(G_2)_\\sigma\n \\times\\widetilde{\\Lambda}(G_2)_\\sigma$ \nin the Iwasawa decomposition (cf.\\ Theorem \\ref{thm-3.1.5}): \n$(A^-_\\theta,A^+_\\theta)=(C_\\theta,C_\\theta)\\cdot(B^+_\\theta,B^-_\\theta)$,\nwhere $C_\\theta\\in\\widetilde{\\Lambda}(G_2)_\\sigma$, \n$B^+_\\theta\\in\\widetilde{\\Lambda}^+_*(G_2)_\\sigma$ and \n$B^-_\\theta\\in\\widetilde{\\Lambda}^-(G_2)_\\sigma$. \n Then, it follows that \n\\allowdisplaybreaks{\n\\begin{align*}\n&\nA^-_\\theta(x^1,x^2)\n =\\begin{pmatrix} \n \\cosh(\\frac{x^1-\\theta^2x^2}{\\theta}) & \\sinh(\\frac{x^1-\\theta^2x^2}{\\theta}) & 0 & 0 \\\\\n \\sinh(\\frac{x^1-\\theta^2x^2}{\\theta}) & \\cosh(\\frac{x^1-\\theta^2x^2}{\\theta}) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\frac{x^1-\\theta^2x^2}{\\theta}) & -\\sinh(\\frac{x^1-\\theta^2x^2}{\\theta}) \\\\\n 0 & 0 & -\\sinh(\\frac{x^1-\\theta^2x^2}{\\theta}) & \\cosh(\\frac{x^1-\\theta^2x^2}{\\theta}) \\\\\n \\end{pmatrix},\\\\\n&\nA^+_\\theta(y^1,y^2)\n =\\begin{pmatrix} \n \\cosh(\\frac{\\theta^2y^1-y^2}{\\theta}) & -\\sinh(\\frac{\\theta^2y^1-y^2}{\\theta}) & 0 & 0 \\\\\n -\\sinh(\\frac{\\theta^2y^1-y^2}{\\theta}) & \\cosh(\\frac{\\theta^2y^1-y^2}{\\theta}) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\frac{\\theta^2y^1-y^2}{\\theta}) & \\sinh(\\frac{\\theta^2y^1-y^2}{\\theta}) \\\\\n 0 & 0 & \\sinh(\\frac{\\theta^2y^1-y^2}{\\theta}) & \\cosh(\\frac{\\theta^2y^1-y^2}{\\theta}) \\\\\n \\end{pmatrix},\\\\\n&\nB^+_\\theta(x^2,y^1)\\\\\n&\n =\\begin{pmatrix} \n \\cosh(\\theta(x^2-y^1)) & -\\sinh(\\theta(x^2-y^1)) & 0 & 0 \\\\\n -\\sinh(\\theta(x^2-y^1)) & \\cosh(\\theta(x^2-y^1)) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\theta(x^2-y^1)) & \\sinh(\\theta(x^2-y^1)) \\\\\n 0 & 0 & \\sinh(\\theta(x^2-y^1)) & \\cosh(\\theta(x^2-y^1)) \\\\\n \\end{pmatrix},\\\\\n&\nB^-_\\theta(x^1,y^2)\n =\\begin{pmatrix} \n \\cosh(\\frac{x^1-y^2}{\\theta}) & -\\sinh(\\frac{x^1-y^2}{\\theta}) & 0 & 0 \\\\\n -\\sinh(\\frac{x^1-y^2}{\\theta}) & \\cosh(\\frac{x^1-y^2}{\\theta}) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\frac{x^1-y^2}{\\theta}) & \\sinh(\\frac{x^1-y^2}{\\theta}) \\\\\n 0 & 0 & \\sinh(\\frac{x^1-y^2}{\\theta}) & \\cosh(\\frac{x^1-y^2}{\\theta}) \\\\\n \\end{pmatrix},\\\\ \n&\nC_\\theta(x^1,x^2,y^1,y^2)=\n \\begin{pmatrix} \n \\cosh(\\frac{x^1-\\theta^2y^1}{\\theta}) & \\sinh(\\frac{x^1-\\theta^2y^1}{\\theta}) & 0 & 0 \\\\\n \\sinh(\\frac{x^1-\\theta^2y^1}{\\theta}) & \\cosh(\\frac{x^1-\\theta^2y^1}{\\theta}) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\frac{x^1-\\theta^2y^1}{\\theta}) & -\\sinh(\\frac{x^1-\\theta^2y^1}{\\theta}) \\\\\n 0 & 0 & -\\sinh(\\frac{x^1-\\theta^2y^1}{\\theta}) & \\cosh(\\frac{x^1-\\theta^2y^1}{\\theta}) \\\\\n \\end{pmatrix}.\n\\end{align*}\n} Substitute $\\lambda$, $z^i$ and $\\bar{z}^i$ for $\\theta$, $x^i$ and $y^i$ \n($i=1,2$), 
respectively: \n\\[\n C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2)\n=\\begin{pmatrix}\n \\cosh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & \\sinh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & 0 & 0 \\\\\n \\sinh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & \\cosh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & 0 & 0 \\\\\n 0 & 0 & \\cosh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & -\\sinh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) \\\\\n 0 & 0 & -\\sinh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) & \\cosh(\\frac{z^1-\\lambda^2\\bar{z}^1}{\\lambda}) \\\\\n \\end{pmatrix}\n\\] \nfor $C_\\theta(x^1,x^2,y^1,y^2)$. \n Since $(z^1-\\lambda^2\\bar{z}^1)\/\\lambda$ is a purely imaginary number, one \nsees that $C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2)\\in G_1=Sp(2)$ for all \n$(z^1,z^2,\\bar{z}^1,\\bar{z}^2;\\lambda)\\in\\mathbb{A}^4\\times S^1$. \n Accordingly, we obtain a pluriharmonic map $f_1$ and a para-pluriharmonic \nmap $f_2$, \n\\[\n\\begin{array}{ll} \n (f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2):\n (\\mathbb{A}^4,J)\\longrightarrow G_1\/H_1\\simeq S^4, \n& \\lambda\\in S^1,\\\\\n (f_2)_\\theta=\\pi_2\\circ C_\\theta(x^1,x^2,y^1,y^2):\n (\\mathbb{B}^4,I)\\longrightarrow G_2\/H_2\\simeq H^4, \n& \\theta\\in\\mathbb{R}^+\n\\end{array}\n\\]\n(ref.\\ Subsection \\ref{subsec-5.1.1}). \n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{130mm}\n$(f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z^1,z^2,\\bar{z}^1,\\bar{z}^2):\n (\\mathbb{A}^4,J)\\to S^4$ \nis pluriharmonic\\par\n\\centerline{$\\Updownarrow$} \n$(f_2)_\\theta=\\pi_2\\circ C_\\theta(x^1,x^2,y^1,y^2):(\\mathbb{B}^4,I)\\to H^4$ \n is para-pluriharmonic \n\\end{minipage}\n}\n\\end{center}\n \n\n\n\n\n\n\\subsection{Harmonic maps, Lorentz harmonic maps and CMC-surfaces}\n\\label{subsec-5.2} \n In this subsection we will interrelate some harmonic maps \n$f_1(z,\\bar{z}):\\mathbb{A}^2\\to G_1\/H_1$ with Lorentz harmonic maps \n$f_2(x,y):\\mathbb{B}^2\\to G_2\/H_2$ by means of Theorem \\ref{thm-4.3.1}; and in \naddition, we will interrelate CMC-surfaces with other CMC-surfaces in \n$\\mathbb{R}^3$ or $\\mathbb{R}^3_1$, by use of $f_1(z,\\bar{z})$ and \n$f_2(x,y)$. \n More precisely, we interrelate a cylinder in $\\mathbb{R}^3$ with a \nhyperbolic cylinder in $\\mathbb{R}^3_1$ (cf.\\ Subsection \\ref{subsec-5.2.1}), \na two sheeted hyperboloid in $\\mathbb{R}^3_1$ with a one sheeted hyperboloid \nin $\\mathbb{R}^3_1$ (cf.\\ Subsection \\ref{subsec-5.2.2}), a sphere in \n$\\mathbb{R}^3$ with a one sheeted hyperboloid in $\\mathbb{R}^3_1$ (cf.\\ \nSubsection \\ref{subsec-5.2.3}), a Smyth surface in $\\mathbb{R}^3$ with a \ntimelike Smyth surface in $\\mathbb{R}^3_1$ (cf.\\ Subsection \n\\ref{subsec-5.2.4}), and a Delaunay surface in $\\mathbb{R}^3$ with a \n$K$-surface of revolution in $\\mathbb{R}^3$ (cf.\\ Subsection \n\\ref{subsec-5.2.5}). 
\n \n\n\n\n\\subsubsection{Cylinder in $\\mathbb{R}^3$ $\\Leftrightarrow$ Hyperbolic cylinder \nin $\\mathbb{R}^3_1$}\\label{subsec-5.2.1} \n In this subsection we will use the following notation: \n\\begin{enumerate}[(5.2.1)]\n\\item \n $G^\\mathbb{C}=SL(2,\\mathbb{C})$, \n\\item \n $\\sigma(A):=I_{1,1}\\cdot A\\cdot I_{1,1}$ for $A\\in G^\\mathbb{C}$, where \n$I_{1,1}:=\\operatorname{diag}(-1,1)$, \n\\item \n $\\nu_1(A):={}^t(\\overline{A})^{-1}$ for $A\\in G^\\mathbb{C}$, \n\\item \n $\\nu_2(A):=\\overline{A}$ for $A\\in G^\\mathbb{C}$,\n\\item \n $G^\\mathbb{C}\/H^\\mathbb{C}\n=SL(2,\\mathbb{C})\/S(GL(1,\\mathbb{C})\\times GL(1,\\mathbb{C}))$,\n\\item \n $G_1\/H_1=SU(2)\/S(U(1)\\times U(1))\\simeq S^2$, \n\\item \n $G_2\/H_2=SL(2,\\mathbb{R})\/S(GL(1,\\mathbb{R})\\times GL(1,\\mathbb{R}))\n \\simeq S^2_1$, \n\\item \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$ ($i=1,2$), \n\\item \n $\\frak{g}_2:=\\operatorname{Lie}G_2=\\frak{sl}(2,\\mathbb{R})$. \n\\end{enumerate}\n\\setcounter{equation}{9} \n\n We will construct a harmonic map $f_1:(\\mathbb{A}^2,J)\\to S^2$ and a Lorentz \nharmonic map $f_2:(\\mathbb{B}^2,I)\\to S^2_1$ by means of Theorem \\ref{thm-4.3.1}; \nand moreover, a cylinder in $\\mathbb{R}^3$ and a hyperbolic cylinder in \n$\\mathbb{R}^3_1$ from $f_1$ and $f_2$, respectively. \n\n In the first place, we define a \n$\\widetilde{\\Lambda}_{-1,\\infty}(\\frak{g}_2)_\\sigma$-valued, real analytic \npara-holomorphic $1$-form $\\eta_\\theta(x)$ on $(\\mathbb{B}^2,I)$ by \n\\begin{equation}\\label{eq-5.2.1}\n \\eta_\\theta(x):=\\theta^{-1}\n \\begin{pmatrix} \n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix}dx. \n\\end{equation}\n In the second place, we define a \n$\\widetilde{\\Lambda}_{-\\infty,1}(\\frak{g}_2)_\\sigma$-valued \npara-antiholomorphic $1$-form $\\tau_\\theta(y)$ on $(\\mathbb{B}^2,I)$ by taking \nthe morphing condition \\eqref{M} in Theorem \\ref{thm-4.3.1} into \nconsideration, i.e., \n\\[\n \\tau_\\theta(y):=\\theta\n \\begin{pmatrix} \n 0 & -1 \\\\\n -1 & 0\n \\end{pmatrix}dy. \n\\] \n In the third place, let us solve the two initial value problems: \n$A_\\theta^{-1}\\cdot dA_\\theta=\\eta_\\theta(x)$, \n$B_\\theta^{-1}\\cdot dB_\\theta=\\tau_\\theta(y)$ and \n$A_\\theta(0)\\equiv\\operatorname{id}\\equiv B_\\theta(0)$. \n In this case, one can obtain \n\\[\n\\begin{array}{ll}\n A_\\theta(x)\n=\\begin{pmatrix}\n \\cosh(\\theta^{-1}x) & \\sinh(\\theta^{-1}x) \\\\\n \\sinh(\\theta^{-1}x) & \\cosh(\\theta^{-1}x) \n \\end{pmatrix}, \n& \n B_\\theta(y)\n=\\begin{pmatrix}\n \\cosh(-\\theta y) & \\sinh(-\\theta y) \\\\\n \\sinh(-\\theta y) & \\cosh(-\\theta y) \n \\end{pmatrix}\n\\end{array} \n\\] \nand the Iwasawa decomposition: \n$(A_\\theta(x),B_\\theta(y))=(C_\\theta(x,y),C_\\theta(x,y))\n \\cdot(B^+_\\theta(x,y),B^-_\\theta(x,y))$, \nwhere $B^+_\\theta(x,y):=B_\\theta(y)^{-1}\\in\\widetilde{\\Lambda}^+_*(G_2)_\\sigma$ \nand $B^-_\\theta(x,y):=A_\\theta(x)^{-1}\\in\\widetilde{\\Lambda}^-_*(G_2)_\\sigma$. \n Here $C_\\theta(x,y)$ is given as follows: \n\\begin{equation}\\label{eq-5.2.2}\n C_\\theta(x,y)\n=\\begin{pmatrix}\n \\cosh(\\theta^{-1}x-\\theta y) & \\sinh(\\theta^{-1}x-\\theta y) \\\\\n \\sinh(\\theta^{-1}x-\\theta y) & \\cosh(\\theta^{-1}x-\\theta y) \n \\end{pmatrix}. 
\n\\end{equation}\n This $C_\\theta(x,y)$ provides us with an $\\mathbb{R}^+$-family of Lorentz \nharmonic maps \n\\[\n\\begin{array}{ll}\n (f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y):(\\mathbb{B}^2,I)\n \\longrightarrow G_2\/H_2\\simeq S^2_1, \n& \\theta\\in\\mathbb{R}^+ \n\\end{array} \n\\] \n(cf.\\ Proposition \\ref{prop-3.2.3}). \n In the fourth place, we substitute $\\lambda$, $z$ and $\\bar{z}$ for $\\theta$, \n$x$ and $y$, respectively: \n\\begin{equation}\\label{eq-5.2.3}\n C_\\lambda(z,\\bar{z})\n=\\begin{pmatrix}\n \\cosh(\\lambda^{-1}z-\\lambda\\bar{z}) & \\sinh(\\lambda^{-1}z-\\lambda\\bar{z}) \\\\\n \\sinh(\\lambda^{-1}z-\\lambda\\bar{z}) & \\cosh(\\lambda^{-1}z-\\lambda\\bar{z}) \n \\end{pmatrix}\n\\end{equation}\nfor $C_\\theta(x,y)$. \n Remark that $C_\\lambda(z,\\bar{z})\\in G_1=SU(2)$ for all \n$(z,\\bar{z};\\lambda)\\in \\mathbb{A}^2\\times S^1$ because \n$(\\lambda^{-1}z-\\lambda\\bar{z})$ is a purely imaginary number. \n As a consequence, one can construct a harmonic map $f_1$ and a Lorentz \nharmonic map $f_2$, \n\\[\n\\begin{array}{l@{\\,}ll} \n (f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z,\\bar{z})&:(\\mathbb{A}^2,J)\n \\longrightarrow G_1\/H_1\\simeq S^2, \n& \\lambda\\in S^1,\\\\\n (f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y)&:(\\mathbb{B}^2,I)\n \\longrightarrow G_2\/H_2\\simeq S^2_1, \n& \\theta\\in\\mathbb{R}^+,\\\\ \n\\end{array}\n\\]\nfrom the potential \\eqref{eq-5.2.1} $\\eta_\\theta(x)$. \n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{100mm}\n$(f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z,\\bar{z}):(\\mathbb{A}^2,J)\\to S^2$ \nis harmonic\\par\n\\centerline{$\\Updownarrow$} \n$(f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y):(\\mathbb{B}^2,I)\\to S^2_1$ \nis Lorentz harmonic \n\\end{minipage}\n}\n\\end{center}\n Here $C_\\lambda(z,\\bar{z})$ and $C_\\theta(x,y)$ are given by \n\\eqref{eq-5.2.3} and \\eqref{eq-5.2.2}, respectively.\\par \n\n Note that we have constructed the extended framing \n$C_\\lambda(z,\\bar{z}):\\mathbb{A}^2\\to S^2$ of a harmonic map and the extended \nframing $C_\\theta(x,y):\\mathbb{B}^2\\to S^2_1$ of a Lorentz harmonic map. \n For this reason, the Sym-Bobenko formula will enable us to obtain a \nCMC-surface $\\phi_1(z,\\bar{z}):\\mathbb{A}^2\\to\\mathbb{R}^3$ and a timelike \nCMC-surface $\\phi_2(x,y):\\mathbb{B}^2\\to\\mathbb{R}^3_1$ from them. \n\n For $C_\\lambda(z,\\bar{z})$, the Sym-Bobenko formula in \n\\cite[p.\\ 30]{Fu-Ko-Ro} yields \n\\[\n\\begin{split}\n \\phi_1(z,\\bar{z}):\n&=-\n \\left\\{i\\cdot\\lambda\\cdot\\frac{\\partial C_\\lambda}{\\partial\\lambda}\n \\cdot C_\\lambda^{-1} \n +\\frac{1}{2}\\cdot\\operatorname{Ad}(C_\\lambda)\\cdot\n \\begin{pmatrix}\n i & 0 \\\\\n 0 & -i \\\\\n \\end{pmatrix}\\right\\}\\bigg|_{\\lambda=1}\\\\\n&=\\frac{-i}{2}\n \\begin{pmatrix} \n \\cosh 2(z-\\bar{z}) & -2(z+\\bar{z})-\\sinh 2(z-\\bar{z})\\\\\n -2(z+\\bar{z})+\\sinh 2(z-\\bar{z}) & -\\cosh 2(z-\\bar{z})\n \\end{pmatrix}\\\\\n&\\simeq (-2(z+\\bar{z}),i\\cdot\\sinh 2(z-\\bar{z}),-\\cosh 2(z-\\bar{z})). \n\\end{split}\n\\] \n This CMC-surface $\\phi_1(z,\\bar{z}):\\mathbb{A}^2\\to\\mathbb{R}^3$ is a \ncylinder. 
\n For $C_\\theta(x,y)$, the Sym-Bobenko formula in \\cite{Do-In-To}\\footnote{\nWe must change $(\\partial \\Phi\/\\partial t)$ into $-(\\partial \\Phi\/\\partial t)$ \nin the Sym-Bobenko formula \\cite[Proposition 5.1]{Do-In-To} because the \nparameter $\\lambda$ in this paper corresponds to the parameter $\\lambda^{-1}$ \nin \\cite{Do-In-To}.} is given as follows: \n\\[\n\\begin{split}\n \\phi_2(x,y):\n&=-2\n \\left\\{-\\theta\\cdot\\frac{\\partial C_\\theta}{\\partial\\theta}\n \\cdot C_\\theta^{-1} \n +\\frac{1}{2}\\cdot\\operatorname{Ad}(C_\\theta)\\cdot\n \\begin{pmatrix}\n -1 & 0 \\\\\n 0 & 1 \\\\\n \\end{pmatrix}\\right\\}\\bigg|_{\\theta=1}\\\\\n&=\n \\begin{pmatrix} \n \\cosh 2(x-y) & -2(x+y)-\\sinh 2(x-y)\\\\\n -2(x+y)+\\sinh 2(x-y) & -\\cosh 2(x-y)\n \\end{pmatrix}\\\\\n&\\simeq(\\sinh 2(x-y),-2(x+y),-\\cosh 2(x-y)). \n\\end{split}\n\\] \n This timelike CMC-surface $\\phi_2(x,y):\\mathbb{B}^2\\to\\mathbb{R}^3_1$ is a \nhyperbolic cylinder, because \n$-(\\sinh 2(x-y))^2+(-2(x+y))^2+(-\\cosh 2(x-y))^2=4(x+y)^2+1$ (see Section 1.1 \nin \\cite{Do-In-To} for the metric on $\\mathbb{R}^3_1$). \n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{95mm}\nCMC-surface in $\\mathbb{R}^3$: Cylinder\\\\\n$\\phi_1(z,\\bar{z})=\n (-2(z+\\bar{z}),i\\cdot\\sinh 2(z-\\bar{z}),-\\cosh 2(z-\\bar{z}))$\\par\n\\centerline{$\\Updownarrow$} \nTimelike CMC-surface in $\\mathbb{R}^3_1$: Hyperbolic cylinder\\\\ \n $\\phi_2(x,y)=(\\sinh 2(x-y),-2(x+y),-\\cosh 2(x-y))$ \n\\end{minipage}\n}\n\\end{center}\n\n\n\n\n\n\\subsubsection{Two sheeted hyperboloid in $\\mathbb{R}^3_1$ $\\Leftrightarrow$ \nOne sheeted hyperboloid in $\\mathbb{R}^3_1$}\\label{subsec-5.2.2} \n In this subsection we will use the following notation: \n\\begin{enumerate}\n\\item[(5.2.13)] \n $G^\\mathbb{C}$: the same notation (5.2.1) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.14)] \n $\\sigma$: the same notation (5.2.2) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.15)] \n $\\nu_1(A):=I_{1,1}\\cdot{}^t(\\overline{A})^{-1}\\cdot I_{1,1}$ for \n$A\\in G^\\mathbb{C}$, \n\\item[(5.2.16)] \n $\\nu_2(A):=I_{1,1}\\cdot\\overline{A}\\cdot I_{1,1}$ for $A\\in G^\\mathbb{C}$,\n\\item[(5.2.17)] \n $G^\\mathbb{C}\/H^\\mathbb{C}$: the same notation (5.2.5) as in Subsection \n\\ref{subsec-5.2.1}, \n\\item[(5.2.18)] \n $G_1\/H_1=SU(1,1)\/S(U(1)\\times U(1))\\simeq H^2$, \n\\item[(5.2.19)] \n $G_2\/H_2=SL_*(2,\\mathbb{R})\/S(GL(1,\\mathbb{R})\\times GL(1,\\mathbb{R}))\n\\simeq S^2_1$, \n\\item[(5.2.20)] \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$ ($i=1,2$), \n\\item[(5.2.21)] \n $\\frak{g}_2:=\\operatorname{Lie}G_2=\\frak{sl}_*(2,\\mathbb{R})$. \n\\end{enumerate}\nwhere the above notation $SL_*(2,\\mathbb{R})$ and $\\frak{sl}_*(2,\\mathbb{R})$ \nare the same as those in \\cite{Ko}. \n\\setcounter{equation}{21}\n\n The arguments below are similar to those in Subsection \\ref{subsec-5.2.1}. \n Define a $\\widetilde{\\Lambda}_{-1,\\infty}(\\frak{g}_2)_\\sigma$-valued \nanalytic para-holomorphic $1$-form $\\eta_\\theta(x)$ on $(\\mathbb{B}^2,I)$ by \n\\begin{equation}\\label{eq-5.2.4}\n\\begin{array}{ll}\n \\eta_\\theta(x):=\\theta^{-1}\n \\begin{pmatrix} \n 0 & i \\\\\n 0 & 0\n \\end{pmatrix}dx. 
\n\\end{array} \n\\end{equation} \n We want $\\tau_\\theta(y)\\in\\widetilde{\\mathcal{P}}^-(\\frak{g}_2)$ to satisfy \nthe morphing condition \\eqref{M} in Theorem \\ref{thm-4.3.1}; and therefore we \ndefine $\\tau_\\theta(y)$ as follows: \n\\[ \n \\tau_\\theta(y):=\\theta\n \\begin{pmatrix} \n 0 & 0 \\\\\n -i & 0\n \\end{pmatrix}dy.\n\\] \n Solve the two initial value problems: \n$A_\\theta^{-1}\\cdot dA_\\theta=\\eta_\\theta(x)$, \n$B_\\theta^{-1}\\cdot dB_\\theta=\\tau_\\theta(y)$ and \n$A_\\theta(0)\\equiv\\operatorname{id}\\equiv B_\\theta(0)$. \n Then one has \n\\[\n\\begin{array}{ll}\n A_\\theta(x)\n=\\begin{pmatrix}\n 1 & i\\theta^{-1}x \\\\\n 0 & 1 \n \\end{pmatrix}, \n& \n B_\\theta(y)\n=\\begin{pmatrix}\n 1 & 0 \\\\\n -i\\theta y & 1 \n \\end{pmatrix}.\n\\end{array} \n\\] \n Let us factorize \n$(A_\\theta,B_\\theta)\\in\\widetilde{\\Lambda}(G_2)_\\sigma\\times\n \\widetilde{\\Lambda}(G_2)_\\sigma$ \nin the Iwasawa decomposition around $(0,0)$: \n$(A_\\theta,B_\\theta)=(C_\\theta,C_\\theta)\\cdot(B^+_\\theta,B^-_\\theta)$, \n$C_\\theta\\in\\widetilde{\\Lambda}(G_2)_\\sigma$ and \n$B^\\pm_\\theta\\in\\widetilde{\\Lambda}^\\pm(G_2)_\\sigma$ (cf.\\ Theorem \n\\ref{thm-3.1.5}). \n Here $B^\\pm_\\theta$ and $C_\\theta$ are given as follows: \n\\[\n\\begin{array}{ll}\n B^+_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1-xy}}}\n \\begin{pmatrix}\n 1 & 0 \\\\\n i\\theta y & 1-xy \n \\end{pmatrix}, \n& \n B^-_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1-xy}}}\n \\begin{pmatrix}\n 1-xy & -i\\theta^{-1} x \\\\\n 0 & 1 \n \\end{pmatrix}, \n\\end{array} \n\\] \n\\begin{equation}\\label{eq-5.2.5}\n C_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1-xy}}}\n \\begin{pmatrix}\n 1 & i\\theta^{-1}x \\\\\n -i\\theta y & 1 \n \\end{pmatrix}. \n\\end{equation}\n From $C_\\theta(x,y)$ one obtains an $\\mathbb{R}^+$-family of Lorentz \nharmonic maps \n\\[\n (f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y):(W,I)\n \\longrightarrow G_2\/H_2\\simeq S^2_1, \n\\] \nwhere $W:=\\{(x,y)\\in\\mathbb{B}^2\\,|\\, xy\\neq 1\\}$. \n Substituting $\\lambda$, $z$ and $\\bar{z}$ for $\\theta$, $x$ and $y$, \nrespectively, we have \n\\begin{equation}\\label{eq-5.2.6}\n C_\\lambda(z,\\bar{z})\n={\\displaystyle \\frac{1}{\\sqrt{1-|z|^2}}}\n \\begin{pmatrix}\n 1 & i\\lambda^{-1} z \\\\\n -i\\lambda\\bar{z} & 1 \n \\end{pmatrix} \n\\end{equation}\nfor $C_\\theta(x,y)$. \n It is obvious that $C_\\lambda(z,\\bar{z})\\in G_1=SU(1,1)$ for all \n$(z,\\bar{z};\\lambda)\\in V\\times S^1$, where $V:=\\mathbb{A}^2\\setminus S^1$. \n Consequently, one can get a harmonic map $f_1(z,\\bar{z})$ and a Lorentz \nharmonic map $f_2(x,y)$, \n\\[\n\\begin{array}{ll} \n (f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z,\\bar{z}):(V,J)\n \\longrightarrow G_1\/H_1\\simeq H^2, \n& \\lambda\\in S^1,\\\\\n (f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y):(W,I)\n \\longrightarrow G_2\/H_2\\simeq S^2_1, \n& \\theta\\in\\mathbb{R}^+,\n\\end{array}\n\\]\nfrom the potential \\eqref{eq-5.2.4}. 
\n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{100mm}\n$(f_1)_\\lambda=\\pi_1\\circ C_\\lambda(z,\\bar{z}):(V,J)\\to H^2$ is harmonic\\par\n\\centerline{$\\Updownarrow$} \n$(f_2)_\\theta=\\pi_2\\circ C_\\theta(x,y):(W,I)\\to S^2_1$ is Lorentz harmonic \n\\end{minipage}\n}\n\\end{center}\n Here $C_\\lambda(z,\\bar{z})$ and $C_\\theta(x,y)$ are given by \n\\eqref{eq-5.2.6} and \\eqref{eq-5.2.5}, respectively; and \n$V=\\mathbb{A}^2\\setminus S^1$ and $W=\\{(x,y)\\in\\mathbb{B}^2\\,|\\, xy\\neq 1\\}$.\\par \n\n Now, let us obtain a spacelike CMC-surface \n$\\phi_1(z,\\bar{z}):V\\to\\mathbb{R}^3_1$ and \na timelike CMC-surface $\\phi_2(x,y):W\\to\\mathbb{R}^3_1$ from the above \n$f_1(z,\\bar{z})$ and $f_2(x,y)$, respectively.\\par\n \n On the one hand, the Sym-Bobenko formula in \\cite{Br-Ro-Sc}, together with \n\\eqref{eq-5.2.6}, gives us \n\\[\n\\begin{split}\n \\phi_1(z,\\bar{z}):\n&=-\n \\left\\{i\\cdot\\lambda\\cdot\\frac{\\partial C_\\lambda}{\\partial\\lambda}\n \\cdot C_\\lambda^{-1} \n +\\frac{1}{2}\\cdot\\operatorname{Ad}(C_\\lambda)\\cdot\n \\begin{pmatrix}\n i & 0 \\\\\n 0 & -i \\\\\n \\end{pmatrix}\\right\\}\\bigg|_{\\lambda=1}\\\\\n&=\n \\begin{pmatrix} \n -i(1+3|z|^2)\/2(1-|z|^2) & -2z\/(1-|z|^2)\\\\\n -2\\bar{z}\/(1-|z|^2) & i(1+3|z|^2)\/2(1-|z|^2)\n \\end{pmatrix}. \n\\end{split}\n\\]\n Thus we have a spacelike CMC-surface in $\\mathbb{R}^3_1$, \n\\[\n\\begin{array}{ll}\n \\phi_1:V\\longrightarrow\\mathbb{R}^3_1, \n& {\\displaystyle \n(z,\\bar{z})\\mapsto \n \\Bigl(-\\frac{z+\\bar{z}}{1-|z|^2},\n \\frac{i(z-\\bar{z})}{1-|z|^2},\n -\\frac{1+3|z|^2}{2(1-|z|^2)}\\Bigr)}\n\\end{array}\n\\]\n(cf.\\ Subsection 3.2.1 in \\cite{Br-Ro-Sc}). \n This $\\phi_1(z,\\bar{z})$ is a two sheeted hyperboloid centered at \n$(0,0,1\/2)$ because \n\\[\n \\Bigl(-\\frac{z+\\bar{z}}{1-|z|^2}\\Bigr)^2\n +\\Bigl(\\frac{i(z-\\bar{z})}{1-|z|^2}\\Bigr)^2 \n -\\Bigl(-\\frac{1+3|z|^2}{2(1-|z|^2)}-\\frac{1}{2}\\Bigr)^2\n =-1\n\\]\n(see Subsection 3.2.1 in \\cite{Br-Ro-Sc} for the metric on $\\mathbb{R}^3_1$).\n One the other hand, the Sym-Bobenko formula in \\cite{Ko}, combined with \n\\eqref{eq-5.2.5}, gives us \n\\[\n\\begin{split}\n \\phi_2(x,y)\n&:=-{\\displaystyle \\frac{1}{2}}\n \\left\\{\\theta\\cdot\\frac{\\partial C_\\theta}{\\partial\\theta}\n \\cdot C_\\theta^{-1} \n +\\frac{1}{2}\\cdot\\operatorname{Ad}(C_\\theta)\\cdot\n \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & -1 \\\\\n \\end{pmatrix}\\right\\}\\bigg|_{\\theta=1}\\\\\n&=\n \\begin{pmatrix} \n -(1+3xy)\/4(1-xy) & ix\/(1-xy)\\\\\n iy\/(1-xy) & (1+3xy)\/4(1-xy)\n \\end{pmatrix} \n\\end{split}\n\\]\n(ref.\\ Proof of Corollary 3.4 in \\cite{Ko}). \n Then it turns out that \n\\[\n\\begin{array}{ll}\n \\phi_2:W\\longrightarrow\\mathbb{R}^3_1, \n& {\\displaystyle \n (x,y)\\mapsto \n \\Bigl(-\\frac{x+y}{1-xy},-\\frac{x-y}{1-xy},-\\frac{1+3xy}{2(1-xy)}\\Bigr)}\n\\end{array}\n\\]\n(cf.\\ Subsection 3.1 in \\cite{Ko}). \n This $\\phi_2(x,y)$ is a one sheeted hyperboloid centered at $(0,0,1\/2)$. \n Indeed, we deduce\n\\[\n\\Bigl(-\\frac{x+y}{1-xy}\\Bigr)^2\n -\\Bigl(-\\frac{x-y}{1-xy}\\Bigr)^2\n -\\Bigl(-\\frac{1+3xy}{2(1-xy)}-\\frac{1}{2}\\Bigr)^2\n =-1\n\\]\nby a direct computation (see Remark 3.2 in \\cite{Ko} for the metric on \n$\\mathbb{R}^3_1$). 
\n\n\\begin{center}\n\\fbox{\n\\begin{minipage}{105mm}\nSpacelike CMC-surface in $\\mathbb{R}^3_1$: Two sheeted hyperboloid\\\\\n$\\phi_1(z,\\bar{z})=\\bigl(-\\frac{z+\\bar{z}}{1-|z|^2},\n \\frac{i(z-\\bar{z})}{1-|z|^2},-\\frac{1+3|z|^2}{2(1-|z|^2)}\\bigr)$\\par\n\\centerline{$\\Updownarrow$} \nTimelike CMC-surface in $\\mathbb{R}^3_1$: One sheeted hyperboloid \\\\ \n$\\phi_2(x,y)\n =\\bigl(-\\frac{x+y}{1-xy},-\\frac{x-y}{1-xy},-\\frac{1+3xy}{2(1-xy)}\\bigr)$\n\\end{minipage}\n}\n\\end{center}\n\n\n\n\n\n\\subsubsection{Sphere in $\\mathbb{R}^3$ $\\Leftrightarrow$ One sheeted \nhyperboloid in $\\mathbb{R}^3_1$}\\label{subsec-5.2.3} \n In this subsection, we utilize the same potential as in Subsection \n\\ref{subsec-5.2.2}, but we will obtain other CMC-surfaces. \n For this we will use the following notation: \n\\begin{enumerate}\n\\item[(5.2.25)] \n $G^\\mathbb{C}$: the same notation (5.2.1) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.26)] \n $\\sigma$: the same notation (5.2.2) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.27)] \n $\\nu_1$: the same notation (5.2.3) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.28)] \n $\\nu_2$: the same notation (5.2.16) as in Subsection \\ref{subsec-5.2.2}, \n\\item[(5.2.29)] \n $G^\\mathbb{C}\/H^\\mathbb{C}$: the same notation (5.2.5) as in Subsection \n\\ref{subsec-5.2.1}, \n\\item[(5.2.30)]\n $G_1\/H_1$: the same notation (5.2.6) as in Subsection \\ref{subsec-5.2.1}, \n\\item[(5.2.31)] \n $G_2\/H_2$: the same notation (5.2.19) as in Subsection \\ref{subsec-5.2.2}, \n\\item[(5.2.32)] \n $\\pi_i$: the projection from $G_i$ onto $G_i\/H_i$ ($i=1,2$), \n\\item[(5.2.33)] \n $\\frak{g}_2$: the same notation (5.2.21) as in Subsection \\ref{subsec-5.2.2}. \n\\end{enumerate}\n\\setcounter{equation}{33}\n\n Let $\\eta_\\theta(x)$ denote the potential \\eqref{eq-5.2.4}. \n Define $\\tau_\\theta(y)\\in\\widetilde{\\mathcal{P}}^-(\\frak{g}_2)$ by \n\\[\n \\tau_\\theta(y):=\\theta\n \\begin{pmatrix} \n 0 & 0 \\\\\n i & 0\n \\end{pmatrix}dy. \n\\]\n Here we remark that $(\\eta_\\theta(x),\\tau_\\theta(y))$ is a real analytic \npara-pluriharmonic potential on $(\\mathbb{B}^2,I)$ satisfying the morphing \ncondition \\eqref{M}. \n Solve the two initial value problems: \n$A_\\theta^{-1}\\cdot dA_\\theta=\\eta_\\theta(x)$, \n$B_\\theta^{-1}\\cdot dB_\\theta=\\tau_\\theta(y)$ and \n$A_\\theta(0)\\equiv\\operatorname{id}\\equiv B_\\theta(0)$; and factorize \n$(A_\\theta,B_\\theta)\\in\\widetilde{\\Lambda}(G_2)_\\sigma\\times\n \\widetilde{\\Lambda}(G_2)_\\sigma$ \nin the Iwasawa decomposition (cf.\\ Theorem \\ref{thm-3.1.5}): \n$(A_\\theta,B_\\theta)=(C_\\theta,C_\\theta)\\cdot(B^+_\\theta,B^-_\\theta)$, \n$C_\\theta\\in\\widetilde{\\Lambda}(G_2)_\\sigma$ and \n$B^\\pm_\\theta\\in\\widetilde{\\Lambda}^\\pm(G_2)_\\sigma$. \n In this case it follows that \n\\[\n\\begin{array}{ll}\n A_\\theta(x)\n=\\begin{pmatrix}\n 1 & i\\theta^{-1}x \\\\\n 0 & 1 \n \\end{pmatrix}, \n& \n B_\\theta(y)\n=\\begin{pmatrix}\n 1 & 0 \\\\\n i\\theta y & 1 \n \\end{pmatrix};\\\\\n B^+_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1+xy}}}\n \\begin{pmatrix}\n 1 & 0 \\\\\n -i\\theta y & 1+xy \n \\end{pmatrix}, \n& \n B^-_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1+xy}}}\n \\begin{pmatrix}\n 1+xy & -i\\theta^{-1} x \\\\\n 0 & 1 \n \\end{pmatrix};\\\\\n C_\\theta(x,y)\n={\\displaystyle \\frac{1}{\\sqrt{1+xy}}}\n \\begin{pmatrix}\n 1 & i\\theta^{-1}x \\\\\n i\\theta y & 1 \n \\end{pmatrix}. 
\end{array} 
\]
 Substituting $\lambda$, $z$ and $\bar{z}$ for $\theta$, $x$ and $y$, 
respectively, we obtain 
\[
 C_\lambda(z,\bar{z})
={\displaystyle \frac{1}{\sqrt{1+|z|^2}}}
 \begin{pmatrix}
 1 & i\lambda^{-1}z \\
 i\lambda\bar{z} & 1 
 \end{pmatrix} 
\]
for $C_\theta(x,y)$. 
 It is easy to see that $C_\lambda(z,\bar{z})\in G_1=SU(2)$ for all 
$(z,\bar{z};\lambda)\in \mathbb{A}^2\times S^1$. 
 Accordingly, we obtain a harmonic map $f_1$ and a Lorentz harmonic map 
$f_2$, 
\[
\begin{array}{ll} 
 (f_1)_\lambda=\pi_1\circ C_\lambda(z,\bar{z}):(\mathbb{A}^2,J)
 \longrightarrow G_1/H_1\simeq S^2, 
& \lambda\in S^1,\\
 (f_2)_\theta=\pi_2\circ C_\theta(x,y):(W,I)
 \longrightarrow G_2/H_2\simeq S^2_1, 
& \theta\in\mathbb{R}^+,\\
\end{array}
\]
from \eqref{eq-5.2.4}. 
 Here $W:=\{(x,y)\in\mathbb{B}^2\,|\,xy\neq -1\}$. 
 The above maps will provide us with a CMC-surface 
$\phi_1:\mathbb{A}^2\to\mathbb{R}^3$ and a timelike CMC-surface 
$\phi_2:W\to\mathbb{R}^3_1$. 
 The Sym-Bobenko formula in \cite{Fu-Ko-Ro}, combined with 
$C_\lambda(z,\bar{z})$, gives 
\[
\begin{split}
 \phi_1(z,\bar{z}):
&=-
 \left\{i\cdot\lambda\cdot\frac{\partial C_\lambda}{\partial\lambda}
 \cdot C_\lambda^{-1} 
 +\frac{1}{2}\cdot\operatorname{Ad}(C_\lambda)\cdot
 \begin{pmatrix}
 i & 0 \\
 0 & -i \\
 \end{pmatrix}\right\}\bigg|_{\lambda=1}\\
&={\displaystyle \frac{-i}{2}}
 \begin{pmatrix} 
 (1-3|z|^2)/(1+|z|^2) & -4iz/(1+|z|^2)\\
 4i\bar{z}/(1+|z|^2) & -(1-3|z|^2)/(1+|z|^2)
 \end{pmatrix}\\
&\simeq 
 \Bigl(\frac{-2i(z-\bar{z})}{1+|z|^2},\frac{-2(z+\bar{z})}{1+|z|^2},
 \frac{-1+3|z|^2}{1+|z|^2}\Bigr). 
\end{split}
\]
 This CMC-surface $\phi_1(z,\bar{z}):\mathbb{A}^2\to\mathbb{R}^3$ is a 
sphere centered at $(0,0,1)$. 
 By the above $C_\theta(x,y)$ and the Sym-Bobenko formula in \cite{Ko}, we 
obtain 
\[
\begin{split}
 \phi_2(x,y):
&=-{\displaystyle \frac{1}{2}}
 \left\{\theta\cdot\frac{\partial C_\theta}{\partial\theta}
 \cdot C_\theta^{-1} 
 +\frac{1}{2}\cdot\operatorname{Ad}(C_\theta)\cdot
 \begin{pmatrix}
 1 & 0 \\
 0 & -1 \\
 \end{pmatrix}\right\}\bigg|_{\theta=1}\\
&=
 \begin{pmatrix} 
 -(1-3xy)/4(1+xy) & ix/(1+xy)\\
 -iy/(1+xy) & (1-3xy)/4(1+xy)
 \end{pmatrix}\\
&\simeq \Bigl(-\frac{x-y}{1+xy},-\frac{x+y}{1+xy},-\frac{1-3xy}{2(1+xy)}\Bigr). 
\end{split}
\]
 This timelike CMC-surface $\phi_2(x,y):W\to\mathbb{R}^3_1$ is a one sheeted 
hyperboloid centered at $(0,0,1/2)$ because 
\[
\Bigl(-\frac{x-y}{1+xy}\Bigr)^2
 -\Bigl(-\frac{x+y}{1+xy}\Bigr)^2
 -\Bigl(-\frac{1-3xy}{2(1+xy)}-\frac{1}{2}\Bigr)^2
 =-1
\]
(see Remark 3.2 in \cite{Ko} for the metric on $\mathbb{R}^3_1$). 


\begin{center}
\fbox{
\begin{minipage}{100mm}
CMC-surface in $\mathbb{R}^3$: Sphere\\
$\phi_1(z,\bar{z})= \bigl(\frac{-2i(z-\bar{z})}{1+|z|^2},
 \frac{-2(z+\bar{z})}{1+|z|^2},\frac{-1+3|z|^2}{1+|z|^2}\bigr)$\par
\centerline{$\Updownarrow$} 
Timelike CMC-surface in $\mathbb{R}^3_1$: One sheeted hyperboloid \\ 
$\phi_2(x,y)=\bigl(-\frac{x-y}{1+xy},-\frac{x+y}{1+xy},
 -\frac{1-3xy}{2(1+xy)}\bigr)$ 
\end{minipage}
}
\end{center}




\subsubsection{Smyth surface in $\mathbb{R}^3$ $\Leftrightarrow$ 
Timelike Smyth surface in $\mathbb{R}^3_1$}\label{subsec-5.2.4} 
 In this subsection we construct a timelike CMC-surface, 
$\phi_2(x,y):W\to\mathbb{R}^3_1$, from the potential of a Smyth surface in 
$\mathbb{R}^3$ (cf.\ \eqref{eq-5.2.7}); and we study the relation between the 
Gau{\ss} equation for $\phi_2(x,y):W\to\mathbb{R}^3_1$ and the Painlev\'{e} 
equation of type (III). 
 Henceforth we will use the same notation as in Subsection \ref{subsec-5.2.1}.

 Define a $\widetilde{\Lambda}_{-1,\infty}(\frak{g}_2)_\sigma$-valued, real
analytic para-holomorphic $1$-form $\eta_\theta(x)$ on $(\mathbb{B}^2,I)$ by
\begin{equation}\label{eq-5.2.7}
\begin{array}{ll}
 \eta_\theta(x):=\theta^{-1}
 \begin{pmatrix}
 0 & 1 \\
 x^m & 0
 \end{pmatrix}dx,
\end{array}
\end{equation}
where $m\in\mathbb{N}$.
 Taking the morphing condition \eqref{M} into consideration, we define
$\tau_\theta(y)$ as follows:
\[
 \tau_\theta(y):=\theta
 \begin{pmatrix}
 0 & -y^m \\
 -1 & 0
 \end{pmatrix}dy.
\]
 Solve the two initial value problems:
$A_\theta^{-1}\cdot dA_\theta=\eta_\theta(x)$,
$B_\theta^{-1}\cdot dB_\theta=\tau_\theta(y)$ and
$A_\theta(0)\equiv\operatorname{id}\equiv B_\theta(0)$.
 In terms of Theorem \ref{thm-3.1.5} we factorize
$(A_\theta,B_\theta)\in\widetilde{\Lambda}^-_*(G_2)_\sigma\times
 \widetilde{\Lambda}^+_*(G_2)_\sigma$
as follows:
$(A_\theta,B_\theta)
 =(C_\theta,C_\theta)\cdot(B^+_\theta,B^-_\theta)$,
$C_\theta\in\widetilde{\Lambda}(G_2)_\sigma$ and
$B^+_\theta\in\widetilde{\Lambda}^+_*(G_2)_\sigma$ and
$B^-_\theta\in\widetilde{\Lambda}^-(G_2)_\sigma$.
 Then Proposition \ref{prop-3.2.3} enables us to obtain an
$\mathbb{R}^+$-family of Lorentz harmonic maps
\[
 (f_2)_\theta=\pi_2\circ C_\theta(x,y):(W,I)
 \longrightarrow G_2/H_2\simeq S^2_1,
\]
where $W$ is an open neighborhood of $(0,0)$ in $\mathbb{B}^2$.
 Furthermore, Theorem \ref{thm-4.3.1} tells us that there is an $S^1$-family
of harmonic maps
\[
\begin{array}{ll}
 (f_1)_\lambda=\pi_1\circ C'_\lambda(z,\bar{z}):(V,J)
 \longrightarrow G_1/H_1\simeq S^2,
& C'_\lambda(0,0)\equiv\operatorname{id},
\end{array}
\]
where
$C'_\lambda(z,\bar{z}):=C_\lambda(z,\bar{z})\cdot h^\mathbb{C}(z,\bar{z})$
(see Theorem \ref{thm-4.3.1} for $V$ and $h^\mathbb{C}(z,\bar{z})$).
 From the above harmonic map $f_1(z,\bar{z})$, the Sym-Bobenko formula
enables us to obtain a CMC-surface $\phi_1(z,\bar{z}):V\to\mathbb{R}^3$ (cf.\
Subsection \ref{subsec-5.2.1}), which is called the {\it Smyth surface} (cf.\
\cite[p.\ 662]{Do-Pe-Wu}).
 In addition, one can obtain a timelike CMC-surface
$\phi_2(x,y):W\to\mathbb{R}^3_1$ from the above Lorentz harmonic map
$f_2(x,y)$.
 We end this subsection by clarifying an important property of
$\phi_2(x,y):W\to\mathbb{R}^3_1$:
\begin{proposition}\label{prop-5.2.1}
 With the above setting and notation, the Gau{\ss} equation for
$\phi_2(x,y):W\to\mathbb{R}^3_1$ is the Painlev\'{e} equation of type
{\rm (III)}.
\end{proposition}
\begin{proof}
 Our first aim is to deduce \eqref{eq-5.2.9} below.
 For
$k=\operatorname{diag}(s,1/s)\in H_2=S(GL(1,\mathbb{R})\times GL(1,\mathbb{R}))$,
let us define real numbers $a=a(k)$ and $b=b(k)$ by $a(k):=s^{-4/m}$ and
$b(k):=s^{-(4+2m)/m}$, respectively.
 Since
$k\cdot\eta_\lambda(x)\cdot k^{-1}=\eta_{(b\cdot\lambda)}(a\cdot x)$,
$k\cdot\tau_\lambda(y)\cdot k^{-1}
 =\tau_{(b\cdot\lambda)}(a^{-1}\cdot y)$
and $A_\lambda(0)\equiv\operatorname{id}\equiv B_\lambda(0)$, we understand
that
\begin{equation}\label{eq-5.2.8}
\begin{array}{ll}
k\cdot A_\lambda(x)\cdot k^{-1} = A_{b\cdot\lambda}(a\cdot x),
& k\cdot B_\lambda(y)\cdot k^{-1} = B_{b\cdot\lambda}(a^{-1}\cdot y).
\end{array}
\end{equation}
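 Indeed, conjugation by $k$ rescales the entries of $\eta_\lambda(x)$ as
\[
 k\cdot\eta_\lambda(x)\cdot k^{-1}
 =\lambda^{-1}
 \begin{pmatrix}
 0 & s^2 \\
 s^{-2}\,x^m & 0
 \end{pmatrix}dx
 =(b\cdot\lambda)^{-1}
 \begin{pmatrix}
 0 & 1 \\
 {(a\cdot x)}^m & 0
 \end{pmatrix}d(a\cdot x),
\]
because $b^{-1}\cdot a=s^2$ and $b^{-1}\cdot a^{m+1}=s^{-2}$; the computation
for $\tau_\lambda(y)$ is analogous.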
 It is immediate from
$B_\lambda^{-1}\cdot A_\lambda=(B^-_\lambda)^{-1}\cdot B^+_\lambda$ and
\eqref{eq-5.2.8} that
$(k\cdot B^-_\lambda(x,y)\cdot k^{-1})^{-1}
 \cdot(k\cdot B^+_\lambda(x,y)\cdot k^{-1})
 =B^-_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)^{-1}
 \cdot B^+_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)$
for any $k\in H_2$ and $\lambda\in S^1$.
 Therefore, the uniqueness of the Birkhoff decomposition allows us to
conclude
\[
\begin{array}{ll}
 k\cdot B^+_\lambda(x,y)\cdot k^{-1}
=B^+_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)
& \mbox{for any $k\in H_2$ and $\lambda\in S^1$}.
\end{array}
\]
 The above and \eqref{eq-5.2.8} imply that
\begin{multline*}
 k\cdot C_\lambda(x,y)\cdot k^{-1}
 =k\cdot A_\lambda(x)\cdot B^+_\lambda(x,y)^{-1}\cdot k^{-1}\\
 =A_{b\cdot\lambda}(a\cdot x)\cdot B^+_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)^{-1}
 =C_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)
\end{multline*}
---that is, they imply that
\begin{equation}\label{eq-5.2.9}
\begin{array}{ll}
 k\cdot C_\lambda(x,y)\cdot k^{-1}=C_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)
& \mbox{for any $k\in H_2$ and $\lambda\in S^1$}.
\end{array}
\end{equation}
 Now, let
$U_\lambda(x,y):=C_\lambda(x,y)^{-1}\cdot\partial_x C_\lambda(x,y)$ and
$V_\lambda(x,y):=C_\lambda(x,y)^{-1}\cdot\partial_y C_\lambda(x,y)$.
 We express these Maurer-Cartan forms explicitly as follows:
\begin{equation}\label{eq-5.2.10}
\begin{split}
& U_\lambda(x,y)
 =\begin{pmatrix}
 u_x(x,y)/4 & -(\lambda^{-1}/2)\cdot H\cdot e^{u(x,y)/2} \\
 \lambda^{-1}\cdot Q(x)\cdot e^{-u(x,y)/2} & -u_x(x,y)/4
 \end{pmatrix},\\
& V_\lambda(x,y)
 =\begin{pmatrix}
 -u_y(x,y)/4 & -\lambda\cdot R(y)\cdot e^{-u(x,y)/2} \\
 (\lambda/2)\cdot H\cdot e^{u(x,y)/2} & u_y(x,y)/4
 \end{pmatrix},
\end{split}
\end{equation}
where $H$ ($\neq0$) is a constant (cf.\ (2.1.5) in \cite{Ko}).
 Then, the Gau{\ss} equation for $\phi_2(x,y):W\to\mathbb{R}^3_1$ is
\begin{equation}\label{eq-5.2.11}
 u_{xy}(x,y)-2\cdot Q(x)\cdot R(y)\cdot e^{-u(x,y)}
 +\frac{1}{2}\cdot H^2\cdot e^{u(x,y)}=0
\end{equation}
(cf.\ (2.1.7) in \cite{Ko}).
 This equation will become the Painlev\'{e} equation of type (III)
later (cf.\ \eqref{eq-5.2.11''}).
 It follows from \eqref{eq-5.2.9} that
$\alpha^\lambda(x,y):=C_\lambda(x,y)^{-1}\cdot dC_\lambda(x,y)$ satisfies
$\alpha^\lambda(x,y)=U_\lambda(x,y)dx+V_\lambda(x,y)dy$ and
$k\cdot\alpha^\lambda(x,y)\cdot k^{-1}
 =\alpha^{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y)$.
 Hence
\[
\begin{array}{ll}
 k\cdot U_\lambda(x,y)\cdot k^{-1}
 =a\cdot U_{b\cdot\lambda}(a\cdot x, a^{-1}\cdot y),
&
 k\cdot V_\lambda(x,y)\cdot k^{-1}
 =a^{-1}\cdot V_{b\cdot\lambda}(a\cdot x,a^{-1}\cdot y).
\end{array}
\]
 Accordingly we obtain
\[
\begin{array}{lll}
 u(a\cdot x,a^{-1}\cdot y)=u(x,y),
& Q(a\cdot x)=a^m\cdot Q(x),
& R(a^{-1}\cdot y)=a^{-m}\cdot R(y)
\end{array}
\]
from \eqref{eq-5.2.10}.
 Let $\Omega(x\cdot y):=u(1,x\cdot y)$.
 Then $\Omega(x\cdot y)=u(x,y)$ follows from
$u(a\cdot x,a^{-1}\cdot y)=u(x,y)$ and $a:=x^{-1}$.
 Hence we conclude that
\begin{equation}\label{eq-5.2.12}
 u_{xy}=\partial_x\partial_y\Omega(x\cdot y)
 =\partial_x(\Omega(x\cdot y)'\cdot x)
 =\Omega(x\cdot y)''\cdot x\cdot y+\Omega(x\cdot y)'.
\end{equation}
 Since $Q(a\cdot x)=a^m\cdot Q(x)$ and $R(a^{-1}\cdot y)=a^{-m}\cdot R(y)$,
one can express $Q(x)$ and $R(y)$ as $Q(x)=Q_0\cdot x^m$ and
$R(y)=R_0\cdot y^m$, respectively, where both $Q_0$ and $R_0$ are constant.
 Therefore we obtain
\begin{equation}\label{eq-5.2.13}
\begin{split}
& -2\cdot Q(x)\cdot R(y)\cdot e^{-u(x,y)}
 +\frac{1}{2}\cdot H^2\cdot e^{u(x,y)}\\
&=-2\cdot Q_0\cdot R_0\cdot (x\cdot y)^m\cdot e^{-\Omega(x\cdot y)}
 +\frac{1}{2}\cdot H^2\cdot e^{\Omega(x\cdot y)}.
\end{split}
\end{equation}
 In terms of \eqref{eq-5.2.12} and \eqref{eq-5.2.13} we rewrite
\eqref{eq-5.2.11} as follows:
\[\tag{5.2.38$'$}\label{eq-5.2.11'}
 \Omega(t)''\cdot t+\Omega(t)'
 -2\cdot Q_0\cdot R_0\cdot t^m\cdot e^{-\Omega(t)}
 +\frac{1}{2}\cdot H^2\cdot e^{\Omega(t)}=0,
\]
where $t:=x\cdot y$.
 Furthermore, one can rewrite \eqref{eq-5.2.11'} as follows:
\[\tag{5.2.38$''$}\label{eq-5.2.11''}
 \frac{d^2v}{du^2}
=\frac{1}{v}\Bigl(\frac{dv}{du}\Bigr)^2-\frac{1}{u}\frac{dv}{du}
 +\frac{1}{u}\Bigl(-\frac{H^2}{2+m}v^2+4\frac{R_0\cdot Q_0}{2+m}\Bigr)
\]
by setting $u:=(2t^{(2+m)/2})/(2+m)$ and $v:=e^{\Omega(t)}\cdot t^{-m/2}$.
 Consequently we assert that the Gau{\ss} equation \eqref{eq-5.2.11} for
$\phi_2(x,y):W\to\mathbb{R}^3_1$ is the Painlev\'{e} equation \eqref{eq-5.2.11''}
of type (III).
\end{proof}

\subsubsection{Delaunay surface in $\mathbb{R}^3$ $\Leftrightarrow$
$K$-surface of revolution in $\mathbb{R}^3$}\label{subsec-5.2.5}
 In this subsection we will use the following notation:
\begin{enumerate}
\item[(5.2.41)]
 $G^\mathbb{C}$: the same notation (5.2.1) as in Subsection \ref{subsec-5.2.1},
\item[(5.2.42)]
 $\sigma$: the same notation (5.2.2) as in Subsection \ref{subsec-5.2.1},
\item[(5.2.43)]
 $\nu_1$: the same notation (5.2.3) as in Subsection \ref{subsec-5.2.1},
\item[(5.2.44)]
 $\nu_2:=\nu_1$,
\item[(5.2.45)]
 $G^\mathbb{C}/H^\mathbb{C}$: the same notation (5.2.5) as in Subsection
\ref{subsec-5.2.1},
\item[(5.2.46)]
 $G_1/H_1$: the same notation (5.2.6) as in Subsection \ref{subsec-5.2.1},
\item[(5.2.47)]
 $G_2/H_2=G_1/H_1=SU(2)/S(U(1)\times U(1))\simeq S^2$,
\item[(5.2.48)]
 $\pi_i$: the projection from $G_i$ onto $G_i/H_i$ ($i=1,2$).
\end{enumerate}
\setcounter{equation}{48}

 The main purpose in this subsection is to interrelate a CMC-surface of
revolution in $\mathbb{R}^3$ (i.e., a Delaunay surface in $\mathbb{R}^3$) with
a $K$-surface of revolution in $\mathbb{R}^3$ by means of Theorem
\ref{thm-4.3.1} (see Theorem \ref{thm-5.2.7}).
 Here, a {\it $K$-surface} means a surface of constant negative
curvature $K=-1$.
 Such a surface is sometimes called a {\it pseudospherical surface}.\par

 According to Toda \cite{To} (see \cite{To2} also), one can characterize
each $K$-surface $M$ in $\mathbb{R}^3$ by an arc length asymptotic line
coordinate system $(x,y)$ on $M$ and the angle function $\omega(x,y)$ with
respect to $(x,y)$ by the loop group method.
 For our purpose, we need to specialize her method to surfaces
of revolution.
 First we recall
\begin{lemma}\label{lem-5.2.2}
 Let $f_{{\mbox{\tiny pseud}}}(u,v)$, $f_{\mbox{\tiny hyper}}(u,v)$ and
$f_{\mbox{\tiny conic}}(u,v)$ denote the $K$-surfaces of revolution given in
Gray \cite[Chapter 19.3]{Gray}\footnote{Erratum: p.\ 381, the equation (19.4)
in \cite{Gray}, should be $-i\sqrt{a^2-b^2}E(iv/a,-b^2/(a^2-b^2))$ instead of
$-i\sqrt{a^2-b^2}E(iv/a,b^2/(a^2-b^2))$.}, respectively$:$
\[
\begin{array}{ll}
{\displaystyle
 f_{\mbox{\tiny pseud}}(u,v)
 =\bigl(\cos u\sin v,\sin u\sin v,\cos v+\log(\tan v/2)\bigr)};
& \\
{\displaystyle
 f_{\mbox{\tiny hyper}}(u,v)
 =\bigl(b\cos u\cosh v,b\sin u\cosh v,
 \int^v_0\sqrt{1-b^2\sinh^2(t)}dt\bigr)}
&
\end{array}
\]
(for $f_{\mbox{\tiny conic}}(u,v)$ and the admissible parameter ranges we
refer to \cite[Chapter 19.3]{Gray}).
\end{lemma}

Filtering in reciprocal space enlarges the region
around the atomic position where the representation of the projector functions
on the grid is non-zero, up to a projection radius $R\um{prj} > R\um{aug}$.
This strongly adds to the cost of the Hamiltonian action proportional to ${(R\um{prj} / R\um{aug})}^3$.

\noindent
To summarize:
for reasons of costs it is essential to preserve the localization of PAW projector functions in real-space.
For reasons of accuracy also localization in reciprocal space is mandatory.

\subsection{Data Compression}\label{sec:compress}
One goal is to make the Hamiltonian action perform as efficiently as possible
on current HPC compute systems, i.e.~vectorized many-core architectures.
The projection operation~(1) and its counterpart, the expansion~(3),
can be written as matrix-matrix multiplications.
Here, one operand is a sparse matrix since the projector functions
are non-zero only close to the atom.
The sparsity leads to a strong limitation of the performance by the memory BW.

A major strategy for the reduction of BW requirements
is to decrease the communication volume, e.g.~by data compression.
A projector function represented on a uniform
Cartesian real-space grid with grid points at $\vec \jmath \in \mathbb N^3$
can be viewed as a rank-3 tensor $P_{j_x j_y j_z}$.
Here, tensor compression methods
as studied intensively by Khoromskaia and Khoromskij~\cite{C5CP01215E}
try to approximate high-rank tensors by sums of dyadic products.
In this particular case we seek a representation
\begin{equation}
 P_{j_x j_y j_z} \approx \sum_{n_x n_y n_z} \bar P_{n_x n_y n_z} \, v^{[x]}_{n_x j_x} \, v^{[y]}_{n_y j_y} \, v^{[z]}_{n_z j_z}
 \label{eqn:TuckerTensor}
\end{equation}
\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{figs/TuckerTensor3D}
 \caption{\label{fig:TuckerTensor}
 Rank-3 Tucker tensor decomposition.
	$P_{j_x j_y j_z}$ is compressed into
	$v^{[x]}_{n_x j_x}$, $v^{[y]}_{n_y j_y}$, $v^{[z]}_{n_z j_z}$
	and $\bar P_{n_x n_y n_z}$, see eq.~(\ref{eqn:TuckerTensor}).
 }%
 	\Description{
 	Decomposition of a tensor.
The bulk 3D cube is decomposed into a
 	smaller cube times the Cartesian outer product of three sets of
 	vectors.
 	}%
\end{figure}
as depicted in fig.~\ref{fig:TuckerTensor}.
If it is possible to find such an approximation
with reduced ranges of the indices $(n_x,n_y,n_z)$ compared to the ranges of $(j_x,j_y,j_z)$
then $\bar P$ is a rank-3 tensor of much lower volume.
Furthermore, the total amount of memory needed for loading
the rectangular matrices $v^{[x|y|z]}$ in addition to loading $\bar P$
can be lower than the total volume of $P$,
depending on the original ranges and the compressibility of $P$.
In particular if we consider a set of projectors, as usual in PAW,
that can make shared use of the same 1D function sets $v^{[x|y|z]}$,
we expect an overall good compression ratio.
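As an illustration, such a decomposition can be obtained from a truncated
higher-order SVD. The following minimal sketch (plain \ttt{numpy}; the test
tensor, the grid and the kept ranks are placeholder values, not taken from an
actual PAW data set) compresses and reconstructs a rank-3 tensor in the sense
of eq.~(\ref{eqn:TuckerTensor}):
\begin{verbatim}
import numpy as np

def tucker_compress(P, ranks):
    """Truncated higher-order SVD (Tucker) of a rank-3 tensor P.
    Returns the core tensor Pbar and factors v[x], v[y], v[z]."""
    v = []
    for axis in range(3):
        # unfold P along one axis and keep the leading left
        # singular vectors as the 1D function set for that axis
        M = np.moveaxis(P, axis, 0).reshape(P.shape[axis], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        v.append(U[:, :ranks[axis]].T)      # shape (n_axis, j_axis)
    # contract P with the three factor matrices -> core tensor
    Pbar = np.einsum('abc,xa,yb,zc->xyz', P, v[0], v[1], v[2])
    return Pbar, v

def tucker_expand(Pbar, v):
    # reconstruct the approximation of P from core and factors
    return np.einsum('xyz,xa,yb,zc->abc', Pbar, v[0], v[1], v[2])

# toy example: a localized, separable function on a 32^3 grid
j = np.linspace(-4.0, 4.0, 32)
X, Y, Z = np.meshgrid(j, j, j, indexing='ij')
P = np.exp(-(X**2 + Y**2 + Z**2) / 2) * X * Y   # small core suffices
Pbar, v = tucker_compress(P, ranks=(6, 6, 6))
err = np.linalg.norm(P - tucker_expand(Pbar, v)) / np.linalg.norm(P)
print(f"relative error {err:.1e}")
\end{verbatim}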
Transferring this scheme to the factorizable SHO basis, eq.~(\ref{eqn:SHO-factorizable-basis}),
we are seeking an approximation of the projector set $\tilde p_i(\vec r)$ of each atom as
\begin{equation}
 \tilde p_i(x,y,z) \approx \sum_{n_x n_y n_z} \bar p_{i\, n_x n_y n_z} \, \psi_{n_x}(x) \, \psi_{n_y}(y) \, \psi_{n_z}(z)
\end{equation}
with small ranges for $(n_x,n_y,n_z)$.

Dealing with compressed representations,
an additional decompression operation is necessary.
However, executing a task that is limited by the memory BW
usually implies idle arithmetic units in the processor,
so that the decompression phase is expected not to add to the total execution time.

\subsection{SHO Basis}
Consider injecting the SHO basis projector $\hat {\mathcal B}_{\nu_{\max}}$ as defined in eq.~(\ref{eqn:SHO-basis}),
but for a finite $\nu_{\max}$ and given $\sigma$,
into the projection operation: $\braketop{ \tilde p_j }{ \hat {\mathcal B}_{\nu_{\max}} }{ \Psi_k }$.
For simplicity of the notation we suppress the atom index $a$ here.
Then, we can evaluate the inner products
\begin{equation}
	\braket{ \tilde p_j }{ n_r \ell m } \coloneqq F_{j n_r \ell m}
\end{equation}
on radial grids during the initialization phase of the DFT calculation.
The operation left to be performed every time
is a projection onto the Cartesian factorizable SHO basis introduced in sec.~\ref{sec:SHO}:
\begin{align}
 C_{n_x n_y n_z k} &= \braket{ n_x n_y n_z }{ \Psi_k } \\
 &= \iiint\limits_{\mathbb R^3} \mathrm d^3 \vec r
 \, \psi_{n_x}(x) \, \psi_{n_y}(y) \, \psi_{n_z}(z) \, \Psi_k(x,y,z)
\end{align}

\noindent
In order to retrieve the original projection coefficients $c_{jk}$
we can transform the new coefficients $C_{n_x n_y n_z k}$
with $\braket{ \tilde p_j }{ n_r \ell m } \equiv \hat F$
and $\braketR{n_r \ell m}{n_x n_y n_z} \equiv \hat U$.
However, there is no need to retrieve these coefficients as
these are temporary quantities existing only for the duration of projection and expansion operations.
In fact, we can inject $\hat {\mathcal B}^\dagger_{\nu_{\max}}$ also into the expansion operation.
Then, we collect all terms such that the original dyadic operator is fully transformed:
\begin{align}
 & \ket{ \tilde p_i } D_{ij} \bra{ \tilde p_j }
 \stackrel{ \text{\tiny{SHO}} }{ \longrightarrow }
 \ \ \hat {\mathcal B}^{\dagger}_{\nu_{\max}} \, \ket{ \tilde p_i } \, D_{ij} \, \bra{ \tilde p_j } \, \hat {\mathcal B}_{\nu_{\max}} \\
 &\phantom{:}= \ket{ n_x n_y n_z } \ U^{n_x n_y n_z}_{n_r \ell m} \ F_{i n_r \ell m}
 \ D_{ij} \ F_{j n_r' \ell' m'} \ U^{n_r' \ell' m'}_{n'_x n'_y n'_z}
 \bra{n'_x n'_y n'_z}
 \nonumber \\
 &\coloneqq \ket{ n_x n_y n_z } \ {\mathcal D}^{n_x n_y n_z}_{n'_x n'_y n'_z} \bra{n'_x n'_y n'_z}
\end{align}
with contraction over adjacent indices (Einstein notation). \\
The atomic Hamiltonian term $\hat D$ is now replaced by
$\hat {\mathcal D}$, which can be computed as $\hat U^\dagger \hat F^\dagger \hat D \hat F \hat U$ at the initialization of each SCF cycle.
$\hat {\mathcal D}$ represents a real-valued matrix of dimension $N\um{SHO} \times N\um{SHO}$.
See~eq.~(\ref{eqn:SHO_basis_size}) for the definition of $N\um{SHO}$.
\subsection{Smoothness}
Another advantage of the SHO basis for projectors is that it does not
require any filtering in reciprocal space
before it can be represented on a uniform Cartesian grid, c.f.~sec.~\ref{sec:Filtering}.
\\
In fact,
the highest kinetic energy in a 1D Hermite function
is given by its eigenenergy $E_{1D}=(2n+1)\,\sigma^{-2}$.
According to the sampling theorem, $k_{\max} \cdot h_{\max} = \pi$,
it is possible to sample the 1D Hermite functions
on a uniform real-space grid with maximum grid spacing
\begin{equation}
 h_{\max} = \frac{ \pi \, \sigma }{ \sqrt{2\nu_{\max} + 1} }
 \label{eqn:max_grid_spacing} \text{.}
\end{equation}
\\
Since Hermite functions are eigenfunctions of the Fourier transform
their representation in reciprocal space reads
\begin{equation}
 \mathcal F\left\{ H_n(x)\,\exp(-\frac{x^2}{2}) \right\}(k) \propto {(-1)}^n \, H_n(k) \, \exp(-\frac{k^2}{2})
 \text{.}
\end{equation}
The Gaussian suppresses high Fourier components in reciprocal space
and, thus, controls the smoothness of the function set.
We show in the appendix sec.~\ref{sec:Bessel-transform-eigenfunctions}
that the radial SHO eigenstates $R_{n_r \ell}(r)$ are eigenfunctions of the Fourier-Bessel transform.
Hence, the radial SHO eigenstates also feature a Gaussian decay in Fourier space.
This shows that SHO states are simultaneously the best localized functions in both spaces.

\subsection{On-the-fly Basis}
Using the recursion relation for Hermite polynomials
\begin{equation}
 H_0(x) = 1, \ \ \ H_1(x) = x, \ \ \ H_{n+1}(x) = x \, H_n(x) - \frac{n}{2} \, H_{n-1}(x)
\end{equation}
it is even possible to cheaply evaluate the grid-sampled 1D Hermite functions
during the decompression phase.
This reduces the total transferred memory volume
to a few Bytes.
Hence, it effectively eliminates the memory BW requirement
for the data list of the sparse matrix
that results from the projector functions.
The missing factors ensuring normalization of the Hermite functions
can be absorbed into the transformed atomic matrices $\hat {\mathcal D}$.
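A minimal sketch of these two building blocks in plain Python (in the spirit
of the GPU kernels benchmarked in sec.~\ref{sec:PERF}; grid, spread and
$\nu_{\max}$ below are placeholder values):
\begin{verbatim}
import numpy as np

def hermite_1d(nu_max, x, sigma):
    """1D Hermite functions psi_n(x), n = 0..nu_max, via the
    recursion H_{n+1} = x*H_n - (n/2)*H_{n-1} in units of sigma.
    Normalization factors are omitted; they can be absorbed into
    the transformed atomic matrices D."""
    t = x / sigma
    psi = np.empty((nu_max + 1, x.size))
    psi[0] = np.exp(-0.5 * t**2)        # H_0 = 1 times the Gaussian
    if nu_max > 0:
        psi[1] = t * psi[0]             # H_1 = x
    for n in range(1, nu_max):
        psi[n + 1] = t * psi[n] - 0.5 * n * psi[n - 1]
    return psi

def sho_project(Psi, px, py, pz):
    """Separable projection: contract a wave function block of
    shape (jx, jy, jz) with three 1D function sets, one Cartesian
    direction at a time; result has shape (nx, ny, nz).
    Note: in the SHO basis only combinations with
    nx + ny + nz <= nu_max are kept."""
    C = np.tensordot(px, Psi, axes=(1, 0))   # -> (nx, jy, jz)
    C = np.tensordot(py, C, axes=(1, 1))     # -> (ny, nx, jz)
    C = np.tensordot(pz, C, axes=(1, 2))     # -> (nz, ny, nx)
    return C.transpose(2, 1, 0)

x = np.arange(-8, 8) * 0.25                  # placeholder grid [Bohr]
psi = hermite_1d(4, x, sigma=0.59)
Psi = np.random.rand(x.size, x.size, x.size) # stand-in wave function
C = sho_project(Psi, psi, psi, psi)
\end{verbatim}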
In some situations, however, the SHO basis may need to be considerably larger
than the original set of numerically given projectors.
This increases the BW requirements for the intermediate coefficients $C_{n_x n_y n_z k}$
of each atom
that have to be stored after projection and loaded before expansion.
Therefore, we investigate the necessary minimum size of a SHO basis
that still yields an acceptable representation quality of the PAW projector functions.

\section{Quality of Representation}\label{sec:QLT}
We investigate how well commonly used projector functions $P_{n\ell}(r)$ can
be represented in the radial SHO basis $R_{n_r \ell}(r/\sigma)$
as a function of the spread $\sigma$.
We define the representation quality $Q$ as
\begin{equation}
	Q_\nu(\sigma) = \sum_{n_r = 0}^{ \lfloor \frac 12(\nu - \ell) \rfloor }
	\left| \braket{ P_{n\ell}(r) }{ R_{n_r \ell}(r/\sigma) } \right|^2
	\text{.}
\end{equation}
Applying this metric to all PAW data sets (\ttt{*.PBE})
publicly available in the package \ttt{gpaw-setups-0.9.9672}
from GPAW~\cite{GPAW-website}
shows that we can represent most unfiltered projector functions
with at least \unit[90]{\%} quality.
Even higher values are seen
if the projector functions are mask-filtered~\cite{doi:10.1063/1.2193514} beforehand.

\begin{figure}
	\centering
	\includegraphics[width=.92\linewidth]{figs/quality_Pt_d-projector}

 \caption{\label{fig:quality_convergence_Pt_d_raw}
 Convergence of the representation quality
 for the $d$-projector of platinum.
 Up to seven radial SHO basis functions were used ($\nu_{\max} = 14$).
 With respect to the size of the SHO basis,
 the strongest increase is between $\nu_{\max} = 2$ and $4$.
	}%
 	\Description{
 	The maxima become broader the more basis functions are added.
 	}%
\end{figure}
A typical convergence of $Q$ w.r.t.~$\nu_{\max}$ can be observed
for the example of the platinum (Pt) $d$-projector,
see fig.~\ref{fig:quality_convergence_Pt_d_raw}.
For $\nu_{\max} = 2$, the radial SHO basis in the $d$-channel consists of a single function
that features no radial nodes.
The Pt-$d$-projector function from the GPAW data base
can only be represented up to $Q = \unit[77]{\%}$ at $\sigma = \unit[0.84]{Bohr}$ (solid black line).
Already for two basis functions ($\nu_{\max} = 4$),
we can reach up to $Q = \unit[99.4]{\%}$ at $\sigma = \unit[0.59]{Bohr}$ (solid green line).
Adding more basis functions does not increase $Q$ much
but makes the maxima wider, i.e.~a larger basis is capable of accurately representing
the numerically given projectors with some flexibility on $\sigma$.
We can see that the best increase in quality is reached with $\nu_{\max} = 4$.
For higher $\nu_{\max}$, the gain in $Q$ is small compared to the increase in costs
since the SHO basis size grows $\propto \nu_{\max}^3$, c.f.~eq.~(\ref{eqn:SHO_basis_size}).

The simplicity of the SHO basis is an advantage
and a drawback at the same time.
Only a single $\sigma$ can be selected for all projector functions of one atom.
\begin{figure}
	\centering
	\includegraphics[width=.92\linewidth]{figs/quality_Platinum_raw}

 	\caption{\label{fig:quality_Pt_raw}
 	Best representation quality as a function of $\sigma$ for
 	all projectors of platinum with $\nu_{\max} = 4$.
	Colors are chosen according to $\ell$-character,
	solid and dashed lines show the first and second~(*) projector, respectively.
	We select $\sigma = \unit[0.59]{Bohr}$ (green dot).
	The solid green line again shows the quality of the Pt-$d$-projector as above.
	}%
 	\Description{
 	The peaks of the quality curves are at different $\sigma$.
 	We manually select the narrowest curve.
 	}%
\end{figure}
As visible in fig.~\ref{fig:quality_Pt_raw}, the quality functions for
the $s$, $s^*$, $p$, $p^*$, $d$ and $d^*$-projectors
have their maxima at slightly different values of $\sigma$.
Therefore, we selected the peak of the sharpest curve,
which for the case of platinum is that of the $d$-projector.
As discussed above, the best quality for $\nu_{\max} = 4$
is found at $\sigma = \unit[0.59]{Bohr}$, which is marked with a dot in fig.~\ref{fig:quality_Pt_raw}.
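For reference, the quality metric is straightforward to evaluate on a radial
grid. A small sketch follows; the normalization convention of
$R_{n_r \ell}$ and the test projector are our assumptions here, not data from
the analysed PAW sets:
\begin{verbatim}
import numpy as np
from scipy.special import eval_genlaguerre, gamma
from math import factorial

def radial_sho(n, ell, r, sigma):
    """Radial SHO eigenstate R_{n ell}(r/sigma), normalized such
    that int R^2 r^2 dr = 1 (standard 3D oscillator convention)."""
    x = r / sigma
    norm = np.sqrt(2.0 * factorial(n)
                   / (sigma**3 * gamma(n + ell + 1.5)))
    return (norm * x**ell * np.exp(-0.5 * x**2)
            * eval_genlaguerre(n, ell + 0.5, x**2))

def quality(P, r, ell, nu_max, sigma):
    """Q_nu(sigma) for a projector P(r) with int P^2 r^2 dr = 1."""
    Q = 0.0
    for n in range(int(np.floor(0.5 * (nu_max - ell))) + 1):
        overlap = np.trapz(P * radial_sho(n, ell, r, sigma) * r**2, r)
        Q += overlap**2
    return Q

# toy check: a single radial SHO state is represented exactly
r = np.linspace(0.0, 12.0, 4000)
P = radial_sho(1, 2, r, 0.59)       # stand-in for a d-projector
print(quality(P, r, ell=2, nu_max=4, sigma=0.59))  # ~1.0
\end{verbatim}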
With respect to costs, we would like to keep the SHO basis as small as possible.
The minimal number of radial basis functions can be derived
from the number of projectors in each $\ell$-channel.
The minimal $\nu_{\max}$ for all PAW data sets analysed can be found in the appendix sec.~\ref{sec:minimum_nu_max}.
This is, as a rule of thumb, $\nu_{\max} = 4$ for transition metals,
$\nu_{\max} = 3$ for non-metals and $\nu_{\max} = 2$ for light elements, $Z < 5$.
Further, we suggest $\nu_{\max} = 5$ for the treatment of rare earth elements
in order to host two projectors in the $f$-channel.
For some elements with the minimal $\nu_{\max}$
it is, however, difficult to find a single value of $\sigma$
that produces good representation qualities for all projectors.
As it would be desirable in terms of costs to be able to use the minimal $\nu_{\max}$,
we suggest a modified PAW dataset generation procedure described in the following section.

\section{Modified PAW data generation}\label{sec:MOD}
For the generation of PAW data sets several
different recipes can be found in the literature,
see e.g.~\cite{JOLLET20141246} or~\cite{PhysRevB.59.1758} and references therein.
This starts with the pseudization of the true local potential,
$V\um{eff}(r) \rightarrow \tilde V\um{eff}(r)$,
i.e.~there are different shapes for a smooth continuation of the potential inside the augmentation sphere.
For the generation of true partial waves $\phi_{n \ell}(r)$,
a set of energy parameters $\varepsilon_{n \ell}$ needs to be chosen.
Assuming a spherically symmetric true potential $V\um{eff}(r)$,
a true partial wave $\phi_{n\ell}(r)$ is found by outward integration
of the radial ordinary differential equation (ODE) for a given energy $\varepsilon_{n \ell}$.
This can be either the scalar relativistic equation or the Schr\"{o}dinger equation.
Furthermore, the generation of smooth partial waves $\tilde \phi_{n \ell}(r)$ and
projector functions $\tilde p_{n \ell}(r)$ depends on the recipe one follows.
The simplest prescription is to use $r^\ell$ times a low-order polynomial in $r^2$
for the construction of the smooth partial waves, $\tilde \phi_{n \ell}(r)$.
This is matched in value and derivative
to the true partial wave $\phi_{n\ell}(r)$ at $r$=$R\um{aug}$~\cite{Rostgaard:2009:PAW-Note}.
Once $\tilde \phi_{n \ell}(r)$ is obtained, a first guess for the shape
of the radial part of the projector function comes from
\begin{equation}
 \ket{ \tilde p_{n \ell} } =
 \left( \hat T + \tilde V\um{eff} - \varepsilon_{n \ell} \right)
 \ket{ \tilde \phi_{n \ell}}
 \label{eqn:usual-projector-generation}
\end{equation}
as suggested by Bl\"ochl in his original work on PAW~\cite{PhysRevB.50.17953}.
In order to keep the SHO basis small (minimum $\nu_{\max}$, if possible)
we suggest inverting this scheme.
We regard eq.~(\ref{eqn:usual-projector-generation}) as an inhomogeneous
ODE and solve it for $\tilde \phi_{n \ell}(r)$
using the radial SHO eigenstates $R_{n_r \ell}(r/\sigma)$ with $n_r$=$n$ as projector functions:
\begin{equation}
 \ket{ \tilde \phi_{n \ell}} =
 {\left( \hat T + \tilde V\um{eff} - \varepsilon_{n \ell} \right)}^{-1}
 \ket{ R_{n \ell} }
 \label{eqn:suggested-projector-generation}
\end{equation}
This removes a lot of free parameters from the PAW generation scheme
as we are only left with $\nu_{\max}$ and $\sigma$.
The cut-off radius $R\um{aug}$ is only required
for the construction of $\tilde V\um{eff}(r)$.
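A minimal numerical sketch of eq.~(\ref{eqn:suggested-projector-generation})
for the reduced radial wave $u(r)=r\,\tilde\phi_{n\ell}(r)$, assuming Rydberg
units with $\hat T=-\mathrm{d}^2/\mathrm{d}r^2+\ell(\ell+1)/r^2$ and a
placeholder pseudopotential:
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

h = 1e-3                          # radial grid spacing [Bohr]
r = np.arange(1, 12001) * h       # grid excludes the origin

def smooth_partial_wave(rhs, Veff, eps, ell):
    """Solve (T + Veff - eps) u = rhs for u = r*phi_smooth with a
    2nd-order finite-difference kinetic operator and u -> 0 at
    both ends of the grid."""
    diag = 2.0 / h**2 + ell * (ell + 1) / r**2 + Veff - eps
    off = np.full(r.size - 1, -1.0 / h**2)
    A = diags([off, diag, off], offsets=[-1, 0, 1], format='csc')
    return spsolve(A, rhs)

# model potential only: parabola of curvature (2*sigma)^-4
sigma, eps, ell = 0.65, -0.57, 2
Veff = -2.0 + (2.0 * sigma)**-4 * r**2
# right-hand side ~ r*R_{0,ell}; normalization omitted here
rhs = r * (r / sigma)**ell * np.exp(-0.5 * (r / sigma)**2)
u = smooth_partial_wave(rhs, Veff, eps, ell)   # u = r*phi_smooth
\end{verbatim}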
We can even try to eliminate this dependency:
GPAW constructs $\tilde V\um{eff}(r)$ as a parabola for $r < R\um{aug}$
that matches the true potential at $R\um{aug}$ in value and first derivative.
If we constrain the curvature of this parabola to a fixed function of $\sigma$, e.g.~${(2\sigma)}^{-4}$,
the parabola and the true potential touch at a radius $R\um{lid}$
which comes out of the procedure instead of being an input.
We call this the \emph{potential lid technique}
since the parabola's offset is lowered until it touches the true potential,
much like a lid that closes a pot.
See fig.~\ref{fig:potential-lid-technique} for a schematic picture.
The factor of two rescaling $\sigma$ has been chosen to produce good results for iron.
However, we suspect it to work well for a wider range of elements,
which would allow us to eliminate it from the list of tunable parameters.
\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{figs/potential_lid_technique}

 	\caption{\label{fig:potential-lid-technique}
 	Pseudization of the local potential according to the \emph{potential lid technique}.
 	A parabola of given curvature closes the true deep core potential like a lid closes a pot.
	}%
 	\Description{
 	A convex parabola is touching a concave potential function with equal tangent.
 	}%
\end{figure}
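The lid construction itself amounts to a one-line minimization. A small
sketch (the true potential below is a mock-up, not an all-electron result):
\begin{verbatim}
import numpy as np

def potential_lid(r, V_true, sigma):
    """Pseudize V_true(r) with a parabola a + c*r^2 of fixed
    curvature c = (2*sigma)**-4; the offset a is lowered until
    the parabola touches V_true, which defines R_lid."""
    c = (2.0 * sigma)**-4
    gap = V_true - c * r**2
    i_lid = np.argmax(gap)        # tangential touching point
    a = gap[i_lid]
    V_smooth = np.where(r <= r[i_lid], a + c * r**2, V_true)
    return V_smooth, r[i_lid]

r = np.linspace(1e-3, 10.0, 10000)
V_true = -26.0 / r * np.exp(-r / 2.0)  # mock screened Coulomb [Ry]
V_smooth, R_lid = potential_lid(r, V_true, sigma=0.65)
print(f"R_lid = {R_lid:.2f} Bohr")
\end{verbatim}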
The latter is also used for $\\ell > 2$.\nAdditional true partial waves are generated at $\\frac{n}{\\ell + 1}\\,$Rydberg\nabove their reference energies.\nThe resulting true and smooth partial waves are shown in fig.~\\ref{fig:partial-waves-for-Fe}.\n\n\nWe verify the scattering properties of the new PAW dataset for iron\ngenerated according to eq.~(\\ref{eqn:suggested-projector-generation})\nby comparing its logarithmic derivative $L_\\ell(E)$ to that of the true potential,\nsee fig.~\\ref{fig:logder-for-Fe}.\nExcept for a \\unit[4]{mRydberg} shift in the position of the $s$-resonance,\nthe (solid) lines of the true $L_\\ell(E)$ lie on top of the pseudo $L_\\ell(E)$ (dashed lines)\nwhich is a reasonably good agreement.\nThe Fe-3$d$-states are strongly localized which makes them appear\nas a narrow resonance at \\unit[-0.57]{Rydberg}, see inset in fig.~\\ref{fig:logder-for-Fe}.\nSince the spread $\\sigma$ and the cutoff $\\nu_{\\max}$\nare tunable input parameters to the generation procedure\nwe have sufficient freedom to produce a transferable PAW dataset.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figs\/Fe_logder}\n\n \\caption{\\label{fig:logder-for-Fe}\n Logarithmic derivative (at $r$=$\\unit[7.1]{Bohr}$) generated by the inverted PAW generation scheme for iron.\n\tThe strongest deviation inside the valence energy range\n\tis a \\unit[4]{mRydberg} shift of the resonance in the $s$-channel.\n\tAll other dashed lines (pseudo logarithmic derivatives) are covered by the solid lines (true logarithmic derivatives).\n\t}%\n \t\\Description{\n \tAll logarithmic derivative are falling curves with poles at\n \tenergies where resonances appear. Their shape is similar to that of a tangent.\n \t}%\n\\end{figure}\n\n\\subsection{Rescaling of partial waves}\nBl\\\"{o}chl's PAW generation scheme~\\cite{PhysRevB.50.17953} \nallows to rescale partial waves and projectors according to\n\\begin{equation}\n \\left( \\phi_{n\\ell}, \\tilde \\phi_{n\\ell}, \\tilde p_{n\\ell} \\right)\n \\rightarrow \\left( s\\,\\phi_{n\\ell}, s\\, \\tilde \\phi_{n\\ell}, s^{-1}\\tilde p_{n\\ell} \\right)\n ,\\ \\ \\ s \\in \\mathbb R \\setminus \\{0\\}\n \\label{eqn:PAW_partial_waves_rescaling}\n\\end{equation}\nindependently for each $n$ and $\\ell$.\nThis is because after a successful PAW generation procedure, the dual orthogonality\n\\begin{equation}\n\t\\braket{ \\tilde p_{n\\ell} }{ \\tilde \\phi_{n'\\ell} } = \\delta_{nn'}\n\t\\label{eqn:dual_orthogonality_of_smooth_partial_waves_and_projectors}\n\\end{equation}\nmust hold.\nTypically, this orthogonality is established by rescaling\nthe lowest projector and the lowest partial waves\nand orthogonalizing the higher ones to the lowest.\nThis procedure is equivalent to an $LU$-decomposition.\n\nWith the SHO projectors, rescaling is not allowed\nif we aim to exploit the 3D factorization,\nc.f.~eq.~(\\ref{eqn:SHO-transform-Cartesian-to-radial}).\nIt is rather important that the $R_{n\\ell}(r\/\\sigma)$ enter eq.~(\\ref{eqn:suggested-projector-generation})\nwith proper normalization.\nTherefore,\neq.~(\\ref{eqn:dual_orthogonality_of_smooth_partial_waves_and_projectors})\nis enforced by applying the inverse of $\\braket{ \\tilde p_{n\\ell} }{ \\tilde \\phi_{n'\\ell} }$\nonly to the preliminary pairs of true and smooth partial waves.\n\n\n\\section{Performance Measurements}\\label{sec:PERF}\nFor an impression of the expectable savings\nwe assess the performance of projection (\\texttt{prj}) and expansion (\\texttt{add})\noperations as defined 

\section{Performance Measurements}\label{sec:PERF}
To get an impression of the expected savings,
we assess the performance of projection (\texttt{prj}) and expansion (\texttt{add})
operations as defined in sec.~\ref{sec:HMT} by (1) and (3), respectively,
for a first implementation on NVIDIA GPUs.
In order to provide a meaningful comparison between
the runtime of \texttt{SHO} projectors which are generated on-the-fly
and the reference (\texttt{USU}) using precomputed projector values,
we work in the same framework and exchange only the kernel.
The test system is a cubic domain of edge length \unit[16]{\AA}
with a grid spacing of \unit[0.25]{\AA}, i.e.~$64 \times 64 \times 64$ grid points.
Assuming a face-centered cubic crystal of atoms with a
lattice constant of \unit[4.08]{\AA} (e.g.~silver or gold)
lets us find $241$ atom positions inside the domain.
With a projection radius of \unit[3.55]{\AA} exactly $665$ atoms contribute with a non-zero overlap of
their projection sphere
and the cubic domain,
see fig.~\ref{fig:gold_projection_radii}.
Note that, unlike the augmentation spheres,
the projection spheres of neighboring atoms may overlap.

The Hamiltonian is applied to $1024$ wave functions at a time.
This index is used for vectorization over warps of $32$ GPU-threads.
The performance results are converged w.r.t.~this number.

For stable performance results, each timed run repeats the kernel execution $10$ times.
After a warm-up run, which is not considered,
the median of $15$ such runs is reported in tab.~\ref{tab:perf-results}.

The memory consumption for the precomputed projectors is
\unit[76]{MByte} per projection coefficient.
This results in an additional GPU memory request of up to \unit[4.2]{GByte}.

We executed on an {IBM PowerNV 8335-GTB}
with {NVIDIA Tesla P100-SXM2} (Pascal) GPUs connected by \ttt{NVlink}.
Runtime and compilers are \ttt{CUDA 9.2.148}, \ttt{GCC 7.2.0}.
Furthermore, we benchmarked the latest {NVIDIA Tesla V100} (Volta) GPUs
mounted on an Intel system (Xeon Gold 6148) connected via PCIe.
Here, \ttt{CUDA 9.2.148} and \ttt{GCC 7.3.0} toolchains were used.

Estimating the upper limit of floating point operations executed,
we can verify that all kernels are performance-limited by the device memory BW.
Hence, an improvement in the timings directly relates to savings in the BW.

From tab.~\ref{tab:perf-results} we can see an up to $6.6$ times faster execution
depending on the kernel, the basis size and the device used.
The speedup $S$ compares the timings of the
\ttt{USU}al projection operations using precomputed projector functions
to timings of the \ttt{SHO} projection operations.
While \ttt{USU} kernels can take any number of projectors,
a SHO basis of size $18$ cannot be constructed; therefore,
the corresponding entries for \ttt{SHO} are empty.
However, we included the timing result for \ttt{USU18}
as it represents a typical calculation setup for transition metal elements.
In order to cover $s$, $s^*$, $p$, $p^*$, $d$ and $d^*$ projectors,
the SHO basis must be constructed with $\nu_{\max} = 4$, i.e.~$35$ projection
coefficients.
Hence, a fair comparison of the two methods is to take \ttt{USU18} vs.~\ttt{SHO35},
which translates into an algorithmic speedup
given in~tab.~\ref{tab:perf-fair-comparison}.
Executing both kernels after each other, \ttt{prj+add},
in \ttt{double} precision
results in an algorithmic speedup of $2.6$ on P100 GPUs
and $2.0$ on the more recent Volta architecture, V100.

\begin{figure}
	\centering
	\includegraphics[trim=3cm 3cm 3cm 3cm,clip,width=.5\linewidth]{figs/gold_projection_radii}

 \caption{\label{fig:gold_projection_radii}
 2D sketch of the overlapping projection spheres inside a cubic domain of edge length \unit[16]{\AA}.
 This setup is used for the GPU performance benchmarks in \ttt{double} precision.
	On average, the wave function values of each grid point
	are involved in eleven projection operations.
	The radius of each projection sphere is \unit[3.55]{\AA}
	while the nearest neighbor distance of atomic nuclei is \unit[2.9]{\AA}.
	}%
 	\Description{
 	Test system for performance benchmark
 	}%
\end{figure}

\begin{table}
 \caption{
		Performance improvement on NVIDIA GPUs. All times are reported in seconds.
		The speedup $S$ is the unitless ratio of timings \ttt{USU}/\ttt{SHO} for the same \# of projectors.
 }
 \label{tab:perf-results}
 \begin{tabular}{r rrr rrr}
 \toprule
		 & \multicolumn{6}{c}{NVIDIA P100 Pascal} \\
 \# & \ttt{USUprj} & \ttt{SHOprj} & $S$ & \ttt{USUadd} & \ttt{SHOadd} & $S$ \\
 \midrule
	 1 & 0.480 & 0.453 & 1.1 & 0.274 & 0.187 & 1.5 \\
	 4 & 0.834 & 0.471 & 1.8 & 0.803 & 0.363 & 2.2 \\
	 10 & 1.259 & 0.430 & 2.9 & 1.820 & 0.520 & 3.5 \\
	 18 & 2.080 & & & 3.173 & & \\
	 20 & 2.366 & 0.482 & 4.9 & 3.511 & 0.733 & 4.8 \\
	 35 & 3.904 & 0.967 & 4.0 & 6.052 & 1.027 & 5.9 \\
	 56 & 6.058 & 1.214 & 5.0 & 9.625 & 1.455 & 6.6 \\
 \midrule
		 & \multicolumn{6}{c}{NVIDIA V100 Volta} \\
 \# & \ttt{USUprj} & \ttt{SHOprj} & $S$ & \ttt{USUadd} & \ttt{SHOadd} & $S$ \\
 \midrule
	 1 & 0.291 & 0.271 & 1.1 & 0.140 & 0.118 & 1.2 \\
	 4 & 0.354 & 0.310 & 1.1 & 0.378 & 0.189 & 2.0 \\
	 10 & 0.555 & 0.284 & 2.0 & 0.823 & 0.277 & 3.0 \\
	 18 & 0.830 & & & 1.411 & & \\
	 20 & 1.013 & 0.300 & 3.4 & 1.557 & 0.398 & 3.9 \\
	 35 & 2.087 & 0.575 & 3.6 & 2.660 & 0.564 & 4.7 \\
	 56 & 1.806 & 0.793 & 2.3 & 4.208 & 0.792 & 5.3 \\
 \bottomrule
 \end{tabular}
\end{table}

\begin{table}
 \caption{
		Algorithmic speedup comparing \ttt{USU18} vs.~\ttt{SHO35}.
 }
 \label{tab:perf-fair-comparison}
 \begin{tabular}{r rrr rrr}
 \toprule
 & \multicolumn{3}{c}{\ttt{float}} & \multicolumn{3}{c}{\ttt{double}} \\
 GPU & \ttt{prj} & \ttt{add} & \ttt{both} & \ttt{prj} & \ttt{add} & \ttt{both} \\
 \midrule
	P100 & 2.3 & 2.9 & 2.7 & 2.2 & 3.1 & 2.6 \\
	V100 & 1.4 & 1.7 & 1.6 & 1.4 & 2.5 & 2.0 \\
 \bottomrule
 \end{tabular}
\end{table}

\section{Discussion}\label{sec:discuss}
The suggested application of a SHO basis for projection operations
or, even going further, analytical SHO projector functions
brings many advantages but also comes with some drawbacks.
In this section, we discuss the pros and cons.

\subsection{Advantages}
The SHO basis
\begin{itemize}
 \item factorizes in the Cartesian coordinates $(x,y,z)$.
 \item can be sampled on uniform grids without filtering.
 \item has only two parameters, $\sigma$ and $\nu_{\max}$.
 \item is given analytically.
\end{itemize}
In particular the last point, the analytical shapes,
makes it possible to save memory bandwidth and memory capacity
in implementations of grid-based projection operations,
as shown in sec.~\ref{sec:PERF}.

\subsection{Drawbacks}
The SHO basis does not offer the flexibility
to represent any projector function with good quality.
However, we have shown that the PAW generation scheme
can be adjusted to using the radial
SHO basis as projectors.

The SHO basis leads to more projection coefficients than usual PAW.\@
For example, two projectors each for the $s$, $p$ and $d$-channel make $18$ coefficients.
In order to fit a $d^*$-projector into a SHO basis,
we need a minimum $\nu_{\max}$ of $4$, i.e.~$35$ coefficients.
However, this can also be seen as an upside.
With $\nu_{\max} = 4$ the SHO basis contains
an additional $s^{**}$, seven $f$ and nine $g$-projectors.
Adding projectors and partial waves
of higher $\ell$
may potentially improve the transferability of the PAW data sets
for chemical environments with low symmetry
since gradients in the potential scatter into the next higher $\ell$-channel.
Compared to commonly used projector sets, SHO projectors also treat the higher $\ell$-channels with full-potential accuracy.
However, the true transferability remains to be shown in 3D calculations.
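The counting above can be reproduced by enumerating all radial SHO states
with $2 n_r + \ell \leq \nu_{\max}$; a toy snippet:
\begin{verbatim}
spd = "spdfghi"

def sho_channels(nu_max):
    """Enumerate radial SHO states with 2*n_r + ell <= nu_max and
    count the projection coefficients (2*ell + 1 per state)."""
    channels, n_coeff = [], 0
    for ell in range(nu_max + 1):
        for n_r in range((nu_max - ell) // 2 + 1):
            channels.append(spd[ell] + '*' * n_r)
            n_coeff += 2 * ell + 1
    return channels, n_coeff

print(sho_channels(4))
# (['s', 's*', 's**', 'p', 'p*', 'd', 'd*', 'f', 'g'], 35)
\end{verbatim}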

\section{Related work}\label{sec:related}
\noindent
Already in 1986, Obara and Saika presented the ansatz of 3D products of
primitive Cartesian Gaussian (pCG) functions
$x^{\nu}\exp(-\frac 12 x^2)$
in order to evaluate two- and three-center integrals~\cite{doi:10.1063/1.450106}.
In fact, there is a strong relation between pCGs and the 1D harmonic oscillator states.
The pCG functions form a non-orthogonal basis set.
Applying Gram-Schmidt orthogonalization to pCGs leads to 1D Hermite functions.
\noindent
Many works in quantum chemistry,
as e.g.~Schlegel and Frisch~\cite{doi:10.1002/qua.560540202},
have mentioned the increased basis size $\frac 12(\ell_{\max} + 2)(\ell_{\max} + 1)$
of the pCGs compared to $(2\ell_{\max} + 1)$,
but this has usually been taken as a weakness rather than a strength.

The idea of using the SHO basis has probably not received attention as the related basis set
\begin{equation}
	 \chi_{n_r \ell m}(\vec r) = r^{\ell + 2n_r} \, \exp(-\frac {r^2}{2\sigma^2}) \ Y_{\ell m}(\hat r)
\end{equation}
has been found not to converge as rapidly as a set of only Gaussian-type orbitals (GTOs)
contracted over a set of different spread parameters $\sigma$~\cite{doi:10.1002/jcc.540070403}
when used as a basis set for quantum chemistry calculations~\cite{doi:10.1063/1.463096}.
\\
Note that the subset of radial SHO states with $n_r = 0$ are GTOs.

\section{Conclusions}\label{sec:summary}
The eigenstates of the Spherical Harmonic Oscillator (SHO)
form a natural link between 3D Cartesian grids
and radial representations with sharp quantum numbers for the angular momentum, $(\ell,m)$,
without the use of global Fourier transforms.
Projection and expansion operations
are the heart of the Projector Augmented Wave (PAW) method~\cite{PhysRevB.50.17953} for
electronic structure calculations.
Using the SHO basis for these operations
allows us to exploit its special properties.
Similar to compressed tensor representations,
the 3D SHO states can be factorized into eigenstates of the 1D harmonic oscillator
for each of the three Cartesian coordinates $(x,y,z)$.
This offers great potential for
reducing memory bandwidth requirements and memory capacity constraints,
in particular since it is cheap to recompute
the analytically given 1D basis functions
compared to loading them from memory.
A first implementation on GPUs shows that we can expect
an at least two times faster application of the non-local parts
of the PAW Hamiltonian for transition metals.
Commonly used PAW projector functions can be represented in a SHO basis;
however, the cost-saving minimal basis size is not suitable for representing arbitrary functions.
Therefore, we suggest a modified PAW generation scheme
in which the projector functions are fixed to the analytically given radial SHO basis functions,
eliminating the freedom of choosing the shape and scale of the smooth partial waves.

\section*{Acknowledgements}
The authors thank Thorsten Hater for proofreading
and Miriam Hinzen for assistance with the Fourier-Bessel transform.

\bibliographystyle{ACM-Reference-Format}